+++
title = 'AT&T Internet'
date = 2020-08-18T22:26:00-05:00
+++

The AT&T technician arrived at 08:40 on Monday, 17 August 2020, to install
*Internet 1000* service. It took him approximately 5 hours to complete the
hookup, because he had to run fiber on the utility poles in the back yard,
presumably from the nearest fiber pedestal, trimming trees and shrubs in the
process, and he had to return to the office to fetch different equipment.
Apparently, mine was the first house he had installed with the capability for
10 gigabit/sec bandwidth, and he was unprepared. He completed his work and
left around 13:45.

## First Tests

I performed the first tests of the AT&T Internet connection by connecting
_Toad_ directly to the provided "modem." This device, an Arris BGW210-700 (or
one of several similar models), seems to be mandatory for all AT&T
Internet/U-verse customers. As far as I can tell, it is a typical Ethernet
router, with support for 802.1x/EAP wired port authentication.

With this setup, I got a pretty disappointing speed test score:
http://www.dslreports.com/speedtest/65265252. Additionally, I am not sure the
"IP Passthrough" functionality of the device is going to be sufficient for my
needs. Although it makes the downstream device appear to be directly connected
to the Internet, the intermediate router still seems to be managing the
traffic. I have seen a few vague reports that certain services "don't work"
when the router is set up in this mode.

## Bypassing the Residential Gateway

With the poor network performance presumably caused by the AT&T-provided
_Residential Gateway_, and my suspicion that it would cause trouble with my
self-hosted services (particularly the IPsec VPN), I decided to research ways
to bypass it.

### Initial Research: EAP Proxy

The first method I found for bypassing the AT&T-provided _Residential
Gateway_ (RG) was to run an "EAP proxy" on the USG:
https://medium.com/@mrtcve/at-t-gigabit-fiber-modem-bypass-using-unifi-usg-updated-c628f7f458cf.
The proxy would receive the EAPOL frames from the _Optical Network Terminal_
(ONT) and forward them to the RG, connected to the _LAN 2_ port on the USG,
then forward the responses from the RG back to the ONT. The ONT would thus
believe the USG itself was performing the authentication.

I was initially turned off by this method, because it seemed to _require_
connecting the RG directly to the _LAN 2_ port, without a switch in between.
Since I am already using the _LAN 2_ port for the auxiliary VLANs on my
network (test, guest, Home Assistant, management), I decided to revisit it
only if I could find no other option.

### Attempt 1: Device Swapping

The next bypass mechanism I found was this "true bridge mode" post on
DSLReports:
https://www.dslreports.com/forum/r29903721-AT-T-Residential-Gateway-Bypass-True-bridge-mode.
This method uses the RG temporarily to perform the 802.1x port
authentication/authorization, then swaps it out for another router. The
described procedure uses port-based VLANs on a NetGear ProSafe GS108E. By
changing the VLAN assignment of the ports where the AT&T RG and the desired
router are connected, the idea is to "trick" the _Optical Network Terminal_
(ONT) so that it does not notice the device on the other end has changed, and
thus does not require it to reauthenticate.

I attempted this method using my UniFi Switch 48, assigning 3 ports to a
special "AT&T" VLAN. I planned to bring up the ONT and the RG together on the
same VLAN, then, once the authentication was complete, kill the RG and bring
up the USG. Unfortunately, the RG never managed to authenticate: the
"Broadband" light blinked red most of the time, and the RG never acquired an
IP address. I suspect the switch was filtering the 802.1x traffic, since
802.1x is a layer 2 protocol intended to authenticate a _device_ to a
_switch_, rather than a _device_ to another _device_.

Some musings in the DSLReports thread suggested that it should also be
possible to work around this problem with a "dumb switch." If the switch is
dumb enough (as the GS108E apparently is), it will forward EAPOL frames
exactly as it would any other frame. I have an extremely cheap 5-port Tenda
gigabit switch, so I tried using it instead of the UniFi switch. In this
scenario, the RG _did_ manage to authenticate to the ONT and come fully
online. Unfortunately, even with MAC address spoofing, I was not able to
"swap out" the RG. When I connected _Toad_ to the dumb switch and
disconnected the RG, _Toad_ never received a DHCP response, and using a
static IP address did not work. (In hindsight, I realized that this was
probably because I needed to use an 802.1Q tag to mark the outbound frames
with VLAN ID 0.)
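
On a generic Linux box, that missing piece can be sketched with a tagged
subinterface. This is purely illustrative (not what I ran at the time), and
`eth0` is a placeholder for whatever interface faces the ONT:

```
# Create a VLAN subinterface that tags outbound frames with VLAN ID 0
# ("priority tagging"), which is what the ONT expects to see
ip link add link eth0 name eth0.0 type vlan id 0
ip link set eth0.0 up

# Request a DHCP lease over the tagged subinterface
dhclient eth0.0
```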

This method seemed really fragile to me, so even if I had gotten it to work,
I probably would not have been happy with it. I considered using a trick like
this "2 dumb switch relay" method
(https://www.dslreports.com/forum/r32061385-) to automatically disconnect the
USG and reconnect the RG whenever some issue caused the port to become
deauthenticated. If this had been my only option, I probably would have tried
harder, but instead, I decided to investigate some other options.

### Attempt 2: wpa_supplicant

During my initial research for a bypass method, I noted some posts about
using `wpa_supplicant` on a UniFi Dream Machine to perform the 802.1x
authentication directly. Since I had had no luck with the first couple of
options, I decided to research this further and see whether there was an
equivalent option for the UniFi Security Gateway. Sure enough, a quick Duck
Duck Go search revealed a blog post specifically about that:
https://wells.ee/journal/2020-03-01-bypassing-att-fiber-modem-unifi-usg/.
(Aside: the title of this person's blog, _It Kinda Works_, initially gave me
pause, as I interpreted it to refer to this specific technique. Luckily, I
decided to read it anyway.)

I knew `wpa_supplicant` could perform 802.1x authentication on wired
interfaces, the same as it does for WPA2-Enterprise networks on wireless
interfaces, so I read the post hoping for some insight into how to set that
up on the USG. Sadly, it seemed this might be difficult, since it involves
"obtaining" the X.509 certificates and private keys from the RG. The author
of the post advocates using eBay to obtain them, but I was not willing to
pursue that avenue. Instead, I decided to research a mechanism for extracting
them from the RG itself.


A quick DDG search for _bgw210 certificate_ led me to this blog post:
https://www.dupuis.xyz/bgw210-700-root-and-certs/. The author does a
magnificent job describing the vulnerability in the RG that allows this, how
an exploit was developed, and how to use it to gain root access to the
operating system. The process itself was **absolutely trivial**: in only a
few minutes, I was able to exploit the vulnerability, gain root access to the
OS, and extract the `mfg.dat` file.

The only tool I could find to decode the `mfg.dat` file and extract the
certificate and private key from it was a closed-source program called
(naturally) `mfg_dat_decode`:
https://www.devicelocksmith.com/2018/12/eap-tls-credentials-decoder-for-nvg-and.html.
Fortunately, the author provides a native Linux executable, so although I
would rather have inspected the code to see what it does and how it works, I
could at least use it without any hassle. Again, the process was trivial; in
short order, I had everything I needed to set up my USG to perform the
EAP-TLS authentication itself, completely and permanently eliminating the
AT&T RG from my network.


I first tested this method using `wpa_supplicant` on _Toad_, and it worked
flawlessly. Later, when I had some spare time, I deployed it on the USG. With
some minor variation (I put the certificates and private key in
`/config/auth`, the `wpa_supplicant` binary in `/config/scripts`, and the
`wpa_supplicant.conf` file in `/config`, and I wrote my own
`/config/scripts/post-config.d/wpa_supplicant.sh`, because the suggested one
was written by someone with extremely limited knowledge of shell scripting),
I was able to set it up as described in the _Bypassing the AT&T Fiber
modem…_ blog post. The process was actually rather simple:

1. Install the `wpa_supplicant` binary
2. Copy the CA certificate, client certificate, and private key files
3. Configure `wpa_supplicant` to use these to authenticate the port using EAP-TLS
4. Ensure `wpa_supplicant` starts automatically, using a script in `/config/scripts/post-config.d`
5. Change the MAC address of the USG's WAN port to that of the RG
6. Set the WAN port to use tagged VLAN ID 0
7. Reboot the USG
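
For step 3, a minimal wired EAP-TLS `wpa_supplicant.conf` looks something
like the sketch below. The certificate file names and the identity value are
placeholders (the identity is typically the RG's MAC address), so adjust to
match whatever `mfg_dat_decode` produced:

```
# /config/wpa_supplicant.conf -- wired EAP-TLS sketch
eapol_version=1
ap_scan=0
fast_reauth=1

network={
    ca_cert="/config/auth/CA_XXX.pem"
    client_cert="/config/auth/Client_XXX.pem"
    private_key="/config/auth/PK_XXX.pem"
    eap=TLS
    eapol_flags=0
    identity="XX:XX:XX:XX:XX:XX"
    key_mgmt=IEEE8021X
    phase1="allow_canned_success=1"
}
```

The post-config.d script then just has to launch something along the lines of
`wpa_supplicant -i eth0 -D wired -c /config/wpa_supplicant.conf -B` (the
interface name varies by device).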

The only thing that bothers me about this method is using the precompiled
`wpa_supplicant` binary. I definitely want to come up with a process for
building it myself, preferably automatically in Jenkins, so I can keep it up
to date.

Fortunately, the certificate I extracted from the RG does not expire until
2040! Hopefully, sometime in the next 20 years, something will change…
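
(The expiration date is easy to check on the extracted client certificate;
the file name here is a placeholder:

```
openssl x509 -in Client_XXX.pem -noout -enddate
```

which prints a `notAfter=` line with the expiry timestamp.)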

Without the RG in the path, the next speed test was *much* better:
http://www.dslreports.com/speedtest/65268235.

## IPv6 Support

During my initial test, _Toad_ was assigned a couple of IPv6 addresses
(presumably one via SLAAC and one via DHCPv6). Once I eliminated the RG and
connected the USG directly, no IPv6 addresses were assigned to the USG or to
its clients. Based on various posts on the web, I believe this is a "known
issue" with AT&T: apparently, there is a 2-week period after changing devices
during which IPv6 address assignment is unavailable. This seems to be because
the DHCPv6 lease is assigned with a 14-day lifetime, and only a single lease
is allowed per customer connection. I will try again to get IPv6 working in a
couple of weeks, then.

## Self-Hosting

So far, after switching to AT&T Internet, I have not noticed any issues with
my self-hosted services. I have confirmed that all of my websites (including
Nextcloud, Bitwarden, Gitea, etc.) are working, as is my WireGuard VPN.

Before I signed up with AT&T, I read the Terms of Service and the Acceptable
Use Policy. Obtuse legalese notwithstanding, I did not see anything that
forbade self-hosted services, so I imagine there will not be any issues going
forward, especially with the crappy RG out of the way.

+++
title = 'Cancel Time Warner/Spectrum'
date = 2020-08-21T18:23:00-05:00
+++

Called Spectrum to cancel Internet service. As I expected, the representative tried *very* hard to get me to change my mind. I told her I was frustrated that

1. My bill has gone up $10/month every year, and it is now $109/month for 300 Mbit/sec service
2. The same service isn't even available anymore to new customers, but 400 Mbit/sec is available for less
3. AT&T is offering 1000 Mbit/sec (symmetrical) for $59/month

She was very concerned because I have been a Time Warner customer for 13 years! She spent quite a long time searching for options, and finally offered me 400 Mbit/sec service for $69/month. I rejected this proposal, since AT&T is so much cheaper. I also asked for a reimbursement of the extra $10/month for the last 5 months; the price changed from $99/month to $109/month in April. She said that would not be possible, so I said there was no way I would continue being a Spectrum customer in that case.

Service is supposed to be terminated 3 September 2020.

I am a bit sad to switch away, having had the same account since 2007. I never really had any trouble with Time Warner's service, and I really hope AT&T can match that reliability.

+++
title = 'Enable PowerShell Remoting with CredSSP using Group Policy'
date = 2012-06-07T04:02:00Z
+++

Windows PowerShell 2.0 has significantly improved the command-line experience for Windows administration, on both servers and clients. What makes it even better, though, is PowerShell Remoting, which uses Windows Remote Management (WinRM) to send commands between PowerShell sessions on different computers. WinRM is an implementation of WS-Management, an open, standardized SOAP-based web services protocol. In many ways, PowerShell Remoting is similar to SSH, although arguably less mature.

# Manual Configuration

## Enable PowerShell Remoting Manually

Enabling PowerShell 2.0 Remoting is simple; just run the following command from an elevated PowerShell session:

```powershell
Enable-PSRemoting -Force
```

Once that's done, you can start using it to execute PowerShell commands from a remote host:

```powershell
Invoke-Command -ComputerName $remotehost -ScriptBlock { Write-Host "Hello, world!" }
```

Or, you can open an interactive session on the remote computer:

```powershell
Enter-PSSession -ComputerName $remotehost
```

## Enable CredSSP Manually

CredSSP is a Security Support Provider, introduced with Windows Vista, that enables credential delegation. In other words, it allows the remote host to access the credentials that were used to authenticate the user, and to pass them on to a third host. For example, when connecting to a remote PowerShell session using either basic or Kerberos authentication (the default), the session has no credentials to present to a separate file server. When using CredSSP, however, the session credentials can be passed through to the file server.

To enable CredSSP, both the client and the server must be configured to allow it. To enable CredSSP on the client side, run the following PowerShell command from an elevated session:

```powershell
Enable-WSManCredSSP -Role Client -DelegateComputer $remotehost
```

**Note**: The `DelegateComputer` parameter specifies a list of remote hosts to which the client should be allowed to delegate credentials. It accepts wildcards, such as `*` for all hosts, or `*.mydomain.local` for any host in the `mydomain.local` DNS domain. If you specify a domain, however, you must always use the server's FQDN when connecting to it.

To enable CredSSP on the server side, run the following PowerShell command from an elevated session:

```powershell
Enable-WSManCredSSP -Role Server
```

To connect to a remote host with PowerShell Remoting using CredSSP authentication, you need to specify the `Credential` and `Authentication` parameters:

```powershell
Enter-PSSession -ComputerName $remotehost -Credential (Get-Credential) -Authentication CredSSP
```

**Note**: You must specify a fully-qualified user name (such as `username@domain.tld` or `DOMAIN\username`) when prompted for credentials.

The unfortunate drawback of using CredSSP is that the current implementation of the CredSSP provider for WinRM does not support delegating default credentials (i.e. the current user's credentials). Go vote for [Microsoft Connect Suggestion #498377](https://connect.microsoft.com/PowerShell/feedback/details/498377/credssp-should-allow-delegation-of-default-current-credentials) if this bothers you; hopefully Microsoft will fix it in a future release. In the meantime, it is best to get a `PSCredential` object once and store it in a variable for reuse:

```powershell
$cred = Get-Credential "$env:USERNAME@$env:USERDNSDOMAIN"
```
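
The stored credential can then be reused across calls without re-prompting. For example (the host name and UNC path here are placeholders, chosen to show the second hop working):

```powershell
# Reuse the stored credential; the file server access only works
# because CredSSP delegates the credentials to the remote session
Invoke-Command -ComputerName $remotehost -Credential $cred -Authentication CredSSP `
    -ScriptBlock { Get-ChildItem \\fileserver\share }
```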

# Group Policy Configuration

Enabling PowerShell Remoting and CredSSP manually is fine for one or two hosts, but what if it needs to be done for every machine on a network? Luckily, Group Policy can make all the same configuration changes the `Enable-PSRemoting` and `Enable-WSManCredSSP` cmdlets do.

There are several pieces of configuration that must be set for everything to work correctly:

* The *Windows Remote Management* service
* Windows Firewall exceptions
* Credential delegation
* WinRM Client parameters
* WinRM Service parameters

In addition, some Active Directory objects may need to have their permissions changed.

It is probably best to group these settings into one or two separate GPOs, one for servers and one for clients, to keep them separate from any Group Policy settings that already exist on the network.

## Server Settings

To enable PowerShell Remoting on the server side, create a new GPO and link it to an organizational unit containing the computer objects for the server machines. Open the GPO with the Group Policy editor and set the following options:

### Windows Remote Management Service

1. Navigate to *Computer Configuration > Windows Settings > Security Settings > System Services*
2. Locate the *Windows Remote Management (WS-Management)* service and double-click it
3. Tick the check box next to *Define this policy setting* and select *Automatic*. Click "OK"

### Windows Firewall Exceptions

1. Navigate to *Computer Configuration > Windows Settings > Security Settings > Windows Firewall with Advanced Security > Windows Firewall with Advanced Security - LDAP://{GPO-DistinguishedName} > Inbound Rules*
2. Right-click the pane at the right and choose *New Rule...*
3. Select *Predefined* and choose `Windows Remote Management` from the drop-down list. Click "Next"
4. Remove the tick next to `Windows Remote Management - Compatibility Mode (HTTP-In)`, but leave the one for `Windows Remote Management (HTTP-In)`. The "Compatibility Mode" rule provides an upgrade path for systems using WinRM prior to version 2.0, and should not be enabled unless there is a specific need for it. Click "Next"
5. Select *Allow the connection* and click "Finish"

### WinRM Service Parameters

1. Navigate to *Computer Configuration > Administrative Templates > Windows Components > Windows Remote Management (WinRM) > WinRM Service*
2. Double-click *Allow automatic configuration of listeners*
3. Select *Enabled*
4. In the box labeled *IPv4 filter*, enter a comma-separated list of IP address ranges specifying the IP addresses to which the WinRM service may bind on the server. For example, `192.168.1.0-192.168.1.255` would allow the WinRM service to bind to network adapters with an IP address in that range, but no other adapter
5. Do the same for *IPv6 filter*, using IPv6 addresses instead, or leave it blank to disable WinRM over IPv6
6. Click "OK"
7. Double-click *Allow CredSSP authentication*
8. Select *Enabled*
9. Click "OK"
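
These Administrative Template settings are backed by registry values under the WinRM policy key, so you can spot-check whether a policy refresh has actually applied them. This is a sketch; I believe the value names below are correct, but verify them against your own ADMX templates:

```powershell
# Inspect the WinRM service policy values applied by the GPO
Get-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WinRM\Service" |
    Select-Object AllowAutoConfig, IPv4Filter, IPv6Filter, AllowCredSSP
```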

## Client Settings

To enable PowerShell Remoting on the client side, create a new GPO and link it to an organizational unit containing the computer objects for the client machines. Open the GPO with the Group Policy editor and set the following options:

### Credential Delegation

1. Navigate to *Computer Configuration > Administrative Templates > System > Credentials Delegation*
2. Double-click *Allow Delegating Fresh Credentials*
3. Select *Enabled*
4. Click "Show..."
5. Enter a list of service principal names representing hosts to which clients should be allowed to delegate credentials. Wildcards are allowed in the host name portion of the SPN. For example:
    * `WSMAN/Server01` — Allows delegation only to the server named `Server01`, and only using its single-label name
    * `WSMAN/Server01.mydomain.local` — Allows delegation only to the server named `Server01`, and only using its fully-qualified domain name
    * `WSMAN/*.mydomain.local` — Allows delegation to any host in the `mydomain.local` DNS domain, using their fully-qualified domain names only
    * `WSMAN/*` — Allows delegation to any host by any name
6. Click "OK"
7. Click "OK"

### WinRM Client Parameters

1. Navigate to *Computer Configuration > Administrative Templates > Windows Components > Windows Remote Management (WinRM) > WinRM Client*
2. Double-click *Allow CredSSP authentication*
3. Select *Enabled*
4. Click "OK"
5. Double-click *Trusted Hosts*
6. Select *Enabled*
7. In the box labeled *TrustedHostList*, enter a comma-separated list of hosts the client should trust. Wildcards are allowed, and there is a special `<local>` value meaning "trust all single-label names." For example:
    * `Server01` — Trust only the server named `Server01`, and only using its single-label name
    * `server01.mydomain.local` — Trust only the server named `Server01`, and only using its fully-qualified domain name
    * `*.mydomain.local` — Trust any host in the `mydomain.local` DNS domain, using their fully-qualified domain names only
    * `<local>` — Trust any host by its single-label name
    * `*` — Trust any host by any name
8. Click "OK"
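
Once Group Policy has refreshed, the effective client configuration can be inspected through the `WSMan:` drive from an elevated session:

```powershell
# Show the effective TrustedHosts list and CredSSP client setting
Get-Item WSMan:\localhost\Client\TrustedHosts
Get-Item WSMan:\localhost\Client\Auth\CredSSP
```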

# Troubleshooting

Here are some common error messages and some troubleshooting tips for each:

## Operation timed out

    Enter-PSSession : Connecting to remote server failed with the following error
    message : The WinRM client cannot complete the operation within the time
    specified. Check if the machine name is valid and is reachable over the
    network and firewall exception for Windows Remote Management service is
    enabled. For more information, see the about_Remote_Troubleshooting Help
    topic.

* Can you ping the machine using the same name you used for the `ComputerName` parameter?
* If the settings are defined in Group Policy, has the machine performed a policy refresh? Force one by running `gpupdate /target:computer` with elevated privileges
* Does the machine have the *Windows Remote Management (HTTP-In)* rules enabled in Windows Firewall?
* Is the *Windows Remote Management (WS-Management)* service running on the machine?

## Policy does not allow delegation of user credentials

    Enter-PSSession : Connecting to remote server failed with the following error
    message : The WinRM client cannot process the request. A computer policy does
    not allow the delegation of the user credentials to the target computer. Use
    gpedit.msc and look at the following policy: Computer Configuration ->
    Administrative Templates -> System -> Credentials Delegation -> Allow
    Delegating Fresh Credentials. Verify that it is enabled and configured with
    an SPN appropriate for the target computer. For example, for a target
    computer name "myserver.domain.com", the SPN can be one of the following:
    WSMAN/myserver.domain.com or WSMAN/*.domain.com. For more information, see
    the about_Remote_Troubleshooting Help topic.

* Make sure the name specified in the `ComputerName` parameter matches the SPN specified in the GPO. If the policy specifies a wildcard with a domain name, for example, make sure the `ComputerName` parameter is the fully-qualified domain name of the remote host, not just its single-label name

## The target computer is not trusted

    Enter-PSSession : Connecting to remote server failed with the following error
    message : The WinRM client cannot process the request. A computer policy does
    not allow the delegation of the user credentials to the target computer
    because the computer is not trusted. The identity of the target computer can
    be verified if you configure the WSMAN service to use a valid certificate
    using the following command: winrm set winrm/config/service
    '@{CertificateThumbprint="<thumbprint>"}' Or you can check the Event Viewer
    for an event that specifies that the following SPN could not be created:
    WSMAN/<computerfqdn>. If you find this event, you can manually create the
    SPN using setspn.exe. If the SPN exists, but CredSSP cannot use Kerberos to
    validate the identity of the target computer and you still want to allow the
    delegation of the user credentials to the target computer, use gpedit.msc
    and look at the following policy: Computer Configuration -> Administrative
    Templates -> System -> Credentials Delegation -> Allow Fresh Credentials
    with NTLM-only Server Authentication. Verify that it is enabled and
    configured with an SPN appropriate for the target computer. For example, for
    a target computer name "myserver.domain.com", the SPN can be one of the
    following: WSMAN/myserver.domain.com or WSMAN/*.domain.com. Try the request
    again after these changes. For more information, see the
    about_Remote_Troubleshooting Help topic.

* Make sure the remote host has a *Service Principal Name* starting with `WSMAN` and matching the value specified in the `ComputerName` parameter. To list a host's service principal names, run `setspn -l <computername>` with elevated privileges on a domain controller. If a proper SPN does not exist, try restarting the *Windows Remote Management (WS-Management)* service, and check the *System* event log for event ID 10154. If that event exists, you will need to modify permissions in Active Directory in order for hosts to be able to register their SPNs correctly (see below)
* Make sure you are specifying a fully-qualified user name in the `PSCredential` object passed to the `Credential` parameter (i.e. `DOMAIN\username` or `username@domain.local`)

# Modifying Active Directory Permissions

**Note**: Perform these steps **ONLY** if you receive the "target computer is not trusted" error, Windows Remote Management logs event ID 10154 in the System event log, and `setspn -l` does not list any `WSMAN/...` SPNs for the remote host!

1. Open ADSI Edit
2. Click *Action > Connect to...*
3. Under *Connection Point*, select *Select a well known Naming Context* and choose `Default naming context`
4. Under *Computer*, select *Default (Domain or server that you logged in to)*
5. If your domain controllers support it (i.e. you are running Active Directory Certificate Services), tick *Use SSL-based Encryption*
6. Expand the objects in the tree at the left until you find the container holding the computer object for the server exhibiting the issue, such as `CN=Computers`
7. Right-click the container object and choose *Properties*
8. Click the *Security* tab
9. Click "Advanced"
10. Click "Add..."
11. In the box labeled *Enter the name of the object to select*, enter `NETWORK SERVICE`
12. In the drop-down list labeled *Apply to*, select `Descendant Computer objects`
13. Scroll all the way to the bottom of the *Permissions* list and tick the box in the *Allow* column for `Validated write to service principal name`
14. Tick *Apply these permissions to objects and/or containers within this container only*
15. Click "OK"
16. Click "OK"
17. Click "OK"
18. Repeat steps 6-17 for any container with computer objects for hosts on which PowerShell Remoting is enabled
19. Restart the *Windows Remote Management (WS-Management)* service on the affected hosts
20. Run `setspn -l <computername>` with elevated privileges on a domain controller to verify that the SPN was correctly created

+++
title = 'Growl Notifications in Outlook 2010'
date = 2012-03-31T10:49:00Z
+++

One particular UI feature that I have always felt Windows was missing compared to Linux is a universal notification mechanism. On Linux there is *libnotify*, and most desktop applications that send notifications can use it. On Windows, the closest thing we have is “notification balloons.” More often than not, though, applications either don’t use them or use them poorly (including Windows itself, as admitted by Microsoft in their official [Notifications](http://msdn.microsoft.com/en-us/library/windows/desktop/aa511497.aspx#concepts) documentation).

Enter [Growl for Windows](http://www.growlforwindows.com/). [Growl](http://growl.info/) has (apparently) been around for a while on OS X, but is now gaining popularity on Windows. I’ve used it (and a similar utility called [Snarl](http://snarl.fullphat.net/)) a couple of times in the past, but only recently has it become stable and usable enough that I can leave it permanently installed and running on my Windows computers. Even still, there are a couple of notification features missing, most notably new mail notifications from Outlook 2010.

I spent a bit of time today whipping up a quick VBA script for Outlook to remedy that problem. Here’s how to make it work:

1. Launch Outlook 2010 and hit `Alt`+`F11` on the keyboard to bring up the *Microsoft Visual Basic for Applications* IDE.
2. Click *Insert > Module*
3. Enter this code in the blank window on the right:

Option Explicit
|
||||||
|
Const GrowlNotifyCmd As String = _
|
||||||
|
"""C:\Program Files (x86)\Growl for Windows\growlnotify.exe"""
|
||||||
|
|
||||||
|
Sub GrowlNotify(Item As Outlook.MailItem)
|
||||||
|
Dim MailIcon As String
|
||||||
|
MailIcon = Environ("LocalAppData") + "\Microsoft\Outlook\Mail.png"
|
||||||
|
|
||||||
|
' Register the application with Growl
|
||||||
|
Shell (GrowlNotifyCmd + " /a:Outlook /r:""New Message"" " + _
|
||||||
|
"/ai:" + MailIcon + " .")
|
||||||
|
|
||||||
|
' Send a New Message notification
|
||||||
|
Shell (GrowlNotifyCmd + " /a:Outlook /n:""New Message"" " + _
|
||||||
|
"/t:""New mail from " + Item.SenderName + """ " + _
|
||||||
|
"""" + Item.Subject + """")
|
||||||
|
End Sub
|
||||||
|
|
||||||
|
4. Click *File > Save VbaProject.OTM*
|
||||||
|
5. Close the *Microsoft Visual Basic for Applications* window
|
||||||
|
6. In the main Outlook window, make sure the *Home* ribbon tab is visible
|
||||||
|
7. Click *Rules > Manage Rules & Alerts*
|
||||||
|
8. Click the *New Rule...* button
|
||||||
|
9. Click *Apply rule on messages I receive* (under *Start from a blank rule*)
|
||||||
|
10. Click *Next >*
|
||||||
|
11. Click *Next >*; Click *Yes* when Outlook prompts to confirm the rule will be applied to every message
|
||||||
|
12. Scroll down in the *Step 1: Select action(s)* box and tick *run a script*
|
||||||
|
13. In the *Step 2: Edit the rule description (click an underlined value)* box, click *a script*
|
||||||
|
14. Select *Project1.GrowlNotify* from the *Scripts* box and click *OK*
|
||||||
|
15. Click *Next >*
|
||||||
|
16. Click *Next >*
|
||||||
|
17. In the *Step 1: Specify a name for this rule* box, enter `Growl Notification`
|
||||||
|
18. Click *Finish*
|
||||||
|
19. Click *OK*
|
||||||
|
|
||||||
|
In order for the notifications to have an icon, you'll need to download an image and save it on your computer. I like the [Mail Icon](http://www.veryicon.com/icons/system/sleek-xp-basic/mail-40.html) from the [Sleek XP Basic](http://www.veryicon.com/icons/system/sleek-xp-basic/) set by [deleket](http://deleket.deviantart.com/), but you can pick any icon square icon at least 64x64 px. Save it to `C:\Users\<username>\AppData\Local\Microsoft\Outlook\Mail.png`, or adjust the path in the `MailIcon` constant in the script above.
|
||||||
|
|
||||||
|
|
||||||
|
Edit:
|
||||||
|
After a day or so, this stopped working for me. It turned out to be related to Outlook's macro security settings. In order to get it to run again, I had to add a digital signature to my macro project. You can read more about that at [Digitally sign your macro project](http://office.microsoft.com/en-us/starter-help/digitally-sign-your-macro-project-HA010354312.aspx)
|

+++
title = 'In PHP, false is true!'
date = 2011-12-14T00:20:00Z
+++

One of the many reasons we all need to move on:

```php
<?php

$test = false;
if ($test == 0) {
    $test = 0;
    if ($test == "php sucks") {
        $test = "php sucks";
        if ($test == true) {
            print "false == true";
        }
    }
}

?>
```

Yes, this actually does what you think it does. Try it!

+++
title = "Let's Encrypt Certificates: DNS Blocked"
date = 2020-09-23T23:40:00-05:00
+++

The *certs* Jenkins job has been failing for a while, ever since I blocked
outbound DNS traffic to the Internet. The problem is that `lego` queries DNS
for each domain in the certificate request repeatedly until it sees the
`_acme-challenge` TXT record it created. With DNS traffic blocked, it can
never contact the configured DNS servers (was Cloudflare, now Quad9), so it
just waits until its timeout expires.
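That behavior is essentially a poll-until-deadline loop. A rough shell sketch of the idea, where `check_txt` is a hypothetical stand-in for the DNS query `lego` actually performs:

```shell
# Poll until check_txt succeeds or the deadline passes.
# $1 = timeout in seconds, $2 = poll interval in seconds
wait_for_txt() {
    deadline=$(( $(date +%s) + $1 ))
    while [ "$(date +%s)" -lt "$deadline" ]; do
        if check_txt; then
            return 0    # record is visible
        fi
        sleep "$2"
    done
    return 1            # timed out, as happens when DNS is blocked
}
```

With outbound DNS blocked, the check never succeeds, so the loop simply runs out the clock.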

## Attempt 1: `_acme-challenge` CNAME

At first, I thought the problem was simply that `lego` needed a working DNS
server. I couldn't remember why I configured it to use a third-party server,
so I just disabled that. By default, it uses the same name servers as the
operating system. Unfortunately, I quickly remembered the reason I needed to
use an external DNS server: the internal name servers have different records
for _pyrocufflink.blue_.

I remembered reading about using CNAME records to "redirect" ACME challenges to another domain, so I thought I would try that for _pyrocufflink.blue_:

```
_acme-challenge CNAME 5 _acme-challenge.o-ak4p9kqlmt5uuc.com
```

This _should_ tell Let's Encrypt to look for its TXT record in the
_o-ak4p9kqlmt5uuc.com_ domain instead of the _pyrocufflink.blue_ domain.
Unfortunately, it seems that `lego` does not support this for Namecheap, even
with `LEGO_EXPERIMENTAL_CNAME_SUPPORT=true`.

In any case, I later discovered that this would not have helped.

## Attempt 2: DNS-over-HTTPS Proxy

Since I couldn't get `lego` to work with the CNAME trick, I decided to try
using a DNS-over-HTTPS (DoH) proxy to tunnel DNS queries to an external name
server. I looked at `dnscrypt-proxy` and `cloudflared`, as these were the only
two implementations of DNS-to-DoH proxies I could find. `cloudflared` is
simple and requires no configuration, but it's a 40 megabyte binary.
`dnscrypt-proxy`, on the other hand, is a bit smaller (10 MB), but more
complicated to run. It requires a configuration file and at least one
reference to a list of public resolvers, which it must fetch and load when it
starts up.

I made some modifications to the CI pipeline to support starting and stopping
the DoH proxy, and configured `lego` to send its queries there instead.
Unfortunately, this didn't work, either. It turns out `lego` only uses the
configured name server to find the `NS` records for the domain in question.
Once it gets the names of the authoritative name servers, it sends queries to
them _directly_, NOT through the configured server.

I was able to determine this by watching the network traffic with `tshark` for
both "normal" DNS and DoH-proxied DNS:

```sh
tshark -i any port domain
```

```sh
tshark -i lo -d tcp.port==5053,dns -d udp.port==5053,dns port 5053
```

(port 5053 is where `dnscrypt-proxy` is listening)

I could see `lego` making TXT and NS record requests to `dnscrypt-proxy`, and
then switching to making TXT requests directly to external servers. I am not
sure why it bothers making the initial TXT request, since it does not seem to
care about the result, whether it is correct or not.

## Temporary Solution

I am not sure exactly where to go from here. It seems `lego` is simply
incompatible with strict DNS filtering. I will most likely need to find an
alternate ACME client that:

1. Supports the Namecheap API
2. Works without access to the authoritative name servers
3. Is simple enough to install that it can be run from a Jenkins job

Alternatively, I may investigate
[acme-dns](https://github.com/joohoi/acme-dns). I may be able to combine CNAME
records in the target domains, pointing to a (sub-)domain hosted by
_acme-dns_, to get `lego` to work correctly. I would just have to make sure
that the server is accessible both internally and externally.

In the meantime, I have added firewall rules to allow outbound DNS **to
Namecheap servers only**.
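Those rules amount to "permit port 53 only to a known set of addresses." As an nftables fragment, the idea looks something like this (a sketch only: the addresses are documentation placeholders, not Namecheap's real name servers, and the chain layout depends on the firewall):

```
# Sketch: placeholder addresses stand in for Namecheap's
# authoritative name servers, resolved out-of-band.
define namecheap_ns = { 203.0.113.10, 203.0.113.11 }

table inet filter {
    chain forward {
        type filter hook forward priority 0; policy accept;
        # allow DNS only to the Namecheap servers
        ip daddr $namecheap_ns udp dport 53 accept
        ip daddr $namecheap_ns tcp dport 53 accept
        # block all other outbound DNS
        udp dport 53 drop
        tcp dport 53 drop
    }
}
```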

+++
title = 'Minimalist Gentoo Builds, Revisited'
date = 2014-07-13T04:45:00Z
+++

Last year (wow, time flies), I posted a guide for building minimalist Gentoo systems for embedded devices like the Raspberry Pi using [crossdev](http://www.gentoo.org/proj/en/base/embedded/handbook/?part=1&chap=2#doc_chap2). The process I outlined there, while functional, was unwieldy and confusing, and fell apart if you wanted to build multiple systems with different configurations. I have improved the process somewhat, making it easier to follow and eliminating the need to rebuild the toolchain for each system you want to build.

Since my last post, [Minimalist Gentoo for the Raspberry Pi](@/blog/minimalist-gentoo-for-the-raspberry-pi.md), the package set has changed somewhat:

* app-shells/bash
* app-arch/bzip2
* sys-apps/coreutils
* sys-apps/file
* sys-apps/findutils
* sys-apps/gawk
* sys-apps/grep
* app-arch/gzip
* sys-apps/kbd
* sys-apps/kmod
* sys-apps/less
* sys-apps/openrc
* sys-apps/net-tools
* sys-process/procps
* sys-apps/sed
* sys-apps/shadow
# Staging Areas

As before, cross-compiling takes place in multiple stages. This time, there are only two:

* build root
* deployment root

Again, we'll be using the `buildpkg` *FEATURES* flag, so each package only has to be built once.

## Build Root

The build root is where everything gets installed as the system is being built. This includes all the packages we want, plus their runtime dependencies, and their build dependencies as well.

## Deployment Root

While the build root could theoretically be copied as-is to the final filesystem, it's better to use the binary packages built in the build root and install them in an alternate location. In this way, only the desired packages and their runtime dependencies are installed on the final system, not build dependencies.

# Crossdev

If you already have a crossdev toolchain, you'll probably want to remove it and start over:

```sh
crossdev -C armv6j-hardfloat-linux-gnueabi
```

If you are prompted to recursively remove the directory, say yes.

Create a new toolchain (I still recommend the stable versions):

```sh
crossdev -S -t armv6j-hardfloat-linux-gnueabi
```

# Configuration

The Portage configuration generated by crossdev is broken, so the easiest thing to do is start from scratch.

First, create the Portage configuration directory structure in your build root (I am using `/var/tmp/rpi-build` as my build root, but it can be whatever you want):

```sh
mkdir -p /var/tmp/rpi-build/etc/portage/profile
```

Next, create a `make.defaults` file in the `profile` directory and put the following content in it:

```sh
ARCH=arm
ELIBC=glibc
ACCEPT_KEYWORDS="arm"
FEATURES="-news buildpkg"
USE="arm bindist minimal"
```

This is necessary because Portage EAPI 5 made `ARCH` and `ELIBC` "profile-only" variables, so they can't go in `make.conf` anymore. The contents of `make.conf` are still important, though, so create it next:

```sh
CFLAGS="-Os -mfpu=vfp -mfloat-abi=hard -march=armv6zk -mtune=arm1176jzf-s --sysroot=/var/tmp/rpi-build"
CXXFLAGS="${CFLAGS}"
LDFLAGS="--sysroot=/var/tmp/rpi-build -Wl,--sysroot=/var/tmp/rpi-build -L=/usr/lib"
# Add any other variables you like here, such as MAKEOPTS, USE, etc.
```

**NOTE**: The `CFLAGS` in this example are for a Raspberry Pi. If you are building for something else, like a Beagle Bone, make sure you adjust them accordingly. DO NOT, however, change or remove the `--sysroot` flag, as it is the key to making the whole thing work.
You can also create `package.use`, `package.keywords`, etc. directories in `/var/tmp/rpi-build/etc/portage`, just like you would with a regular system.

Now, tell Portage to use this configuration instead of the default:

```sh
export ROOT=/var/tmp/rpi-build
export PORTAGE_CONFIGROOT=$ROOT
```

# Bootstrapping the Build Root

In order to use the `--sysroot` flag for the compiler/linker, glibc and kernel-headers need to be installed in the target directory. We've created a chicken-and-egg problem by putting `--sysroot` in `make.conf`: if we try to build glibc in the sysroot, it will try to use the glibc already there (which of course doesn't exist yet). Thus, we have to temporarily override `CFLAGS` and `LDFLAGS` to get things going:

```sh
CFLAGS= LDFLAGS= armv6j-hardfloat-linux-gnueabi-emerge --oneshot --noreplace virtual/libc virtual/os-headers
```

This resets the compiler and linker flags back to their defaults. While there's nothing wrong with that, the result will not be as highly optimized as it could be. You may want to specify a more complete `CFLAGS` instead.

# Building Packages

Now that the build root is also a sysroot, building subsequent packages is a cinch:

```sh
armv6j-hardfloat-linux-gnueabi-emerge --usepkg --noreplace --changed-use bash bzip2 ...
```

All of these packages should build successfully. If you add others, you may run into problems. For example, I wanted to build *radvd*, which depends on *libdaemon*. The version of libtool shipped with *libdaemon* is too dumb to understand the sysroot flag, so it fails trying to link with libraries in the non-existent directory `=/usr/lib`. There are a few work-arounds, but the best thing to do is file a [bug](https://bugs.gentoo.org/) and help get the package fixed.

# Deploying

Now that everything has been built and the binary packages have been created, it's time to deploy to the final system (SD card, etc.).

Make sure you follow the necessary instructions for creating and formatting your device's root filesystem (for the Raspberry Pi, see the [Raspberry Pi page on Gentoo Wiki](http://wiki.gentoo.org/wiki/Raspberry_Pi#Preparing_the_SD_card)). I'll assume you've mounted the root filesystem at `/mnt/raspberrypi`:

```sh
export ROOT=/mnt/raspberrypi
```

Create the empty top-level directories:

```sh
mkdir $ROOT/{boot,dev,proc,root,sys,tmp}
```

Install the binary packages that were created before:

```sh
armv6j-hardfloat-linux-gnueabi-emerge --usepkg --ask bash bzip2 ...
```

Copy the GCC runtime library:

```sh
cp /usr/lib/gcc/armv6j-hardfloat-linux-gnueabi/4.7.3/libgcc_s.so.1 $ROOT/lib
```

# Other Bits

The remaining pieces, like clock and timezone settings and the kernel, are still relevant from my previous post. Check out the "Finishing Up" section there.

# Embuilder

I've written a tool called [embuilder](https://bitbucket.org/AdmiralNemo/embuilder) that handles all of this for you. It's not quite finished, and I've not written any documentation at all. It's designed to be able to work with multiple different projects by reading settings from a single configuration file. When I get time, I'll write more about it. For now, here's an example configuration file and usage:

```ini
# Project settings
[embuilder]
name = MyRPi

# Portage configuration
[portage]
arch = arm
ctarget = armv6j-hardfloat-linux-gnueabi
# optional - directory containing extra portage configuration (not make.conf, though)
;configroot = configroot

# A group of packages to install in addition to the basics
[tools]
packages = iproute2 nano dhcpcd eudev dropbear
```

Save the file somewhere. I keep each of my embedded system projects in a separate Mercurial repository. Then, pass the name of the configuration file and the path to the deployment root to the `embuilder` command:

```sh
embuilder myrpi.ini /mnt/raspberrypi
```

That's it. Embuilder will take care of creating the build root, building binary packages, and deploying everything to the deployment root. It has some other neat features, like post-install scripts, overlay files, etc., that I will cover soon.

+++
title = 'Minimalist Gentoo for the Raspberry Pi'
date = 2012-12-17T02:55:00Z
+++

I've spent the last several days working on a minimalist build of Gentoo Linux for my [Raspberry Pi](http://www.raspberrypi.org/). By minimalist, I mean only the absolute smallest set of packages required to boot and log in. I intend to build another [MPD Appliance](https://plus.google.com/u/0/111156619169863248480/posts/TB7pB6xuU5Y), or something similar, with it, so I don't need a full-blown Gentoo installation. The bare minimum packages are:

* busybox
* coreutils
* grep
* findutils
* net-tools
* e2fsprogs
* dosfstools
* module-init-tools
* sed
* file
* less
* kbd
* shadow
* gzip
* bzip2
* procps

In addition, I installed the following to make my life easier:

* bash
* iproute2
* ntp
* vim

# Staging Areas

Cross-compiling the system will take place in three stages:

* Sysroot
* Build root
* Deployment root

We'll take advantage of Portage's `buildpkg` *FEATURES* flag so that we don't have to compile everything three times.

## Sysroot

This stage is where the toolchain will be built and installed. Because of how crossdev works, build-time dependencies of all of our required software will also have to be installed here. The toolchain will look for headers and shared objects in the directory hierarchy under the sysroot when compiling and linking. Unfortunately, that means all the dependencies will end up being installed here as well as in the build root.

## Build Root

This stage is where we'll actually build all the software we want to install on the Raspberry Pi. This intermediate stage is necessary because some software has runtime dependencies on toolchain components like glibc, and we can't install those in the sysroot because it would break the cross-compiling toolchain.

## Deployment Root

This stage is the final destination for the packages we built in the build root: the SD card (or QEMU disk image, if you don't have a Raspberry Pi yet). While it isn't strictly necessary to separate the build and deployment roots, it can make it easier to correct problems that may arise, and speed things up if you're building for more than one device.

# Crossdev

The first thing you'll need to do is set up [Crossdev](http://www.gentoo.org/proj/en/base/embedded/cross-development.xml). The Raspberry Pi's System-on-a-Chip is a [Broadcom BCM2835](http://www.broadcom.com/products/BCM2835), which contains an [ARM1176JZF-S](http://infocenter.arm.com/help/topic/com.arm.doc.ddi0301h/DDI0301H_arm1176jzfs_r0p7_trm.pdf) CPU. I'll be using the GNU standard C library, so the toolchain tuple will be `armv6j-hardfloat-linux-gnueabi`.

```sh
crossdev -S -t armv6j-hardfloat-linux-gnueabi
```

Make sure to use the `-S` option for crossdev; the current unstable versions of the toolchain do not work together, and GCC fails to build.

This will create the "sysroot" stage in `/usr/armv6j-hardfloat-linux-gnueabi`, where we'll install the build-time dependencies for the software we want on the Pi.

# Configuration

Now that we have a working toolchain for cross-compiling, we need to configure Portage to build the software how we want. We'll create a directory structure just like that of `/etc/portage` and fill it with a few important files.

```sh
mkdir -p configroot/etc/portage/
cp /usr/armv6j-hardfloat-linux-gnueabi/etc/portage/make.conf configroot/etc/portage
```

You will probably want to modify the `make.conf` file crossdev produces. For example, I made these changes:

* Remove `~arm` from *ACCEPT_KEYWORDS*
* Add the GCC flags for the Raspberry Pi to *MARCH_TUNE*, per the [RPi Wiki](http://elinux.org/RPi_Software#ARM). Note, `-Ofast` doesn't work very well, and I had trouble compiling several packages with it. `-Os` or `-O4` work just fine, though.
* Add the `cxx`, `unicode`, and `ipv6` *USE* flags. I also removed the `make-symlinks` *USE* flag, because I will also install Bash and a few other packages that collide with Busybox when that's set.

Next, you'll need to pick a profile. Crossdev defaults to the *embedded* profile, which is fine. You could also use the *arch/arm/armv6j* profile, but then you'll need to add some extra variables, like `KERNEL` and `USERLAND`, in `configroot/etc/portage/profile/make.defaults`.

```sh
ln -s /usr/portage/profiles/embedded configroot/etc/portage/make.profile
```

You may also want to make `package.use`, `package.mask`, and/or `package.keywords` directories and populate them to your liking. I had to add `=sys-apps/coreutils-8.20` to `package.mask` due to [GNU Bug #12741](http://debbugs.gnu.org/cgi/bugreport.cgi?bug=12741), for example.
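A mask entry is just a one-line file; for example (the `coreutils` file name under `package.mask` is arbitrary):

```shell
# create the mask directory and add the coreutils mask
mkdir -p configroot/etc/portage/package.mask
echo '=sys-apps/coreutils-8.20' > configroot/etc/portage/package.mask/coreutils
```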

Finally, we need to prevent Portage from installing any of the toolchain components in the SYSROOT (they're already there) while we build the rest of the dependencies. You'll need a `package.provided` directory in `configroot/etc/portage/profile`, and it should contain the full atom for each of the four packages built by crossdev (binutils, gcc, glibc, and linux-headers):

```sh
mkdir -p configroot/etc/portage/profile/package.provided
touch configroot/etc/portage/profile/package.provided/crossdev
```

To find the versions of the cross toolchain, use `equery`:

```sh
equery list 'cross-armv6j-hardfloat-linux-gnueabi/*'
```

Then, put each atom (with the proper category, not the cross category) and version on a separate line in `package.provided/crossdev`. Something like this:

```
sys-devel/binutils-2.22-r1
sys-devel/gcc-4.5.4
sys-libs/glibc-2.15-r3
sys-kernel/linux-headers-3.6
```

Now, Portage won't pull in any of those packages when building the dependency tree for our packages.
|
||||||
|
|
||||||
|
Now, set the PORTAGE_CONFIG environment variable to tell Portage to use the settings in this directory, instead of the one in /usr/armv6j-hardfloat-linux-gnueabi:
|
||||||
|
|
||||||
|
```sh
|
||||||
|
export PORTAGE_CONFIGROOT=${PWD}/configroot
|
||||||
|
```
|
||||||
|
|
||||||
|
# Install Build Dependencies
|
||||||
|
|
||||||
|
No that we've got our cross toolchain, sysroot, and configuration ready to go, it is time to install the build dependencies for our packages. Put the list of packages you want to install in a variable (i.e. `install_pkgs="busybox coreutils …"`), and then install just their dependencies:
|
||||||
|
|
||||||
|
```sh
|
||||||
|
armv6j-hardfloat-linux-gnueabi-emerge --onlydeps --buildpkg --oneshot --ask $install_pkgs
|
||||||
|
```
|
||||||
|
|
||||||
|
# Installing in the Build Root
|
||||||
|
|
||||||
|
Once all the build dependencies are installed, it is time to start the build root stage. The build root will be almost identical to the deployment root, so we'll everything in the build root first, so Portage will build a binary package and speed up the final step.
|
||||||
|
|
||||||
|
Before installing anything, you need to remove the `package.provided/crossdev` file you created earlier. Since we're no longer installing things in the sysroot, we do want any toolchain components to be installed, if necessary.
|
||||||
|
|
||||||
|
```sh
|
||||||
|
rm configroot/etc/portage/profile/package.provided/crossdev
|
||||||
|
```
|
||||||
|
|
||||||
|
Next, set the `ROOT` environment variable to the absolute path of your build root directory:
|
||||||
|
|
||||||
|
```sh
|
||||||
|
export ROOT=/home/dustin/raspberrypi
|
||||||
|
```
|
||||||
|
|
||||||
|
Remember, the build root is **not** your Raspberry Pi's SD card, so don't use that path just yet.
|
||||||
|
|
||||||
|
## Installing baselayout
|
||||||
|
|
||||||
|
Baselayout needs to be installed in two passes. Baselayout needs to be the first package installed on the system, or it will fail to create directories and symbolic links correctly. To ensure it gets installed before anything else, we explicitly install it, without dependencies:
|
||||||
|
|
||||||
|
```sh
|
||||||
|
armv6j-hardfloat-linux-gnueabi-emerge --nodeps --buildpkg --ask baselayout
|
||||||
|
```
|
||||||
|
|
||||||
|
This basically creates an empty directory structure and some symlinks for compatibility. Once that's done, we'll go ahead and install the rest of the base system:
|
||||||
|
|
||||||
|
```sh
|
||||||
|
armv6j-hardfloat-linux-gnueabi-emerge --onlydeps --buildpkg --usepkg --ask baselayout
|
||||||
|
```
|
||||||
|
|
||||||
|
This will pull in the rest of the core system packages, including sysvinit, OpenRC, etc.
|
||||||
|
|
||||||
|
## Installing Selected Packages
|
||||||
|
|
||||||
|
Once baselayout is installed, it is time to install the rest of the core packages:
|
||||||
|
|
||||||
|
```sh
|
||||||
|
armv6j-hardfloat-linux-gnueabi-emerge --buildpkg --usepkg --ask $install_pkgs
|
||||||
|
```
|
||||||
|
|
||||||
|
You'll notice that most of the packages being installed at this point are binaries. That's because we've already compiled them in the sysroot, so we don't need to do it again.
|
||||||
|
|
||||||
|
# Installing in the Deployment Root
|
||||||
|
|
||||||
|
Finally! Now we actually get to install stuff on the SD card!
|
||||||
|
|
||||||
|
Make sure you've partitioned and formatted the SD card correctly. See the [Raspberry Pi Gentoo Wiki Page](http://wiki.gentoo.org/wiki/Raspberry_Pi#Preparing_the_SD_card) for details.
|
||||||
|
|
||||||
|
Mount the SD card partition you've designated as the root partition, and then reset the `ROOT` environment variable to point to it:
|
||||||
|
|
||||||
|
```sh
|
||||||
|
mkdir /mnt/raspberrypi
|
||||||
|
mount /dev/mmcblk0p2 /mnt/raspberrypi
|
||||||
|
export ROOT=/mnt/raspberrypi/
|
||||||
|
```
|
||||||
|
|
||||||
|
Before installing anything, there are a few empty directories we need to make manually. They aren't created by any package, but are critical to boot the system:
|
||||||
|
|
||||||
|
```sh
|
||||||
|
mkdir $ROOT/{boot,dev,proc,root,sys,tmp}
|
||||||
|
```
|
||||||
|
|
||||||
|
Then, install the packages. Everything should just be binary merges at this point, so it won't take too long.
|
||||||
|
|
||||||
|
```sh
|
||||||
|
armv6j-hardfloat-linux-gnueabi-emerge --usepkg --ask $install_pkgs
|
||||||
|
```
|
||||||
|
|
||||||
|
# Finishing Up
|
||||||
|
|
||||||
|
## libgcc_s.so.1
|
||||||
|
|
||||||
|
On ARM, Bash (and possibly other packages) depend on the libgcc runtime. This confused me for a while, because on my other minimalist system (which runs on an Atom 230), I didn't need to install GCC and Bash worked fine. Fortunately, all you need is `libgcc_s.so.1` to make it happy, not the whole GCC installation. You can copy the one from the cross toolchain:
|
||||||
|
|
||||||
|
```sh
|
||||||
|
cp /usr/lib/gcc/armv6j-hardfloat-linux-gnueabi/4.5.4/libgcc_s.so.1 $ROOT/lib/
|
||||||
|
```
|
||||||
|
|
||||||
|
## Time Zone
|
||||||
|
|
||||||
|
You need to set the time zone, just as you would on a full system:
|
||||||
|
|
||||||
|
```sh
|
||||||
|
echo 'America/Chicago' > $ROOT/etc/timezone
|
||||||
|
ln -snf /usr/share/zoneinfo/America/Chicago $ROOT/etc/localtime
|
||||||
|
```
|
||||||
|
|
||||||
|
## Root Password
|
||||||
|
|
||||||
|
Setting the root password can be tricky. Although `passwd` has a `--root` option, it doesn't seem to work in any situation I've tried. Normally, I'd recommend blanking the password and forcing it to be set at first log in, but since the Raspberry Pi has no idea what time it is initially, password expiration doesn't work. Thus, you'll just have to blank the password and hope you remember to set it to something secure on your own.
|
||||||
|
|
||||||
|
```sh
|
||||||
|
sed -i 's/^root:.*/root::::::::/' $ROOT/etc/shadow
|
||||||
|
```
|
||||||
|
|
||||||
|
## Services
|
||||||
|
|
||||||
|
### swclock
|
||||||
|
|
||||||
|
Since the Raspberry Pi has no real-time clock, the *hwclock* service just complains. We'll remove it and add the *swclock* service instead. While not an accurate way of keeping time (setting the clock based on the mtime of a file created at last shutdown), it will hopefully at least get the clock in the right decade.
|
||||||
|
|
||||||
|
```sh
|
||||||
|
rm $ROOT/etc/runlevels/boot/hwclock
|
||||||
|
ln -s /etc/init.d/swclock $ROOT/etc/runlevels/boot/
|
||||||
|
```
|
||||||
|
|
||||||
|
### Network
|
||||||
|
|
||||||
|
If you have a Model B device and intend to use the Ethernet port, you can have it start at boot:
|
||||||
|
|
||||||
|
```sh
|
||||||
|
ln -s net.lo $ROOT/etc/init.d/net.eth0
|
||||||
|
ln -s /etc/init.d/net.eth0 $ROOT/etc/runlevels/default
|
||||||
|
```
|
||||||
|
|
||||||
|
### NTP
|
||||||
|
|
||||||
|
If you installed NTP, you'll want it to start at boot as well, so the time on the device is accurate:

```sh
ln -s /etc/init.d/ntp-client $ROOT/etc/runlevels/default
ln -s /etc/init.d/ntpd $ROOT/etc/runlevels/default
```

## Firmware, Kernel, and Modules

Clone the Raspberry Pi firmware project from
[GitHub](https://github.com/raspberrypi/firmware). This will get you the latest
GPU firmware and bootloader, as well as a precompiled Linux kernel with
modules. You can always compile your own kernel later, if you want.

```sh
git clone git://github.com/raspberrypi/firmware.git
```

Mount the first partition of your SD card and copy the firmware there:

```sh
mount /dev/mmcblk0p1 /mnt/raspberrypi/boot
cp firmware/boot/* /mnt/raspberrypi/boot
```

Copy the pre-compiled kernel modules to the `/lib/` directory on your SD card's
root partition:

```sh
cp -a firmware/modules $ROOT/lib/
```

## /etc/inittab

You may want to make a couple of changes to `/etc/inittab`. First, I don't like
the new *agetty* behavior of clearing the screen before displaying the login
prompt, at least on the first TTY; it makes it difficult to see error messages
during the boot process. To change it, add `--noclear` to the `c1` definition:

```sh
sed -i 's/^c1\(.*\)agetty 38400\(.*\)/c1\1agetty --noclear 38400\2/' $ROOT/etc/inittab
```
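You can sanity-check that substitution against a sample `c1` line first (the line below is illustrative; your inittab's exact fields may differ):

```shell
# Sample inittab line; the runlevel and tty fields are illustrative
printf 'c1:12345:respawn:/sbin/agetty 38400 tty1 linux\n' |
    sed 's/^c1\(.*\)agetty 38400\(.*\)/c1\1agetty --noclear 38400\2/'
```

This prints `c1:12345:respawn:/sbin/agetty --noclear 38400 tty1 linux`, leaving everything but the inserted flag untouched.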

Also, the default inittab sets up a serial console on `/dev/ttyS0`, but that
port doesn't exist on a Raspberry Pi. You can either comment out that line, or
change it to use the UART port on the Pi:

```sh
sed -i 's/ttyS0/ttyAMA0/g' $ROOT/etc/inittab
```

## /etc/fstab

Finally, you need to make sure the `fstab` file in the deployment root is
correct. For the Raspberry Pi, the SD card's block device will always be
`/dev/mmcblk0`. Each partition will be numbered, starting with 1, and prefixed
with a "p"; the first partition is `/dev/mmcblk0p1`, and so on. Make sure you
set the "type" column to `vfat` for the boot partition.
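As a sketch, assuming the two-partition layout used above (boot on `p1`, root on `p2`) and an ext4 root filesystem (use whatever type you actually formatted the root partition with), the `fstab` might look like:

```
/dev/mmcblk0p1   /boot   vfat   noatime   0 2
/dev/mmcblk0p2   /       ext4   noatime   0 1
```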

## That's It...

...but don't get in a hurry! Make sure you sync all filesystem changes before
you remove the SD card from your computer, since SD cards report writes as
complete before actually committing them to the flash.

```sh
sync ; sync ; sync
umount /mnt/raspberrypi/boot
umount /mnt/raspberrypi
```

Now you can safely remove the SD card and pop it in your Raspberry Pi. Congratulations, and good luck!

+++
title = 'Render reStructuredText Directly to Firefox'
date = 2013-01-23T18:13:00Z
+++

[reStructuredText](http://docutils.sourceforge.net/rst.html) is awesome.
Anytime I need to write something in plain text, I mark it up using rST. It
looks nice in plain text form, and can be rendered to HTML for improved
presentation.

Sometimes, as I am working on a document, I'd like to see what it looks like
once rendered to HTML, to make sure I am thinking the same way the computer is.
[Docutils](http://docutils.sourceforge.net/) ships with a nice little script
called `rst2html` that will render an rST document as HTML, either to standard
output or another file. What would be really nice is to be able to immediately
preview the resulting HTML document in Firefox without the intermediate file.
Unfortunately, Firefox doesn't read HTML from standard input, so `rst2html.py
document.rst | firefox` doesn't work.

I've come up with a workaround, however, that works just fine. Using
`base64` and `xargs`, I construct a [Data
URI](http://en.wikipedia.org/wiki/Data_URI_scheme) and instruct Firefox to open
that:

```sh
rst2html.py document.rst | base64 -w 0 | xargs -i firefox "data:text/html;charset=utf8;base64,{}"
```
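If you use this often, the data-URI construction can be wrapped in a small shell function (the `html_to_data_uri` name is my own invention):

```shell
# Read HTML on stdin, print a base64-encoded data: URI
html_to_data_uri() {
    printf 'data:text/html;charset=utf8;base64,%s\n' "$(base64 -w 0)"
}

# Usage: rst2html.py document.rst | html_to_data_uri | xargs firefox
```

Note that `base64 -w 0` (disable line wrapping) is a GNU coreutils option, as in the one-liner above.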

+++
title = 'Samba "ea support" Tips'
date = 2012-08-05T05:18:00Z
+++

At home, I use [Samba](http://www.samba.org/) as one of the methods for
exposing the data on my file server to the rest of my network (the others being
SFTP, FTPS, and NFS). In addition, I use [Folder
Redirection](http://technet.microsoft.com/en-us/library/cc732275.aspx) and
[Offline
Files](http://technet.microsoft.com/en-us/library/gg277982%28v=ws.10%29.aspx)
to keep my _Documents_, _Pictures_, and _Videos_ folders in sync on all of my
machines. In order to maintain the nifty appearance of those special folders,
the file attributes for the `desktop.ini` file in each one must be preserved.
Samba can do this in one of two ways:

* Mapping DOS file attributes to UNIX permissions
* Storing the DOS attributes in [extended file
  attributes](http://www.samba.org/samba/docs/man/manpages-3/smb.conf.5.html#STOREDOSATTRIBUTES)

I prefer the latter method since I also access the files from Linux machines,
so changing the permissions of files is not appropriate.

To enable storing of DOS attributes in extended file attributes, the following
lines must be added to each share definition in `smb.conf`:

```ini
ea support = yes
map hidden = no
map system = no
map archive = no
map readonly = no
store dos attributes = yes
```

In order for it to work, though, the filesystem containing the files must
support extended attributes. My file server uses XFS, which needs no special
mount options. Ext3/4 need the `user_xattr` mount option set.

On occasion, I have noticed that Samba seems to ignore the extended
attribute values. Setting file attributes from Windows does nothing (i.e. the
changes are not saved), and setting the `user.DOSATTRIB` extended attribute
manually with `setfattr` has no effect. In all cases that I have encountered,
this is because Samba encounters a file or directory from which it cannot read
the extended attributes. For me, this has been because I had mounted a
different filesystem that did not support extended attributes on a subdirectory
of a share. Apparently, once Samba encounters one file it cannot read, it stops
processing extended attributes altogether.

The `user.DOSATTRIB` extended attribute contains a bit field indicating the
state of each DOS attribute:

```
Read-Only = 0x1
Hidden = 0x2
System = 0x4
Archive = 0x20
```
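As a sketch, the bit field can be decoded with a little shell arithmetic (the `dos_attrs` helper is my own, not part of Samba):

```shell
# Decode a user.DOSATTRIB value (as reported by getfattr) into attribute names
dos_attrs() {
    v=$(( $1 ))     # accepts 0x-prefixed hex
    out=''
    [ $(( v & 0x1 )) -ne 0 ]  && out="$out Read-Only"
    [ $(( v & 0x2 )) -ne 0 ]  && out="$out Hidden"
    [ $(( v & 0x4 )) -ne 0 ]  && out="$out System"
    [ $(( v & 0x20 )) -ne 0 ] && out="$out Archive"
    printf '%s\n' "${out# }"
}

dos_attrs 0x6    # Hidden System
```

So the `0x6` value shown below for `desktop.ini` means Hidden plus System, which is exactly what Windows sets on those folder-customization files.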

Use the `getfattr` command to view the current attributes:

```
dustin@rigel ~/Documents $ getfattr -n user.DOSATTRIB desktop.ini
# file: desktop.ini
user.DOSATTRIB="0x6"
```

Use the `setfattr` command to manually set a file's attributes:

```
dustin@rigel ~/Documents $ setfattr -n user.DOSATTRIB -v '"0x6"' desktop.ini
```

(Note the escaping of the quotes in the value; this is needed to force the extended attribute to contain a string instead of an integer.)

+++
title = 'The Quest: Introduction'
date = 2011-12-28T04:56:00Z
+++

For as long as I can remember, I have been on a quest to find the perfect media
player for the PC. Obviously, the perfect player for a real audio system is a
well-constructed turntable (like the one I just bought: [Audio-Technica
LP-120-USB](http://www.audio-technica.com/cms/turntables/583f30b3a8662772/index.html),
or the classic [Technics
SL-1200MK2](http://www.panasonic.com/consumer_electronics/technics_dj/prod_specs_sl1200mk2.asp)).
Since I spend most of my time at a computer, and I'm not always in the mood to
flip records, a good digital media player is something I really need.

Several years ago, the options for media players were pretty slim. The first
"media player" I remember using was the _Play_ button on the
[Caddy](http://en.wikipedia.org/wiki/Caddy_%28hardware%29)-based CD-ROM drive
on one of my family's first MS-DOS PCs. Old disc drives had a four-conductor
cable that would run from the drive directly to the sound card for analog audio
playback without software. As software became more sophisticated, applications
were able to control CD playback. The first application I remember using came
with the sound card and was supposed to resemble a stereo system (it had three
"components": a pre-amp that controlled the volume, a player that controlled
tracks, and a pointless "amplifier" window that did nothing). I also remember
using Windows Media Player (probably v6.x) and Winamp (v1.x and 2.x).

I never owned a Windows XP computer (thankfully), but went from Windows 98 to
Linux. In my early Linux days, I used a few players, including XMMS, before
settling on Amarok, which I used until only a couple of years ago. Amarok was
eventually rewritten into a totally different application using version 4 of
the KDE libraries. Once the KDE 3 libraries were removed from Gentoo Linux, I
was forced to begin my full-time search for the perfect media player. To this
day, I consider Amarok 1.4 to be the best media player around.

My standards are pretty high nowadays, and I have a pretty stringent set of
requirements. I've made a little contest out of my search, scoring various
players in several categories. I'll post the full list of requirements in a few
days, and I'll start posting reviews of the players I test as time goes on.

+++
title = 'The Quest: Requirements'
date = 2012-01-04T02:46:00Z
+++

A few days ago, I introduced [the quest](http://dustin.hatch.name/post/14901999875/the-quest-introduction) I've been on for many years: finding the perfect digital media player software. I'm trying to approach my search in an objective way, the same way I would pick a piece of software for any professional purpose.

The first stage in choosing a product (software or otherwise) is to compile a list of requirements that the product must meet. The list should include all the features and functions the product must have, but can include non-critical ones, too. A weight is assigned to each requirement so that the scores will take into account which ones are crucial, and which are just nice.

Here are the requirements I've come up with for the "perfect" media player. I used a weighting scale of 1-10, because I like the metric system.

Free Software
=============

**Weight**: 10

The media player must be released under a free software (i.e. open source) license. Examples include GNU GPL, MPL, EPL, MIT, BSD, etc. Software that is free of monetary cost but not open source (i.e. free as in beer, not free as in speech) does not qualify. This is an “all or nothing” requirement, meaning the only possible scores are 10 and 0.

Album Shuffle
=============

**Weight**: 10

The software must be able to choose a random album from the library or playlist, play every song on the album, and then choose another album. Preferably, the list of albums from which to choose can be limited somehow (by a playlist or filter).

Additionally, the playback should be able to be altered by the user without interrupting this behavior. For example, if the user manually chooses to play a particular album before the automatically selected album is finished playing, the user’s album should be played through and a third album should then be chosen.

Ideally, the random algorithm should choose albums “without replacement,” meaning that once an album has been chosen, it cannot be chosen again until all albums have been played once.

Codec Support
=============

**Weight**: 8

Media are available in many formats, and having to use a different application to play them all is cumbersome and annoying. Converting files from one format to another for daily use is time consuming. To maintain maximum compatibility, at least the following codecs must be supported:

1. MP3
2. Ogg Vorbis
3. FLAC
4. Ogg FLAC
5. AAC

Players using an independently maintained, pluggable decoding system like GStreamer receive higher scores than players that implement the decoding themselves.

Cross Platform
==============

**Weight**: 8

Support for Microsoft Windows Vista and later and for Linux is required. Players written for one platform and ported to the other with reduced functionality will naturally score lower. Support for other platforms (BSD, Mac OS X, etc.) does not affect the score unless both required platforms are equally supported.

Last.fm Scrobbling
==================

**Weight**: 10

Recording song plays to a Last.fm profile is required. Support for various features (including API 2.0, Now Playing, and token authorization) will increase the player’s score. This functionality can be provided by a well-supported plugin or extension without impacting the score. Lower scores will be given to players providing scrobbling with outdated or unmaintained plugins, however.

Last.fm Streaming
=================

**Weight**: 4

Streaming radio from Last.fm is an optional feature. Last.fm subscribers like to get as much for their monthly fee as possible, but having to use a separate application to listen to the streams is a major deterrent in many cases.

Equalizer
=========

**Weight**: 8

It is pretty rare to have an acoustically-perfect system for listening to digital media, so an equalizer can have tremendous value for PC media players. The finer the control (i.e. more bands), the higher the score. Having a collection of preset configurations is a benefit, but not required, since they are rarely useful.

Media Library
=============

**Weight**: 9

For smaller collections of media, opening files directly to play them is fine, but collections numbering in the thousands need a more abstract interface. A good media library should allow selecting media by artist, album, genre, and year. In addition, a powerful search is necessary to allow quick location of a particular work. The library should have a clean, usable UI that scales with enormous libraries. Displaying album art in the library browser can boost the player’s score.

The library should be agnostic to the physical storage location of the media. It should also allow the listener to specify multiple storage locations for media. Additionally, the library should update itself whenever changes are made to the files it tracks. This should be done outside the UI thread so as not to impact usability. Ideally, the library should use the operating system’s native change notification mechanism to keep itself up-to-date when possible.

Remote Control
==============

**Weight**: 6

Often, one finds oneself needing to control media playback (pause, skip, volume control, etc.) when not in front of the PC. The player should allow control from other applications and systems over a local network or the Internet. Using a standardized protocol such as UPnP/DLNA is preferred but not required.

System Performance
==================

**Weight**: 6

This requirement is two-fold: the player must not negatively impact the performance of the rest of the system, and the player must itself remain responsive at all times. This must remain true even in cases of tremendous library size, or when the media is stored remotely (i.e. on a file server). If resampling is required for playback, the player must not consume inappropriate system resources to handle it.

Desktop Notifications
=====================

**Weight**: 4

Players should support, but not enforce, displaying desktop notifications (using Growl, libnotify, or Windows “bubbles”) for certain playback events (such as begin, pause, stop, track change, etc.). Notifications that cannot be disabled, or that use a non-standard or proprietary notification system, will result in a low score.

Gapless Playback
================

**Weight**: 8

Concept albums are commonly produced with tracks that lead directly into one another, as are some other albums. In these cases, a pause, however slight, is inappropriate and detracts from the enjoyment of the media. The listener should not be able to tell when a new track begins. Players that are able to compensate for poorly cut media (i.e. files with silence inserted at the end) will score higher.

Visualizations
==============

**Weight**: 2

Visualizations are cool, but serve no real purpose. A spectrum analyzer and an oscilloscope are pretty much required if visualization support is available, but other “color splash” effects are neat as well.

Lyrics Display
==============

**Weight**: 3

Searching the world wide web for song lyrics is incredibly easy, but having them displayed directly in the media player is even easier. Higher scores for players capable of looking up lyrics in various online databases.

Metadata Editing
================

**Weight**: 5

Simple metadata editing is pretty important for those cases when some minor change needs to be made to a particular file. If the player also supports bulk tag editing (i.e. multiple files at once), it will score higher in this category.

Streaming Playback
==================

**Weight**: 3

Irrespective of the player’s support for Last.fm radio streaming, it should support streaming from various other sources, such as UPnP/DLNA, DAAP, and Shoutcast/Icecast.

Advanced Playback Control
=========================

**Weight**: 5

Fine-tuned control of media playback is necessary for listeners susceptible to mood swings or attention diversion. Key features include “stop after current” and a play queue (after which playback resumes from the library). Other less important features include “stop after x,” where x is any arbitrary item, and a sleep timer (stop after n time units).

CD Playback and Ripping
=======================

**Weight**: 2

Playing a CD is occasionally needed, though it is seldom a part of normal daily listening. Ripping CDs is also trivial and could easily be handled by a separate utility.

Usability
=========

**Weight**: 10

The main user interface must be clean and intuitive. Elements should be movable and resizable within reason. Keyboard shortcuts should be available and editable, but the default configuration should use well-known bindings (such as spacebar to toggle play/pause). Multimedia keys should be supported, including volume control.

Windows 7 taskbar features (including playback control buttons, progress indication, etc.) are not required, but can improve the player’s score.

So there they are, the requirements for the perfect media player. Not an easy test to pass, unfortunately. I'll periodically post how some of the players I've used measure up in the next several posts.

+++
title = 'Using PeerBlock lists on Linux'
date = 2012-10-18T16:49:00Z
+++

On Windows, [PeerBlock](http://www.peerblock.com/) is a firewall of sorts that blocks inbound and outbound communication with hosts based on their inclusion in one of several lists. It is commonly used to block parties that participate in anti-peer-to-peer activities (i.e. suing people for sharing content, even legitimately), advertisements, malware, etc. It blocks more traffic than one might expect, too.

Unfortunately, there isn't a really good alternative for Linux. The original [PeerGuardian](http://sourceforge.net/projects/peerguardian/) for Linux appears to have been revived, but when I tried to use it, it didn't work and was incredibly unstable. The documentation was also pretty terrible. There are a few others, such as *moblock* and *iplist*, that appear to have been dead for quite some time. Some other solutions exist, such as simple scripts that convert block lists into iptables rules, but depending on the size of the list, these would consume so many system resources it would be impossible to use the computer.

Fortunately, the Linux kernel itself actually includes all the capabilities necessary to implement a large blacklist as part of the netfilter framework. We'll use the kernel packet filter and a relatively new feature called "IP sets" to create a high-performance index of the block lists.

For this to work, you'll need to make sure your kernel supports iptables and IP sets:

```
CONFIG_IP_NF_IPTABLES=y
CONFIG_NETFILTER_XT_SET=y
CONFIG_IP_SET=y
CONFIG_IP_SET_HASH_NET=y
```

You could also compile these features as modules and insert them at runtime:

```sh
for mod in ip_tables ip_set xt_set ip_set_hash; do modprobe $mod; done
```

You will also need the `iptables` and `ipset` utilities, as well as several standard command line tools like `curl`, `cut`, `gawk`, `grep`, and `gunzip`.

Next, you'll need to create an IP set to hold the data from a blocklist. For modern kernels, the `hash:net` type works the best:

```sh
ipset create LEVEL1 hash:net maxelem 262144
```

The identifier `LEVEL1` is the name of the set, which will be used later in the firewall rule. Notice the `maxelem` property, which sets the maximum number of elements in the set. The default is 65536; you'll need to increase it if you want to use a large block list. This example is for the [Bluetack Level 1](http://www.iblocklist.com/list.php?list=bt_level1) list, which lists over 250,000 address ranges (totaling over 800 million addresses).

Now that we've got the set created, it's time to populate it. Again, I'm using the Bluetack Level 1 list, but you can use any list you want. You can get several lists from [I-BlockList](http://www.iblocklist.com/). The `p2p` format lists are free, so that's what I'll use in this example.

```sh
curl -L "http://list.iblocklist.com/?list=bt_level1&fileformat=p2p&archiveformat=gz" |
    gunzip |
    cut -d: -f2 |
    grep -E "^[-0-9.]+$" |
    gawk '{print "add LEVEL1 "$1}' |
    ipset restore -exist
```
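To see what the pipeline's parsing stages do, you can run them on a single entry. The `p2p` format is `description:first_ip-last_ip`; the line below is a made-up example, and I've used plain `awk` here, which behaves the same as `gawk` for this script:

```shell
# A made-up p2p-format entry: "description:first_ip-last_ip"
line='Example Range:4.17.135.32-4.17.135.63'
printf '%s\n' "$line" |
    cut -d: -f2 |      # keep only the address range
    grep -E '^[-0-9.]+$' |   # drop malformed lines and headers
    awk '{print "add LEVEL1 "$1}'
```

This prints `add LEVEL1 4.17.135.32-4.17.135.63`, which is exactly the command format `ipset restore` expects.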

You can now view (some of) the contents of your IP set to make sure it worked:

```sh
ipset list LEVEL1 | head
```

You should see something like this:

```
Name: LEVEL1
Type: hash:net
Header: family inet hashsize 131072 maxelem 262144
Size in memory: 5868152
References: 0
Members:
213.17.157.224/28
61.95.132.192/27
184.73.76.96/30
81.58.24.80/29
```

Now it's time to tell the firewall to block hosts on these networks. To do that, we'll use two iptables rules:

```sh
iptables -I INPUT -m set --match-set LEVEL1 src -j DROP
iptables -I OUTPUT -m set --match-set LEVEL1 dst -j DROP
```

Now you have a kernel-level firewall configuration doing exactly what PeerBlock does.

Make sure you save your iptables configuration using your distribution's recommended method. On Gentoo, it's as simple as `rc-service iptables save`.

You'll probably want to have your block list updated automatically. To do that, create a new IP set with a different name (such as `LEVEL1-updated`) and populate it with the same pipeline as before. Then use the `ipset swap LEVEL1 LEVEL1-updated` command to replace the original set with the updated one, and delete the temporary set with `ipset destroy LEVEL1-updated`.

+++
title = 'Vim Key Remapping on Windows'
date = 2011-12-21T01:47:00Z
+++

I've been on a quest to get to know Vim over the past few weeks. I'm making a
little bit of progress, with the help of some of the guys at work, and I've got
myself a nice personal configuration, which I've put in a [Mercurial
repository](http://code.dustin.hatch.name/vimfiles) for portability.

Yesterday, I finally got around to installing gVim on my Windows computer, and
I immediately ran into a snag. I've already trained myself to use `jj` instead
of reaching all the way to the Escape key to exit insert mode, but on my
Windows computer, that didn't work. Instead of switching back to normal mode,
typing `jj` just printed the text `<esc><right>` in the document.

After a bunch of mucking about and uninstalling and re-installing Vim, I
discovered that the issue is not present if I let the installer create the
default `_vimrc` file in `%ProgramFiles(x86)%\Vim`. Further testing revealed
that the command `set nocompatible` was needed in order for key mapping to work
correctly, and it probably fixes other problems that I've yet to encounter.
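For reference, a minimal `_vimrc` sketch with that setting and a mapping like the one I use (the exact mapping is personal preference):

```vim
set nocompatible    " use Vim defaults; required for mappings to behave
inoremap jj <Esc>   " leave insert mode without reaching for Escape
```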

I guess there's really no harm in letting the default `_vimrc` file exist. I
didn't install it at first because I wasn't sure where it would be placed (I
thought it would put it in `%USERPROFILE%`, thus conflicting with my personal
configuration).