Author Archives: Mike Sisk

About Mike Sisk

Principal Engineer in platform services at Cerner. Recovering entrepreneur and startup founder. Previously: Citigroup, ERCOT, Rackspace, Autodesk.

Getting Started with Ansible, Part 0

Ansible is a popular open-source tool to automate common system administration tasks.

There are a few other tools in the automation space, including Chef, Puppet, and SaltStack. These are called configuration management tools, and they can be used to automate everything from mainframes to tiny IoT devices.

Ansible is arguably the most popular of these tools right now, and there are several reasons for that. One is that it’s written in Python, a language that’s exploding in popularity. Another is that the code you write to automate things (called “playbooks”) is a set of simple, declarative files formatted in YAML, so even if you don’t know programming you can still get things done with Ansible. And finally, unlike most other configuration management tools, Ansible doesn’t require an always-running agent on the managed machines; they just need a standard OpenSSH port open and accepting connections.

If you’ve ever worked in a large enterprise shop or had to deal with compliance issues, one less agent to manage on every node is a big deal.
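
We won’t write any playbooks in this post, but just to show what that YAML looks like, here’s a minimal sketch of one (the package and task here are placeholders, not something we’ll actually run in this tutorial):

---
- name: Make sure chrony is installed and running
  hosts: all
  become: true
  tasks:
    - name: Install chrony
      apt:
        name: chrony
        state: present

    - name: Enable and start chrony
      service:
        name: chrony
        state: started
        enabled: true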

For this simple tutorial I’m going to set up and run Ansible on my MacBook, and the remote machines will be the six small Raspberry Pi single-board computers running Linux on my desk.

If you don’t have a Raspberry Pi or two to play with you can use another computer on your network, a cloud server you spin up just to play with, or even a Linux image running in Docker.

Ok, let’s get started on the MacBook.

First, let’s make a directory to hold our Ansible project:

[~/repo] $ mkdir pi_ansible
[~/repo] $ cd pi_ansible

We’re going to use a Python virtual environment to hold all the dependencies we need for Ansible. This creates the virtual environment in our Ansible project directory and sticks all the Python bits and packages inside the env directory:

[~/repo/pi_ansible] $ python3 -m venv env

Ok, now we need to activate the virtual environment. To do that, just source the activate script that was created inside env in the previous step:

[~/repo/pi_ansible] $ source env/bin/activate

Depending on your shell and setup, you’ll probably see your prompt change to indicate you’ve activated the virtual environment. You can check it’s set up correctly by running which pip3 to make sure it’s using the one inside the virtual environment:

(env) [~/repo/pi_ansible] $ which pip3
/Users/mike/repo/pi_ansible/env/bin/pip3

Now we’re ready to install Ansible.

With Ansible version 2.10 there’s a slight change; previously you’d install a single Ansible package and get everything.

Now it’s split into two packages: ansible-base, which includes just the core bits needed for Ansible to run, and ansible, which bundles the community collections of modules and plugins for many common operations.

The reason for this change is that many large sites have unique needs and don’t want all of that extra content. If you’re installing on thousands of servers, all those extra modules add up.
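
If you do go the minimal route with just ansible-base, you’d pull in only the collections you actually need using ansible-galaxy. For example (community.general is just an illustration, not something this tutorial requires):

(env) [~/repo/pi_ansible] $ ansible-galaxy collection install community.general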

For this example we’ll install both packages:

(env) [~/repo/pi_ansible] $ pip3 install ansible-base
Collecting ansible-base
Using cached ansible-base-2.10.2.tar.gz (6.0 MB)
Collecting jinja2
Using cached Jinja2-2.11.2-py2.py3-none-any.whl (125 kB)
Collecting PyYAML
...
(env) [~/repo/pi_ansible] $ pip3 install ansible
Collecting ansible
Using cached ansible-2.10.1.tar.gz (25.9 MB)
Requirement already satisfied: ansible-base<2.11,>=2.10.2 in ./env/lib/python3.9/site-packages (from ansible) (2.10.2)
...

Let’s check the version and make sure Ansible is installed:

(env) [~/repo/pi_ansible] $ ansible --version
ansible 2.10.2
config file = None
configured module search path = ['/Users/mike/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/mike/repo/pi_ansible/env/lib/python3.9/site-packages/ansible
executable location = /Users/mike/repo/pi_ansible/env/bin/ansible
python version = 3.9.0 (v3.9.0:9cf6752276, Oct 5 2020, 11:29:23) [Clang 6.0 (clang-600.0.57)]

This is how the output looks on my Mac. If you’re on a different platform it might be different. Just make sure Ansible runs and returns a version number.

There’s a few more things to do before we take our first step.

Ansible works from an inventory. In large environments you’ll likely make the inventory dynamic and pull information from some database that knows about your setup.

But for smaller environments you’ll just keep an inventory in a file in the Ansible directory. That’s what we’re going to do.

My Raspberry Pi cluster is connected to the network via WiFi and I don’t have them in DNS, so we’re just going to put them in a file by IP address. Let’s call this file hosts.txt and this is what it looks like:

[all]
192.168.1.138
192.168.1.148
192.168.1.117
192.168.1.101
192.168.1.126
192.168.1.114

The first line is a group label, and we’ll just put all the systems under [all]. You could divide things up with groups such as [webservers] or [databases], but for now let’s keep it simple.
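
For reference, a grouped inventory might look something like this (these hostnames are made up, not part of my setup):

[webservers]
web1.example.com
web2.example.com

[databases]
db1.example.com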

Next we need to tell Ansible where to look for the inventory. By default Ansible looks for the inventory in a standard location like /etc/ansible/hosts, but few people use it that way. It’s best to just put the inventory in the same project you use for the playbooks and other configuration files.

The primary configuration file is called ansible.cfg in the project directory. Let’s create it:

[defaults]
inventory = hosts.txt
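
A quick way to confirm Ansible is picking up the inventory from ansible.cfg is to ask it to list the hosts it knows about; it should echo back the six addresses from hosts.txt:

(env) [~/repo/pi_ansible] $ ansible all --list-hosts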

Let’s run ansible in ad hoc mode with the ping module and see how it works:

(env) [~/repo/pi_ansible] $ ansible all -m ping
192.168.1.148 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: mike@192.168.1.148: Permission denied (publickey,password).",
"unreachable": true
}
192.168.1.138 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: mike@192.168.1.138: Permission denied (publickey,password).",
"unreachable": true
}
192.168.1.101 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: mike@192.168.1.101: Permission denied (publickey,password).",
"unreachable": true
}
192.168.1.117 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: mike@192.168.1.117: Permission denied (publickey,password).",
"unreachable": true
}
192.168.1.126 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: mike@192.168.1.126: Permission denied (publickey,password).",
"unreachable": true
}
192.168.1.114 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: mike@192.168.1.114: Permission denied (publickey,password).",
"unreachable": true
}

That’s not right. If you look at the output you can see Ansible trying to log in to the servers with my current MacBook user, but I don’t have that account on the Raspberry Pi systems.

We need to add a remote_user configuration to ansible.cfg and set it to the pi user:

[defaults]
inventory = hosts.txt
remote_user = pi

Let’s try it again:

(env) [~/repo/pi_ansible] $ ansible all -m ping
192.168.1.138 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: pi@192.168.1.138: Permission denied (publickey,password).",
"unreachable": true
}
192.168.1.148 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: pi@192.168.1.148: Permission denied (publickey,password).",
"unreachable": true
}
192.168.1.117 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: pi@192.168.1.117: Permission denied (publickey,password).",
"unreachable": true
}
192.168.1.126 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: pi@192.168.1.126: Permission denied (publickey,password).",
"unreachable": true
}
192.168.1.101 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: pi@192.168.1.101: Permission denied (publickey,password).",
"unreachable": true
}
192.168.1.114 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: pi@192.168.1.114: Permission denied (publickey,password).",
"unreachable": true
}

Ansible is trying to log in with the remote user now, but it’s still not working.

The problem is that the pi user is set up with password authentication only. That’s not going to work for automation.

What we need to do is copy our SSH public key to each of these servers so Ansible can log in without having to use the password.

Note: this assumes you have an SSH key generated on your system. If you need to do that, go here to learn more: https://git-scm.com/book/en/v2/Git-on-the-Server-Generating-Your-SSH-Public-Key

I’m going to use a little shell magic to loop through the inventory and copy the SSH keys to the remote systems.

(env) [~/repo/pi_ansible] $ for x in $(awk NR-1 hosts.txt); do ssh-copy-id pi@$x; done
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/Users/mike/.ssh/id_ed25519.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
pi@192.168.1.138's password: 

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'pi@192.168.1.138'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/Users/mike/.ssh/id_ed25519.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
pi@192.168.1.148's password: 

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'pi@192.168.1.148'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/Users/mike/.ssh/id_ed25519.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
pi@192.168.1.117's password: 

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'pi@192.168.1.117'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/Users/mike/.ssh/id_ed25519.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
pi@192.168.1.101's password: 

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'pi@192.168.1.101'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/Users/mike/.ssh/id_ed25519.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
pi@192.168.1.126's password: 

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'pi@192.168.1.126'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/Users/mike/.ssh/id_ed25519.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
pi@192.168.1.114's password: 

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'pi@192.168.1.114'"
and check to make sure that only the key(s) you wanted were added.

The shell line reads the IP addresses from hosts.txt (the awk bit skips the first line, which is the group label) and runs ssh-copy-id, which copies the SSH public key to each remote machine. You still need to enter the password for each machine, but this will be the last time.
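
If you want to confirm key-based login works before going back to Ansible, you can force ssh to skip password authentication against one of the hosts (using one of my addresses as an example):

(env) [~/repo/pi_ansible] $ ssh -o PasswordAuthentication=no pi@192.168.1.138 hostname

If that prints the Pi’s hostname without prompting for a password, the key copy worked.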

Ok, one more time:

(env) [~/repo/pi_ansible] $ ansible all -m ping
[DEPRECATION WARNING]: Distribution debian 10.4 on host 192.168.1.101 should use /usr/bin/python3, but is
using /usr/bin/python for backward compatibility with prior Ansible releases. A future Ansible release will
default to using the discovered platform python for this host. See
https://docs.ansible.com/ansible/2.10/reference_appendices/interpreter_discovery.html for more information.
This feature will be removed in version 2.12. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
192.168.1.101 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
[DEPRECATION WARNING]: Distribution debian 10.4 on host 192.168.1.126 should use /usr/bin/python3, but is
using /usr/bin/python for backward compatibility with prior Ansible releases. A future Ansible release will
default to using the discovered platform python for this host. See
https://docs.ansible.com/ansible/2.10/reference_appendices/interpreter_discovery.html for more information.
This feature will be removed in version 2.12. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
192.168.1.126 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
[DEPRECATION WARNING]: Distribution debian 10.4 on host 192.168.1.117 should use /usr/bin/python3, but is
using /usr/bin/python for backward compatibility with prior Ansible releases. A future Ansible release will
default to using the discovered platform python for this host. See
https://docs.ansible.com/ansible/2.10/reference_appendices/interpreter_discovery.html for more information.
This feature will be removed in version 2.12. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
192.168.1.117 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
[DEPRECATION WARNING]: Distribution debian 10.6 on host 192.168.1.138 should use /usr/bin/python3, but is
using /usr/bin/python for backward compatibility with prior Ansible releases. A future Ansible release will
default to using the discovered platform python for this host. See
https://docs.ansible.com/ansible/2.10/reference_appendices/interpreter_discovery.html for more information.
This feature will be removed in version 2.12. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
192.168.1.138 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
[DEPRECATION WARNING]: Distribution debian 10.4 on host 192.168.1.148 should use /usr/bin/python3, but is
using /usr/bin/python for backward compatibility with prior Ansible releases. A future Ansible release will
default to using the discovered platform python for this host. See
https://docs.ansible.com/ansible/2.10/reference_appendices/interpreter_discovery.html for more information.
This feature will be removed in version 2.12. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
192.168.1.148 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
[DEPRECATION WARNING]: Distribution debian 10.4 on host 192.168.1.114 should use /usr/bin/python3, but is
using /usr/bin/python for backward compatibility with prior Ansible releases. A future Ansible release will
default to using the discovered platform python for this host. See
https://docs.ansible.com/ansible/2.10/reference_appendices/interpreter_discovery.html for more information.
This feature will be removed in version 2.12. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
192.168.1.114 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}

It’s working! But what’s with that huge [DEPRECATION WARNING]?

This is saying Ansible found both Python 2 and Python 3 on the remote servers and is using Python 2 for backward compatibility. That’s going to change in a future release, and anything that depends on Python 2 will break.

That doesn’t apply to us, so let’s opt in to the future behavior and use Python 3 by adding one more line to ansible.cfg:

[defaults]
inventory = hosts.txt
remote_user = pi
interpreter_python = auto
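
With auto, Ansible uses whatever interpreter it discovers on each host. If you’d rather not depend on discovery at all, you could instead pin the interpreter explicitly, either globally in ansible.cfg or per host in the inventory. A sketch of both (the path is an assumption about where python3 lives on the Pis):

[defaults]
interpreter_python = /usr/bin/python3

or, per host in hosts.txt:

192.168.1.138 ansible_python_interpreter=/usr/bin/python3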

Let’s run it one last time:

(env) [~/repo/pi_ansible] $ ansible all -m ping

192.168.1.138 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
192.168.1.126 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
192.168.1.148 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
192.168.1.117 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
192.168.1.101 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
192.168.1.114 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}

All right! Ansible is now working and reporting SUCCESS with the ping module.

We got Ansible installed and configured at a basic level and you’ve seen a successful run along with a few failures.

In a future post we’ll talk about what modules are, what that fact thing being reported is all about, and how to do some actual automation.

More Rural Internet

A late entrant into the rural home internet game: T-Mobile Home Internet.

Looks good in theory: $50 a month, “truly” unlimited, up to 50 Mbit/sec. It’s GPS locked to your address so you can’t use it as a mobile hotspot.

So far it’s not living up to the promise. There’s a T-Mobile tower under 4 miles from me with a clear line of sight, but the signal strength is poor: only two bars in my office, and I only get four bars and a good signal at the front of the house.

I’m getting about 12 Mbit/sec with the good signal and about 4 Mbit/sec with the poor one, along with a lot of ping jitter and signs of traffic deprioritization during network congestion.

There’s no external antenna connector on it either, so I’m stuck with whatever signal I can get inside the house.

I installed the modem in the dining room at the front of the house and hooked up a TP-Link power line adapter to run an ethernet connection back to my office area where I hooked it up to a Google WiFi base station.

It works, but the quality of the connection is variable. During the weekday it’s not too bad; I can get nearly 20 Mbit/sec. But after 5 PM – and on weekends – it drops off sharply. And ping times are all over the place, ranging from 35 ms to a high of 530 ms.

Rural Internet

One of the joys of living out in the rural hinterlands is a lack of broadband internet.

We have a satellite connection with Viasat, and while it’s fine for casual surfing it’s not usable for work stuff — since it connects to a satellite in geosynchronous orbit (about 10% of the distance to the moon, so light takes a noticeable amount of time to go that far and back), the latency is way too high for video conferencing or VPN.

Until SpaceX gets their Starlink system released (which uses satellites in low-earth orbit so the latency is much reduced) I’m stuck with cellular internet.

I was using the system I had in the RV, but my yearly plan ran out and AT&T decided a 3000% price increase was appropriate during a pandemic. So that’s out. But I did find a service that provides a suitable unlimited data plan on cellular networks for a bit over $100 a month, which is nice since I use about 10GB a day for normal work stuff — it adds up over the month.

One problem is that I’m about 5 miles from my local cell tower, with a lot of trees and hills in the way, so my signal strength is low and my connection isn’t reliable.

But the other day I was out on the deck with a telescope and noticed I have a line of sight to the tower: through a gap in our trees, at just the right spot, I can see the very top of the cell tower peeking over the distant ridge. That’ll work with the proper equipment.

So Amazon delivered some goodies. I got several log-periodic dipole array (LPDA) antennas. These are highly directional, wide-bandwidth antennas tuned to 4G LTE frequencies. The trick with these is the installation — they work best in something called 2×2 MIMO, which establishes two data streams with the tower. The streams are on the same frequency, so for the spatial multiplexing to be effective the antennas must be isolated and configured to provide a low correlation coefficient. The easiest way to do this in a 2×2 system is to use orthogonal polarization; i.e., each antenna is rotated 90 degrees from the other along a parallel axis pointing toward the distant tower. Then the antennas need to be between 0.5 and 2 λ (wavelengths) apart so they don’t interfere with each other.

I’ll have to determine which band I’m using on the tower and do some math to get that measurement, but that’s not too hard; it should be around a foot or so.
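
As a rough sketch of that math, here’s the 0.5–2 λ spacing range for a hypothetical 700 MHz LTE band (band 12-ish); swap in whatever frequency the modem actually reports:

awk 'BEGIN {
  c = 299792458                # speed of light, m/s
  f = 700000000                # assumed carrier frequency, Hz
  lambda = c / f
  printf "wavelength: %.2f m\n", lambda
  printf "spacing: %.2f m to %.2f m\n", 0.5 * lambda, 2 * lambda
}'

That works out to roughly 0.21 to 0.86 m for this assumed band, so “around a foot” lands comfortably in the range.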

After getting it hooked up and the antenna mounted it was time to do a little testing.

Measuring the signal, the RSRP (signal strength) went from -106 dBm to -86 dBm (marginal to excellent) and the RSRQ (signal quality) went from -16 dB to -10 dB (poor to excellent). My actual internet speeds didn’t change much (still around 4 Mbps up and down, which is good enough for out here), but the ping times are down and the connection should be more stable.

I couldn’t figure out a good way to get the thick cables inside the house without doing a lot of drilling, so I just ran the antenna leads under the deck and into a Rubbermaid container on the deck right outside my office. In the container I put the LTE modem, a wifi router, a battery backup, and a Raspberry Pi for monitoring. There was already an electrical outlet on the deck for power, and even though it’s all outside, my desk is close enough that I get a full-strength wifi signal and I didn’t have to make any permanent alterations to the house. I hope I’ll only need this for a year or so until SpaceX gets Starlink operational, as that should be a better fit for my internet needs.

Zero Turn

The accident that cost me a finger happened when I was converting our tractor from mowing mode to regular tractor mode by removing the mow deck and putting the backhoe and loader back on. Since we have a tractor why not use it for everything to save some money on not having a dedicated mower?

Ok, that ended up not really saving us anything.

So today John Deere delivered our new zero turn mower. We also got a set of forks for the loader on the tractor to help us move stuff around the farm. Taking the bucket off the loader to put the forks on looks a lot safer than taking the backhoe on and off, so hopefully my remaining digits will stay that way.

I’ve not had a zero turn mower before — it’s a lot harder to control than I anticipated, more so since we don’t have a lot of flat ground and you often have to bias one side a bit to keep it tracking straight. Looks like I’ll need some time to practice — luckily we have a lot of land to practice on. Unfortunately, since my accident I’ve not done any mowing and the grass is about 2 feet high at the moment, which is a bit more than this residential-class mower is comfortable with. Originally I wanted a diesel zero turn, but $20k just to mow grass wasn’t an option.

Update on Finger Injury

Ok, now that it’s a bit in the past I can talk a little more about that finger injury.

It wasn’t a perfectly clean amputation; the doctor had to do some trimming to clean it up.

To do that the doctor used a familiar tool. I’m not sure what the medical term for it is, but I have one very similar.

I call it a “bolt cutter”.

One of the nurses brings this in and says it’s all they have. The doctor picks up this comically large pair of stainless steel bolt cutters and says it’ll do.

Now I’m drugged up and have a nerve blocker so I can’t feel anything. And I’m not watching the procedure but I’m fully aware.

Several more nurses come in to observe. So there’s three trauma ER nurses and the doctor. One of the nurses moves over a bit and suggests — with a quiet and nervous laugh — the others follow so they don’t get hit by debris.

The doctor takes a snip and it sounds just like using bolt cutters on an actual bolt. There’s a ping as a bit of bone strikes the wall and bounces around; just like cutting a bolt. The nurses are cringing, but I don’t feel a thing. Quite a bit of blood is splurging around at this point, too.

A few more snips and some stitches and the doctor has it done. Over the next hour we have some issues with bleeding but the doctor eventually gets it stopped and all is well.

A month on and the healing is going well.

During a visit this week we discovered a bit of stitching that got left in. The nurse pulled several inches of it out but the pain was a bit intense so he stopped to consult with the doctor.

The doctor came in to check it out and said he thought the healing is going well and we’ll leave the stitch in until it heals a bit more. I’ll need at least one more procedure to remove the nail matrix on that finger; there’s just enough left to grow a little bit of finger nail which is causing some problems and isn’t useful. And we can remove the rest of the stitch at that time.

Night at the ER

I spent most of last night in the ER and we didn’t get home until 2 AM.

Yesterday I was working on our tractor, attaching the backhoe, when the implement shifted and caught the end of my glove, pulling the middle finger of my right hand into the hole for the attachment pin and shearing it off at the first knuckle.

Julie called 911 and an ambulance got here quickly and hauled me to a trauma center in Independence. Shock and adrenaline are amazing things and there was very little pain the whole time. The IV in my wrist was actually more painful than the finger injury.

The “leftovers” were too smushed to be reattached so they cleaned up the wound best they could and sutured it down. We had some issues with bleeding but the ER doc eventually got it stopped.

I’m now home with some heavy-duty pain meds and antibiotics. Monday morning I need to see an orthopedic surgeon about next steps.

Everyone from the EMTs to the nurses and doctors at the trauma center was amazing, even in the midst of COVID-19.

This is the most serious injury I’ve ever had. Last night I got my first IV and first stitches. As you can imagine typing is going to be an issue, especially since I’m a touch typist and I essentially type for a living. Oh, and I need to program a new fingerprint into Touch ID on my Mac; the one I used to use is gone.

RV Internet

The most difficult thing about living in an RV when you work in technology is internet access.

Last time I did this was in 2001 and the best I could do was satellite internet at slow speeds at extreme cost.

Luckily, in the intervening 19 years mobile “cellular” broadband has become a thing in most places, but the industry still isn’t providing affordable bandwidth.

The RV park in Dallas we’re at offers wifi, but even with their $30 a month “premium” service it’s very slow (< 1 Mbit/sec) with frequent times where it just stops working.

I can tether with my AT&T iPhone, but I blew through my 10GB bandwidth cap the first week. We have an “unlimited” data plan, but they throttle that at 22GB, and for tethering it’s cut down to 2G speeds at 10GB. I talked with AT&T and they couldn’t offer a solution with enough data at any cost.

T-Mobile had a special where you could test drive a hotspot for free with 30GB of data, and that got me another three weeks before it hit the cap and became non-functional. I could reactivate it and get 20GB a month for over $100 a month. Not enough data and too expensive.

After a bit of searching I came across this new product.

Our fifth-wheel is a Heartland brand. Heartland is owned by the RV conglomerate Thor Industries, which also owns Airstream. Thor also owns an internet service called Togo and runs the Roadtrippers site, and they just launched an internet access product that was available only with Airstream last year and is now available to anyone. It’s called the Togo RoadLink C2, and it’s essentially a cellular “booster” you install on the roof of your RV that provides a wifi hotspot inside. The cool trick is the service plan: they partnered with AT&T and are offering a “true” unlimited LTE data plan if you prepay for a year of service at $360. That’s $30 a month.

I ordered one and hooked it up to a regulated DC power supply in the house to get it configured and tested.

So far it’s working as advertised. We don’t have an especially strong AT&T signal in the house but it’s still getting 17 Mbit/sec down and 16 up.

It’s fast enough to do the video conferencing and remote desktop stuff I need to do for work.

For now I’ll probably just stick the external antenna in an underfloor storage compartment and splice into the wiring for the lights to power it. We have a strong signal at the RV park. If that works out I’ll pay an RV service place to install it on the roof; not sure I’m up to drilling holes in our roof and dealing with all the weatherproofing.

Two Month update

After a few months of use in the RV it’s a complete success. Even with a lot of Disney+, HBO, and Netflix viewing that drove our usage >100GB there’s no sign of throttling and we’re still getting around 14 Mbps in the RV. During commute times it does slow down a bit since we’re near several major freeways in the middle of DFW, but even then it’s still very functional.

December 21st update

A few days ago Togo sent out an email saying AT&T is “retiring” the unlimited data plan and putting in place data plans with absurdly low caps at excessive cost (the highest is 100GB for $300 a month). So, while I’m very happy with the Togo product the cost of the new data plans makes it unusable for most folks. No longer recommended.

Installing Docker on a Raspberry Pi

The Raspberry Pi is a fantastic low-cost way to experiment with Docker and Kubernetes.

There are several ways to get the latest version of Docker installed on a Raspberry Pi; you can go with the Hypriot project or a full-fledged GUI-based Raspbian installation.

In my case I have a cluster of six Pis I need to configure for a Kubernetes install, and I want the latest “Buster” release, so I’m going with Raspbian Buster Lite.

After downloading the image you need to burn it to the MicroSD card. On my modern Apple MacBook Pro this isn’t as easy as it used to be since there’s no SD card slot and all you get is USB-C ports. So, yeah, dongle life it is.

I used this Uni brand card adapter and it works perfectly.

There are all kinds of ways to burn the Raspbian image file to the MicroSD card, but I prefer the command-line way.

First, insert the MicroSD card into the reader and do this to see which “disk” device it’s assigned to:

mike@jurassic ~ % diskutil list
/dev/disk0 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *500.3 GB   disk0
   1:                        EFI EFI                     314.6 MB   disk0s1
   2:                 Apple_APFS Container disk1         500.0 GB   disk0s2

/dev/disk1 (synthesized):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      APFS Container Scheme -                      +500.0 GB   disk1
                                 Physical Store disk0s2
   1:                APFS Volume Jurassic - Data         161.0 GB   disk1s1
   2:                APFS Volume Preboot                 87.4 MB    disk1s2
   3:                APFS Volume Recovery                528.5 MB   disk1s3
   4:                APFS Volume VM                      1.1 GB     disk1s4
   5:                APFS Volume Jurassic                10.8 GB    disk1s5

/dev/disk2 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *63.9 GB    disk2
   1:             Windows_FAT_32 boot                    268.4 MB   disk2s1
   2:                      Linux                         63.6 GB    disk2s2

mike@jurassic ~ % 

As you can see, in this case the 64GB MicroSD card I’m using shows up as /dev/disk2. To write to it we need to unmount it. This is different from “ejecting”; we need to unmount the volumes but leave the disk attached to the system so we can write to it.

mike@jurassic ~ % diskutil umountDisk /dev/disk2
Unmount of all volumes on disk2 was successful
mike@jurassic ~ % 

Now we’re ready to write the downloaded Raspbian image to the MicroSD card.

sudo dd bs=1m if=/Users/mike/Downloads/2019-09-26-raspbian-buster-lite.img of=/dev/rdisk2 conv=sync

This will take a while to run, and after it completes the newly written boot partition will automatically mount on the system. Leave it that way for the next step.
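
dd doesn’t print any progress by default. On macOS you can press Ctrl-T in the terminal running dd to get a status line, or send it SIGINFO from another shell:

sudo killall -INFO dd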

I run my Raspberry Pi cluster headless, so instead of hooking up a monitor and keyboard to each Pi to configure it, there are a couple of features you can enable on the newly imaged MicroSD card to streamline its headless setup.

First we need to enable ssh so we can log in remotely. You can do this by creating an empty file named ‘ssh’ on the /boot partition.

Next we need to configure networking. I network my cluster via Wifi, so we can create a file called wpa_supplicant.conf, also on the /boot filesystem; when the Pi boots it’ll copy this file and its contents to the correct location and fire up the network.

mike@jurassic ~ % cd /Volumes/boot 
mike@jurassic boot % touch ssh
mike@jurassic boot % touch wpa_supplicant.conf
mike@jurassic boot % vi wpa_supplicant.conf 

Here’s the contents I use in wpa_supplicant.conf to attach to my wifi. Change the attributes to match your wifi settings.

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=US

network={
    ssid="Cenozoic"
    key_mgmt=WPA-PSK
    psk="trilobite"
}

Ok, now we’re ready to eject the MicroSD card from the Mac and put it in the Pi and power it up.

mike@jurassic ~ % diskutil eject /dev/disk2
Disk /dev/disk2 ejected

After you power up the Pi you should be able to log in to it via ssh. Of course, you’ll need to find out its IP address first. There are several ways to do this in a headless wifi setup: you can check your router’s MAC table or DHCP leases if you have access, or you can use nmap to scan your network for devices with open ssh ports.
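
For the nmap route, something like this (adjust the subnet to match your network, and install nmap if you don’t have it) lists everything answering on the ssh port:

nmap -p 22 --open 192.168.1.0/24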

Once you figure that out and ssh in you’re ready to configure your Pi with Docker.

Log in as the ‘pi’ user and change the default password to something more secure. I usually like to update the OS, too.

passwd
sudo apt update 
sudo apt full-upgrade -y
sudo reboot

After the reboot with an updated system it’s time to get docker installed.

First, we need to install a few packages the Docker install depends on:

sudo apt-get install apt-transport-https ca-certificates software-properties-common -y

Next we need to download and install the docker gpg key:

sudo wget -qO - https://download.docker.com/linux/raspbian/gpg | sudo apt-key add -

Next we need to set up the apt repository to fetch the Docker packages:

sudo echo "deb https://download.docker.com/linux/raspbian/ buster stable" | sudo tee /etc/apt/sources.list.d/docker.list

Now we’re ready to install Docker. The --no-install-recommends flag works around an issue with aufs-dkms you might run into; aufs isn’t needed by the latest versions of Docker, so we can skip it and avoid the issue.

sudo apt-get update
sudo apt-get install docker-ce --no-install-recommends -y

The last step of setup is to allow the pi user to run docker commands without sudo:

sudo usermod -aG docker pi

Log out of the pi account and back in, and you should be good to go. Run docker info to check that Docker is running and is the latest version.
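
A quick end-to-end test is to run the hello-world image (it’s published with ARM builds, so it should work on the Pi):

docker run --rm hello-world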

Some vi tips

A while back, at a previous company, I got hit with a last-minute request to update our Nginx redirect map with data provided in a spreadsheet. Normally I only have to do one or two redirect rules at a time, but this time I got hit with 120 rules, each of which needed data from two cells in the spreadsheet. Doing it manually would require 240 cut-and-paste operations – not fun, and error prone. Oh, and I only had 30 minutes to get this done and up in production.

We do these sorts of redirects when a customer changes their domain name. In this particular case the customer had a number of locations under one domain and they’re now splitting each location out into its own domain. But we don’t want visitors to the sites getting 404s due to the URL changing so we put in redirect rules to rewrite the request and forward the visitor to the new domain.

There’s a bunch of ways to do this, but this is how I did it.

First, a quick explanation of the Nginx redirect map format. There’s not much to it:

old.example.com new.example.com;

That’s the old URL, a space or spaces (or a tab), the new URL, and a semicolon terminating the line.

In this case, each row of the spreadsheet with the redirect data had 7 columns: the name of the property being redirected, plus three URLs that needed redirecting, each with its destination (each location only has three pages, so there are only three redirects apiece). Luckily, the order of the data is just what I needed for the map.

The first thing I did was a CSV export of the data and opened it up in vi.


The CSV export contained title information for the columns I don’t need so let’s just delete those right off.


Ok, now the data is ready to be processed into something I can use. In another stroke of luck we can see each redirect pair is separated by two commas since the spreadsheet contained an empty column between the three pairs for each location. This will make things much easier.


First, let’s get each redirect on its own line in the file. We can search and replace on the domain name being redirected, since it’s the same for all of them, and re-insert it at the start of a new line:

:%s:www.myfavoriteapartment.com:\rwww.myfavoriteapartment.com:g

The % applies the command to all lines in the file; the s is the substitute command; the \r is vi-speak for a newline; and the g at the end is for global, so it processes every match on a line rather than just the first one it finds. By habit I use : for the separator in the search and replace command; you can also (and most people do) use /.


Ok, we’re getting there. Next let’s get rid of the double commas and – since they always come at the end of a redirect destination – put a semicolon at the end, as Nginx requires:

:%s:,,:;:g


Now we need to deal with the first column of data from the spreadsheet – the name of the location. We don’t need this information for the map file, so let’s just delete it. At this point each location name is on its own line and ends with a comma, so let’s find the lines ending with a comma and delete ’em:

:g:,$:d

In this case, the g is for a global operation on all lines; the ,$ matches lines with a , at the end (the $ anchors the end of the line); and the d deletes them.


All we have left is to replace the single remaining comma on each line, which separates the source and destination URLs. Nginx just requires whitespace between them. I typically use a tab (although I probably shouldn’t – it makes the map file look messy), so let’s do this:

:%s:,:^I:g

The ^I is the tab character. Nowadays you can usually just press the tab key and vi will insert the ^I, but in the old days you had to type control-v then control-i to get a literal tab.


And that’s it.


Now all we have to do is save the file and copy and paste its contents into the map file in our production configuration. Of course, you’ll want to scroll through the file to make sure it looks correct, and run an nginx configtest before an nginx reload to make sure the configuration is valid.
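
And before pasting, a quick mechanical check doesn’t hurt either. Something like this (assuming the working file is called redirects.txt, which is just a name I made up) flags any line that doesn’t have exactly two fields ending in a semicolon:

awk 'NF != 2 || $NF !~ /;$/ { print "check line " NR ": " $0 }' redirects.txt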