My First Job

My first job at age 16 was a summer job working at a food processing plant. I worked in the box room and all I did all summer was staple the bottom of boxes and stack them up for the production line.

Each day a different product was being packed. I’d get prep sheets with the proposed production volume and have to figure out how many boxes I’d need to assemble. After that I’d go to the warehouse, get pallets of flattened boxes of the proper size, and wheel those over to the box room.

The first few weeks I was the helper. I’d get the boxes from the pallet, fold them to shape, and hand them to the guy running the stapler. Then I’d take the stapled box and stack it up before grabbing another flat one and handing it off. Eventually you’d get a rhythm going and could really move.

One day, a few weeks after I started, the guy running the staple machine mis-timed and drove a staple through his hand. Oops. After some downtime to get him to the hospital and get all the blood cleaned up we were back in action — I’d just been promoted to run the staple machine and one of the production line guys filled in as my helper so we could finish the day’s production run. The next day I had a new full-time helper to train.

The work was fast-moving and you had to keep up or they’d have to shut the production line down. You didn’t want that.

One day we were really humming and got way ahead of the production line. We filled up every square foot of room we had with boxes ready to go. With no more room available we took off a few hours early, which was normal when we got ahead of production and had more boxes than needed for the day. But somehow my count was off and I got a very irate call from the plant manager asking why we left when there weren’t enough boxes. Oops again.

Being a food processing plant, it was cold on the production floor. The box room was right off that, so it was cold there, too. It might be a hundred degrees outside but inside it was around 40 degrees, so we always had to dress warmly, keeping loose clothing well away from the machinery. And you’d wear layers, since you warmed up once you got going.

The worst days were those when the food prep room next to us was full of folks peeling onions and tossing them into 55-gallon drums for some food product. Even with good ventilation all those onion fumes got to you. And you still had to operate that staple machine to keep the boxes flowing.

I came back the next summer and found out I had been replaced with automation. Now the production line just took a stack of cardboard and made the boxes on the fly, doing all the folding and taping the bottom instead of stapling.

I spent the rest of that summer opening one-gallon cans of Van Camp’s Pork and Beans and straining the beans out. I guess this was the cheapest way for them to get the beans for their "Made in Louisiana-Style Baked Beans" product.

MS-DOS 5.0 and Autodesk Products

My first job in technology was at Autodesk in AutoCAD Product Support, at the old office at 2330 Marinship Way in Sausalito.

At the time AutoCAD support was only through authorized dealers (or direct to the customer only on CompuServe). Most of the dealers were pretty good at figuring things out, so we mostly got only the really interesting support questions on the phone line.

When Microsoft introduced MS-DOS 5.0 in June 1991 our support lines blew up with dealers asking how our products would work with the new support for memory above 640KB.

In response I wrote this letter that we could send to dealers and post on our CompuServe forum. And by coincidence, I wrote that letter exactly 32 years ago as of this posting date.

How time flies!

More Rural Internet

A late entrant into the rural home internet game: T-Mobile Home Internet.

Looks good in theory: $50 a month, “truly” unlimited, up to 50 Mbit/sec. It’s GPS-locked to your address, so you can’t use it as a mobile hotspot.

So far it’s not living up to the promise. There’s a T-Mobile tower under 4 miles from me with a clear line of sight. But the signal strength is poor: only two bars in my office, and four (a good signal) at the front of the house.

I’m getting about 12 Mbit/sec with the good signal and about 4 Mbit/sec with the poor one. There’s a lot of ping jitter, and evidence of traffic deprioritization due to network congestion.

There’s no external antenna connector on it either, so I’m stuck with whatever signal I can get inside the house.

I installed the modem in the dining room at the front of the house and hooked up a TP-Link power line adapter to run an ethernet connection back to my office area where I hooked it up to a Google WiFi base station.

It works, but the quality of the connection is variable. During the weekday it’s not too bad; I can get nearly 20 Mbit/sec. But after 5PM, and on weekends, it drops off sharply. And ping times are all over the place, ranging from 35ms to a high of 530ms.
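
A quick way to see the jitter for yourself is to let ping run for a while and read the summary at the end (8.8.8.8 is just a convenient public target here; any reliable host works):

# send 100 pings and show the packet-loss and min/avg/max summary
ping -c 100 8.8.8.8 | tail -2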

Rural Internet

One of the joys of living out in the rural hinterlands is a lack of broadband internet.

We have a satellite connection with Viasat, and while it’s fine for casual surfing it’s not usable for work stuff. With it connecting to a satellite in geosynchronous orbit (about 10% of the distance to the moon; light takes a noticeable bit of time to go that far and back), the latency is way too high for video conferencing or VPN.
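
The math backs that up: a geosynchronous satellite sits about 35,786 km up, and a ping has to make that trip four times (up and down for the request, up and down for the reply). A quick check with bc, using c ≈ 299,792 km/s:

# best-case round trip, ignoring ground routing and processing delays
echo "scale=3; 4 * 35786 / 299792" | bc    # ≈ .477 seconds

So roughly half a second of latency is baked in before any terrestrial routing even gets counted.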

Until SpaceX gets their Starlink system released (which uses satellites in low-earth orbit so the latency is much reduced) I’m stuck with cellular internet.

I was using the system I had in the RV, but my yearly plan ran out and AT&T decided a price increase of 3000% was appropriate during a pandemic. So that’s out. But I did find a service that provides a suitable unlimited data plan for a bit over $100 a month on cellular networks. Which is nice, since I use about 10GB a day for normal work stuff; that’s roughly 300GB over a month.

One problem is that I’m about 5 miles away from my local cell tower and we have a lot of trees and hills in the way so my signal strength is low and my connection isn’t reliable.

But the other day I was out on the deck with a telescope and noticed I had a line of sight to this tower: through a gap in our trees, from just the right spot, I could see the very top of the cell tower peeking over the distant ridge. That’ll work with the proper equipment.

So Amazon delivered some goodies. I got several log-periodic dipole array (LPDA) antennas. These are highly directional, wide-bandwidth antennas tuned to 4G LTE frequencies. The trick with these is the installation: they work best in something called 2×2 MIMO, establishing two data streams with the tower. The streams are on the same frequency, so for the spatial multiplexing to be effective the antennas must be isolated and configured to provide a low correlation coefficient. The easiest way to do this in a 2×2 system is orthogonal polarization; i.e., each antenna is rotated 90 degrees from the other along a parallel axis pointing toward the distant tower. Then the antennas need to be between 0.5 and 2 λ (wavelengths) apart so they don’t interfere with each other.

I’ll have to determine which band I’m using on the tower and do some math to get that measurement, but that’s not too hard and should be around a foot or so.
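
The formula is just λ = c / f, so the spacing depends on the band. A couple of quick bc calculations (the bands here are guesses until I confirm what the tower actually uses):

# wavelength = speed of light (m/s) / carrier frequency (Hz)
echo "scale=3; 299792458 / 700000000" | bc     # LTE Band 12 (700 MHz): λ ≈ .428 m
echo "scale=3; 299792458 / 1900000000" | bc    # LTE Band 2 (1900 MHz): λ ≈ .157 m

At 700 MHz the 0.5λ to 2λ window works out to roughly 21 cm to 86 cm, so a spacing of around a foot sits comfortably inside it.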

After getting everything hooked up and the antennas mounted it was time to do a little testing.

Measuring the signal, the RSRP (signal strength) went from -106 dBm to -86 dBm (marginal to excellent) and the RSRQ (signal quality) went from -16 dB to -10 dB (poor to excellent). My actual internet speeds didn’t change much (staying around 4 Mbps up and down; good enough for out here) but the ping times are down and the connection should be more stable.

I couldn’t figure out a good way to get the thick cables inside the house without doing a lot of drilling, so I just ran the antenna leads under the deck and into a Rubbermaid container out on the deck right outside my office. In the container I put the LTE modem, a wifi router, a battery backup, and a Raspberry Pi for monitoring. There was already an electrical outlet on the deck for power, and even though it’s all outside, my desk is close enough that I get a full-strength wifi signal and I didn’t have to make any permanent alterations to the house. I hope I’ll only need this for a year or so until SpaceX gets Starlink operational, as that should be a better fit for my internet needs.

RV Internet

The most difficult thing about living in an RV when you work in technology is internet access.

Last time I did this was in 2001 and the best I could do was satellite internet at slow speeds at extreme cost.

Luckily, in the intervening 19 years mobile “cellular” broadband has become a thing in most places, but the industry still isn’t providing affordable bandwidth.

The RV park in Dallas we’re at offers wifi, but even with their $30 a month “premium” service it’s very slow (< 1 Mbit/sec), with frequent stretches where it just stops working.

I can tether with my AT&T iPhone, but I blew through my 10GB bandwidth cap the first week. We have an “unlimited” data plan but they throttle that at 22GB, and for tethering it’s cut down to 2G speeds at 10GB. I talked with AT&T and they couldn’t offer a solution with enough data at any cost.

T-Mobile had a special where you could test-drive a hotspot for free with 30GB of data; that got me another three weeks before it hit the cap and became non-functional. I could reactivate it and get 20GB a month for over $100 a month. Not enough data and too expensive.

After a bit of searching I came across this new product.

Our fifth-wheel is a Heartland brand. They’re owned by the RV conglomerate Thor Industries, which also owns Airstream, an internet service called Togo, and the Roadtrippers site. And they just launched an internet access service that was available only with Airstream last year and is now available to anyone. It’s called the Togo RoadLink C2 and it’s essentially a cellular “booster” you install on the roof of your RV that provides a wifi hotspot inside. The cool trick is the service plan: they partnered with AT&T and are offering a “true” unlimited LTE data plan if you prepay for a year of service at $360. That’s $30 a month.

I ordered one and hooked it up to a regulated DC power supply in the house to get it configured and tested.

So far it’s working as advertised. We don’t have an especially strong AT&T signal in the house but it’s still getting 17 Mbit/sec down and 16 up.

It’s fast enough to do the video conferencing and remote desktop stuff I need to do for work.

For now I’ll probably just stick the external antenna in an underfloor storage compartment and splice into the wiring for the lights to power it; we have a strong signal at the RV park anyway. If that works out I’ll pay an RV service place to install it on the roof. I’m not sure I’m up to drilling holes in our roof and dealing with all the weatherproofing.

Two-month update

After a few months of use in the RV it’s a complete success. Even with a lot of Disney+, HBO, and Netflix viewing driving our usage above 100GB, there’s no sign of throttling and we’re still getting around 14 Mbps in the RV. During commute times it does slow down a bit, since we’re near several major freeways in the middle of DFW, but even then it’s still very functional.

December 21st update

A few days ago Togo sent out an email saying AT&T is “retiring” the unlimited data plan and putting in place data plans with absurdly low caps at excessive cost (the highest is 100GB for $300 a month). So, while I’m very happy with the Togo product the cost of the new data plans makes it unusable for most folks. No longer recommended.

Installing Docker on a Raspberry Pi

The Raspberry Pi is a fantastic low-cost way to experiment with Docker and Kubernetes.

There are several ways to get the latest version of Docker installed on a Raspberry Pi; you can go with the Hypriot project or a full-fledged GUI-based Raspbian installation.

In my case I have a cluster of six Pis to configure for a Kubernetes install, and I want the latest “Buster” release, so I’m going with Raspbian Buster Lite.

After downloading the image you need to burn it to the MicroSD card. On my modern Apple MacBook Pro this isn’t as easy as it used to be since there’s no SD card slot and all you get is USB-C ports. So, yeah, dongle life it is.

I used this Uni brand card adapter and it works perfectly.

There are all kinds of ways to burn the Raspbian image to the MicroSD card, but I prefer the command-line way.

First, insert the MicroSD card into the reader and do this to see which “disk” device it’s assigned to:

mike@jurassic ~ % diskutil list
/dev/disk0 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *500.3 GB   disk0
   1:                        EFI EFI                     314.6 MB   disk0s1
   2:                 Apple_APFS Container disk1         500.0 GB   disk0s2

/dev/disk1 (synthesized):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      APFS Container Scheme -                      +500.0 GB   disk1
                                 Physical Store disk0s2
   1:                APFS Volume Jurassic - Data         161.0 GB   disk1s1
   2:                APFS Volume Preboot                 87.4 MB    disk1s2
   3:                APFS Volume Recovery                528.5 MB   disk1s3
   4:                APFS Volume VM                      1.1 GB     disk1s4
   5:                APFS Volume Jurassic                10.8 GB    disk1s5

/dev/disk2 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *63.9 GB    disk2
   1:             Windows_FAT_32 boot                    268.4 MB   disk2s1
   2:                      Linux                         63.6 GB    disk2s2

mike@jurassic ~ % 

As you can see, in this case the 64GB MicroSD card I’m using is mounted as /dev/disk2. To write to it we need to unmount it. This is different from “ejecting”; we need to unmount the volumes but leave the disk attached to the system so we can write to it.

mike@jurassic ~ % diskutil umountDisk /dev/disk2
Unmount of all volumes on disk2 was successful
mike@jurassic ~ % 

Now we’re ready to write the downloaded Raspbian image to the MicroSD card.

sudo dd bs=1m if=/Users/mike/Downloads/2019-09-26-raspbian-buster-lite.img of=/dev/rdisk2 conv=sync

This will take a while to run, and after it completes the boot partition of the new image will automatically mount on the system. Leave it this way for the next step.

I run my Raspberry Pi cluster headless, so instead of hooking up a monitor and keyboard to each Pi to configure it, there are several features you can enable on the newly imaged MicroSD card to streamline its headless setup.

First we need to enable ssh so we can log in remotely. You can do this by creating an empty file named ‘ssh’ on the /boot partition.

Next we need to configure networking. I network my cluster via wifi, so we can create a file called wpa_supplicant.conf, also on the /boot filesystem; when the Pi boots it’ll copy this file and its contents to the correct location and fire up the network.

mike@jurassic ~ % cd /Volumes/boot 
mike@jurassic boot % touch ssh
mike@jurassic boot % touch wpa_supplicant.conf
mike@jurassic boot % vi wpa_supplicant.conf 

Here’s the contents I use in wpa_supplicant.conf to attach to my wifi. Change the attributes to match your wifi settings.

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=US

network={
    ssid="Cenozoic"
    key_mgmt=WPA-PSK
    psk="trilobite"
}

Ok, now we’re ready to eject the MicroSD card from the Mac and put it in the Pi and power it up.

mike@jurassic ~ % diskutil eject /dev/disk2
Disk /dev/disk2 ejected

After you power up the Pi you should be able to log in to it via ssh. Of course, you’ll need to find out its IP address first. There are several ways to do this in a headless wifi setup: you can check your router’s MAC table or DHCP leases if you have access, or you can use nmap to scan your network for devices with active ssh ports.
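
For example, here’s the nmap approach (assuming your LAN is 192.168.1.0/24; adjust the subnet to match your network):

# scan the local subnet for hosts with the ssh port open
nmap -p 22 --open 192.168.1.0/24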

Once you figure that out and ssh in you’re ready to configure your Pi with Docker.

Log in as the ‘pi’ user and change the default password to something more secure. I usually like to update the OS, too.

passwd
sudo apt update 
sudo apt full-upgrade -y
sudo reboot

After the reboot, with an updated system, it’s time to get Docker installed.

First, we need to install some packages that Docker’s installation depends on:

sudo apt-get install apt-transport-https ca-certificates software-properties-common -y

Next we need to download and install the Docker GPG key:

wget -qO - https://download.docker.com/linux/raspbian/gpg | sudo apt-key add -

Then we need to set up the apt repository to fetch the Docker packages:

echo "deb https://download.docker.com/linux/raspbian/ buster stable" | sudo tee /etc/apt/sources.list.d/docker.list

Now we’re ready to install Docker. The --no-install-recommends flag works around an issue you might run into with the aufs-dkms package; aufs isn’t needed by recent versions of Docker, so we can skip it and avoid the issue.

sudo apt-get update
sudo apt-get install docker-ce --no-install-recommends -y

The last step for setup is to enable the pi user to run docker commands:

sudo usermod -aG docker pi

Log out of the pi account and back in and you should be good to go. Run docker info to check that Docker is running and is the latest version.
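
A quick smoke test looks like this (hello-world is Docker’s stock test image and has an ARM build):

docker info                    # confirm the daemon is up and check the version
docker run --rm hello-world    # pull and run a tiny test container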

s3cmd with Multiple AWS Accounts

A while back I was doing a lot of work involving Amazon’s Simple Storage Service (aka Amazon S3).

And while tools like Panic’s Transmit, the Firefox S3Fox extension, or even Amazon’s own S3 Management Console make it easy to use, sometimes you really just want a command-line tool.

There are a lot of good tools out there, but the one I’ve been using is s3cmd. This tool is written in Python and is well documented. Installation on Linux or OS X is simple, as is its configuration. And as a longtime Unix command-line user I find its syntax simple. Some examples:

To list your buckets:

~ $ s3cmd ls
2010-04-28 23:50 s3://g5-images
2011-01-21 06:42 s3://g5-mongodb-backup
2011-03-21 21:23 s3://g5-mysql-backup
2010-06-03 17:45 s3://g5-west-images
2010-09-02 15:57 s3://g5engineering

List the size of a bucket with “human readable” units:

~ $ s3cmd du -H s3://g5-mongodb-backup
1132G s3://g5-mongodb-backup/

List the contents of a bucket:

~ $ s3cmd ls s3://g5-mongodb-backup
2011-08-08 14:43 3273232889 s3://g5-mongodb-backup/mongodb.2011-08-08-06.tar.gz
2011-08-08 21:12 3290592536 s3://g5-mongodb-backup/mongodb.2011-08-08-12.tar.gz
2011-08-09 03:16 3302734859 s3://g5-mongodb-backup/mongodb.2011-08-08-18.tar.gz
2011-08-09 09:09 3308369423 s3://g5-mongodb-backup/mongodb.2011-08-09-00.tar.gz
2011-08-09 14:51 3285753739 s3://g5-mongodb-backup/mongodb.2011-08-09-06.tar.gz

Show the MD5 hash of an asset:

~ $ s3cmd ls --list-md5 s3://g5-mongodb-backup/mongodb.2011-08-09-06.tar.gz
2011-08-09 14:51 3285753739 07747e3de16138799d9fe1846436a3ce \
s3://g5-mongodb-backup/mongodb.2011-08-09-06.tar.gz

Transferring a file to a bucket uses the get and put commands. And if you forget an option or need a reminder of usage, the very complete s3cmd --help output will likely be all the help you need.
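
For example (the file name here is made up; the bucket is one of mine from above):

~ $ s3cmd put mongodb.2011-08-10-06.tar.gz s3://g5-mongodb-backup/
~ $ s3cmd get s3://g5-mongodb-backup/mongodb.2011-08-10-06.tar.gz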

One problem I have with most tools for AWS is managing multiple accounts. Most of these tools assume you have just one account, but I work with multiple accounts and switching between them can be cumbersome.

Here’s how I work with multiple AWS accounts using s3cmd.

By default s3cmd puts its configuration file in ~/.s3cfg, but you can override this and specify a configuration file with the -c option.

What I do is create a separate config file with the appropriate credentials for each account I work with and give them unique names:

~ $ ls -1 .s3cfg*
.s3cfg-g5
.s3cfg-tcp

Another option is to keep the credentials for the account you use most often in the standard ~/.s3cfg file and use the -c option when/if you need another account. I don’t like this option because it’s too easy to mistakenly use the wrong account. For example, without a ~/.s3cfg, this is what happens when I use s3cmd without specifying a configuration:

~ $ s3cmd ls
ERROR: /Users/mike/.s3cfg: No such file or directory
ERROR: Configuration file not available.
ERROR: Consider using --configure parameter to create one.

So, what to do? Using -c all the time is a PITA. Answer: use Bash aliases!

Here’s a subset of the s3cmd aliases I have in my ~/.bashrc file:

# s3cmd aliases for different s3 accounts
alias s3g5='s3cmd -c ~/.s3cfg-g5'
alias s3tcp='s3cmd -c ~/.s3cfg-tcp'

Now, to list the buckets in my personal account I just do:

~ $ s3tcp ls
2011-07-01 06:10 s3://mikesisk-img
2011-07-05 23:16 s3://www.tcpipranch.com
2011-07-01 22:55 s3://www.watch4rocks.com

And I can still pass arguments:

~ $ s3tcp -H --list-md5 ls s3://mikesisk-img/me.jpg
2011-07-01 06:09 5k 13d7c86bccd8915dd93b085985305394 \
s3://mikesisk-img/me.jpg

Just keep in mind that calls to bash aliases from scripts and cron jobs might not work. Plus it’s bad form and will come back to bite you one of these days. Just use the long form with -c in those places and keep the aliases for your own interactive command-line use.
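
For example, a nightly backup job in a crontab should spell everything out (the paths here are illustrative):

# 2AM sync using an explicit config file instead of an alias
0 2 * * * /usr/local/bin/s3cmd -c /Users/mike/.s3cfg-g5 sync /data/backups/ s3://g5-mongodb-backup/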