More Rural Internet

A late entrant into the rural home internet game: T-Mobile Home Internet.

Looks good in theory: $50 a month, “truly” unlimited, up to 50 Mbit/sec. It’s GPS locked to your address so you can’t use it as a mobile hotspot.

So far it’s not living up to the promise. There’s a T-Mobile tower less than 4 miles from me with a clear line of sight, but the signal strength is poor: only two bars in my office, and four bars (a good signal) at the front of the house.

I’m getting about 12 Mbit/sec with the good signal and about 4 Mbit/sec with the poor. A lot of ping jitter and evidence of traffic deprioritization due to network congestion.

There’s no external antenna connector on it either, so I’m stuck with whatever signal I can get inside the house.

I installed the modem in the dining room at the front of the house and hooked up a TP-Link power line adapter to run an ethernet connection back to my office area where I hooked it up to a Google WiFi base station.

It works, but the quality of the connection is variable. During the weekday it’s not too bad; I can get nearly 20 Mbit/sec. But after 5PM – and on weekends – it drops off sharply. And ping times are all over the place, ranging from 35ms to a high of 530ms.

Rural Internet

One of the joys of living out in the rural hinterlands is a lack of broadband internet.

We have a satellite connection with Viasat, and while it’s fine for casual surfing it’s not usable for work stuff — with it connecting to a satellite in geosynchronous orbit (about 10% of the distance to the moon; light takes a noticeable bit of time to go that far and back), the latency is way too high for video conferencing or VPN.
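The latency floor is easy to estimate: a geosynchronous satellite sits about 35,786 km up, and a round trip traverses that distance four times (up to the satellite, down to the ground station, and back again). A quick back-of-the-envelope check:

```shell
# Speed-of-light minimum for a GEO round trip; real-world latency is
# higher still once modulation, queuing, and terrestrial hops are added.
awk 'BEGIN {
  d = 35786          # GEO altitude in km
  c = 299792.458     # speed of light in km/s
  printf "minimum RTT: %.0f ms\n", (4 * d / c) * 1000
}'
```

That’s nearly half a second before any real-world overhead, which is why interactive things like video calls and VPNs struggle.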

Until SpaceX gets their Starlink system released (which uses satellites in low-earth orbit so the latency is much reduced) I’m stuck with cellular internet.

I was using the system I had in the RV, but my yearly plan ran out and AT&T decided a price increase of 3000% was appropriate during a pandemic. So that’s out. But I did find a service that provides a suitable unlimited data plan for a bit over $100 a month on cellular networks. Which is nice since I use about 10GB a day for normal work stuff — it adds up over the month.

One problem is that I’m about 5 miles away from my local cell tower and we have a lot of trees and hills in the way so my signal strength is low and my connection isn’t reliable.

But the other day I was out on the deck with a telescope and noticed I had a line of sight to this tower: through a gap in our trees, at just the right spot, I can see the very top of the cell tower peeking over the distant ridge. That’ll work with the proper equipment.

So Amazon delivered some goodies. I got several log periodic dipole array (LPDA) antennas. These are highly directional, wide-bandwidth antennas tuned to 4G LTE frequencies. The trick with these is the installation — they work best in something called 2×2 MIMO to establish two data streams with the tower. They’re on the same frequency, so for the spatial multiplexing to be effective the antennas must be isolated and configured to provide a low correlation coefficient. The easiest way to do this in a 2×2 system is to use orthogonal polarization; i.e., each antenna is rotated 90 degrees from the other along a parallel axis pointing toward the distant tower. Then the antennas need to be between 0.5 and 2 λ (wavelengths) apart so they don’t interfere with each other.

I’ll have to determine what band I’m using on the tower and do some math to get that measurement, but that’s not too hard and should be around a foot or so.
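As a sketch of that math — assuming, hypothetically, a low LTE band around 700 MHz (check your modem’s status page for the actual band in use):

```shell
# Wavelength = c / f; the antenna spacing target is 0.5 to 2 wavelengths.
awk 'BEGIN {
  c = 299792458   # speed of light in m/s
  f = 700e6       # assumed carrier frequency in Hz (~700 MHz LTE band)
  l = c / f
  printf "wavelength: %.2f m\n", l
  printf "spacing range: %.2f m to %.2f m\n", 0.5 * l, 2 * l
}'
```

At 700 MHz a half wavelength is about 21 cm, so “around a foot or so” is in the right ballpark; higher bands need correspondingly less separation.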

After getting it hooked up and the antenna mounted it was time to do a little testing.

Measuring the signal, the RSRP (signal strength) went from -106 dBm to -86 dBm (marginal to excellent) and the RSRQ (signal quality) went from -16 dB to -10 dB (poor to excellent). My actual internet speeds didn’t change much (still around 4 Mbps up and down; good enough for out here) but the ping times are down and the connection should be more stable.

I couldn’t figure out a good way to get the thick cables inside the house without doing a lot of drilling, so I just ran the antenna leads under the deck and into a Rubbermaid container out on the deck right outside my office. In the container I put the LTE modem, a wifi router, a battery backup, and a Raspberry Pi for monitoring. There was already an electrical outlet on the deck for power, and even though it’s all outside, my desk is close enough that I get a full-strength wifi signal and I didn’t have to make any permanent alterations to the house. I hope I’ll only need this for a year or so until SpaceX gets Starlink operational, as that should be a better fit for my internet needs.

Zero Turn

The accident that cost me a finger happened when I was converting our tractor from mowing mode to regular tractor mode by removing the mow deck and putting the backhoe and loader back on. Since we have a tractor why not use it for everything to save some money on not having a dedicated mower?

Ok, that ended up not really saving us anything.

So today John Deere delivered our new zero turn mower. We also got a fork set for the loader on the tractor to help us move stuff around the farm. Taking the bucket off the loader to put the forks on looks like it’ll be a lot safer than taking the backhoe on and off, so hopefully my remaining digits will stay that way.

I’ve not had a zero turn mower before — it’s a lot harder to control than I anticipated, more so since we don’t have a lot of flat ground and you often have to bias one side a bit to keep it tracking straight. Looks like I’ll need some time to practice — luckily we have a lot of land to practice on. Unfortunately, since my accident I’ve not done any mowing and the grass is about two feet high at the moment, which is a bit more than this residential-class mower is comfortable with. Originally I wanted a diesel zero turn, but $20k just to mow grass wasn’t an option.

Update on Finger Injury

Ok, now that it’s a bit in the past I can talk a little more about that finger injury.

It wasn’t a perfectly clean amputation; the doctor had to do some trimming to clean it up.

To do that the doctor used a familiar tool. I’m not sure what the medical term for it is, but I have one very similar.

I call it a “bolt cutter”.

One of the nurses brings this in and says it’s all they have. The doctor picks up this comically large pair of stainless steel bolt cutters and says it’ll do.

Now I’m drugged up and have a nerve blocker so I can’t feel anything. And I’m not watching the procedure but I’m fully aware.

Several more nurses come in to observe. So there’s three trauma ER nurses and the doctor. One of the nurses moves over a bit and suggests — with a quiet and nervous laugh — the others follow so they don’t get hit by debris.

The doctor takes a snip and it sounds just like using bolt cutters on an actual bolt. There’s a ping as a bit of bone strikes the wall and bounces around; just like cutting a bolt. The nurses are cringing, but I don’t feel a thing. Quite a bit of blood is spurting around at this point, too.

A few more snips and some stitches and the doctor has it done. Over the next hour we have some issues with bleeding but the doctor eventually gets it stopped and all is well.

A month on and the healing is going well.

During a visit this week we discovered a bit of stitching that got left in. The nurse pulled several inches of it out but the pain was a bit intense so he stopped to consult with the doctor.

The doctor came in to check it out and said he thought the healing is going well and we’ll leave the stitch in until it heals a bit more. I’ll need at least one more procedure to remove the nail matrix on that finger; there’s just enough left to grow a little bit of finger nail which is causing some problems and isn’t useful. And we can remove the rest of the stitch at that time.

Night at the ER

I spent most of last night in the ER and we didn’t get home until 2 AM.

Yesterday I was working on our tractor attaching the backhoe when the implement shifted and caught the end of my glove and pulled the middle finger of my right hand into the hole for the attachment pin and sheared my finger off at the first knuckle.

Julie called 911 and an ambulance got here quickly and hauled me to a trauma center in Independence. Shock and adrenaline are amazing things and there was very little pain the whole time. The IV in my wrist was actually more painful than the finger injury.

The “leftovers” were too smushed to be reattached so they cleaned up the wound best they could and sutured it down. We had some issues with bleeding but the ER doc eventually got it stopped.

I’m now home with some heavy-duty pain meds and antibiotics. Monday morning I need to see an orthopedic surgeon for next steps.

Everyone from the EMTs to the nurses and doctors at the trauma center was amazing, even in the midst of COVID-19.

This is the most serious injury I’ve ever had. Last night I got my first IV and first stitches. As you can imagine typing is going to be an issue, especially since I’m a touch typist and I essentially type for a living. Oh, and I need to program a new fingerprint into Touch ID on my Mac; the one I used to use is gone.

RV Internet

The most difficult thing about living in an RV when you work in technology is internet access.

Last time I did this was in 2001 and the best I could do was satellite internet at slow speeds at extreme cost.

Luckily in the intervening 19 years mobile “cellular” broadband is a thing in most places but the industry still isn’t providing affordable bandwidth.

The RV park in Dallas we’re at offers wifi, but even with their $30 a month “premium” service it’s very slow (< 1 Mbit/sec) with frequent times where it just stops working.

I can tether with my AT&T iPhone, but I blew through my 10GB bandwidth cap the first week. We have an “unlimited” data plan but they throttle that at 22GB, and tethering is cut down to 2G speeds at 10GB. I talked with AT&T and they couldn’t offer a solution with enough data at any cost.

T-Mobile had a special where you could test drive a hotspot for free with 30GB of data; that got me another three weeks before it hit that cap and became non-functional. I could reactivate it and get 20GB a month for >$100 a month. Not enough data and too expensive.

After a bit of searching I came across this new product.

Our fifth-wheel is a Heartland brand. They’re owned by the RV conglomerate Thor Industries, which also owns Airstream, an internet service called Togo, and the Roadtrippers site. And they just launched an internet access service that debuted with Airstream last year and is now available to anyone. It’s called the Togo RoadLink C2 and it’s essentially a cellular “booster” you install on the roof of your RV that provides a wifi hotspot inside. The cool trick is the service plan; they partnered with AT&T and are offering a “true” unlimited LTE data plan if you prepay for a year of service at $360. That’s $30 a month.

I ordered one and hooked it up to a regulated DC power supply in the house to get it configured and tested.

So far it’s working as advertised. We don’t have an especially strong AT&T signal in the house but it’s still getting 17 Mbit/sec down and 16 up.

It’s fast enough to do the video conferencing and remote desktop stuff I need to do for work.

For now I’ll probably just stick the external antenna in an underfloor storage compartment and splice into the wiring for the lights to power it. We have a strong signal at the RV park. If that works out I’ll pay an RV service place to install it on the roof; not sure I’m up to drilling holes in our roof and dealing with all the weatherproofing.

Two Month update

After a few months of use in the RV it’s a complete success. Even with a lot of Disney+, HBO, and Netflix viewing that drove our usage >100GB there’s no sign of throttling and we’re still getting around 14 Mbps in the RV. During commute times it does slow down a bit since we’re near several major freeways in the middle of DFW, but even then it’s still very functional.

December 21st update

A few days ago Togo sent out an email saying AT&T is “retiring” the unlimited data plan and putting in place data plans with absurdly low caps at excessive cost (the highest is 100GB for $300 a month). So, while I’m very happy with the Togo product the cost of the new data plans makes it unusable for most folks. No longer recommended.

Installing Docker on a Raspberry Pi

The Raspberry Pi is a fantastic low-cost way to experiment with Docker and Kubernetes.

There’s several ways to get the latest version of Docker installed on a Raspberry Pi; you can go with the Hypriot project or a full-fledged GUI-based Raspbian installation.

In my case I have a cluster of six Pi I need to configure for a Kubernetes install and I want the latest “Buster” release so I’m going with Raspbian Buster Lite.

After downloading the image you need to burn it to the MicroSD card. On my modern Apple MacBook Pro this isn’t as easy as it used to be since there’s no SD card slot and all you get is USB-C ports. So, yeah, dongle life it is.

I used this Uni brand card adapter and it works perfectly.

There’s all kinds of ways to burn the Raspbian image to the MicroSD card, but I prefer the command-line way.

First, insert the MicroSD card into the reader and do this to see which “disk” device it’s assigned to:

mike@jurassic ~ % diskutil list
/dev/disk0 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *500.3 GB   disk0
   1:                        EFI EFI                     314.6 MB   disk0s1
   2:                 Apple_APFS Container disk1         500.0 GB   disk0s2

/dev/disk1 (synthesized):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      APFS Container Scheme -                      +500.0 GB   disk1
                                 Physical Store disk0s2
   1:                APFS Volume Jurassic - Data         161.0 GB   disk1s1
   2:                APFS Volume Preboot                 87.4 MB    disk1s2
   3:                APFS Volume Recovery                528.5 MB   disk1s3
   4:                APFS Volume VM                      1.1 GB     disk1s4
   5:                APFS Volume Jurassic                10.8 GB    disk1s5

/dev/disk2 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *63.9 GB    disk2
   1:             Windows_FAT_32 boot                    268.4 MB   disk2s1
   2:                      Linux                         63.6 GB    disk2s2

mike@jurassic ~ % 

As you can see, in this case the 64GB MicroSD card I’m using is mounted as /dev/disk2. To write to it we need to unmount it. This is different from “ejecting”; we need to unmount it but still leave it attached to the system so we can write to it.

mike@jurassic ~ % diskutil umountDisk /dev/disk2
Unmount of all volumes on disk2 was successful
mike@jurassic ~ % 

Now we’re ready to write the downloaded Raspbian image to the MicroSD card.

sudo dd bs=1m if=/Users/mike/Downloads/2019-09-26-raspbian-buster-lite.img of=/dev/rdisk2 conv=sync

This will take a while to run, and after it completes the image will automatically mount on the system. Leave it this way for the next step.

I run my Raspberry Pi cluster headless, so instead of hooking up a monitor and keyboard to each Pi to configure it, there’s several features you can enable on the newly imaged MicroSD card to streamline its headless setup.

First we need to enable ssh so we can login remotely. You can do this by creating an empty file named ‘ssh’ on the /boot partition.

Next we need to configure networking. I network my cluster via wifi, so we can create a file called wpa_supplicant.conf, also on the /boot filesystem, and when the Pi boots it’ll copy this file and its contents to the correct location and fire up the network.

mike@jurassic ~ % cd /Volumes/boot 
mike@jurassic boot % touch ssh
mike@jurassic boot % touch wpa_supplicant.conf
mike@jurassic boot % vi wpa_supplicant.conf 

Here’s the contents I use in wpa_supplicant.conf to attach to my wifi. Change the attributes to match your wifi settings.

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=US

network={
    ssid="YourNetworkSSID"
    psk="YourNetworkPassword"
}


Ok, now we’re ready to eject the MicroSD card from the Mac and put it in the Pi and power it up.

mike@jurassic ~ % diskutil eject /dev/disk2
Disk /dev/disk2 ejected

After you power up the Pi you should be able to log in to it via ssh. Of course, you’ll need to find out its IP address. There’s several ways you can do this with a headless wifi setup. You can check your router’s MAC table or DHCP leases if you have access. Or you can use nmap to scan your network and find a list of devices with active ssh ports.
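A hypothetical nmap scan (substitute your own subnet) that lists hosts with ssh open:

```shell
# Scan the local /24 for anything listening on port 22; run it with
# root privileges and the MAC vendor column will identify the Pis
# ("Raspberry Pi Foundation" hardware addresses).
nmap -p 22 --open 192.168.1.0/24
```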

Once you figure that out and ssh in you’re ready to configure your Pi with Docker.

Login as the ‘pi’ user and change the default password to something more secure. I usually like to update the OS, too.

sudo apt update 
sudo apt full-upgrade -y
sudo reboot

After the reboot with an updated system it’s time to get docker installed.

First, we need to install some prerequisite packages Docker needs:

sudo apt-get install apt-transport-https ca-certificates software-properties-common -y

Next we need to download and install the docker gpg key:

wget -qO - https://download.docker.com/linux/raspbian/gpg | sudo apt-key add -

We need to set up the apt channel to fetch the Docker packages:

sudo echo "deb buster stable" | sudo tee /etc/apt/sources.list.d/docker.list

Now, we’re ready to install Docker. The --no-install-recommends flag works around an issue you might run into with the aufs-dkms package; aufs isn’t needed by the latest versions of Docker, so we can skip the recommended packages and avoid the issue.

sudo apt-get update
sudo apt-get install docker-ce --no-install-recommends -y

The last step for setup is to enable the pi user to run docker commands:

sudo usermod -aG docker pi

Log out of the pi account and back in and you should be good to go. Run docker info to check that Docker is running and is the latest version.

Some vi tips

A while back at a previous company I got hit with a last-minute request to update our Nginx redirect map with data provided in a spreadsheet. Normally I only have to do one or two redirect rules at a time. But today I got hit with 120 rules – each of which needed data from two cells in the spreadsheet. Doing it manually would require 240 cut-n-paste operations – not fun and error-prone. Oh, and I only had 30 minutes to get this done and up in production.

We do these sorts of redirects when a customer changes their domain name. In this particular case the customer had a number of locations under one domain and they’re now splitting each location out into its own domain. But we don’t want visitors to the sites getting 404s due to the URL changing so we put in redirect rules to rewrite the request and forward the visitor to the new domain.

There’s a bunch of ways to do this, but this is how I did it.

First, a quick explanation of the Nginx redirect map format. There’s not much to it:

That’s the old URL, a space or spaces (or tab), the new URL and a semicolon terminating the line.
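For illustration, a couple of hypothetical map entries (the real customer URLs are omitted; names here are placeholders):

```nginx
# old URI                      new destination
/locations/downtown/           https://downtown.example.com/;
/locations/downtown/contact/   https://downtown.example.com/contact/;
```

In the actual config these pairs live inside a map block; the details vary by setup.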

In this case, each row of the spreadsheet with the redirect data had 7 columns: the name of the property being redirected, plus three URLs that needed to be redirected, each paired with its destination (each location in this case only has three pages, so there’s only three redirects each). Luckily the order of the data is just what I needed for the map.

The first thing I did was a CSV export of the data and opened it up in vi.


The CSV export contained title information for the columns I don’t need so let’s just delete those right off.


Ok, now the data is ready to be processed into something I can use. In another stroke of luck we can see each redirect pair is separated by two commas since the spreadsheet contained an empty column between the three pairs for each location. This will make things much easier.


First, let’s get each redirect on its own line in the file. We can search and replace for the domain name to be redirected since they’re all the same. We’ll search for the domain and replace it as the first thing on its own line with:
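The command itself didn’t survive the archiving, but given the explanation that follows it would have been of this shape (with a placeholder domain standing in for the real one):

```vim
:%s:www.example.com:\rwww.example.com:g
```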

The % does all lines in the file; the s is for substitute; the \r is vi-speak for newline; and the g at the end is for global so it’ll process every match on a line rather than just the first one it finds. By habit I use : for the separator in the substitute command; you can also (and most people do) use /.


Ok, we’re getting there. Next let’s get rid of the double commas and – since that’s always at the end of the redirect destination – put a semicolon at the end as Nginx requires:
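The original command was lost in the archive, but it would have been something along these lines:

```vim
:%s:,,:;:g
```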



Now we need to deal with the first column of data from the spreadsheet – the name of the location. We don’t need this information for the map file so let’s just delete it. Each location name is on its own line at this point and ends with a comma. So let’s search for lines ending with a comma and delete ‘em:
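Reconstructing the lost command from the explanation below, presumably something like:

```vim
:g/,$/d
```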


In this case, the g is for a global operation on all lines; the ,$ matches all lines with a , at the end of a line (the $); and the d is for delete.


All we have left is to replace the single remaining comma on each line that separates the source and destination URLs. Nginx requires that the separation here be one or more spaces. I typically use a tab (although I probably shouldn’t – it makes the map file look messy) so let’s do this:
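Again reconstructing the lost command (the ^I is a literal tab character):

```vim
:%s:,:^I:
```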


The ^I is the tab character. Nowadays you can usually just press the tab key and vi will put in the ^I, but in the old days you had to do control-v and control-i to get a tab.


And that’s it.


Now all we have to do is save the file and copy and paste its contents into the map file on our production configuration. Of course, you’ll want to scroll through the file and make sure it looks correct, and do an nginx configtest before nginx reload to make sure it’s valid.

Configuring PTR records using the Rackspace Cloud DNS API

This is an archive originally posted on the Rackspace Developer Blog.


This is a guest post by Mike Sisk, a Racker working on DevOps for Rackspace’s new cloud products. You can read his blog or follow @msisk on Twitter.

I’m going to show you how to set up a PTR record in Rackspace Cloud DNS for a Cloud Load Balancer using the command-line utility cURL.

I’m using a Rackspace Cloud production account below; when you try it with your account change the account number and authentication token to yours.

A few notes about cURL: this is a command-line utility found on Macs and most Linux machines. You can also use other tools called “REST clients”; one popular one is an extension for Firefox and Chrome.

Here’s a quick reference to some of the options I’ll be sending with cURL in the examples below:

  • -D

    Dump header to file (- sends to stdout)

  • -H

    Extra header(s) to send

  • -I

    Get header only

  • -s

    silent (no progress bars or errors)

  • -X

    Request method (GET is default)

The first thing we need to do is get an Authentication Token:

$ curl -s -I -H "X-Auth-Key: auth-key-for-account" -H "X-Auth-User: username"

This will return just the headers from the request (note the -I option), and the one we need is the value after “X-Auth-Token:”.
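As a convenience, you can capture the token straight into a shell variable. This is a sketch; $AUTH_URL stands in for the identity endpoint, which didn’t survive in this archive, and $APIKEY/$APIUSER are your credentials:

```shell
# Pull the X-Auth-Token header out of the response and strip the
# trailing carriage return that curl leaves on header lines.
TOKEN=$(curl -s -I -H "X-Auth-Key: $APIKEY" -H "X-Auth-User: $APIUSER" "$AUTH_URL" \
  | awk -F': ' '/^X-Auth-Token:/ { print $2 }' | tr -d '\r')
```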

Let’s list the domain so we can grab its ID (I’ve obscured my Authentication Token below with $TOKEN):

$ curl -s -H "X-Auth-Token: $TOKEN" {"domains":[{"name":"","id":3325158,"created":"2012-07-20T18:04:11.000+0000","accountId":636983,"updated":"2012-07-31T18:40:22.000+0000","emailAddress":""},{"name":"","id":3174423,"created":"2012-02-28T19:26:35.000+0000","accountId":636983,"updated":"2012-07-21T21:58:41.000+0000","emailAddress":""},{"name":"","id":3314413,"comment":"This is a test domain created via the Cloud DNS API","created":"2012-07-11T14:54:00.000+0000","accountId":636983,"updated":"2012-07-11T14:54:03.000+0000","emailAddress":""}],"totalEntries":3}

Ok, that’s a little hard to read. But I can see the domain ID in question I need.

$ curl -s -H "X-Auth-Token: $TOKEN" | python -m json.tool
{
    "accountId": 636983,
    "created": "2012-07-20T18:04:11.000+0000",
    "emailAddress": "",
    "id": 3325158,
    "name": "",
    "nameservers": [],
    "recordsList": {
        "records": [],
        "totalEntries": 4
    },
    "ttl": 300,
    "updated": "2012-07-31T18:40:22.000+0000"
}

The default output of the API is JSON, but it also supports XML if you send it another header. The little bit of Python on the end just sends the output through a parser to format it for humans. You can see I have already created an A record for the www address. The PTR we’re creating below is essentially the reverse of that, mapping the IP address back to the hostname.

Through the control panel I had already created a Cloud Load Balancer on this account. We need its ID for the PTR record, so let’s list the load balancer through its API:

$ curl -s -H "Accept: application/json" -H "X-Auth-Token: $TOKEN" | python -m json.tool
{
    "loadBalancer": {
        "algorithm": "LEAST_CONNECTIONS",
        "cluster": {
            "name": ""
        },
        "connectionLogging": {
            "enabled": false
        },
        "contentCaching": {
            "enabled": false
        },
        "created": {
            "time": "2012-07-31T18:36:34Z"
        },
        "id": 47219,
        "name": "Test1",
        "nodes": [],
        "port": 80,
        "protocol": "HTTP",
        "sourceAddresses": {
            "ipv4Public": "",
            "ipv4Servicenet": "",
            "ipv6Public": "2001:4800:7901::9/64"
        },
        "status": "ACTIVE",
        "updated": {
            "time": "2012-07-31T18:36:44Z"
        },
        "virtualIps": []
    }
}

Ok, that returns a lot of stuff. The thing we need from this output is the ID, 114445 and the ipv4 Public IP,

Now that we’ve collected all the required information, the next step is calling the Cloud DNS API with a POST operation to create the PTR record and associate it with the Load Balancer. First thing I did was create the following JSON data in a text editor:

"recordsList" : {
"records" : [ {
"name" : "",
"type" : "PTR",
"data" : "",
"ttl" : 56000
}, {
"name" : "",
"type" : "PTR",
"data" : "2001:4800:7901:0000:290c:0b6b:0000:0001",
"ttl" : 56000
} ]
"link" : {
"content" : "",
"href" : "",
"rel" : "cloudLoadBalancers"

This lists all the data we need to create the PTR record. In this example I also added the IPV6 address, too. Let’s give it a try:

$ curl -D - -X POST -d '{
  "recordsList" : {
    "records" : [ {
      "name" : "",
      "type" : "PTR",
      "data" : "",
      "ttl" : 56000
    }, {
      "name" : "",
      "type" : "PTR",
      "data" : "2001:4800:7901:0000:290c:0b6b:0000:0001",
      "ttl" : 56000
    } ]
  },
  "link" : {
    "content" : "",
    "href" : "",
    "rel" : "cloudLoadBalancers"
  }
}' -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN"

I just typed in the command and pasted the above JSON in between the single-quote marks after the -d. In the Cloud DNS API, commands that create stuff are asynchronous — what we get back is a URL to check to see the status of the job.

This is what we get back from the above POST:

{"request":"{\n "recordsList" : {\n "records" : [ {\n "name" : "",\n "type" : "PTR",\n "data" : "",\n "ttl" : 56000\n }, {\n "name" : "",\n "type" : "PTR",\n "data" : "2001:4800:7901:0000:290c:0b6b:0000:0001",\n "ttl" : 56000\n } ]\n },\n "link" : {\n "content" : "",\n "href" : "",\n "rel" : "cloudLoadBalancers"\n }\n}","status":"INITIALIZED","verb":"POST","jobId":"f99c1203-b42a-44c4-885f-c80bd7e3aba0","callbackUrl":"","requestUrl":""}

Let’s check the job status:

$ curl -s -H "X-Auth-Token: $TOKEN" | python -m json.tool
{
    "callbackUrl": "",
    "jobId": "f99c1203-b42a-44c4-885f-c80bd7e3aba0",
    "status": "COMPLETED"
}

These jobs complete quickly, so by the time you check it’s usually done. If the system is under a lot of load, or it’s a really big job (like updating hundreds of records) it might take a few minutes.
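If you’re scripting this, a simple polling loop works; $CALLBACK_URL here is a placeholder for the callbackUrl value returned by the POST:

```shell
# Poll the job status every couple of seconds until it completes.
until curl -s -H "X-Auth-Token: $TOKEN" "$CALLBACK_URL" \
    | grep -q '"status":"COMPLETED"'; do
  sleep 2
done
```

A production script would also want to bail out if the status comes back ERROR rather than looping forever.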

Ok, let’s see what the PTR record looks like. First, keep in mind the PTR record lives with the device, not the domain record. So doing a list of the domain won’t show us the PTR. We have to list it from the device like this:

$ curl -s -H "X-Auth-Token: $TOKEN" | python -m json.tool
{
    "records": []
}

Ok, looks good. Let’s test it:

$ host has address
$ host domain name pointer

Looks good — that’s what we expect to see. To compare, let’s see what it looks like with the root of the domain, which is pointing to a first generation cloud server that doesn’t support PTR records:

$ host has address
$ host domain name pointer

For more information on the Rackspace Cloud DNS API, refer to our API documentation.