Previously I never checked if `net.ParseIP()` returned `nil` for an IPv4
address—I couldn't imagine my IPv4 regex was incomplete. I was wrong.
Moral of the story: always check for errors, always check for nil.
Oddly, I did check for `nil` with IPv6 addresses; I guess I wasn't as
confident about that regex.
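For reference, a minimal sketch of the kind of `nil` check that was
missing (the sample addresses here are made up):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// "999.1.1.1" can slip past a loose IPv4 regex, but it isn't a valid
	// address; net.ParseIP() signals that by returning nil, not an error.
	for _, candidate := range []string{"169.254.169.254", "999.1.1.1"} {
		ip := net.ParseIP(candidate)
		if ip == nil {
			fmt.Printf("%q is not a valid IP; skipping\n", candidate)
			continue
		}
		fmt.Printf("%q parsed as %s\n", candidate, ip)
	}
}
```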
Drive-bys:
- updated SOA with today's date
- updated dependencies (`go get -u`)
[fixes #15]
Also, I moved the "version" endpoint: `version.sslip.io` →
`version.status.sslip.io`. It seemed to make more sense to corral the
special endpoints under `status`.
- The metrics aren't fleshed out. In fact, there are only two so far:
1. uptime
2. number of queries
- Even though the metrics aren't complete, I'm checking it in because
this commit is already much too big.
- I moved the version information to `version.status.sslip.io`;
previously it was at `version.sslip.io`. I didn't want one endpoint
for both metrics & version (worry: DNS amplification), and I wanted a
consistent subdomain to find that information (i.e.
`status.sslip.io`).
- I'm not worried about atomic updates to the metrics; if a metric is
off by one because I skipped a count when two lookups arrived at the
exact same time, I don't care.
- The `Metrics` struct is a pointer within `Xip` because I might have
several copies of `Xip` (if I'm binding to several interfaces
individually), but I must only have one copy of `Metrics` (see the
sketch after this list).
- I only include the metrics I'm interested in, usually because it took
some work to implement that feature. I don't care about MX records,
but I care about IPv6 lookups, DNS-01 challenges, public IP lookups.
- got rid of a section of unreachable code at the end of
`ProcessQuestion()`; I was tired of GoLand flagging it. I had it there
mostly because I was paranoid about falling through a `switch`
statement.
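A rough sketch of the `Metrics`-as-a-pointer arrangement described
above; the field and function names are illustrative, not the actual
ones:

```go
package main

import (
	"fmt"
	"time"
)

// Metrics is shared, unsynchronized state; a dropped increment now and
// then is acceptable here.
type Metrics struct {
	Start           time.Time // for uptime
	Queries         int       // number of queries received
	IPv6Lookups     int       // counters for the lookups I care about
	DNS01Challenges int
	PublicIPLookups int
}

// Xip holds per-listener state; every copy points at the same Metrics.
type Xip struct {
	Metrics *Metrics
	// per-interface fields elided
}

// newXips creates one Xip per bound interface, all sharing one Metrics.
func newXips(addrs []string) []Xip {
	shared := &Metrics{Start: time.Now()} // exactly one copy of Metrics
	xips := make([]Xip, 0, len(addrs))
	for range addrs {
		xips = append(xips, Xip{Metrics: shared})
	}
	return xips
}

func main() {
	xips := newXips([]string{"10.0.0.1", "192.168.0.1"})
	xips[0].Metrics.Queries++                // both Xips see the same counter
	fmt.Println(xips[1].Metrics.Queries)     // prints 1
}
```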
The Docker images are now created automatically with our pipeline.
That's right: with 80 hours of work we saved 30 seconds of work! We are
nothing if not efficient.
Our documentation was wrong; our homepage said to get the origin IP
address by querying the TXT record of the root, i.e. `dig
@ns-aws.nono.io txt . +short`; however, our code worked differently: it
returned the origin IP when the `.ip` TLD was queried.
The new behavior is that it returns the origin IP when `ip.sslip.io.` is
queried, and the documentation now reflects that behavior.
Also, that behavior is marked "experimental" to give us leeway to
change.
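For illustration, the same lookup from Go rather than `dig`, using the
standard-library resolver. Caveat: when the query goes through a
recursive resolver, the address you get back is the resolver's egress
IP, not necessarily your own:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The TXT record of ip.sslip.io holds the public IP address the
	// query appears to originate from (experimental behavior).
	txts, err := net.DefaultResolver.LookupTXT(ctx, "ip.sslip.io")
	if err != nil {
		log.Fatal(err)
	}
	for _, txt := range txts {
		fmt.Println(txt)
	}
}
```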
[fixes #11]
- Returns version information for the DNS server
- Contains 3 strings:
  - Semantic version, e.g. "2.2.1"
  - Date of compilation
  - Latest git hash
Note: the BOSH Release will have a different compilation date &
different git hash than the released executables; the semantic version
will be the same.
I needed a way of determining the version that a server was running. I
originally considered a command-line argument, but then I thought, "Why
not create a DNS record for it? That way I can query running servers
without needing to ssh onto the machine."
The TXT record consists of three distinct strings: version, compile
date, and git hash.
```bash
dig txt version.sslip.io +short
"2.2.1"
"2021/10/03-15:08:54+0100"
"6a928eb"
```
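Those compile-date and git-hash strings are presumably stamped into the
binary at build time; here's a sketch of how that's typically done in
Go with `-ldflags -X` (the variable names are my assumptions, not the
project's actual ones):

```go
package main

import "fmt"

// Overwritten at build time, e.g.:
//   go build -ldflags "-X main.VersionSemver=2.2.1 \
//     -X main.VersionDate=$(date +%Y/%m/%d-%H:%M:%S%z) \
//     -X main.VersionGitHash=$(git rev-parse --short HEAD)"
var (
	VersionSemver  = "dev"
	VersionDate    = "unknown"
	VersionGitHash = "unknown"
)

// versionTXT returns the three strings that end up in the TXT record.
func versionTXT() []string {
	return []string{VersionSemver, VersionDate, VersionGitHash}
}

func main() {
	fmt.Println(versionTXT())
}
```

A build from a different tree (e.g. the BOSH release) stamps in its own
date and hash, which is why only the semantic version is guaranteed to
match.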
The behavior of `dig` version **9.11.25-RedHat-9.11.25-2.fc32** differs
from macOS's `dig` version **9.10.6**. In other words, this test passed
on my Mac but, until now, not on our (Linux-based) CI.
I also took the opportunity to refactor our `dig` arguments to conform with
the suggested usage:
> Usage: dig [@global-server] [domain] [q-type] [q-class] {q-opt}
fixes <https://ci.nono.io/teams/main/pipelines/sslip.io/jobs/unit/builds/145>:
```
Expected
<int>: 9
to match exit code:
<int>: 0
```
Note that for the `any` test I had to append an additional `+notcp`
argument to avoid an attempted TCP connection. I suspect a bug in `dig`:
```
dig any sslip.io @localhost
;; Connection to 127.0.0.1#53(127.0.0.1) for sslip.io failed: connection refused.
```
- It appears that Let's Encrypt requires setting at least two TXT
records; previously I only allowed one to be set; now you can set as
many as you want (see the sketch after this list).
- Our records had a TTL of 0 seconds; I bumped it to 60: long enough to
get a cert, short enough to refresh for a second attempt if the first
one failed.
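A sketch of the shape of that change, with hypothetical names; the
point is simply a slice of TXT values per name instead of a single
string, answered with a 60-second TTL:

```go
package main

import "fmt"

const txtTTL = 60 // seconds: long enough to get a cert, short enough to retry

// before: map[string]string   (one TXT value per name)
// after:  map[string][]string (as many as the ACME client needs)
var txtRecords = map[string][]string{}

func setTXT(fqdn, value string) {
	txtRecords[fqdn] = append(txtRecords[fqdn], value)
}

func main() {
	setTXT("_acme-challenge.example.com.", "token-one")
	setTXT("_acme-challenge.example.com.", "token-two")
	fmt.Println(txtTTL, txtRecords["_acme-challenge.example.com."])
}
```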
We had moved the DNS server to a sub-directory to make room for a
sibling application, a small DNS server + small HTTP server.
fixes:
```
cannot find package "main.go" in any of:
/usr/local/Cellar/go/1.15.6/libexec/src/main.go (from $GOROOT)
/Users/cunnie/go/src/main.go (from $GOPATH)
```
**This process still does not work**. We need to fix our sslip.io DNS
server code. That being said, once our DNS server code is fixed, this
process _should_ work.
As much as we'd have liked to use `joohoi/acme-dns`, it didn't work with
our setup, possibly due to our DNS server code brokenness, mentioned
above. At any rate, we have our own `acme-dns` replacement, which we
intend to use going forward.
This small DNS server only returns one type of record, a TXT record,
meant to be a token assigned by a certificate authority (e.g. Let's
Encrypt) to verify domain ownership.
The TXT record will be updateable by an API endpoint on the webserver
(same executable as the DNS server), but I haven't yet written that
portion.
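A minimal sketch of such a TXT-only responder, assuming the
github.com/miekg/dns library; the handler and the hard-coded token are
mine, not the actual implementation's:

```go
package main

import (
	"log"

	"github.com/miekg/dns"
)

// token is the value the certificate authority expects to find; it will
// eventually be settable via the HTTP API, but for now it's fixed.
var token = "ACME-challenge-token-goes-here"

func handleTXT(w dns.ResponseWriter, r *dns.Msg) {
	m := new(dns.Msg)
	m.SetReply(r)
	for _, q := range r.Question {
		if q.Qtype != dns.TypeTXT {
			continue // TXT is the only record type this server answers
		}
		m.Answer = append(m.Answer, &dns.TXT{
			Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeTXT, Class: dns.ClassINET, Ttl: 60},
			Txt: []string{token},
		})
	}
	w.WriteMsg(m)
}

func main() {
	dns.HandleFunc(".", handleTXT)
	// port 53 needs privileges; use a high port when testing locally
	log.Fatal((&dns.Server{Addr: ":53", Net: "udp"}).ListenAndServe())
}
```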
Drive-by: in our _other_ (main) sslip.io DNS server, I changed `break` →
`continue` in the main loop. Had we gotten a malformed UDP packet, we
would have exited, but now we continue to the next packet. Exiting is
not that big a deal—`monit` would have restarted the server—but moving
on to the next packet is a more robust approach.
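That loop is roughly this shape (a paraphrase, not the actual server
code):

```go
package main

import (
	"log"
	"net"
)

func handlePacket(pkt []byte, addr *net.UDPAddr) {
	// parse the DNS query and write the reply (elided)
}

func main() {
	conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 53})
	if err != nil {
		log.Fatal(err)
	}
	buf := make([]byte, 512)
	for {
		n, addr, err := conn.ReadFromUDP(buf)
		if err != nil {
			log.Println(err)
			continue // previously `break`: one bad packet ended the loop & the server
		}
		// copy the packet: buf is reused on the next read
		go handlePacket(append([]byte{}, buf[:n]...), addr)
	}
}
```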
[#6]
Warning: these instructions do not work & are incomplete.
I had high hopes for [acme-dns](https://github.com/joohoi/acme-dns), but
it seems much too baroque for my purposes—authentication, subdomains,
CNAMEs. It seems quite clever for a use case that is much more
complicated than mine.
I've resolved to write an _acme-dns_-compatible HTTP server & DNS server
to meet my much simpler needs.
`DEVELOPER.md` had the wrong tests (mostly missing newlines); that's
been fixed. Also, I added a new test for DNS records which contain
`_acme-challenge.`, which may enable users to generate wildcard certs
for their sslip.io domains.
Rather than using Docker Hub's automated build feature (which doesn't
seem to work when setting up new repositories), I've opted to manually
build & push the images.
There are workarounds which might allow me to use GitHub's automated
build feature, like creating an organization, moving the repos to the
new organization, and creating a 'bot' user to publish the images, but
that seems like a lot of work for little gain.
fixes:
> Fetch source repositories failed.
> Connect a GitHub account to cunnie to enable automated builds. If it is already connected, please re-link the source provider.
We use the Alpine image; it's a lean 5.6 MB, and with our 3 MB server
the final image stays below 9 MB.
Though we include instructions to build the Dockerfile, we plan to use
Docker Hub's automated builds feature.