Sure, they have unit tests, but the methods are so simple I'm not sure
they're worth testing.
I changed the hostmaster to `yoyo@nono.io` because I felt more
comfortable having the email on ProtonMail rather than Gmail.
- Refactored the tests, but they're still hard to follow
Todo:
- break out the case statement in `QueryResponse()` into a separate method
- add NS, MX records
- Change Ginkgo's `To(Not(` to use the shorter `ToNot(`
- moved many of the initializations out of the `vars` block and into the
`BeforeEach()` blocks.
The `QueryResponse()` tests are too long & convoluted; even I have a hard
time understanding them, and I wrote them! The tests & code should be
rewritten, but that's for another day.
- It automatically populates the header for us, which would have been a
big headache to do manually.
- Switched `ENOTFOUND` to `ErrNotFound`, and updated the error message
as well. As sad as it was to make this switch, I must acknowledge that
I'm coding in Go, not C, and I should follow its conventions.
- TWO OF THE TESTS ARE BROKEN. I know, I'll fix them soon. I should have
fixed the tests first, then the code, but I was overeager.
- it resolves `127.0.0.1.sslip.io`
- it ranges through all the questions in the query, even though, IIRC, only
the first one is ever populated.
- ran both `gofmt` and `goimports`
- currently hard-coded; I didn't think too hard about how I could
make it more flexible in the future.
- various times stolen from the domain `google.com`, with the exception
of `minTTL`, which I bumped from 60 to 300.
- I suffixed the names of variables that are arrays with "...Array"
because arrays are so rare--slices are much more common.
- fixed a bug in main.go where the error-logic was inverted.
`QueryResponse()` takes a byte array and returns a byte array. It's a
black box that `main.go` can use to feed in the DNS query and get back the
DNS response. This keeps `main.go` very lean, which means we can push
much of the processing into the library, and which means we can
unit-test the components.
- A better-late-than-never `gofmt -w .` included cosmetic changes.
IPv6 only works with dashes, not dots, mostly because of the double-colon:
`--1` → `::1`. The double-colon, in dot-notation, would be `..`, which
is invalid in DNS.
- tested with ginkgo
- The primary method, `NameToA`, returns a resource and an error.
The error can have only one value, "ENOTFOUND". I wasn't sure about
returning the error: maybe I could return nil (not possible) when
I can't find the IP, or maybe return a 0.0.0.0 IP, but 0.0.0.0 is a
valid IP, so I use the error as out-of-band signaling.
ns-vultr.nono.io is a bad nameserver because it's shut down for ~8 days
each month (when Singapore's unbelievable hunger for NTP uses up my
monthly allowance of 3TB).
Besides, three nameservers is enough.
This reverts commit b8a327b128.
PowerDNS's bind backend doesn't appear to handle wildcards consistently
as secondaries, so I'm reverting this change and instead using a pair of
FreeBSD+bind servers (ns-he + ns-digitalocean) to provide the DNS.
fixes:
```
Jul 21 01:07:03 Caught an exception instantiating a backend: launch= suffixes are not supported on the bindbackend
```
```
Jul 21 01:08:47 Fatal error: Trying to set unknown parameter 'bind-first-config'
```
```
Jul 21 01:08:57 Fatal error: Trying to set unknown parameter 'pipe-second-command'
```
We now introduce a second Dockerfile, `Dockerfile-nginx`, to be used for
the web assets for sslip.io.
It does not run TLS; we assume that the load balancer will take care of
that.
We also gussied up the PowerDNS Dockerfile with minor changes.
- nodePort service is merely a proof-of-concept; this won't be the final
form the service takes. The port needs to be 53, not 32767.
- the deployment doesn't include the nginx webserver, merely the DNS
server. Also, I had trouble connecting both UDP & TCP to port 53,
so I chose UDP.
We are now secondaries for diarizer.com because it needs to share the
same webserver as *.cf.nono.io, and needs SSL certs, and needs to be
able to participate in the DNS challenge.
- Include BIND secondaries for nono.io/nono.com
(use this & you'll be unwitting secondaries for my domains)
- Fedora-based. Because IBM/Red Hat hires a lot of the Linux kernel developers.
I typically turn off ns-vultr during the last week of the month because,
as one of the few NTP servers in Singapore, it exceeds its 3TB bandwidth
allowance. Because it's not consistently up, it should not be a
nameserver, so I'm removing it.
fixes <https://ci.nono.io/teams/main/pipelines/sslip.io/jobs/check-dns/builds/1874>
```
nameserver ns-vultr.nono.io.'s SOA record match (FAILED - 2)
nameserver ns-vultr.nono.io. resolves 199.147.119.111.sslip.io to 199.147.119.111 (FAILED - 3)
nameserver ns-vultr.nono.io. resolves 28-165-216-73.sslip.io to 28.165.216.73 (FAILED - 4)
nameserver ns-vultr.nono.io. resolves 5fjtv1hr.82-45-16-87.sslip.io to 82.45.16.87 (FAILED - 5)
nameserver ns-vultr.nono.io. resolves 207-60-213-72.9cs26rza to 207.60.213.72 (FAILED - 6)
nameserver ns-vultr.nono.io. resolves api.--.sslip.io' to eq ::)} (FAILED - 7)
nameserver ns-vultr.nono.io. resolves localhost.--1.sslip.io' to eq ::1)} (FAILED - 8)
nameserver ns-vultr.nono.io. resolves 2001-4860-4860--8888.sslip.io' to eq 2001:4860:4860::8888)} (FAILED - 9)
nameserver ns-vultr.nono.io. resolves 2601-646-100-69f0--24.sslip.io' to eq 2601:646:100:69f0::24)} (FAILED - 10)
```
The PowerDNS pipe backend now returns NO RECORDS for domains which are
excluded (`XIP_EXCLUDED_DOMAINS`).
This fixes an error where the pipe backend returned authoritative records
for the domains which I want the bind backend to answer; surprisingly,
this behavior broke wildcard records:
fixes:
```
TYPE=any RECORD=c.pas.nono.io; dig +short $TYPE $RECORD @ns-aws.nono.io; echo; dig +short $TYPE $RECORD @ns-he.nono.io
ns-aws.nono.io.
ns-azure.nono.io.
ns-gce.nono.io.
ns-vultr.nono.io.
"protonmail-verification=ce0ca3f5010aa7a2cf8bcc693778338ffde73e26"
10 mail.protonmail.ch.
briancunnie.gmail.com. ns-he.nono.io. 2018092000 300 300 300 300
haproxy.pas.nono.io.
```
- I had to remove `ns-he.nono.io`; I'm moving back to BIND on that one.
- `resolve_ns_subdomain` is deprecated; I don't need to resolve
the IP addresses of the NS records, for they're in a different domain.
- Added `localhost` resolution; it was one of the common queries.
- Pull the pipeline configuration from Concourse, but re-add the
comments at the top & the entire `resources` section, which has YAML
anchors and is much briefer as a result.
Previously _deploy-pws-diego-cellblock-02_ waited for
_deploy-pws-pivotal-internal-apps_ to complete before starting, but that
particular job has taken as long as 1:47 (HH:MM) (cf-deployment v2.5.0).
_deploy-pws-diego-cellblock-02_'s other dependency,
_deploy-pws-diego-cellblock-01_, completed in a much more reasonable
timeframe (1:08), and is also a more similar deployment (in other words,
if the deployment to cellblock 01 has succeeded, then we should proceed
with cellblock 02 & not bother to wait for Internal Apps).
This is a dummy pipeline to demonstrate visually the changes that
accelerate the deployment to PWS (Pivotal Web Services). We hope to
reduce deployment time from 17 hours to 11 hours while restricting
Diego cell vacating to one AZ (availability zone) at a time.
Yes, according to the RFC it shouldn't begin with a hyphen. And, since
we're on the topic, underscores were supposed to be off the table, too,
but Microsoft used them anyway, and you know what? We're gonna use the
"forbidden hyphen". And we're gonna instruct `dig` to not be so
persnickety.
fixes:
```
dig +short AAAA api.--.sslip.io
dig: idn2_lookup_ul failed: string start/ends with forbidden hyphen
```
I had to make it work for the old-style `dig` (e.g. macOS's, which is
version "DiG 9.8.3-P1") as well as for the new version ("DiG
9.11.3-RedHat-9.11.3-6.fc28"), which has this new
[library](https://www.gnu.org/software/libidn/libidn2/reference/libidn2-idn2.html)
that does the following:
> Perform IDNA2008 lookup string conversion on domain name src , as described in section 5 of RFC 5891
Nothing like a good example to drive the point home.
I need to update the AWS and Hetzner content to reflect these changes,
and include the new URL in the Hetzner LetsEncrypt list.