Default PTR record domain has changed from "sslip.io" to "nip.io".
For example, `dig -x 127.0.0.1 @ns.nip.io` previously returned
`127-0-0-1.sslip.io.`, now returns `127-0-0-1.nip.io.`
Previously, the PTR domain was hard-coded to `sslip.io.`, but this
commit introduces two changes:
- the default PTR domain is now `nip.io.`. Hey, it's shorter.
- the PTR domain can now be set with the `-ptr-domain` flag, e.g.
`go run main.go -ptr-domain=xip.example.com`; querying
`dig -x 169.254.169.254` would then return
`169-254-169-254.xip.example.com.`
Notes:
- Our new flag, `-ptr-domain`, follows the kebab-case convention of
Golang flags, but this is inconsistent with our previous camelCase
convention, e.g. `-blocklistURL`. We didn't know any better, and it's
too late to change existing flags.
- removed two commented-out `panic()` calls whose purpose has long
since been forgotten
- I don't feel bad about changing the default behavior because hardly
anyone uses PTR lookups. Out of 12,773,617,290 queries, only 1564 were
PTR records (0.000012%)!
- In that vein, I acknowledge that this is a feature that no one's
clamoring for, no one will use, but it's important to me for reasons
that I don't fully understand.
nip.io has the complete set of NS records that sslip.io has. Previously
all the nameservers had only sslip.io records, e.g. ns-ovh.sslip.io.
With this commit, we now duplicate the nameservers, so now there's an
ns-ovh.nip.io as well. This also includes the "wildcard" record,
ns.sslip.io.
This unlocks the ability to use the shorter "nip.io" domain for certain
lookups, e.g. "dig txt @ns.nip.io ip.nip.io", whereas previously I'd
have to do "dig txt @ns.sslip.io ..."
This reverts commit dea655a990.
The Public Suffix List (PSL) denied our pull request to add sslip.io to
their list: <https://github.com/publicsuffix/list/pull/2206>
So there's no reason to keep their TXT record around; it only adds to
the clutter.
We replace `ns-ovh-sg` with `ns-do-sg`; this is a purely financial
decision: `ns-ovh-sg` costs $60/month, $720/year.
`ns-do-sg` (DigitalOcean) is also a Singapore-based DNS server: a Basic
Regular droplet (2 vCPU, 4 GiB RAM, 80 GB SSD, 4 TiB bandwidth) for
$24/month, $288/year.
That's a yearly savings of $432.
I had originally overspec'ed the Singapore server because I suspected
that there was a ton of traffic in Asia; I was wrong. It's not even 20%
the traffic of Europe or North America. I am confident the Digital Ocean
server will be able to handle it.
I also reintroduce `ns-gce` as the second server in North America, backing
up `ns-hetzner`. My hope is that `ns-hetzner` carries most of the load,
and `ns-gce` carries the rest, but not so much as to trigger Google
Cloud Platform's (GCP's) expensive bandwidth billing.
| DNS server | Queries / second |
|:-----------|-----------------:|
| ns-hetzner | 10706.4 |
| ns-ovh | 10802.0 |
| ns-ovh-sg | 1677.7 |
When tests with long output fail, I have difficulty troubleshooting
because Gomega truncates the output at 4000 bytes. With this commit, we
tell Gomega not to truncate the output, which allows me to see what's
broken, which is invariably at the end of the output.
Fixes, when running `ginkgo -r .`:
```
Gomega truncated this representation as it exceeds 'format.MaxLength'.
Consider having the object provide a custom 'GomegaStringer' representation
or adjust the parameters in Gomega's 'format' package.
```
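The fix itself is a one-line suite setting: Gomega's `format` package exposes `MaxLength`, and setting it to 0 disables truncation. A sketch of the suite-setup fragment (where exactly it lives in the suite is up to the test layout):

```go
// In the test suite setup (e.g. the *_suite_test.go file):
import "github.com/onsi/gomega/format"

func init() {
	format.MaxLength = 0 // 0 means "never truncate"; the default is 4000 bytes
}
```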
I'm worried the traffic to my GCP server will cost me a hundred dollars
in bandwidth fees. It has a volume similar to my late AWS server which,
in its last month, racked up ~$130 in bandwidth fees!
I'm also trying to balance the servers more geographically: instead of
having two servers in the US and none in Asia, I'll have one server in
the US and one in Asia (Singapore).
The OVH server in Asia is expensive — $60/month instead of $20/month for
the OVH server in Warsaw. Also there's a monthly bandwidth cap in
Singapore in addition to the 300 Mbps cap.
I went with a dedicated server, similar to the one in Warsaw, but I took
the opportunity to upgrade it (same price):
- ns-ovh: KS-4: Intel Xeon-E3 1230 v6
- ns-ovh-sg: KS-5: Intel Xeon-E3 1270 v6
I'm hoping that by adding this server to Singapore, the traffic to the
ns-ovh, the Warsaw server, will lessen, and I won't get those "Anti-DDoS
protection enabled for IP address 51.75.53.19" emails every few days.
Current Queries per second:
- 4,087 ns-gce
- 1,131 ns-hetzner
- 7,183 ns-ovh
When I had introduced ns-hetzner, I forgot to update the records for
ns.sslip.io, which continued to point to the old, deprecated ns-azure.
This commit updates the ns.sslip.io records.
The nameserver on Azure is probably my least favorite: much slower, much
higher latency. Even though it would've made more geographic sense to
dismantle my GCP nameserver in favor of the Hetzner, I'm using this
opportunity to get rid of the Azure.
And, of course, introduce the Hetzner nameserver with its 20TB of
bandwidth allowance, which I've come to need.
The torrent of traffic I'm receiving has caused my AWS bill to spike
from $9 to $148, all of the increase due to bandwidth charges.
I'm still maintaining ns-aws; the VM continues to run, continues to
serve web traffic, and keeps its hostname and IP addresses; however, it
will no longer be in the list of NS records for sslip.io.
There are far less expensive hosting providers. OVH is my current
favorite.
We want to place sslip.io on the Public Suffix List so we don't need to
pester Let's Encrypt for rate limit increases.
According to https://publicsuffix.org/submit/:
> owners of privately-registered domains who themselves issue subdomains
> to mutually-untrusting parties may wish to be added to the PRIVATE
> section of the list.
References:
- https://publicsuffix.org/
- https://github.com/publicsuffix/list/pull/2206
[Fixes #57]
Previously when the NS records were returned, ns-aws was always returned
first. Coincidentally, 64% of the queries were directed to ns-aws. And
once I exceeded AWS's 10 TB bandwidth limit, AWS began gouging me for
bandwidth charges, and $12.66/month rapidly climbed to $62.30.
I'm hoping that by randomly rotating the order of nameservers, the
traffic will balance across the nameservers.
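A minimal sketch of the rotation (illustrative names, not the actual sslip.io code): pick a random starting index and return the slice rotated to begin there, so each nameserver is equally likely to appear first in a response:

```go
package main

import (
	"fmt"
	"math/rand"
)

// rotateNS returns ns rotated left by a random offset, so that over many
// responses each nameserver appears first roughly equally often.
func rotateNS(ns []string) []string {
	if len(ns) < 2 {
		return ns
	}
	i := rand.Intn(len(ns))
	rotated := make([]string, 0, len(ns))
	rotated = append(rotated, ns[i:]...)
	return append(rotated, ns[:i]...)
}

func main() {
	servers := []string{"ns-aws", "ns-azure", "ns-gce", "ns-ovh"}
	fmt.Println(rotateNS(servers))
}
```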
Current snapshot (already ns-ovh is helping):
ns-aws.sslip.io
"Queries: 237744377 (1800.6/s)"
"Answered Queries: 63040894 (477.5/s)"
ns-azure.sslip.io
"Queries: 42610823 (323.4/s)"
"Answered Queries: 14660603 (111.3/s)"
ns-gce.sslip.io
"Queries: 59734371 (454.1/s)"
"Answered Queries: 17636444 (134.1/s)"
ns-ovh.sslip.io
"Queries: 135897332 (1034.4/s)"
"Answered Queries: 36010164 (274.1/s)"
- located in Warsaw, Poland
- IPv4: 51.75.53.19
- IPv6: 2001:41d0:602:2313::1
The crux of this is to take the load off ns-aws, which jumped from
$12.66 → $20.63 → $38.51 → $62.30 in the last four months due to
bandwidth charges exceeding 10 TB.
The real fix is to randomize the order in which the nameservers are
returned.
Meant for obtaining wildcard certs from Let's Encrypt using the DNS-01
challenge.
- introduce a variant of `blocklist.txt` to be used for testing
(`blocklist-test.txt`) because the blocklist has grown so large it
clutters the test output
- more rigorous about lowercasing hostnames when matching against
customized records. This needs to be extended when we parse _any_
arguments
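A minimal sketch of the lowercasing (the helper name is hypothetical, not the actual parsing code): normalize both the customized-record keys and the queried hostname before comparing, so case and a trailing dot can't cause a miss:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeHost lowercases a hostname and strips any trailing dot so that
// "Customized.Example.Com." and "customized.example.com" hit the same key.
func normalizeHost(host string) string {
	return strings.ToLower(strings.TrimSuffix(host, "."))
}

func main() {
	records := map[string]string{normalizeHost("NS.SSLIP.IO"): "some record"}
	fmt.Println(records[normalizeHost("ns.sslip.io.")])
}
```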
TODOs:
- remove the wildcard DNS servers
- update instructions
- That's where the code is expected to be
- The only reason the code was buried two directories down was because
it was originally a BOSH release
- There hasn't been a BOSH release in over two years; last one was Feb
26, 2022
- Other than a slight adjustment to the relative location of
`blocklist.txt` file in the integration tests, there were no other
changes