Showing posts with label ipv4. Show all posts

24 November, 2016

cafe

When you're in a cafe

64 bytes from 8.8.8.8: icmp_seq=3 ttl=59 time=26.8 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=59 time=23.9 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=59 time=26.0 ms

and then their internet dies

From 192.168.1.254 icmp_seq=28 Destination Host Unreachable
From 192.168.1.254 icmp_seq=29 Destination Host Unreachable
From 192.168.1.254 icmp_seq=30 Destination Host Unreachable

and then there is a power cut

From 169.254.6.166 icmp_seq=51 Destination Host Unreachable
From 169.254.6.166 icmp_seq=52 Destination Host Unreachable
From 169.254.6.166 icmp_seq=53 Destination Host Unreachable
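The source address switching from 192.168.1.254 to 169.254.6.166 is the tell: with the router down, the laptop fell back to a self-assigned link-local (APIPA) address. Python's ipaddress module can confirm the classification (a small aside, not part of the original story):

```python
import ipaddress

# The router's LAN address from the first failure burst.
print(ipaddress.ip_address("192.168.1.254").is_link_local)  # False

# The self-assigned (APIPA) source that appeared after the power cut.
print(ipaddress.ip_address("169.254.6.166").is_link_local)  # True
```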

but that reset the router

64 bytes from 8.8.8.8: icmp_seq=1786 ttl=59 time=20.7 ms
64 bytes from 8.8.8.8: icmp_seq=1787 ttl=59 time=24.0 ms
64 bytes from 8.8.8.8: icmp_seq=1788 ttl=59 time=21.5 ms

and all is good in the world again.

28 May, 2015

10 minute Haskell talk: An awkward interaction between lazy ByteStrings and a misbehaving (non-)transparent HTTP middlebox

The slides for a lightning talk I gave at the London Haskell User Group are here. Press 'a' in the browser and you'll get some explanatory notes alongside the slides; otherwise they're a bit sparse.

05 August, 2014

ping reverse dns

Slightly unexpected hostname lookup on a CNAME.

maven.ops is a CNAME to lulu; the reverse DNS points only to lulu.

ping has already resolved the lulu hostname by the first line of output, because it's shown there. But the per-reply lines show the name I gave for a few seconds, and then switch to the "real" hostname (perhaps when a reverse DNS lookup completes, rather than sticking with the name from the original forward lookup?).

No big deal, but slightly unexpected.

benc@utsire:~$ ping maven.ops.xeus.co.uk
PING lulu.xeus.co.uk (46.4.100.47) 56(84) bytes of data.
64 bytes from maven.ops.xeus.co.uk (46.4.100.47): icmp_req=1 ttl=51 time=466 ms
64 bytes from maven.ops.xeus.co.uk (46.4.100.47): icmp_req=2 ttl=51 time=51.1 ms
64 bytes from maven.ops.xeus.co.uk (46.4.100.47): icmp_req=3 ttl=51 time=51.9 ms
64 bytes from lulu.xeus.co.uk (46.4.100.47): icmp_req=4 ttl=51 time=60.1 ms
64 bytes from lulu.xeus.co.uk (46.4.100.47): icmp_req=5 ttl=51 time=96.5 ms
64 bytes from lulu.xeus.co.uk (46.4.100.47): icmp_req=6 ttl=51 time=50.9 ms
64 bytes from lulu.xeus.co.uk (46.4.100.47): icmp_req=7 ttl=51 time=49.5 ms
64 bytes from lulu.xeus.co.uk (46.4.100.47): icmp_req=8 ttl=51 time=50.6 ms

benc@utsire:~$ ping -V
ping utility, iputils-sss20101006
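A plausible reading: ping starts out printing the name from the forward lookup, then this iputils version switches to the result of a reverse lookup once one completes. The reverse lookup queries the PTR record for the reply's source address, and Python's ipaddress module can show the in-addr.arpa name that query uses (a sketch of the mechanics, not iputils code):

```python
import ipaddress

# ping's reverse lookup asks for the PTR record at this
# in-addr.arpa name; that PTR names lulu, not the maven.ops
# alias that the forward lookup started from.
addr = ipaddress.ip_address("46.4.100.47")
print(addr.reverse_pointer)  # 47.100.4.46.in-addr.arpa
```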

07 August, 2013

ping error

Got this while pinging Google DNS from my mifi:

64 bytes from 8.8.8.8: icmp_req=302 ttl=50 time=2095 ms
64 bytes from 8.8.8.8: icmp_req=303 ttl=50 time=2107 ms
wrong data byte #52 should be 0x34 but was 0x45
#8      8 9 a b c d e f 10 11 12 13 14 15 16 17 18 19 1a 1b 1c 1d 1e 1f 20 21 22 23 24 25 26 27
#40     28 29 2a 2b 2c 2d 2e 2f 30 31 32 33 45 0 0 54
64 bytes from 8.8.8.8: icmp_req=305 ttl=50 time=2127 ms
64 bytes from 8.8.8.8: icmp_req=316 ttl=50 time=11091 ms
64 bytes from 8.8.8.8: icmp_req=317 ttl=50 time=10083 ms

Interesting to me that i) ping does more checking than I thought, and ii) there are link layers out there that corrupt data (rather than dropping packets) in reality, not just in theory.
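The check works because iputils ping fills the payload after its 8-byte timestamp with sequentially increasing byte values, then compares each reply against that pattern; that's why byte #52 "should be 0x34" (52 = 0x34). A reconstruction of the idea in Python (my sketch, not ping's actual code):

```python
def check_payload(payload):
    """Compare an echo-reply payload against the expected
    sequential pattern, skipping the 8-byte timestamp."""
    errors = []
    for i, b in enumerate(payload):
        if i < 8:
            continue  # timestamp bytes, not pattern-checked
        if b != i & 0xFF:
            errors.append(f"wrong data byte #{i} should be {i & 0xFF:#x} but was {b:#x}")
    return errors

# A 56-byte payload with byte 52 corrupted to 0x45, as in the output above.
payload = bytearray(range(56))
payload[52] = 0x45
print(check_payload(payload)[0])
# wrong data byte #52 should be 0x34 but was 0x45
```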

03 July, 2013

5 layer NAT

In a hotel I'm staying in, this looks like NAT five layers deep... (though they could just be routed hops, with a single NAT layer at the end...)
     Host                                                         Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. 192.168.99.1                                                      0.0%     6    1.2  15.4   1.1  84.9  34.1
 2. 192.168.20.1                                                      0.0%     6    2.4  36.4   2.3 206.1  83.1
 3. 192.168.1.1                                                       0.0%     6    3.8  28.7   2.9 154.5  61.6
 4. 192.168.10.254                                                    0.0%     6   86.7 107.1  77.9 195.2  43.9
 5. 192.168.219.250                                                   0.0%     6   28.2  56.0  27.2 141.7  46.8
 6. 195.14.158.85                                                    16.7%     6   26.6  55.1  26.6 115.7  35.0
 7. 195.14.158.145                                                   25.0%     5   31.1  33.7  30.9  39.1   4.7
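Every hop up to the sixth sits in RFC 1918 private address space, which is what suggests the stacked NAT. The ipaddress module makes the classification explicit (hop addresses taken from the mtr output above):

```python
import ipaddress

hops = [
    "192.168.99.1", "192.168.20.1", "192.168.1.1",
    "192.168.10.254", "192.168.219.250",
    "195.14.158.85", "195.14.158.145",
]

for n, hop in enumerate(hops, start=1):
    kind = "private (RFC 1918)" if ipaddress.ip_address(hop).is_private else "public"
    print(f"{n}. {hop:16} {kind}")
```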

24 June, 2011

non-cidr netmask: "worked in testing but broke colleagues' brains"

A CIDR prefix length (for example, the 24 in 128.9.128.0/24) is a more concise notation for (a commonly used subset of) netmasks.

A prefix length carries less information: it can only represent netmasks that consist of a run of 1 bits followed by 0 bits to the end. For example, /24 is 11111111111111111111111100000000 (24 ones, then 32-24=8 zeros).

This is useful because that's how most people use netmasks.

But there's a set of netmasks that aren't representable this way - for example 11111111000000001111111100000000.
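Modern tooling has largely baked in that restriction: Python's ipaddress module, for instance, accepts a contiguous netmask and round-trips it to a prefix length, but refuses a non-contiguous one outright (a small illustration):

```python
import ipaddress

# A contiguous netmask round-trips to a prefix length.
net = ipaddress.ip_network("128.9.128.0/255.255.255.0")
print(net)  # 128.9.128.0/24

# A non-contiguous mask like 11111111000000001111111100000000
# (255.0.255.0) has no prefix-length form, and ipaddress
# rejects it with a ValueError.
try:
    ipaddress.ip_network("128.9.0.0/255.0.255.0")
except ValueError as e:
    print("rejected:", e)
```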

Did anyone ever use netmasks that weren't prefix-length-representable? Apparently yes:

Addresses were allocated from these networks sequentially, and the oldest
web sites tended to get the most traffic, so a straightforward setup that
spread the six /18s across the reverse proxies didn't balance the load
particularly well. I toyed with using 0xffff0003 netmasks to split the /16
so that successive addresses could be routed to each of the four London
reverse proxies in turn.

This worked in testing but I didn't deploy it because it broke my
colleagues' brains and non-contiguous netmasks were an unsupported
feature.
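The arithmetic behind the 0xffff0003 trick: that mask keeps the 16 network bits plus the two lowest host bits, so four route entries (one per proxy) each match a quarter of the /16, and successive addresses rotate across proxies 0, 1, 2, 3. A reconstruction of the idea, not the original config:

```python
# 0xffff0003: the 16 network bits, plus the two low-order host bits.
MASK = 0xFFFF0003

def proxy_index(addr):
    """Which of the four reverse proxies a source address maps to,
    under the non-contiguous-mask scheme described above."""
    a, b, c, d = (int(x) for x in addr.split("."))
    as_int = (a << 24) | (b << 16) | (c << 8) | d
    # After masking, the two low bits select the proxy.
    return (as_int & MASK) & 0x3

# Successive addresses in the /16 rotate across the four proxies.
for last in range(4):
    print(f"128.9.128.{last} -> proxy {proxy_index(f'128.9.128.{last}')}")
# 128.9.128.0 -> proxy 0 ... 128.9.128.3 -> proxy 3
```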