mercutio: they have quite a few
https://bgp.he.net/AS33597
so telia, gtt, ntt, cogent, he.net
they're on any2ix too
acf_: yeah I was looking at that
are some of those peers only at the european location?
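(Aside: the upstream list mercutio pulled from bgp.he.net can also be fetched programmatically; a minimal sketch against RIPEstat's public asn-neighbours endpoint, with jq used only for readability, and the left/right reading given as the usual interpretation rather than anything authoritative:)

  $ curl -s 'https://stat.ripe.net/data/asn-neighbours/data.json?resource=AS33597' \
      | jq '.data.neighbours'
  # "left" neighbours sit between AS33597 and the route collectors in
  # observed paths (typically upstreams); "right" are typically downstreams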
also it looks like at this exact moment, all traffic is going via only atlanticmetro for some reason?
mercutio: all outbound is going via atlanticmetro if you're terminating in the new cage
all US traffic is going over redundant connections to atlanticmetro
incoming is dual advertised to both at the moment, but will be shifting to atlanticmetro only
up_the_irons: Thank you, everyone, for putting up with some of the pains of this cage move. Sucky that IPv6 became broken for some VLANs.
***: solj has joined #arpnetworks
solj: up_the_irons: i'm having some trouble getting to my dedicated machine and the dns for my recovery email is on that machine. anything i can do to get it running?
mhoran: solj: best bet is to open a support ticket if you can, that's the best way to get ahold of the support team.
solj: mhoran: yeah, i was able to get it resolved--thanks!
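(Back to the routing change: a quick way to confirm which upstream is actually carrying outbound traffic from a VM is a traceroute with AS-number lookups. A sketch using mtr; 8.8.8.8 is just an arbitrary destination:)

  $ mtr -z -r -w -c 10 8.8.8.8
  # -z prints the AS number of each hop; with all outbound shifted to
  # atlanticmetro, the first hops after AS33597 should show their ASN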
acf_: so is this atlanticmetro change permanent? or just until the cage move is complete?
mercutio: permanent
acf_: oh interesting
so basically ARP won't be doing its own transit blend anymore
just buying all transit from this new provider
does that affect both v4 and v6?
mkb: up_the_irons, is ipv6 expected to work everywhere now?
I followed vom's suggestion which worked.
Later after seeing someone say it had been fixed I removed the workaround. But mine didn't work. I put the workaround back and figured I'd let things settle down before trying to revert again
mercutio: mkb what's your ipv6 address?
ipv6 should work everywhere yes
yeah it keeps things simpler and actually gains us more upstream transit, acf
mkb: 2607:f2f8:a730::
(but I'm rebooted to remove workaround...)
s/m/ve/
BryceBot: <mkb> (but I've rebooted to remove workaround...)
mkb: or s/ed/ing/
and I have ipv6
this was particularly annoying for me since my nameservers are both ipv6... I guess that's a point of failure I didn't think about
mercutio: hmm not sure, i can reach the gateway of ::1 on that subnet
it's not showing the previous problem
what was your workaround?
mkb: I normally statically configure that IP and a default route to 2607:f2f8:a730::1
mercutio: nothing weird about that
mkb: the workaround was to enable stateless autoconfig, which configured another (random? mac-based? I think both actually) IP and used the link-local fe80::... as the default route
mercutio: ah
and that worked where the other didn't?
mkb: the side effect was that my source IP when initiating packets from the server was one of the autoconfig IPs... but inbound to the statically configured IP worked fine
yeah
mercutio: ah, curious.
yeah we weren't getting the /128 for ::1 injected into the route table on some vlans.
so that makes sense
but all the ::1 addresses were pinged and showing fine before
acf_: yeah that makes sense. simpler is usually better :P
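(For reference, the two setups mkb contrasted look roughly like this on a linux guest with iproute2; the interface name eth0 and the address placeholder are assumptions, only the gateway comes from the discussion above:)

  # static: fixed address plus a default route via the subnet's ::1 gateway
  ip -6 addr add <your-address>/64 dev eth0
  ip -6 route add default via 2607:f2f8:a730::1 dev eth0

  # the workaround: accept router advertisements / SLAAC, which installs
  # autoconfigured addresses and a default route via the router's fe80:: address
  sysctl -w net.ipv6.conf.eth0.accept_ra=1
  sysctl -w net.ipv6.conf.eth0.autoconf=1

The source-address side effect mkb saw is unsurprising: with SLAAC active, source address selection can prefer an autoconfigured (especially privacy/temporary) address over the manually added one for outbound connections.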
btw is the ipv6 router still a linux VM?
mercutio: no, cisco hardware accelerated
acf_: ahhh nice
I guess I've been gone a while haha
mercutio: it's only just changing now
it was never a linux VM btw. it was an openbsd VM prior, then openbsd on hardware.
acf_: oh got it
I knew it was some software / x86 thing
mercutio: yeah
traffic volumes on ipv6 have been creeping up.
acf_: so that means ipv6 will be gigabit now?
mercutio: it's only taken how many years?
ipv6 has been gigabit a while
acf_: I guess since your move to the bare metal hardware
mercutio: yeah, well, it scaled better then, but it was gigabit on the vm prior.
acf_: and what's going on with s1.lax and s3.lax?
mercutio: they're being deprecated
acf_: so you've got a new cisco I'm guessing to replace them
mercutio: yeah
acf_: so many upgrades, nice
mercutio: yeah
we've also completed ceph migrations
so none of our VMs in los angeles are on local storage.
acf_: ahh
is VM still just plain qemu?
mercutio: which makes the migrations easier
yeah
acf_: last I used an ARP vm it was qemu with an ssh management thing
mercutio: ssh management?
acf_: I heard you're live-migrating them to the new cage
which is also pretty cool
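(For the curious: with plain qemu on shared storage like ceph, a live migration is typically driven along these lines; hostnames and the port are placeholders:)

  # destination host: start a matching qemu pointed at the same ceph disk,
  # waiting for incoming migration state
  qemu-system-x86_64 ... -incoming tcp:0:4444

  # source host, in the qemu monitor: stream RAM and device state across
  (qemu) migrate -d tcp:dst-host:4444
  (qemu) info migrate

Since the disks already live in ceph, only memory and device state have to cross the wire, which is what makes cage-to-cage live migration practical.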
or maybe web management, and ssh for vnc or something?
mercutio: there's a serial console that you can access via ssh
we've also got a web console now
it's a bit easier than managing a dedicated server
acf_: ugh, that supermicro ipmi
mercutio: yeah
i prefer hp
acf_: I prefer anything that doesn't require a java applet
mercutio: hp has ssh and can do text console emulation as well as serial from there
you can also change the cdrom from the ssh session
acf_: that's pretty good
mercutio: yeah
acf_: biggest annoyance for me is supermicro's proprietary vnc thing
when I need console access
mercutio: yeah
IPMIView helps a bit
acf_: last time I had to try on like 5 platforms to get the thing to run without crashing
mercutio: but it's different on different servers
yeah i've had issues too
but yeah if customers don't have to experience those issues it's good. there are quite a lot of benefits to going for arp thunder or such
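(One java-free option that works against most BMCs, supermicro included: IPMI serial-over-LAN via ipmitool. The BMC address and username here are placeholders:)

  $ ipmitool -I lanplus -H 10.0.0.50 -U ADMIN sol activate
  # and if a stale session is holding the console:
  $ ipmitool -I lanplus -H 10.0.0.50 -U ADMIN sol deactivate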
acf_: yeah but it's definitely a tradeoff
I've seen plenty of qemu bugs too
mercutio: true
acf_: also, what kind of r/w performance do you get out of ceph?
I feel like it's got to be a lot lower than an ssd in the physical box
(I also have no idea how ceph works)
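(A rough way to answer the ceph-vs-local-SSD question from inside a guest is fio; the file name, size, and runtime are arbitrary:)

  $ fio --name=randrw --filename=/root/fio.test --size=1G \
        --rw=randrw --bs=4k --direct=1 --ioengine=libaio \
        --iodepth=32 --runtime=60 --time_based

Expect 4k random-write latency to trail a local SSD, since each write crosses the network and is replicated before being acknowledged; larger sequential transfers tend to hold up better because they fan out across many OSDs.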