yeah it's through atlanticmetro now
they have quite a few https://bgp.he.net/AS33597
so telia, gtt, ntt, cogent, he.net
they're on any2ix too
yeah I was looking at that
are some of those peers only at the european location?
also it looks like at this exact moment, all traffic is going via only atlanticmetro for some reason?
all outbound is going via atlanticmetro if you're terminating in the new cage
all US traffic is going over redundant connections to atlanticmetro
incoming is dual-advertised to both at the moment, but will be shifting to atlanticmetro only
Thank you, everyone, for putting up with some of the pains of this cage move. Sucky that IPv6 became broken for some VLANs.
up_the_irons: i'm having some trouble getting to my dedicated machine and the dns for my recovery email is on that machine. anything i can do to get it running?
solj: best bet is to open a support ticket if you can, that's the best way to get ahold of the support team.
mhoran: yeah, i was able to get it resolved -- thanks!
so is this atlanticmetro change permanent? or just until the cage move is complete?
permanent
oh interesting
so basically ARP won't be doing its own transit blend anymore, just buying all transit from this new provider
does that affect both v4 and v6?
up_the_irons, is ipv6 expected to work everywhere now? I followed vom's suggestion which worked. Later, after seeing someone say it had been fixed, I removed the workaround. But mine didn't work. I put the workaround back and figured I'd let things settle down before trying to revert again
mkb what's your ipv6 address?
ipv6 should work everywhere yes
yeah it keeps things simpler and actually gains us more upstream transit
acf 2607:f2f8:a730:: (but I'm rebooted to remove workaround...)
s/m/ve/ (but I've rebooted to remove workaround...) or s/ed/ing/
and I have ipv6
this was particularly annoying for me since my nameservers are both ipv6...
I guess that's a point of failure I didn't think about
hmm not sure, i can reach the gateway of ::1 on that subnet
it's not showing the previous problem
what was your workaround?
I normally statically configure that IP and a default route to 2607:f2f8:a730::1
nothing weird about that
the workaround was to enable stateless autoconfig, which configured another (random? mac-based? I think both actually) IP and used the link-local fe80::... as the default route
ah
and that worked where the other didn't?
the side effect was that my source IP when initiating packets from the server was one of the autoconfig IPs... but inbound to the statically configured IP worked fine
yeah
ah curious.
yeah we weren't getting the /128 for ::1 injected into the route table on some vlans. so that makes sense
but all the ::1 addresses were pinged and showing fine before
yeah that makes sense. simpler is usually better :P
btw is the ipv6 router still a linux VM?
no, cisco hardware accelerated
ahhh nice
I guess I've been gone a while haha
it's only just changing now
it was never a linux VM btw. it was an openbsd VM prior, then openbsd hardware.
oh got it
I knew it was some software / x86 thing
yeah traffic volumes on ipv6 have been creeping up.
so that means ipv6 will be gigabit now? it's only taken how many years?
ipv6 has been gigabit a while
I guess since your move to the bare metal hardware
yeah well it scaled better then, but it was gigabit on the vm prior.
and what's going on with s1.lax and s3.lax?
they're being deprecated
so you've got a new cisco I'm guessing to replace them
yeah
so many upgrades, nice
yeah we've also completed ceph migrations so none of our VMs in los angeles are on local storage.
ahh
is VM still just plain qemu?
which makes the migrations easier
yeah
last I used an ARP vm it was qemu with an ssh management thing
ssh management?
I heard you're live-migrating them to the new cage which is also pretty cool
or maybe web management, and ssh for vnc or something?
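The static-versus-SLAAC discussion above can be sketched as shell commands. This is a rough illustration for a Linux box with iproute2, not anything from the log itself: the interface name `eth0`, the host address, and the /48 prefix length are assumptions, and the sysctls only apply to Linux (an OpenBSD host would use `rtsol`/`ifconfig` instead).

```shell
# Normal static setup: fixed address plus a default route via the
# subnet's ::1 gateway (the config that was briefly broken).
# NOTE: eth0, the host address, and /48 are hypothetical examples.
ip -6 addr add 2607:f2f8:a730::2/48 dev eth0
ip -6 route add default via 2607:f2f8:a730::1 dev eth0

# The workaround: accept router advertisements so the kernel
# autoconfigures an address (SLAAC) and installs a default route via
# the router's fe80:: link-local address instead.
sysctl -w net.ipv6.conf.eth0.accept_ra=1
sysctl -w net.ipv6.conf.eth0.autoconf=1
```

This also explains the side effect mentioned above: with both addresses configured, outbound connections can prefer the SLAAC address as the source, while inbound traffic to the static address still works.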
there's a serial console that you can access via ssh
we've also got a web console now
it's a bit easier than managing a dedicated server
ugh that supermicro ipmi
yeah i prefer hp
I prefer anything that doesn't require a java applet
hp has ssh and can do text console emulation as well as serial
from there you can also change the cdrom from the ssh session
that's pretty good
yeah the biggest annoyance for me is supermicro's proprietary vnc thing when I need console access
yeah IPMIView helps a bit
last time I had to try on like 5 platforms to get the thing to run without crashing
but it's different on different servers
yeah i've had issues too
but yeah if customers don't have to experience those issues it's good. there are quite a lot of benefits to going for arp thunder or such
yeah but it's definitely a tradeoff
I've seen plenty of qemu bugs too
true
also what kind of r/w performance do you get out of ceph? I feel like it's got to be a lot lower than an ssd in the physical box (I also have no idea how ceph works)
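One way to actually answer that last question from inside a VM is a quick fio benchmark. This is a generic sketch, not anything ARP-specific: the mount point `/mnt/test`, file size, and mix are arbitrary assumptions, and it assumes fio is installed. Ceph acknowledges writes only after replicating them over the network, so per-operation latency is typically higher than a local SSD even when aggregate throughput is fine, which is what a test like this would show.

```shell
# Hypothetical 4k random read/write test (70% reads) against whatever
# volume backs /mnt/test; adjust filename/size for your environment.
fio --name=randrw --filename=/mnt/test/fio.dat --size=1G \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 \
    --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting
```

Comparing the reported IOPS and completion latency against the same run on a local SSD gives a concrete answer rather than a gut feeling.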