[00:34] yeah it's through atlanticmetro now
[00:34] they have quite a few
[00:34] https://bgp.he.net/AS33597
[00:35] so telia, gtt, ntt, cogent, he.net
[00:35] they're on any2ix too
[01:16] yeah I was looking at that
[01:17] are some of those peers only at the european location?
[01:17] also it looks like at this exact moment, all traffic is going via only atlanticmetro for some reason?
[01:38] all outbound is going via atlanticmetro if you're terminating in the new cage
[01:38] all US traffic is going over redundant connections to atlanticmetro
[01:39] incoming is dual advertised to both atm, but will be shifting to atlanticmetro only
[03:35] Thank you, everyone, for putting up with some of the pains of this cage move. Sucky that IPv6 became broken for some VLANs.
[04:26] *** solj has joined #arpnetworks
[04:27] up_the_irons: i'm having some trouble getting to my dedicated machine and the dns for my recovery email is on that machine. anything i can do to get it running?
[09:02] solj: best bet is to open a support ticket if you can, that's the best way to get ahold of the support team.
[09:16] mhoran: yeah, i was able to get it resolved--thanks!
[14:58] *** carvite has quit IRC (Ping timeout: 246 seconds)
[14:59] *** carvite_ has joined #arpnetworks
[14:59] *** carvite_ is now known as carvite
[15:50] so is this atlanticmetro change permanent? or just until the cage move is complete?
[16:09] permanent
[16:39] oh interesting
[16:39] so basically ARP won't be doing its own transit blend anymore
[16:40] just buying all transit from this new provider
[16:40] does that affect both v4 and v6?
[16:42] up_the_irons, is ipv6 expected to work everywhere now?
[16:43] I followed vom's suggestion, which worked.
[16:44] Later, after seeing someone say it had been fixed, I removed the workaround. But mine didn't work. I put the workaround back and figured I'd let things settle down before trying to revert again
[16:49] mkb what's your ipv6 address?
[16:49] ipv6 should work everywhere yes
[16:50] yeah it keeps things simpler and actually gains us more upstream transit acf
[17:02] 2607:f2f8:a730::
[17:03] (but I'm rebooted to remove workaround...)
[17:03] s/m/ve/
[17:03] (but I've rebooted to remove workaround...)
[17:03] or s/ed/ing/
[17:04] and I have ipv6
[17:05] this was particularly annoying for me since my nameservers are both ipv6... I guess that's a point of failure I didn't think about
[17:16] hmm not sure, i can reach the gateway of ::1 on that subnet
[17:17] it's not showing the previous problem
[17:17] what was your workaround?
[17:18] I normally statically configure that IP and a default route to 2607:f2f8:a730::1
[17:19] nothing weird about that
[17:19] the workaround was to enable stateless autoconfig, which configured another (random? mac-based? I think both actually) IP and used the link-local fe80::... as the default route
[17:19] ah
[17:20] and that worked where the other didn't?
[17:20] the side effect was that my source IP when initiating packets from the server was one of the autoconfig IPs... but inbound to the statically configured IP worked fine
[17:20] yeah
[17:20] ah curious.
[17:21] yeah we weren't getting the /128 for ::1 injected into the route table on some vlans.
[17:21] so that makes sense
[17:21] but all the ::1 addresses were pinged and showing fine before
[17:36] yeah that makes sense. simpler is usually better :P
[17:38] btw is the ipv6 router still a linux VM?
[17:47] no, cisco hardware accelerated
[17:50] ahhh nice
[17:51] I guess I've been gone a while haha
[17:51] it's only just changing now
[17:51] it was never a linux VM btw. it was an openbsd VM prior, then openbsd on hardware.
[17:52] oh got it
[17:52] I knew it was some software / x86 thing
[17:52] yeah
[17:52] traffic volumes on ipv6 have been creeping up.
[17:52] so that means ipv6 will be gigabit now?
[17:52] it's only taken how many years?
[17:52] ipv6 has been gigabit a while
[17:52] I guess since your move to the bare metal hardware
[17:53] yeah well it scaled better then, but it was gigabit on the vm prior.
[17:53] and what's going on with s1.lax and s3.lax?
[17:53] they're being deprecated
[17:53] so you've got a new cisco I'm guessing to replace them
[17:53] yeah
[17:53] so many upgrades, nice
[17:54] yeah
[17:54] we've also completed ceph migrations
[17:54] so none of our VMs in los angeles are on local storage.
[17:55] ahh
[17:55] is VM still just plain qemu?
[17:55] which makes the migrations easier
[17:55] yeah
[17:55] last I used an ARP vm it was qemu with an ssh management thing
[17:55] ssh management?
[17:55] I heard you're live-migrating them to the new cage
[17:55] which is also pretty cool
[17:55] or maybe web management, and ssh for vnc or something?
[17:57] there's a serial console that you can access via ssh
[17:57] we've also got a web console now
[17:57] it's a bit easier than managing a dedicated server
[17:57] ugh that supermicro ipmi
[17:57] yeah
[17:58] i prefer hp
[17:58] I prefer anything that doesn't require a java applet
[17:58] hp has ssh and can do text console emulation as well as serial from there
[17:59] you can also change the cdrom from the ssh
[17:59] that's pretty good
[17:59] yeah
[17:59] biggest annoyance for me is supermicro's proprietary vnc thing
[17:59] when I need console access
[17:59] yeah
[17:59] IPMIView helps a bit
[17:59] last time I had to try on like 5 platforms to get the thing to run without crashing
[17:59] but it's different on different servers
[18:00] yeah i've had issues too
[18:01] but yeah if customers don't have to experience those issues it's good. there are quite a lot of benefits to going for arp thunder or such
[18:04] *** ziyourenxiang__ has joined #arpnetworks
[18:58] *** maxp has joined #arpnetworks
[20:17] yeah but it's definitely a tradeoff
[20:17] I've seen plenty of qemu bugs too
[21:22] true
[23:33] also what kind of r/w performance do you get out of ceph?
[23:34] I feel like it's got to be a lot lower than an ssd in the physical box
[23:34] (I also have no idea how ceph works)
[23:59] *** qbit has quit IRC (Ping timeout: 246 seconds)
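A minimal sketch of the two guest configurations discussed around 17:18-17:21 above: the usual static address with a default route via the subnet's ::1 gateway, and the stateless-autoconfig workaround that routed via the router's link-local fe80:: address instead. It is written as Python driving iproute2 on a Linux guest; the interface name "eth0", the /64 prefix length, and the use of sysctl/iproute2 are assumptions, while the 2607:f2f8:a730:: address and the ::1 gateway come from the conversation.

#!/usr/bin/env python3
"""Sketch of the static vs. SLAAC-workaround IPv6 setups from the log.

Assumptions (not stated in the log): interface "eth0", a /64 prefix,
a Linux guest with iproute2. Run as root.
"""
import subprocess

IFACE = "eth0"                    # assumed interface name
ADDR = "2607:f2f8:a730::/64"      # address from the log; /64 is assumed
GATEWAY = "2607:f2f8:a730::1"     # the subnet's ::1 gateway, as in the log


def run(*cmd):
    """Print and execute one command, failing loudly on error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def static_config():
    """Normal setup: static address plus a default route via ::1."""
    run("sysctl", "-w", f"net.ipv6.conf.{IFACE}.autoconf=0")
    run("sysctl", "-w", f"net.ipv6.conf.{IFACE}.accept_ra=0")
    run("ip", "-6", "addr", "add", ADDR, "dev", IFACE)
    run("ip", "-6", "route", "add", "default", "via", GATEWAY, "dev", IFACE)


def slaac_workaround():
    """Workaround: accept router advertisements so the kernel picks up an
    autoconfigured address and a default route via the router's link-local
    fe80:: address, bypassing the broken ::1 path."""
    run("sysctl", "-w", f"net.ipv6.conf.{IFACE}.autoconf=1")
    run("sysctl", "-w", f"net.ipv6.conf.{IFACE}.accept_ra=1")
    # The static address can stay configured; outbound traffic may then be
    # sourced from an autoconfigured address, matching the side effect
    # described in the log, while inbound to the static address still works.


if __name__ == "__main__":
    static_config()   # or slaac_workaround() while the ::1 routing is broken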