#arpnetworks 2019-04-22,Mon


mercutio: yeah it's through atlanticmetro now
they have quite a few
https://bgp.he.net/AS33597
so telia, gtt, ntt, cogent, he.net
they're on any2ix too
[00:34]
......... (idle for 41mn)
acf_: yeah I was looking at that
are some of those peers only at the european location?
also it looks like at this exact moment, all traffic is going via only atlanticmetro for some reason?
[01:16]
..... (idle for 21mn)
mercutio: all outbound is going via atlanticmetro if you're terminating in the new cage
all US traffic is going over redundant connections to atlanticmetro
incoming is dual-advertised to both at the moment, but will be shifting to atlanticmetro only
[01:38]
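For anyone wanting to see which upstream is actually carrying their traffic, a rough check from a Linux VM is sketched below; the destination is a placeholder, and mtr's AS lookup relies on public routing data:

    # per-hop AS numbers on the outbound path
    mtr -z -r -c 5 example.com
    # route objects registered for ARP's ASN
    whois -h whois.radb.net AS33597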
........................ (idle for 1h56mn)
up_the_irons: Thank you, everyone, for putting up with some of the pains of this cage move. Sucky that IPv6 became broken for some VLANs. [03:35]
........... (idle for 51mn)
*** solj has joined #arpnetworks [04:26]
solj: up_the_irons: i'm having some trouble getting to my dedicated machine and the dns for my recovery email is on that machine. anything i can do to get it running? [04:27]
........................................................ (idle for 4h35mn)
mhoran: solj: best bet is to open a support ticket if you can, that's the best way to get ahold of the support team. [09:02]
solj: mhoran: yeah, i was able to get it resolved--thanks! [09:16]
..................................................................... (idle for 5h42mn)
*** carvite has quit IRC (Ping timeout: 246 seconds)
carvite_ has joined #arpnetworks
carvite_ is now known as carvite
[14:58]
........... (idle for 51mn)
acf_: so is this atlanticmetro change permanent? or just until the cage move is complete? [15:50]
.... (idle for 19mn)
mercutio: permanent [16:09]
....... (idle for 30mn)
acf_: oh interesting
so basically ARP won't be doing its own transit blend anymore
just buying all transit from this new provider
does that affect both v4 and v6?
[16:39]
mkb: up_the_irons, is ipv6 expected to work everywhere now?
I followed vom's suggestion, which worked.
Later, after seeing someone say it had been fixed, I removed the workaround. But mine didn't work. I put the workaround back and figured I'd let things settle down before trying to revert again
[16:42]
mercutio: mkb what's your ipv6 address?
ipv6 should work everywhere yes
yeah it keeps things simpler and actually gains us more upstream transit acf
[16:49]
mkb: 2607:f2f8:a730::
(but I'm rebooted to remove workaround...)
s/m/ve/
[17:02]
BryceBot: <mkb> (but I've rebooted to remove workaround...) [17:03]
mkb: or s/ed/ing/
and I have ipv6
this was particularly annoying for me since my nameservers are both ipv6... I guess that's a point of failure I didn't think about
[17:03]
mercutio: hmm not sure, i can reach the gateway of ::1 on that subnet
it's not showing the previous problem
what was your workaround?
[17:16]
mkb: I normally statically configure that IP and a default route to 2607:f2f8:a730::1 [17:18]
mercutio: nothing weird about that [17:19]
mkb: the workaround was to enable stateless autoconfig which configured another (random? mac-based? I think both actually) IP and used the link-local fe80::... as the default route [17:19]
mercutio: ah
and that worked where the other didn't?
[17:19]
mkb: the side effect was that my source IP when initiating packets from the server was one of the autoconfig IPs... but inbound to the statically configured IP worked fine
yeah
[17:20]
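A minimal sketch of the two setups being compared, assuming a Linux guest with interface eth0 and a /64 on that prefix (the interface name and prefix length are assumptions, not from the chat):

    # static config as described: address plus default route via the subnet gateway
    ip -6 addr add 2607:f2f8:a730::/64 dev eth0
    ip -6 route add default via 2607:f2f8:a730::1 dev eth0

    # SLAAC workaround: accept router advertisements (2 = even if forwarding is on),
    # which installs an autoconfigured address and a default route via the
    # router's fe80:: link-local address
    sysctl -w net.ipv6.conf.eth0.accept_ra=2
    sysctl -w net.ipv6.conf.eth0.autoconf=1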
mercutio: ah, curious.
yeah we weren't getting the /128 for ::1 injected into the route table on some vlans.
so that makes sense
but all the ::1 addresses were pinged and showing fine before
[17:20]
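For anyone hitting the same thing, a few checks from the guest side (again assuming a Linux VM and eth0; these only show whether the gateway answers, not the provider-side /128 injection mercutio mentions):

    ping6 -c 3 2607:f2f8:a730::1          # is the gateway answering at all?
    ip -6 neigh show dev eth0             # did neighbour discovery resolve it?
    ip -6 route get 2607:f2f8:a730::1     # which route/source address the kernel picks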
.... (idle for 15mn)
acf_: yeah that makes sense. simpler is usually better :P
btw is the ipv6 router still a linux VM?
[17:36]
mercutio: no, cisco hardware accelerated [17:47]
acf_: ahhh nice
I guess I've been gone a while haha
[17:50]
mercutio: it's only just changing now
it was never a linux VM btw. it was an openbsd VM prior, then openbsd hardware.
[17:51]
acf_: oh got it
I knew it was some software / x86 thing
[17:52]
mercutio: yeah
traffic volumes on ipv6 have been creeping up.
[17:52]
acf_: so that means ipv6 will be gigabit now? [17:52]
mercutio: it's only taken how many years?
ipv6 has been gigabit a while
[17:52]
acf_: I guess since your move to the bare metal hardware [17:52]
mercutio: yeah well it scaled better then but it was gigabit on vm prior. [17:53]
acf_: and what's going on with s1.lax and s3.lax? [17:53]
mercutio: they're being deprecated [17:53]
acf_: so you've got a new cisco I'm guessing to replace them [17:53]
mercutio: yeah [17:53]
acf_: so many upgrades, nice [17:53]
mercutio: yeah
we've also completed ceph migrations
so none of our VMs in los angeles are on local storage.
[17:54]
acf_: ahh
is VM still just plain qemu?
[17:55]
mercutio: which makes the migrations easier
yeah
[17:55]
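For the curious, plain qemu pointed at a ceph (rbd) disk looks roughly like the sketch below; the pool and image names are made up and this is not ARP's actual invocation. Because the disk lives in the cluster rather than on the host, a live migration only has to move RAM and device state.

    qemu-system-x86_64 -enable-kvm -m 2048 \
      -drive format=raw,if=virtio,file=rbd:vmpool/guest-disk \
      -netdev bridge,id=n0,br=br0 -device virtio-net-pci,netdev=n0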
acf_: last I used an ARP vm it was qemu with an ssh management thing [17:55]
mercutio: ssh management? [17:55]
acf_: I heard you're live-migrating them to the new cage
which is also pretty cool
or maybe web management, and ssh for vnc or something?
[17:55]
mercutio: there's a serial console that you can access via ssh
we've also got a web console now
it's a bit easier than managing a dedicated server
[17:57]
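In practice that amounts to something like the line below; the hostname and login are placeholders, not ARP's actual console endpoint:

    ssh myvm@console.example.net    # attaches to the VM's serial console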
acf_: ugh that supermicro ipmi [17:57]
mercutio: yeah
i prefer hp
[17:57]
acf_: I prefer anything that doesn't require a java applet [17:58]
mercutio: hp has ssh and can do text console emulation as well as serial from there
you can also change cdrom from the ssh
[17:58]
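From memory, the iLO side of that looks roughly like this over ssh (exact commands vary by iLO generation and licensing; the hostname and ISO URL are placeholders):

    ssh Administrator@ilo-hostname
    # then, at the iLO prompt:
    vsp                                      # attach to the virtual serial port
    textcons                                 # text console emulation
    vm cdrom insert http://host/images/boot.iso
    vm cdrom set connect                     # present the image to the server
    vm cdrom set boot_once                   # boot from it on next reset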
acf_: that's pretty good [17:59]
mercutio: yeah [17:59]
acf_: biggest annoyance for me is supermicro's proprietary vnc thing
when I need console access
[17:59]
mercutio: yeah
IPMIView helps a bit
[17:59]
acf_: last time I had to try on like 5 platforms to get the thing to run without crashing [17:59]
mercutio: but it's different on different servers
yeah i've had issues too
but yeah if customers don't have to experience those issues it's good. there are quite a lot of benefits to going for arp thunder or such
[17:59]
*** ziyourenxiang__ has joined #arpnetworks [18:04]
........... (idle for 54mn)
maxp has joined #arpnetworks [18:58]
................ (idle for 1h19mn)
acf_: yeah but it's definitely a tradeoff
I've seen plenty of qemu bugs too
[20:17]
.............. (idle for 1h5mn)
mercutio: true [21:22]
........................... (idle for 2h11mn)
acf_: also what kind of r/w performance do you get out of ceph?
I feel like it's got to be a lot lower than an ssd in the physical box
(I also have no idea how ceph works)
[23:33]
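One way to put a number on it from inside a VM is a small fio run against the ceph-backed disk (a sketch, not a benchmark recipe; the file path and sizes are arbitrary, and results include network round trips to the OSDs, not just the SSDs underneath):

    fio --name=randrw --filename=/root/fio.test --size=1G --direct=1 \
        --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio \
        --iodepth=16 --runtime=60 --time_based --group_reporting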
...... (idle for 25mn)
*** qbit has quit IRC (Ping timeout: 246 seconds) [23:59]
