[00:05] yep. back down to around zero now
[00:06] interesting the impact it had on the whole VPS network
[00:07] brycec: lol stop smashing things
[00:07] up_the_irons: I contend that your things were already smashed
[00:07] above 10mbps yet? ^_^
[00:07] Continuous 10MB/s of IPv6 traffic shouldn't have that much effect, esp. on v4 traffic
[00:07] brycec: ah so that was from pummeling my poor IPv6 OpenBSD router? ;)
[00:08] "curl -6" So, yes.
[00:08] is there a single 100mbit/s pipe serving all of the VPS?
[00:08] no, 100 mbps per host
[00:09] I'm wondering if mirrors.arp is on the same host as the v6 router and the website
[00:09] Wait you said mirrors. wasn't a vps
[00:09] yeah, not a vps
[00:09] so just v6 router and website - same host?
[00:10] but the vps host that hosts the IPv6 OpenBSD router *will* be affected
[00:10] and yeah, v6 router and website are on the same host
[00:11] So if everybody was polling against the website's vps, and the host was burdened with my traffic... I guess that explains why even v4 traffic appeared to be affected.
[00:11] Sorry guys :(
[00:11] yeah that would make sense
[00:11] everyone on kvr02 would see the lag
[00:11] I'm still reeling from the revelation that the v6 router was a VPS
[00:12] Always assumed that, like the other routers, it was a physical box
[00:12] yeah, it still "does the job"
[00:12] is the 100mbit/s limit imposed to prevent the VM hosts from fighting for bandwidth or something?
[00:12] or do they just have 100mbit/s NICs?
[00:12] acf_: no, it's just that over 5 years ago when i designed everything, all gigabit line cards were really expensive ;)
[00:13] ah, makes sense
[00:13] So what's the excuse now, knowing that I have a second NIC with GbE intra-network (backup host)?
[00:13] [on my vps]
[00:13] :P
[00:14] i imagine your traffic wasn't going over that link
[00:14] Correct
[00:14] My point being that the host machines have GbE links now, and ARP has a >=GbE uplink. Why don't VPS have GbE now?
[00:14] oh you mean the excuse for not having every host on all gigabit?
[00:15] ;D
[00:15] (It's not an issue, just wondering)
[00:15] (If it was an issue, I'd throw money at Metal, or at least The American plan, and bitch^Wrequest)
[00:16] so, it's like this, all kvr hosts have 3 active links:
[00:16] 1. Primary uplink, 100 Mbps, to s1.lax
[00:16] 2. "Backplane" network, not primary, 2nd tier, "Can go down with sirens screaming", 1000 Mbps, to s6.lax
[00:17] 3. Secondary uplink, 1000 Mbps, to s7.lax (THIS IS NEW)
[00:17] (yesterday alone, my eth0 rx was nearly 900GB, most of which was from mirrors.arp)
[00:17] technically, I can switch any vps to use #3, but it simply hasn't been a big priority. and as you guys probably know, s7.lax has had more problems than I'd like, so I refrained from switching people over.
[00:18] s/Can go down with/Can go down without/
[00:18] 2. "Backplane" network, not primary, 2nd tier, "Can go down without sirens screaming", 1000 Mbps, to s6.lax
[00:18] (makes more sense, thx)
[00:18] np
[00:18] so, as a Metal customer, I have gigabit to s1.lax?
[00:18] through other layer 2 switches I assume
[00:19] s6.lax, for those wondering, is an Extreme Networks 48-port all-gigabit switch, L2 only. If you're familiar with the purple packet eaters, you know why it is 2nd tier.
[00:19] acf_: yes, Metal customers get 1 Gbps to s1.lax, through a pair of Foundry switches (s8.lax and s9.lax).
[00:20] now everyone knows the entire network topology ;)
[00:20] it's great. I learn a little bit more about your network every day :)
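(A minimal sketch of the sustained "curl -6" pull described above, which is roughly how you generate continuous IPv6 traffic against a mirror. The hostname and file path are placeholders, not the actual mirrors.arp URL:)

# Repeatedly fetch a large file over IPv6 to produce continuous traffic
# (mirrors.example.com and the file path are hypothetical; point at a real mirror)
while true; do
    curl -6 -o /dev/null http://mirrors.example.com/pub/large-file.iso
done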
[00:20] So s6 is purely internal networking
[00:20] acf_: :)
[00:20] thanks for the info btw
[00:20] brycec: yes, purely internal
[00:20] acf_: np
[00:21] and i must say, that little Extreme switch has been a trooper
[00:21] i wonder what the uptime is...
[00:21] * up_the_irons checks
[00:21] Yes but is it on a rotating platform? http://www.skylineconst.com/technology/
[00:21] i don't even know how to check on the Extreme...
[00:22] If ARP had office space in a colo...
[00:22] up_the_irons: "show switch"
[00:23] danke
[00:23] * brycec just googles... http://extrcdn.extremenetworks.com/wp-content/uploads/2014/01/EAS_100-24t_CLI_V1.pdf
[00:23] System UpTime: 1172 days 5 hours 7 minutes 8 seconds
[00:24] that actually seems a little low ;) I wonder why i rebooted it ~3 years ago...
[00:24] up_the_irons: Need a spare for $30? http://www.ebay.com/itm/Extreme-Network-48si-48-port-Network-Switch-Awesome-Purple-/181471260591
[00:25] (these things are littering eBay for $30-150)
[00:25] i think those are 100 mbit
[00:25] touché
[00:26] the gigabit ones are around $100-150
[00:26] but i bet the X350-48t (s6.lax) isn't much more expensive ;)
[00:26] $75 http://www.ebay.com/itm/EXTREME-NETWORKS-16202-SUMMIT-x350-48t-48-PORT-10-100-GIGABIT-NETWORK-SWITCH-/191283473576
[00:27] ya know what is great about them? they can do 4K active VLANs, which *no* other switch could do anywhere near its price range (not even Juniper)
[00:27] Nice. A pretty embarrassing shortcut, imo
[00:27] "Oh yeah, we support 4k VLANs, but not all at once"
[00:28] lol
[00:28] I have trouble wrapping my mind around any legitimate technical reason for that.
[00:28] every switch i researched had that problem. it was like, 4K VLANs supported, but only 1K at the same time.
[00:28] * brycec suddenly wonders about his own switches
[00:29] the cheap Dell PowerConnects: 1K VLANs, 250 active
[00:29] lame
[00:29] That's really pathetic
[00:29] yup
[00:30] oh shit, it's more common than I'd realized
[00:30] I have a Netgear that supports "up to 64"
[00:31] * brycec may just die laughing
[00:31] yeah it's really common
[00:32] Alright, out of 3 managed switches, two are limited to 512, and the 3rd limited to 64. Good thing I don't need much.
[00:33] I have a new appreciation for up_the_irons' switch shopping
[00:33] LOL
[00:35] The worst however is a switch in a wifi AP that's been hardcoded to just 1 VLAN. Took forever to realize that and figure out why the guest SSID wasn't working right.
[00:35] No docs on the subject (and why would there be? it's just some home wifi AP), just a total mystery why I didn't see vlan-tagged traffic when tcpdumping on it
[00:37] By my estimate, I pulled 1.2TB from mirrors, hooray for internal bandwidth.
[00:38] LOL
[00:38] http://i.imgur.com/0MXJJa6.png
[00:38] brycec: thank you for "exercising" my network, sometimes it needs a good push ;)
[00:39] :D
[00:39] And now we've learned more about it, and what happens when you load down kvr02
[00:55] yup :)
[00:55] time to get off this computer screen for a while, bbl
[08:16] iirc, the EX3300s support 4096 active vlans
[08:16] and 8000 routes (since it's a layer 3 switch)
[08:17] but they're also roughly 10x the cost of pre-owned Extreme gear, and they're not purple
[08:18] the whole range of 3300s only supports 8000 routes? hm
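(For the guest-SSID VLAN troubleshooting brycec mentions earlier in the log, a quick sanity check for 802.1Q-tagged frames looks roughly like this; the interface name eth0 is an assumption:)

# Print link-level headers, skip name resolution, and match only VLAN-tagged frames
tcpdump -e -n -i eth0 vlan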
[08:19] the model i looked at did
[08:19] 8000 in hw
[08:20] but even if it falls back to switching on the CPU, it's not terrible like when you drop stuff from CEF on Cisco gear
[08:28] haha, my graph looks fine now. started looking okay right around 0100 MDT
[08:56] *** NiTeMaRe has quit IRC (*.net *.split)
[09:08] *** NiTeMaRe has joined #arpnetworks
[11:31] *** pyvpx has quit IRC (Remote host closed the connection)
[18:34] up_the_irons: I was caught off guard when you revealed that arpnetworks is on a VPS (I guess a VPS is OK for a website), but the IPv6 router - well, I mean, I knew it was a software BSD router, but one run in a VM? That really surprised me.
[18:40] Heh, I've known that for years!
[18:41] why is it surprising to use VMs for that?
[19:04] RandalSchwartz: I thought routing is something that needs near real-time CPU access to enable low-latency routing.
[19:05] VPS / VMs would add a delay, increasing latency
[19:06] besides, it is routing IPv6 traffic for ALL of ARP Networks and its customers, if I am not mistaken
[19:09] *** carvite has quit IRC (Quit: Lost terminal)
[19:10] *** carvite has joined #arpnetworks
[19:33] mnathani - not if you have something like em0
[19:33] which is a virtualized interface
[19:33] very little emulation :)
[19:33] Oh wait... I'm thinking of virtio
[19:34] does arpnetworks support virtio?
[19:34] if so, I wonder why we end up with em devices
[19:36] Oh. first in FreeBSD 9
[19:36] not there yet. :)
[19:36] maybe it gets better later :)
[20:09] *** m0unds has quit IRC (Ping timeout: 240 seconds)
[20:09] *** m0unds has joined #arpnetworks
[20:33] i've used a couple different VMs as IPv6 tunnel routers; not a ton of devices with v6 support, but still passing a fair amt of traffic
[23:07] arp's routing is being done by a VPS?
[23:53] v6.
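(A rough way to check, per the em0/virtio discussion above, whether a FreeBSD guest ended up with emulated em(4) NICs or paravirtualized virtio devices; exact device names and descriptions will vary by VPS:)

# List PCI devices with descriptions; virtio NICs typically show a "Virtio network" description
pciconf -lv | grep -iE -B 3 'virtio|ethernet'
# List the network interfaces the kernel actually attached (e.g. em0 vs vtnet0)
ifconfig -l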