Who | What | When |
---|---|---|
acf_ | yep. back down to around zero now
interesting, the impact it had on the whole VPS network | [00:05] |
up_the_irons | brycec: lol stop smashing things | [00:07] |
brycec | up_the_irons: I contend that your things were already smashed | [00:07] |
acf_ | above 10mbps yet? ^_^ | [00:07] |
brycec | Continuous 10MB/s of IPv6 traffic shouldn't have that much affect, esp. on v4 traffic | [00:07] |
up_the_irons | brycec: ah so that was from pummeling my poor IPv6 OpenBSD router? ;) | [00:07] |
brycec | "curl -6" So, yes. | [00:08] |
acf_ | is there a single 100mbit/s pipe serving all of the VPS? | [00:08] |
up_the_irons | no, 100 mbps per host | [00:08] |
brycec | I'm wondering if mirrors.arp is on the same host as the v6 router and the website
Wait, you said mirrors. wasn't a vps | [00:09] |
up_the_irons | yeah, not a vps | [00:09] |
brycec | so just v6 router and website - same host? | [00:09] |
up_the_irons | but the vps host that hosts the IPv6 OpenBSD router *will* be affected
and yeah, v6 router and website are on the same host | [00:10] |
brycec | So if everybody was polling against the website's vps, and the host was burdened with my traffic... I guess that explains why even v4 traffic appeared to be affected.
Sorry guys :( | [00:11] |
up_the_irons | yeah that would make sense
everyone on kvr02 would see the lag | [00:11] |
brycec | I'm still reeling from the revelation that the v6 router was a VPS
Always assumed that, like the other routers, it was a physical box | [00:11] |
up_the_irons | yeah, it still "does the job" | [00:12] |
acf_ | is the 100mbit/s limit imposed to prevent the VM hosts from fighting for bandwidth or something?
or they just have 100mbit/s NICs? | [00:12] |
up_the_irons | acf_: no, it's just that over 5 years ago when i designed everything, all gigabit line cards were really expensive ;) | [00:12] |
acf_ | ah, makes sense | [00:13] |
brycec | So what's the excuse now, knowing that I have a second NIC with GbE intra-network (backup host)?
[on my vps] :P | [00:13] |
up_the_irons | i imagine your traffic wasn't going over that link | [00:14] |
brycec | Correct
My point being that the host machines have GbE links now, ARP has >=GbE uplink. Why don't VPS now have GbE? | [00:14] |
up_the_irons | oh you mean the excuse for not have every host on all gigabit? | [00:14] |
brycec | ;D
(It's not an issue, just wondering) (If it was an issue, I'd throw money at Metal, or at least The American plan and bitch^Wrequest) | [00:15] |
up_the_irons | so, it's like this, all kvr hosts have 3 active links:
1. Primary uplink, 100 Mbps, to s1.lax
2. "Backplane" network, not primary, 2nd tier, "Can go down with sirens screaming", 1000 Mbps, to s6.lax
3. Secondary uplink, 1000 Mbps, to s7.lax (THIS IS NEW) | [00:16] |
brycec | (yesterday alone, my eth0 rx was nearly 900GB, most of which from mirrors.arp) | [00:17] |
up_the_irons | technically, I can switch any vps to use #3, but it simply hasn't been a big priority. and as you guys probably know, s7.lax has had more problems than I'd like, so I refrained from switching people over.
s/Can go down with/Can go down without/ | [00:17] |
BryceBot | <up_the_irons> 2. "Backplane" network, not primary, 2nd tier, "Can go down without sirens screaming", 1000 Mbps, to s6.lax | [00:18] |
brycec | (makes more sense, thx) | [00:18] |
up_the_irons | np | [00:18] |
acf_ | so, as a Metal customer, I have gigabit to s1.lax?
through other layer 2 switches I assume | [00:18] |
up_the_irons | s6.lax, for those wondering, is an Extreme Networks 48-port all gigabit switch, L2 only. If you're familiar with the purple packet eaters, you know why it is 2nd tier.
acf_: yes, Metal customers get 1 Gbps to s1.lax, through a pair of Foundry switches (s8.lax and s9.lax). now everyone knows the entire network topology ;) | [00:19] |
acf_ | it's great. I learn a little bit more about your network every day :) | [00:20] |
brycec | So s6 is purely internal networking | [00:20] |
up_the_irons | acf_: :) | [00:20] |
acf_ | thanks for the info btw | [00:20] |
up_the_irons | brycec: yes, purely internal
acf_: np and i must say, that little Extreme switch has been a trooper. i wonder what the uptime is... up_the_irons checks | [00:20] |
brycec | Yes but is it on a rotating platform? http://www.skylineconst.com/technology/ | [00:21] |
up_the_irons | i don't even know how to check on the Extreme... | [00:21] |
brycec | If ARP had office space in a colo...
up_the_irons "show switch" | [00:22] |
up_the_irons | danke | [00:23] |
brycec | brycec just googles... http://extrcdn.extremenetworks.com/wp-content/uploads/2014/01/EAS_100-24t_CLI_V1.pdf | [00:23] |
up_the_irons | System UpTime: 1172 days 5 hours 7 minutes 8 seconds
that actually seems a little low ;) I wonder why i rebooted it ~3 years ago... | [00:23] |
brycec | up_the_irons: Need a spare for $30? http://www.ebay.com/itm/Extreme-Network-48si-48-port-Network-Switch-Awesome-Purple-/181471260591
(these things are littering eBay for 30-150) | [00:24] |
up_the_irons | i think those are 100 mbit | [00:25] |
brycec | touche
the gigabits are around 100-150 | [00:25] |
up_the_irons | but i bet the X350-48t (s6.lax) isn't much more expensive ;) | [00:26] |
brycec | $75 http://www.ebay.com/itm/EXTREME-NETWORKS-16202-SUMMIT-x350-48t-48-PORT-10-100-GIGABIT-NETWORK-SWITCH-/191283473576 | [00:26] |
up_the_irons | ya know what is great about them? they can do 4K active VLANs, which *no* other switch could do anywhere near its price range (not even Juniper) | [00:27] |
brycec | Nice. A pretty embarrassing shortcut, imo
"Oh yeah, we support 4k VLANs, but not all at once" | [00:27] |
up_the_irons | lol | [00:28] |
brycec | I have trouble wrapping my mind around any legitimate technical reason for that. | [00:28] |
up_the_irons | every switch i researched had that problem. it was like, 4K VLANs supported, but only 1K at the same time. | [00:28] |
brycec | brycec suddenly wonders about his own switches | [00:28] |
up_the_irons | the cheap Dell PowerConnects: 1K VLANs, 250 active
lame | [00:29] |
brycec | That's really pathetic | [00:29] |
up_the_irons | yup | [00:29] |
brycec | oh shit it's more common that I'd realized
I have a Netgear that supports "up to 64" brycec may just die laughing | [00:30] |
up_the_irons | yeah it's really common | [00:31] |
brycec | Alright, out of 3 managed switches, two are limited to 512, and the 3rd limited to 64. Good thing I don't need much.
I have a new appreciation for up_the_irons' switch shopping | [00:32] |
up_the_irons | LOL | [00:33] |
brycec | The worst however is a switch in a wifi ap that's been hardcoded to just 1 VLAN. Took forever to realize that and figure out why the guest SSID wasn't working right.
No docs on the subject (and why would there be? it's just some home wifi ap), just a total mystery why I didn't see vlan-tagged traffic when tcpdumping on it By my estimate, I pulled 1.2TB from mirrors, hooray for internal bandwidth. | [00:35] |
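(For reference, a tcpdump invocation along these lines is one way to check whether 802.1Q-tagged frames are reaching an interface at all; eth0 is a placeholder for whatever interface the AP exposes:)

```sh
# -e prints link-level headers (including the 802.1Q tag), -n skips DNS,
# and the "vlan" primitive matches only tagged frames. Silence here while
# tagged traffic should be flowing suggests the tags are being stripped.
tcpdump -e -n -i eth0 vlan
```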
up_the_irons | LOL | [00:38] |
brycec | http://i.imgur.com/0MXJJa6.png | [00:38] |
up_the_irons | brycec: thank you for "exercising" my network, sometimes it needs a good push ;) | [00:38] |
brycec | :D
And now we've learned more about it, and what happens when you load down kvr02 | [00:39] |
.... (idle for 16mn) | ||
up_the_irons | yup:)
time to get off this computer screen for a while, bbl | [00:55] |
......................................................................................... (idle for 7h21mn) | ||
m0unds | iirc, the ex3300s support 4096 active vlans
and 8000 routes (since it's a layer 3 switch) but they're also roughly 10x the cost of pre-owned extreme gear, and they're not purple | [08:16] |
pyvpx | the whole range of 3300s only support 8000 routes? hm | [08:18] |
m0unds | the model i looked at did
8000 in hw, but even if it goes to switching on the cpu, it's not terrible the way it is when you drop stuff from cef on cisco gear | [08:19] |
haha, my graph looks fine now. started looking okay right around 0100 mdt | [08:28] | |
...... (idle for 28mn) | ||
*** | NiTeMaRe has quit IRC (*.net *.split) | [08:56] |
NiTeMaRe has joined #arpnetworks | [09:08] | |
............................. (idle for 2h23mn) | ||
pyvpx has quit IRC (Remote host closed the connection) | [11:31] | |
..................................................................................... (idle for 7h3mn) | ||
mnathani | up_the_irons: I was caught off guard when you revealed that arpnetworks is on a VPS, (I guess VPS is ok for a website) but the IPv6 Router - well I mean I knew it was a software BSD router, but one run in a VM? That really surprised me. | [18:34] |
mhoran | Heh, I've known that for years! | [18:40] |
RandalSchwartz | why is it surprising to use VMs for that? | [18:41] |
..... (idle for 23mn) | ||
mnathani | RandalSchwartz: I thought routing is something that needs near real-time CPU access to enable low Latency routing.
VPS / VMs would add a delay, increasing latency besides it is routing IPv6 traffic for ALL of Arpnetworks and its customers if I am not mistaken | [19:04] |
*** | carvite has quit IRC (Quit: Lost terminal)
carvite has joined #arpnetworks | [19:09] |
..... (idle for 23mn) | ||
RandalSchwartz | mnathani - not if you have something like em0
which is a virtualized interface very little emulation :) Oh wait... I'm thinking of virtio does arpnetworks support virtio? if so, I wonder why we end up with em devices Oh. first in freebsd 9 not there yet. :) maybe it gets better later :) | [19:33] |
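(For context: FreeBSD's virtio drivers, vtnet(4) for the NIC and virtio_blk for disk, first shipped in base around the 9.x era; before that, KVM guests fell back to the emulated Intel em(4) device. A sketch of how they were typically enabled as modules back then; on later releases they're built into GENERIC, so none of this applies, and this is an assumed setup, not necessarily how ARP's images were configured:)

```sh
# /boot/loader.conf on a FreeBSD 9.x KVM guest
virtio_load="YES"
virtio_pci_load="YES"
virtio_blk_load="YES"      # paravirtual disk instead of emulated ATA
if_vtnet_load="YES"        # paravirtual NIC instead of emulated em(4)
```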
....... (idle for 33mn) | ||
*** | m0unds has quit IRC (Ping timeout: 240 seconds)
m0unds has joined #arpnetworks | [20:09] |
..... (idle for 24mn) | ||
m0unds | i've used a couple different VMs as IPv6 tunnel routers; not a ton of devices with v6 support, but still passing a fair amt of traffic | [20:33] |
............................... (idle for 2h34mn) | ||
mus1cbox | arp's routing is being done by a VPS? | [23:07] |
.......... (idle for 46mn) | ||
mhoran | v6. | [23:53] |