I think this was the thing where the connection between ARP and his Comcast line was slow... over IPv6
we determined that it could have been HE
http://irclogger.arpnetworks.com/irclogger_log/arpnetworks?date=2014-12-11,Thu&sel=34#l30
^ that's the testing we did
ah i c
is that ntt->verizon ipv4 congestion still happening?
i wonder if he sends a lot of traffic to comcast
considering they both do a lot of ipv6
it may just be a congested link
which means it may not get fixed until next year
very little NTT -> Verizon congestion shows up on smokeping now
cool. it was kind of disturbing that happened for so long
That's what she said!!
yea
well not as disturbing as comcast..
it was replaced with Amazon EC2 -> Verizon congestion :P
err cogent
damnit, i got the two mixed up. heh.
amazon is lame :)
i used to have terrible speeds here from amazon east coast.
which seems to be where most stuff is
looks like the Verizon <-> Comcast congestion is gone too?
i have no way to test that
but it'd depend on city too i imagine
http://unixcube.org/who/acf/tmp/comcast-net.png
what's that
uhh an ad from Comcast
afaik it's legit
probably they want their TWC merger to go through...
congestion is such a complicated issue
netflix kind of highlighted it.
That's what she said!!
but it's happened many times over the years.
i kind of like the fact that cogent let it happen in a way. cos it brought it to attention more.
I think it was also a stupid thing for them to do
but at&t, verizon, comcast etc are all evil
also they still don't IPv6 peer with HE
it's fine as long as your isp isn't using cogent :)
I wonder how much of that congestion we had been seeing there was from Netflix
it's complicated. but after private peering, normal cogent stuff got better
also I believe NTT was a transit provider for Netflix
but who's to know how much the traffic hurt their network internally
Comcast's network?
i haven't been following US news, what's happening now with regards to government net neutrality?
nah cogent's network
they were doing qos
oh that's right
I think they were doing that just for peering though
not because of their backbone
the real solution is to run links at < 50% utilisation and upgrade them when they're nearing 50%
so that you can handle failures, and spikes.
That's what she said!!
right. but cogent was being their usual asshole self and didn't want to negotiate that
probably not both at once :)
well it should be funded by both sides equally
well, I guess a lot of people were in that boat this time
i think it's reasonable to not carry traffic a long distance.
the only way the net is going to work well is if large companies can't charge smaller companies to send them data.
or charge unreasonable fees at least.
otherwise only large companies can exist, and bandwidth prices go up, and bandwidth is seen as being in short supply and valuable.
which means conservation of bandwidth and a slowing economy.
that has only happened with eyeball networks so far
lots of competition for transit still
well verizon, at&t, comcast etc are all just as bad as each other.
yea, they're the bad ones
I don't know if it's completely unreasonable to request payment for traffic though
for carrying traffic sure
it follows the "sender pays" model we've always had
like if a user is in new york, and a sender is in los angeles, it's reasonable to expect them to carry the traffic to new york.
right
or to pay someone else to carry it to new york.
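(For anyone wanting a one-off look at this kind of peering congestion without a full smokeping setup, a minimal sketch is a long mtr report toward a host reached through the suspect handoff, run during peak hours; the target hostname below is just a placeholder, not a real test host.)

    # hedged sketch: sustained loss or rising latency that first appears at the
    # NTT -> Verizon handoff hop and persists to the destination is the same
    # congestion signature smokeping graphs over time
    mtr -4 --report --report-cycles 60 host.example.net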
but if a user is in east/south side of canada, it's not unreasonable to want to dump it in new york too and expect the isp to pick it up from there.
but that's where it gets tricky. how many places should you offload data.
like amazon don't pay to send data to new zealand, they offload it in the US...
interestingly enough, microsoft pay to send data to new zealand, and offload it in NZ..
and microsoft have free peering
so I suppose it depends on the source / destination of the traffic? billing would have to change a lot..
well basically senders should send data as close to the end user as possible in an ideal world.
but you can't expect them to be in every little city.
yea
but if you're doing more than 500 megabit in a city, it's not unreasonable to offload data there.
and to make that work well, you need to reduce the cost of mpls links, and remote data centre ports etc.
and i think that's where things are slowly moving towards.
so you think things could resolve themselves over time?
not necessarily. if you can charge $10,000 for a gigabit, or charge $0 for a gigabit, what would you rather?
right. but that's if the telecom monopolies control all the links
otherwise the pricing should become competitive
but if you want to send 16 megabit video to 1000 users, it's reasonable to have your own connection into that city.
so how to ensure people are being "reasonable"?
and it's up to you if you want to do it as 10 gigabit, or gigabit with smarts for uplink capacity to the city.
by encouraging "local peering"
isps like verizon/comcast should have to advertise ips within 100 miles or something. and provide free peering.
ah I see
distance is just the easiest metric.
i heard somewhere that cogent were being extra nasty about where they were offloading data
and i've heard of other companies doing similar.
"extra nasty"?
"unreasonable"
ah, so like they pick up the traffic from a customer in LAX, then offload it onto Verizon in LAX?
like if you want to dump lots of data, it's good to do it somewhere like dallas, california, new york, illinois, florida etc. where there are major interconnects already in place.
why is that nasty though?
That's what she said!!
it's not so reasonable to dump lots of traffic in kansas or such.
oh yes
because in smaller areas there's less infrastructure or fewer central points.
why would they want to do that though? I can't imagine it would be cheaper
kansas is a backup route anyway i think
because netflix can host their shit anywhere, but yeah it's not cheaper. it's just more rude.
so they're paying more to be rude
well i think it's more complicated than that I would imagine..
apparently President Obama wants to regulate consumer Internet access as a telecom service
that's the latest plan here
some target like 500 megabit is just a good way to make sure there are more interconnects.
but basically there should be shitloads of interconnects, and transit providers giving cheap transit to those locations.
what city are you in?
Orcutt, CA
tiny city between LAX and SFO
Verizon goes via LAX, Comcast goes via SJC
so santa maria is the nearest location?
yea
so santa maria looks like somewhere an interconnect should happen
how would that happen? there is no datacenter or anything...
hmm checking population
so yeah it's a bit over 100k
there is a private DC nearby in San Luis Obispo, CA
here it happens in telephone exchanges.
That's what she said!!
Verizon DSL traffic all gets tunneled to the router in SLO
but Verizon doesn't peer there
dsl is legacy
what happens when you get fibre
I think there aren't transit providers there
actually I think Verizon stopped rolling out fiber
too expensive, they didn't really care anyway
http://fibrebuild.fibrenet.it/en/fibre-net-a-successful-completion-for-the-restoration-of-santa-maria-assunta-abbey-in-san-gimignano/
oh wrong fibre :P
verizon should terminate ports closer :/
100,000 population is enough to terminate ports locally
change is probably very difficult for them
so yeah it means they need to provision bngs locally.
yeah. they're big, and have old things
I think they might do static routes all the way to LA
but yeah ok ideally they should be terminating on bngs closer.
where is SLO? bng?
pppoe termination. or ppp as well as dhcp etc.
40 miles from here
opposite direction of LA though
http://www.cisco.com/c/en/us/td/docs/routers/asr9000/software/asr9k_r4-3/bng/configuration/guide/b_bng_cg43xasr9k/b_bng_cg43asr9k_chapter_01.html
still not that bad.
ok that doesn't make it easier.
hmm 40 miles
i suppose that's not too bad then.
ok so say SLO should have interconnects then
yea they even have a DC, etc...
are there other providers other than verizon?
AT&T
so verizon and at&t should peer there too.
but I believe that datacenter leases lines to LAX and SJC
and traffic shouldn't have to go via sjc/lax
no real providers have presences there
oh also Comcast
yea. that would be good
and anyone else that wants to peer
I'd guess adding a ton of peering points like that would vastly increase their network complexity
and if netflix wants to send data to users it should host something there
like their own cache
possibly adding more points of failure?
it's not really more complex.
it's more things to go wrong, but also more redundancy I suppose
certainly Verizon would have to change a lot of their crap
if users stay connected and fibre is severed they can still talk to each other
but yeah this is ideal speaking rather than money speaking
chances are that gigabit peering would be fine for the location atm
but it'd be nicer to provision 10 gigabit switches, and get some provider to pay for them :/
err like cisco or juniper or something
but yeah even with isps like at&t, verizon etc, local traffic would probably be fine on gigabit atm
until users get faster connections
https://www.google.com/maps/@36.3807012,-119.2606125,7z/data=!4m2!6m1!1szKX6_3rmHouE.k3rceCRgcUQs
can you see that?
but as connections go up to gigabit etc, it'd be nice to be able to shift data quickly between providers.
that's my Comcast -> ARP path
hmm i don't see a path
you may have to do copy link url
it seemed to get rid of the end bits
hmm
and changes to maps.google.com
https://www.google.com/maps/d/edit?mid=zKX6_3rmHouE.k3rceCRgcUQs
that should work
you can see San Luis Obispo on there too
my Verizon traffic goes to there, then straight to LAX
but the latencies are about equal, because of the DSL FEC stuff
also Verizon DSL has huge queues
rtt goes up to 10 seconds when downloading a file
netflix's idea is that you host your own google caches.
err own netflix caches.
using your power etc?
it's better to have something that covers whoever wants to send you data
and they pay for transit to the location
we don't really have buffer bloat issues in this country, it's weird.
because a lot of people are running short queues, and it can impact international performance.
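(A generic sketch for seeing where a DSL session terminates, for anyone following along: the first layer-3 hop past the modem is normally the BNG, and its reverse DNS usually names the city, like the snloca hostname that comes up later.)

    # hedged example: look at the first few hops only; hop 1-2 on a
    # PPPoE/DSL line is typically the BNG, and its rDNS hints at where it sits
    traceroute -m 5 8.8.8.8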
I think google has done that with YouTube for a while
yeah that worked
is there no cable from santa maria to santa barbara to santa clarita etc.
I'm sure there is
maybe it's not Comcast's though
or their routing just sucks
there is a cable owned by Level3 that goes that way..
so yeah taking a wide stab in the dark, modesto, fresno, santa maria, santa barbara, santa clarita, lancaster etc should all have interconnections.
i wonder what lancaster's population is like
156,633 in 2010
so yeah, big enough that it certainly would be good
new zealand has actually got quite good peering in general.
and the US has quite bad peering :)
yea
although a friend of mine runs a wireless isp and there's no peering in his city.
but it's like 50k people or something
with some surrounding towns
hmm 74k
no public data centres etc, closest is telephone exchange
I think Verizon, AT&T, etc... make peering in telephone exchanges way too expensive
yeah. well those are already there.
there is a law that says ILECs have to rent you space in their COs
and that's where government could step in.
it's expensive here too. but your stuff has to be 48v powered
but people have connections in there anyway. that's not a bad thing.
and it's super expensive, and all your equipment has to be approved by them
they probably have dslams there anyway?
they do
so yeah, as well as a dslam, adding a bng there.
I got to visit a US CO once
those telephone switches are huge
apparently juniper are adding bng functionality to mx80s.
I wonder how much motivation the CLECs have to do any sort of network improvement
at least here it can take ages to get stuff permitted to be installed too.
I'd guess most of their revenue comes from cell backhaul, MPLS, etc...
*ILEC
hard to know.
that reminds me
when I was having that Verizon packet loss issue with the NTT peering
there was also internal Verizon packet loss
this country is surprising me with how much network improvement has been happening
apparently 200 megabit is the new target.
it wasn't that long ago that 20 megabit was rare.
Comcast is doing OK here
also Google is trying to compete
oh is it you i have been smokepinging for ages?
idk. my IP probably has changed :P
heh maybe
but it'll be someone else nearby :/
yea
I actually got Verizon to fix their things by calling them
hmm snloca
formerly my Verizon IP
cool. snloca == San Luis Obispo
yeah.
so yeah it was stable for me for ages :)
but i think my uplink was via verizon
oh it's via level3 now
so i think level3 are passing to verizon in los angeles
and that seems long enough distance wise that, if two people had fibre connecting locally, it would be faster.
err would be noticeably faster.
that dsl interleaving sucks :/
yea
I tried to get them to turn it off once.. they didn't
why wouldn't they?
"not supported"
lame
is it vdsl or adsl?
adsl
no excuses then :)
does vdsl have the same interleaving?
they don't offer vdsl here
and is it fixed delay, or depending on sync rate?
I'm close enough to the CO to be able to get it though
that sucks. i have vdsl.
I'm not sure about that
I've never been able to see my modem stats
but I can't remember it ever changing
but adsl here shifted from fixed interleaving delay to a kind of adaptive one
and if you reduced your snr margin you could reduce delay.
hmm interesting
oh you can't see your modem stats.
I think it's fixed here
i've been snr tweaking adsl for years.
most people here have a modem with integrated router.
I don't.
normal sync rates are around 16 to 18 megabit here
but with snr tweaking can be 21 to 23.
how do you accomplish that?
on broadcom chipsets you just telnet into modem and do "adsl configure --snr 65480" or such
if you have snr margin of 12, it'll go to like 3 or 4 or something.
or low normal numbers to do less than 6db diff
what? how do you adjust that in software?
it's like the lower the better
adjusting tx power?
unless you go to like 65000+ :)
it doesn't adjust power
like i have adsl as well as vdsl
http://pastebin.com/i8MqgwDF
so that training margin of 8 means it's less of a tweak than i used to do
http://pastebin.com/gzEaEvpp
and it ends up with sync of 22 megabit-ish.
oh I see
with no interleaving.
adjusting "training margin" enables faster line rates
what kind of latency do you get?
yeah. about 10 msec on adsl and 5 msec on vdsl
oh nice
looks like I get 30 ms with interleaving
i want lower :)
I get 8 ms on DOCSIS
i used to get 20 msec with interleaving.
docsis2?
i think a lot of that 8 msec is getting to los angeles or san jose
no
it's the jitter that hurts with docsis though
just to the local node
that seems high.
ummm...
also my vdsl is screwed atm because i have bad wiring
"Cable Modem Status Offline"
and it added 8 msec interleaving downstream :(
after fixing cable.
looks like DOCSIS 3
docsis3 is actually a lot better than docsis2
1. c-67-180-12-1.hsd1.ca.comcast.ne 0.0% 10 9.0 10.5 7.9 22.7 4.8
7.9 ms best
you have two lines?
Verizon and Comcast
my Verizon line is cheap, and bundled with phone
but it sucks
so it's like backup?
yea
I don't think I've ever seen it completely fail
I've seen the Comcast one fail several times actually
heh
there's only one cable network in new zealand, and it used to fail heaps.
but have much lower delay than adsl.
until the interleaving stuff got sorted.
that's how it is here now
then it was kind of worse to my mind
but it's not *too* much failure.. only like four times a year, for a couple of hours :P
That's what she said!!
then more stuff shifted to docsis 3
and prices came down
and i think it's ok now.
they also had nasty transparent proxies that would severely limit download speeds.
like 100 megabit connections that would do 300k/sec to the UK
oh super
yeah i did some tests with iperf for someone.
err with someone
and udp was fine.
Comcast tried injecting TCP FINs for BitTorrent connections once
people got really pissed
but tcp/ip was shit.
but http was even worse.
i think that was partially queuing issues
i care more than i should about such things.
but on connections with short queues, often artificially capping the speed you send at them will speed up downloads.
especially on rate limited faster connections.
I got that working on my Verizon DSL line actually
like if you have 200 megabit connection sold at 30 megabit etc.
well that was to work around delays?
work around the huge queues yeah.
here queues are less than 20 msec normally
like uhh axel -a to local server jumps it up from 13.6 msec to 26.7 msec
not too bad
and 10 to 28 on adsl.
actually adsl was sometimes a bit lower than that too
yeah it's not too bad
except that it can stall transfers and give loss easily
up_the_irons: https://github.com/iojs/build/issues/1#issuecomment-67944799 <-- re the iojs build stuff we talked about a while back
dunno if that's enough to justify it for ya :D
up_the_irons: Did you do something? Because now throughput is just as fast on v6/v4. Or maybe it's a time of day thing?
mhoran: i wouldn't be surprised if it's time of day related.
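(To unpack the "adsl configure --snr" trick from earlier a little: on Broadcom firmwares the value appears to be read as a signed 16-bit number, so values just under 65536 act as negative offsets that lower the target SNR margin, and the line retrains at a higher sync rate with less noise headroom. A minimal sketch of such a session follows; the modem address is an assumption, exact command names vary by firmware, and too aggressive a value can leave the line retraining in a loop.)

    telnet 192.168.1.1           # assumes the modem's CLI lives at this address
    adsl info --show             # note the current SNR margin and sync rate
    adsl configure --snr 65480   # signed 16-bit: 65480 - 65536 = -56, i.e. a
                                 # negative offset to the target margin
                                 # (the dB mapping varies by firmware)
    adsl info --show             # re-check margin/sync after the line retrains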
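(And the "artificially capping the speed you send" workaround for oversized ISP queues can be a one-line shaper on a Linux router; a hedged sketch, assuming eth1 faces the LAN and a roughly 3 megabit DSL downstream, shaped to about 90% of sync so the ISP's buffer never fills. The rates are placeholders.)

    # token bucket filter on the LAN-facing interface keeps the bottleneck,
    # and therefore the queue, on your side instead of in the ISP's buffer
    tc qdisc add dev eth1 root tbf rate 2600kbit burst 16kb latency 50ms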
Slower now, but not as slow as last night / last time we looked into it.
well it is xmas.
i imagine less business users using the net at least.
i dunno if it was business or residential peak
setup smokeping with curl :)
Where in the world is it already xmas???
11:59:49 mercutio | well it is xmas.
it's "xmas period"
it's xmas eve here?
lots of people don't work today here at least.
it's just another work day for me. stonehenge doesn't pay sick days or holidays. :)
or vacation
The life of the self-employed, eh?
are you working christmas day too?
http://updog.pw/
well, part of it, yes. as an atheist, christmas doesn't really mean much except "lots of things closed needlessly" :)
I don't see why, as an atheist, you'd eschew a holiday and a joyful time.
Sure, it has "Christ" in the name, but in this day and age that's pretty much the extent of religious value in Xmas
Why not celebrate yule and the winter harvest instead? :)
Maybe festivus!
Oh crap, that's today!
And you got me nothing? You scoundrel!
Time for the airing of grievances!
which you wear on your feets of strength? :)
lol
I'm not touching that with an aluminum pole. :)
twss
Okay! twss! 'I'm not touching that with an aluminum pole. :)'
heh
usually it overtags
I wonder why it undertagged that one
What's really odd is that training the filter actually lowered the bayesian score. (0.92451488735822 -> 0.9150846827481)
even weirder
mhoran: i didn't do anything
qbit: interesting thread
"also the build bot server identifiers have the provider names in them and they get referenced and seen quite a lot"
qbit: ^^ what does he mean by that? I mean, who sees it?
qbit: remind me of the machine size you would need
nodejs nerds
dual cpu with 2g ram should work
i can probably swing that, but after the holidays. if it uses too much proc / IO / disk, we'll have to talk about a different solution (maybe cheap dedi)
lol
Anyone that looks at the nodejs version, I guess...
But it's definitely less visible than, say, the hostname of the build machine in the system uname
^ which gets printed on boot, on login, etc
yeah
(In fact, it doesn't even print for node --version, so I have no idea where that "glory" is supposed to be seen)
But rest assured up_the_irons, you'll be FAMOUS
LOL
maybe the build server status is reported in some IRC / HipChat / Slack channel
At my last job, we were very, very frequently told by [potential] customers that they were going to buy a hundred units a month (etc), and unsurprisingly they never really did.
Consequently, I have become extraordinarily jaded when it comes to lines like that.
I wonder if the FreeBSD people would let you donate the use of a dedi as a build machine
like brycec was saying, those hostnames really do appear places
I'm under the impression that OpenBSD and FreeBSD are very well-equipped when it comes to the x86 architectures.
the netbsd automatic build site is hosted at Columbia U and has really crappy connectivity
at least NetBSD cross compiles everything. so it's all done on amd64 :P
What you see as a strength, the OpenBSD project sees as a weakness re:cross-compiling.
if an architecture cannot self-host, it's unsupported.
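(On the "setup smokeping with curl" quip above: SmokePing does ship a Curl probe, so a minimal config fragment would look something like the following; the binary path and target host are placeholders for whatever endpoint you actually want to watch.)

    *** Probes ***
    + Curl
    binary = /usr/bin/curl

    *** Targets ***
    probe = Curl
    + http-latency
    menu = HTTP latency
    title = HTTP fetch latency
    host = www.example.com
    urlformat = http://%host%/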
And I tend to agree
the problem is that everything takes forever to compile on vax
no argument there :p
But at least you don't have compiler issues
http://www.openbsd.org/images/rack2009.jpg
*all* the architectures (as of 2009)
(things have changed since then, new ones, some retired, etc)
Also, that image is missing zaurus
lol
oh shit, nevermind
it's there
just tiny
mtr -4 vax.openbsd.org
I think I can ping the vax?
HA!
There are two architectures I don't see in the photo (that would have been supported in 2009) - mac68k and m88k
maybe they use other 68k machines to build them?
What other 68k machines are in that photo?
(To be clear, I'm simply disproving "17:26:55 acf_ | *all* the architectures")
oh wow you're right
I wonder if they're in some other place
or if they build them on qemu or something
oh wow you're right
twss
Okay! twss! 'oh wow you're right'
If they can only build on qemu, then there's no reason to support the arch :p
I suspect those build machines are simply "elsewhere"
openbsd supports 68k?
my amiga is so slow with unix
it reminds me of sun3s
i can't actually remember what i stuck on it, it may have been old openbsd.
openbsd doesn't support 68k anymore afaik
at least not amiga
oh mac68k.