acf_: over IPv6
we determined that it could have been HE
http://irclogger.arpnetworks.com/irclogger_log/arpnetworks?date=2014-12-11,Thu&sel=34#l30
^ that's the testing we did ***: kevr_ has quit IRC (Quit: ZNC - http://znc.in)
kevr has joined #arpnetworks up_the_irons: ah i c mercutio: is that ntt->verizon ipv4 congestion still happening?
i wonder if he sends a lot of traffic to comcast
considering they both do a lot of ipv6
it may just be a congested link
which means it may not get fixed until next year acf_: very little NTT -> Verizon congestion shows up on smokeping now mercutio: cool.
it was kind of disturbing that happened for so long BryceBot: That's what she said!! acf_: yea mercutio: well not as disturbing as comcast.. acf_: it was replaced with Amazon EC2 -> Verizon congestion :P mercutio: err cogent
damnit, i got the two mixed up.
heh.
amazon is lame :)
i used to have terrible speeds here from amazon east coast.
which seems to be where most stuff is acf_: looks like the Verizon <-> Comcast congestion is gone too? mercutio: i have no way to test that
but it'd depend on city too i imagine acf_: http://unixcube.org/who/acf/tmp/comcast-net.png mercutio: what's that
uhh acf_: an ad from Comcast
afaik it's legit
probably they want their TWC merger to go through... mercutio: congestion is such a complicated issue
netflix kind of highlighted it. BryceBot: That's what she said!! mercutio: but it's happened many times over the years.
i kind of like the fact that cogent let it happen in a way.
cos it brought it to attention more. acf_: I think it was also a stupid thing for them to do mercutio: but at&t, verizon, comcast etc are all evil acf_: also they still don't IPv6 peer with HE mercutio: it's fine as long as your isp isn't using cogent :) acf_: I wonder how much of that congestion we had been seeing there was from Netflix mercutio: it's complicated.
but after private peering normal cogent stuff got better acf_: also I believe NTT was a transit provider for Netflix mercutio: but who's to know how much the traffic hurt their network internally acf_: Comcast's network? mercutio: i haven't been following US news, what's happening now with regards to government net neutrality?
nah cogent's network
they were doing qos acf_: oh that's right
I think they were doing that just for peering though
not because of their backbone mercutio: the real solution is to run links at < 50% utilisation
and upgrade them when they're nearing 50%
so that you can handle failures, and spikes. BryceBot: That's what she said!! acf_: right. but cogent was being their usual asshole self and didn't want to negotiate that mercutio: probably not both at once :)
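A quick illustration of that "< 50% utilisation" planning rule (my own sketch with made-up numbers, not anything from the channel): flag a link for upgrade once its peak load nears half of capacity, so there is still headroom for failures and spikes.

```python
def needs_upgrade(peak_mbps: float, capacity_mbps: float, threshold: float = 0.5) -> bool:
    """True once peak utilisation reaches the planning threshold (default 50%)."""
    return peak_mbps / capacity_mbps >= threshold

# e.g. a 10 Gbit/s link peaking at 5.3 Gbit/s is past 50% and due for an upgrade,
# while one peaking at 3 Gbit/s still has headroom for failures and spikes.
print(needs_upgrade(5300, 10_000))   # True
print(needs_upgrade(3000, 10_000))   # False
```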
well it should be funded by both sides equally acf_: well, I guess a lot of people were in that boat this time mercutio: i think it's reasonable to not carry traffic a long distance.
the only way the net is going to work well is if large companies can't charge smaller companies to send them data.
or charge unreasonable fees at least.
otherwise only large companies can exist, and bandwidth prices go up, and bandwidth is seen as being in short supply and valuable.
which means conservation of bandwidth and slowing economy. acf_: that has only happened with eyeball networks so far
lots of competition for transit still mercutio: well verizon, at&t, comcast etc are all just as bad as each other. acf_: yea, they're the bad ones
I don't know if it's completely unreasonable to request payment for traffic though mercutio: for carrying traffic sure acf_: it follows the "sender pays" model we've always had mercutio: like if a user is in new york, and a sender is in los angeles, it's reasonable to expect them to carry the traffic to new york. acf_: right mercutio: or to pay someone else to carry it to new york.
but if a user is in east/south side of canada, it's not unreasonable to want to dump it in new york too
and expect the isp to pick it up from there.
but that's where it gets tricky.
how many places should you offload data.
like amazon don't pay to send data to new zealand, they offload it in the US...
interestingly enough, microsoft pay to send data to new zealand, and offload it in NZ..
and microsoft have free peering acf_: so I suppose it depends on the source / destination of the traffic?
billing would have to change a lot.. mercutio: well basically senders should send data as close to the end user as possible in an ideal world.
but you can't expect them to be in every little city. acf_: yea mercutio: but if you're doing more than 500 megabit in a city, it's not unreasonable to offload data there.
and to make that work well, you need to reduce the cost of mpls links, and remote data centre ports etc.
and i think that's where things are slowly moving towards. acf_: so you think things could resolve themselves over time? mercutio: not necessarily.
if you can charge $10,000 for a gigabit, or charge $0 for a gigabit, what would you rather? acf_: right. but that's if the telecom monopolies control all the links
otherwise the pricing should become competitive mercutio: but if you want to send 16 megabit video to 1000 users, it's reasonable to have your own connection into that city. acf_: so how to ensure people are being "reasonable"? mercutio: and it's up to you if you want to do it as 10 gigabit, or gigabit with smarts for uplink capacity to the city.
by encouraging "local peering"
isp's like verizon/comcast should have to advertise ip's within 100 miles or something.
and provide free peering. acf_: ah I see mercutio: distance is just the easiest metric.
i heard somewhere that cogent were being extra nasty about where they were offloading data
and i've heard of other companies doing similar. acf_: "extra nasty"? mercutio: "unreasonable" acf_: ah, so like they pick up the traffic from a customer in LAX, then offload it onto Verizon in LAX? mercutio: like if you want to dump lots of data, it's good to do it somewhere like dallas, california, new york, illinois, florida etc.
where there are major interconnects already in place. acf_: why is that nasty though? BryceBot: That's what she said!! mercutio: it's not so reasonable to dump lots of traffic in kansas or such. acf_: oh yes mercutio: because in smaller areas there's less infrastructure
or less central points. acf_: why would they want to do that though?
I can't imagine it would be cheaper mercutio: kansas is a backup route anyway i think
because netflix can host their shit anywhere.
but yeah it's not cheaper.
it's just more rude. acf_: so they're paying more to be rude mercutio: well i think it's more complicated than that acf_: I would imagine..
apparently President Obama wants to regulate consumer Internet access as a telecom service
that's the latest plan here mercutio: some target like 500 megabit is just a good way to make sure there are more interconnects.
but basically there should be shit loads of interconnects, and transit providers giving cheap transit to those locations.
what city are you in? acf_: Orcutt, CA
tiny city between LAX and SFO
Verizon goes via LAX, Comcast goes via SJC mercutio: so santa maria is the nearest location? acf_: yea mercutio: so santa maria looks like somewhere an interconnect should happen acf_: how would that happen?
there is no datacenter or anything... mercutio: hmm checking population
so yeah it's a bit over 100k acf_: there is a private DC nearby in San Luis Obispo, CA mercutio: here it happens in telephone exchanges. BryceBot: That's what she said!! acf_: Verizon DSL traffic all gets tunneled to the router in SLO
but Verizon doesn't peer there mercutio: dsl is legacy
what happens when you get fibre acf_: I think there aren't transit providers there actually
I think Verizon stopped rolling out fiber
too expensive, they didn't really care anyway mercutio: http://fibrebuild.fibrenet.it/en/fibre-net-a-successful-completion-for-the-restoration-of-santa-maria-assunta-abbey-in-san-gimignano/
oh
wrong fibre acf_: :P mercutio: verizon should terminate ports closer :/
100,000 population is enough to terminate ports locally acf_: change is probably very difficult for them mercutio: so yeah it means they need to provision bng's locally.
yeah. acf_: they're big, and have old things
I think they might do static routes all the way to LA mercutio: but yeah ok ideally they should be terminating on bng's closer.
where is SLO? acf_: bng? mercutio: pppoe termination
or ppp
as well as dhcp etc. acf_: 40 miles from here
opposite direction of LA though mercutio: http://www.cisco.com/c/en/us/td/docs/routers/asr9000/software/asr9k_r4-3/bng/configuration/guide/b_bng_cg43xasr9k/b_bng_cg43asr9k_chapter_01.html
still not that bad.
ok that doesn't make it easier.
hmm
40 miles i suppose that's not too bad then.
ok so say SLO should have interconnects then acf_: yea
they even have a DC, etc... mercutio: is there other providers other than verizon? acf_: AT&T mercutio: so verizon and at&t should peer there too. acf_: but I believe that datacenter leases lines to LAX and SJC mercutio: and traffic shouldn't have to go via sjc/lax acf_: no real providers have presences there
oh also Comcast
yea. that would be good mercutio: and anyone else that wants to peer acf_: I'd guess adding a ton of peering points like that would vastly increase their network complexity mercutio: and if netflix wants to send data to users it should host something there
like their own cache acf_: possibly adding more points of failure? mercutio: it's not really more complex.
it's more things to go wrong, but also more redundancy acf_: I suppose
certainly Verizon would have to change a lot of their crap mercutio: if users stay connected and fibre is severed they can still talk to each other
but yeah this is ideal speaking
rather than money speaking
chances are that gigabit peering would be fine for the location atm
but it'd be nicer to provision 10 gigabit switches, and get some provider to pay for them :/
err like cisco or juniper or something
but yeah even with isp's like at&t, verizon etc, local traffic would probably be fine on gigabit atm
until users get faster connections acf_: https://www.google.com/maps/@36.3807012,-119.2606125,7z/data=!4m2!6m1!1szKX6_3rmHouE.k3rceCRgcUQs
can you see that? mercutio: but as connections go up to gigabit etc, it'd be nice to be able to shift data quickly between providers. acf_: that's my Comcast -> ARP path mercutio: hmm i don't see a path
you may have to do copy link url
it seemed to get rid of the end bits hmm
and changes to maps.google.com acf_: https://www.google.com/maps/d/edit?mid=zKX6_3rmHouE.k3rceCRgcUQs
that should work
you can see San Luis Obispo on there too
my Verizon traffic goes to there, then straight to LAX
but the latencies are about equal, because of the DSL FEC stuff
also Verizon DSL has huge queues
rtt goes up to 10 seconds when downloading a file mercutio: netflix's idea is that you host your own google caches.
err own netflix caches.
using your power etc?
it's better to have something that covers whoever wants to send you data
and they pay for transit to the location
we don't really have buffer bloat issues in this country, it's weird.
because a lot of people are running short queues, and it can impact international performance. acf_: I think google has done that with YouTube for a while mercutio: yeah that worked
is there no cable from santa maria to santa barbara to santa clarita etc. acf_: I'm sure there is
maybe it's not Comcast's though
or their routing just sucks
there is a cable owned by Level3 that goes that way.. mercutio: so yeah taking a wide stab in the dark, modesto, fresno, santa maria, santa barbara, santa clarita, lancaster etc should all have interconnections.
i wonder what lancaster's population is like
156,633
in 2010
so yeah, big enough acf_: that certainly would be good mercutio: new zealand has actually got quite good peering in general.
and the US has quite bad peering :) acf_: yea mercutio: although a friend of mine runs a wireless isp and there's no peering in his vity.
city.
but it's like 50k people or something
with some surrounding towns
hmm 74k
no public data centres etc, closest is telephone exchange acf_: I think Verizon, AT&T, etc... make peering in telephone exchanges way too expensive mercutio: yeah.
well those are already there. acf_: there is a law that says CLECs have to rent you space in their COs mercutio: and that's where government could step in.
it's expensive here too. acf_: but your stuff has to be 48v powered mercutio: but people have connections in there anyway.
that's not a bad thing. acf_: and it's super expensive, and all your equipment has to be approved by them mercutio: they probably have dslams there anyway? acf_: they do mercutio: so yeah, as well as a dslam, adding a bng there. acf_: I got to visit a US CO once
those telephone switches are huge mercutio: apparently juniper are adding bng functionality to mx80s. acf_: I wonder how much motivation the CLECs have to do any sort of network improvement mercutio: at least here it can take ages to get stuff permitted to be installed too. acf_: I'd guess most of their revenue comes from cell backhaul, MPLS, etc...
*ILEC mercutio: hard to know. acf_: that reminds me
when I was having that Verizon packet loss issue
with the NTT peering
there was also internal Verizon packet loss mercutio: this country is surprising me with how much network improvements have been happening
apparently 200 megabit is the new target.
it wasn't that long ago that 20 megabit was rare. acf_: Comcast is doing OK here
also Google is trying to compete mercutio: oh is it you i have been smokepinging for ages? acf_: idk. my IP probably has changed :P mercutio: heh
maybe
but it'll be someone else nearby :/ acf_: yea
I actually got Verizon to fix their things
by calling them mercutio: hmm snloca acf_: formerly my Verizon IP mercutio: cool. acf_: snloca == San Luis Obispo mercutio: yeah.
so yeah it was stable for me for ages :)
but i think my uplink was via verizon
oh it's via level3 now
so i think level3 are passing to verizon in los angeles
and that seems long enough distance wise, that if two people had fibre connecting locally it would be faster.
err would be noticeably faster.
that dsl interleaving sucks :/ acf_: yea
I tried to get them to turn it off once.. they didn't mercutio: why wouldn't they? acf_: "not supported" mercutio: lame
is it vdsl or adsl? acf_: adsl mercutio: no excuses then :)
does vdsl have the same interleaving? acf_: they don't offer vdsl here mercutio: and is it fixed delay, or depending on sync rate? acf_: I'm close enough to the CO to be able to get it though mercutio: that sucks.
i have vdsl. acf_: I'm not sure about that
I've never been able to see my modem stats
but I can't remember it ever changing mercutio: but adsl here shifted from fixed interleaving delay to a kind of adaptive one
and if you reduced your snr margin you could reduce delay. acf_: hmm interesting mercutio: oh you can't see your modem stats. acf_: I think it's fixed here mercutio: i've been snr tweaking adsl for years. acf_: most people here have a modem with integrated router. I don't mercutio: normal sync rates are around 16 to 18 megabit here
but with snr tweaking can be 21 to 23. acf_: how do you accomplish that? mercutio: on broadcom chipsets you just telnet into modem and do "adsl configure --snr 65480 or such"
if you have snr margin of 12
and it'll go to like 3 or 4 or something.
or low normal numbers to do less than 6db diff acf_: what? how do you adjust that in software? mercutio: it's like the lower the better acf_: adjusting tx power? mercutio: unless you go to like 65000+ :)
it doesn't adjust power
like i have adsl as well as vdsl
http://pastebin.com/i8MqgwDF
so that training margin of 8 means it's less of a tweak than i used to do
http://pastebin.com/gzEaEvpp
and it ends up with sync of 22 megabit'ish. acf_: oh I see mercutio: with no interleaving. acf_: adjusting "training margin" enables faster line rates
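For anyone wanting to reproduce the tweak mercutio describes, here is a minimal sketch of it: telnet into a Broadcom-based modem and issue the `adsl configure --snr` command quoted above. The modem address, credentials and prompt strings are assumptions, and the exact meaning of the --snr value varies by firmware, so treat this as illustrative only.

```python
import telnetlib  # deprecated in Python 3.11+, removed in 3.13; fine for a quick hack

MODEM = "192.168.1.1"                 # assumed modem address
USER, PASSWORD = b"admin", b"admin"   # assumed credentials

tn = telnetlib.Telnet(MODEM, 23, timeout=10)
tn.read_until(b"Login:")
tn.write(USER + b"\n")
tn.read_until(b"Password:")
tn.write(PASSWORD + b"\n")
tn.read_until(b">")
# value quoted from the conversation ("65480 or such"); effect is firmware-dependent
tn.write(b"adsl configure --snr 65480\n")
print(tn.read_until(b">", timeout=5).decode(errors="replace"))
tn.close()
```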
what kind of latency do you get? mercutio: yeh.
about 10 msec on adsl and 5 msec on vdsl acf_: oh nice
looks like I get 30 ms with interleaving mercutio: i want lower :) acf_: I get 8 ms on DOCSIS mercutio: i used to get 20 msec with interleaving.
docsis2?
i think a lot of that 8 msec is getting to los angeles or san jose acf_: no mercutio: it's the jitter that hurts with docsis though acf_: just to the local node mercutio: that seems high. acf_: ummm... mercutio: also my vdsl is screwed atm because i have bad wiring acf_: "Cable Modem Status Offline" mercutio: and it added 8 msec interleaving downstream :(
after fixing cable. acf_: looks like DOCSIS 3 mercutio: docsis3 is actually a lot better than docsis2 acf_: 1. c-67-180-12-1.hsd1.ca.comcast.ne 0.0% 10 9.0 10.5 7.9 22.7 4.8
7.9 ms best mercutio: you have two lines? acf_: Verizon and Comcast
my Verizon line is cheap, and bundled with phone
but it sucks mercutio: so it's like backup? acf_: yea
I don't think I've ever seen it completely fail
I've seen the Comcast one fail several times actually mercutio: heh
there's only one cable network in new zealand, and it used to fail heaps.
but have much lower delay than adsl.
until the interleaving stuff got sorted. acf_: that's how it is here now mercutio: then it was kind of worse to my mind acf_: but it's not *too* much failure.. only like four times a year, for a couple of hours :P BryceBot: That's what she said!! mercutio: then more stuff shifted to docsis 3
and prices came down
and i think it's ok now.
they also had nasty transparent proxies
that would severely limit download speeds.
like 100 megabit connections that would do 300k/sec to the UK acf_: oh super mercutio: yeah i did some tests with iperf for someone.
err with someone
and udp was fine. acf_: Comcast tried injecting TCP FIN s for BitTorrent connections once
people got really pissed mercutio: but tcp/ip was shit.
but http was even worse.
i think that was partially queuing issues
i care more than i should about such things.
but on connections with short queues, often artificially capping the speed you send at them will speed up downloads.
especially on rate limited faster connections. acf_: I got that working on my Verizon DSL line actually mercutio: like if you have 200 megabit connection sold at 30 megabit etc.
well that was to work around delays? acf_: work around the huge queues mercutio: yeah.
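A rough sketch of that workaround (my own, with a hypothetical URL): read the download in chunks and pace yourself so the average rate stays below a cap, which keeps the oversized upstream queue from filling and the RTT from blowing out.

```python
import time
import urllib.request

def capped_download(url: str, cap_bytes_per_sec: float) -> int:
    """Fetch url while holding the average download rate under the cap."""
    start, received = time.monotonic(), 0
    with urllib.request.urlopen(url) as resp:
        while True:
            chunk = resp.read(64 * 1024)
            if not chunk:
                return received
            received += len(chunk)
            # sleep until our average rate drops back under the cap
            ahead = received / cap_bytes_per_sec - (time.monotonic() - start)
            if ahead > 0:
                time.sleep(ahead)

# e.g. hold a nominal 30 Mbit/s line to roughly 24 Mbit/s
capped_download("http://example.com/bigfile", 3_000_000)  # hypothetical URL
```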
here queues are less than 20 msec normally
like uhh
axel -a to local server jumps it up from 13.6 msec to 26.7 msec acf_: not too bad mercutio: and 10 to 28 on adsl.
actually adsl was sometimes a bit lower than that too
yeah it's not too bad except that it can stall transfers and give loss easily
qbit: up_the_irons: https://github.com/iojs/build/issues/1#issuecomment-67944799 <-- re the iojs build stuff we talked about a while back
dunno if that's enough to justify it for ya :D mhoran: up_the_irons: Did you do something? Because now throughput is just as fast on v6/v4. Or maybe it's a time of day thing? ***: awyeah has quit IRC (Quit: ZNC - http://znc.in)
awyeah has joined #arpnetworks mercutio: mhoran: i wouldn't be surprised if it's time of day related. mhoran: Slower now, but not as slow as last night / last time we looked into it. mercutio: well it is xmas.
i imagine less business users using the net at least.
i dunno if it was business or residential peak
setup smokeping with curl :) brycec: Where in the world is it already xmas??? 11:59:49 mercutio | well it is xmas. mercutio: it's "xmas period"
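The "smokeping with curl" idea, sketched as a plain Python loop instead of an actual SmokePing probe: fetch a test URL every few minutes and log how long it takes, so the v6/v4 difference can be compared across times of day. The URL and interval are placeholders.

```python
import time
import urllib.request

URL = "http://speedtest.example.net/1MB.bin"   # placeholder test object
INTERVAL = 300                                  # seconds between samples

while True:
    t0 = time.monotonic()
    try:
        with urllib.request.urlopen(URL, timeout=30) as resp:
            size = len(resp.read())
        print(time.strftime("%Y-%m-%d %H:%M:%S"), f"{size} bytes in {time.monotonic() - t0:.2f}s")
    except OSError as exc:
        print(time.strftime("%Y-%m-%d %H:%M:%S"), f"error: {exc}")
    time.sleep(INTERVAL)
```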
it's xmas eve here?
lots of people don't work today here at least. RandalSchwartz: it's just another work day for me.
stonehenge doesn't pay sick days or holidays. :)
or vacation ***: awyeah has quit IRC (Quit: ZNC - http://znc.in)
awyeah has joined #arpnetworks brycec: The life of the self-employed, eh? mercutio: are you working christmas day too?
http://updog.pw/ RandalSchwartz: well, part of it, yes.
as an atheist, christmas doesn't really mean much
except "lots of things closed needlessly" :) brycec: I don't see why, as an atheist, you'd eschew a holiday and a joyful times. Sure, it has "Christ" in the name, but in this day and age that's pretty much the extent of religious value in Xmas
Why not celebrate yule and the winter harvest instead? :) RandalSchwartz: Maybe festivus!
Oh crap, that's today! brycec: And you got me nothing? You scoundrel! RandalSchwartz: Time for the airing of grievances! -: brycec names his socks "grievance" RandalSchwartz: which you wear on your feets of strength? :) brycec: lol RandalSchwartz: I'm not touching that with an aluminum pole. :) brycec: twss BryceBot: Okay! twss! 'I'm not touching that with an aluminum pole. :)' RandalSchwartz: heh
usually it overtags
I wonder why it undertagged that one brycec: What's really odd is that training the filter actually lowered the bayesian score. (0.92451488735822 -> 0.9150846827481) RandalSchwartz: even weirder -: RandalSchwartz transfers from the sky club to the departure gate... up_the_irons: mhoran: i didn't do anything
qbit: interesting thread
"also the build bot server identifiers have the provider names in them and they get referenced and seen quite a lot"
qbit: ^^ what does he mean by that? I mean, who sees it?
qbit: remind me of the machine size you would need qbit: nodejs nerds
dual cpu with 2g ram should work up_the_irons: i can probably swing that, but after the holidays. if it uses too much proc IO / disk, we'll have to talk about a different solution (maybe cheap dedi) brycec: lol
Anyone that looks at the nodejs version, I guess...
But it's definitely less visible than, say, the hostname of the build machine in the system uname
^ which gets printed on boot, on login, etc up_the_irons: yeah brycec: (In fact, it doesn't even print for node --version, so I have no idea where that "glory" is supposed to be seen)
But rest assured up_the_irons, you'll be FAMOUS up_the_irons: LOL
maybe the build server status is reported in some IRC / HipChat / Slack channel brycec: At my last job, it was very, very frequently told to us by [potential] customers that they're going to buy a hundred units a month (etc), and unsurprisingly they never really did. Consequently, I have become extraordinarily jaded when it comes to lines like that. acf_: I wonder if the FreeBSD people would let you donate the use of a dedi as a build machine
like brycec was saying, those hostnames really do appear places brycec: I'm under the impression that OpenBSD and FreeBSD are very well-equipped when comes to the x86 architectures. acf_: the netbsd automatic build site is hosted from Columbia U
and has really crappy connectivity
at least NetBSD cross compiles everything. so it's all done on amd64 brycec: <obligatory desparaging remark about NetBSD> acf_: :P brycec: What you see as a strength, the OpenBSD project sees as a weakness re:cross-compiling. if an architecture cannot self-host, it's unsupported.
And I tend to agree acf_: the problem is that everything takes forever to compile on vax brycec: no argument there :p
But at least you don't have compiler issues acf_: http://www.openbsd.org/images/rack2009.jpg
*all* the architectures -: up_the_irons pets his vax machine brycec: (as of 2009)
(things have changed since then, new ones, some retired, etc)
Also, that image is missing zaurus up_the_irons: lol brycec: oh shit, nevermind it's there
just tiny acf_: mtr -4 vax.openbsd.org
I think I can ping the vax? brycec: HA! There are two architectures I don't see in the photo (that would have been supported in 2009) - mac68k and m88k acf_: maybe the use other 68k machines to build them? brycec: What other 68k machines are in that photo?
(To be clear, I'm simply disproving "17:26:55 acf_ | *all* the architectures") acf_: oh wow you're right
I wonder if they're in some other place
or if they build them on qemu or something brycec: oh wow you're right
twss BryceBot: Okay! twss! 'oh wow you're right' brycec: If they can only build on qemu, then there's no reason to support the arch :p
I suspect those build machines are simply "elsewhere" mercutio: openbsd supports 68k?
my amiga is so slow with unix
it reminds me of sun3s
i can't actually remember which i stuck on it, it may have been old openbsd. openbsd doesn't support 68k anymore afaik at least not amiga
oh mac68k.