***: gizmoguy has joined #arpnetworks
dwarren has joined #arpnetworks
LT has joined #arpnetworks
pyvpx_ has joined #arpnetworks
hoggworm_ has joined #arpnetworks
m0unds__ has joined #arpnetworks
jcv_ has joined #arpnetworks
Seju has joined #arpnetworks
bitslip_ has joined #arpnetworks
daca1 has joined #arpnetworks
Seji has quit IRC (*.net *.split)
daca has quit IRC (*.net *.split)
m0unds_ has quit IRC (*.net *.split)
jcv has quit IRC (*.net *.split)
hoggworm has quit IRC (*.net *.split)
pyvpx has quit IRC (*.net *.split)
bitslip has quit IRC (*.net *.split)
hive-mind has quit IRC (*.net *.split)
dj_goku has quit IRC (*.net *.split)
tooth has quit IRC (*.net *.split)
mnathani has quit IRC (*.net *.split)
d^_^b has quit IRC (*.net *.split)
relrod has quit IRC (*.net *.split)
hive-mind has joined #arpnetworks
dj_goku has joined #arpnetworks
tooth has joined #arpnetworks
mnathani has joined #arpnetworks
d^_^b has joined #arpnetworks
relrod has joined #arpnetworks
hive-mind has quit IRC (Max SendQ exceeded)
hive-mind has joined #arpnetworks
pyvpx_ is now known as pyvpx
daca1 is now known as DaCa
gizmoguy has quit IRC (Ping timeout: 252 seconds)
gizmoguy has joined #arpnetworks
jcv_ has quit IRC (Quit: leaving)
jcv has joined #arpnetworks
gizmoguy has quit IRC (Ping timeout: 256 seconds)
gizmoguy has joined #arpnetworks
LT has quit IRC (Quit: Leaving)
mnathani__: is it just me or is freenode doing a bunch of netsplits / rejoins recently
mercutio: it does seem to be the case
but it's more that freenode was surprisingly reliable for a while
***: cloudkitsch has joined #arpnetworks
cloudkitsch has quit IRC (Quit: ZNC - http://znc.sourceforge.net)
qbit has quit IRC (Ping timeout: 245 seconds)
medum has quit IRC (Ping timeout: 245 seconds)
qbit has joined #arpnetworks
qbit is now known as Guest27982
erratic has joined #arpnetworks
erratic is now known as Guest42160
Guest42160: up_the_irons: I'm looking to learn some BGP stuff before tuesday. I was wondering if theres anything I can do with this /48 that I have that I could set up with quagga?
is it obvious I donno what I'm talking about ? Not the first time, trying to learn
I guess I'd need to register an AS number or something
mercutio: Guest42160: what are you trying to learn?
Guest42160: how to setup / admin bgp
mercutio: a) i wouldn't really recommend quagga
b) most bgp setups are really simple, or really complicated; there's not a lot of inbetween
Guest42160: yeah I dont have any cisco hardware
mercutio: c) bgp is usually used for people using multiple upstream providers for increased reliability or performance.
you don't need to use cisco, but as someone who's used multiple open source routing platforms i'd recommend openbgpd or bird over quagga.
Guest42160: ah ok
hmm
well thats a start :)
mercutio: i used quagga a long time ago before bird or openbgpd existed.
Guest42160: net-misc/bird
mercutio: and it was better than zebra.
but zebra was terrible.
and quagga was still terrible.
and the last i looked it's still not really that wonderful.
you should be able to get a looking glass connection if you want to get a view of the internet.
Guest42160: I set it up the other night but I have nothing to really configure with it
but I checked out the zebra shell and stuff
mercutio: but that doesn't necessarily help you learn.
Guest42160: no
indeed not
mkb: stay far away from quagga and zebra
mercutio: you can use private asn's to make your own little mini network.
Guest42160: ah cool
mercutio: mkb: well i didn't want to cause offense... :)
it's terrible when someone just finds something new and interesting to play with and someone comes and says it's terrible!
it tends to make people stop listening :)
mkb: well true
Guest42160: yeah I mean I've got this whole /48, I figure I ought to be able to do something kinda neat with BGP and that
but I'm just guessing
mercutio: a /48 isn't that much space.
you can't advertise any smaller
mkb: it's probably easiest to play with virtualbox
mercutio: you kind of need at least a /48 to do anycast.
Guest42160: its an incomprehensible # of addresses lol
mercutio: and you need multiple locations to do much with bgp really
mkb: mercutio, you can on a private network which is what he has anyway
mercutio: mkb: well true, but then anycast doesn't help :)
Guest42160: ah yeah now I remember
I wonder if I could use an EC2 instance along with my arpnetworks vps
and setup anycast with it
mkb: last time I played with a private BGP network with some friends one guy had zebra, and that's where my hatred of it and linux routing came from
Guest, not on a public IP
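(For reference, a rough sketch of the kind of private-ASN lab being described here, assuming two VMs peering at 2001:db8:ffff::1 and ::2 with private ASNs 64512/64513 and bird 1.x's IPv6 daemon; all addresses, ASNs and names below are made up:)
  # /etc/bird6.conf on the first VM
  router id 192.0.2.1;                      # router id is still a 32-bit value

  protocol device { }
  protocol kernel { export all; }           # push learned routes into the kernel

  protocol static {
      route 2001:db8::/48 reject;           # stand-in for the /48 you hold; originate it toward the peer
  }

  protocol bgp lab_peer {
      local as 64512;                       # private ASN
      neighbor 2001:db8:ffff::2 as 64513;   # the other VM, also a private ASN
      import all;
      export all;
  }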
mercutio: Guest42160: what's this "by tuesday" you talk of?
Guest42160: I just want to have some experience to speak of by tuesday because I'm interviewing for a job
mercutio: my hatred came from quagga dying and keeping routes in the table.
Guest42160: I dont think its a huge priority for them but I want to cover all of the bases
mercutio: guest: it depends on the work place, but lots of work places don't really get excited by people playing with things at home.
mkb: mercutio, did you know Linux refuses packets that come addressed to you with a source from some link that the routing table wouldn't send packets to?
Guest42160: because I'm tired of screwing around and I want to get back to work. It's amazon so I figure i'd better not squander any opportunity to beef up
mercutio: and so if you can't connect it to a solution you provided for somebody it can seem like you like to waste time doing "non productive" things.
i look at things as getting experience/exposure in different areas helps divergent thinking.
Guest42160: mercutio: yeah I am familiar with that attitude and I'm happy to tell anybody who doesn't regard experience as experience to go sodomize themself
fuckin pisses me off
mercutio: but employers aren't necessarily like that :)
guest: careful about your language when interviewing too.
mkb: I'm sure Amazon will like his use of EC2 at least
mercutio: cool calm and collected.
Guest42160: I know, I just had to vent there for a second
mercutio: all good :)
what kind of job at amazon is it?
mkb: yes i did know that actually
mkb: I HATE IT
mkb: it's rp_filter that does it.
mkb: AND AT THE SAME TIME IT'LL ARP FOR OTHER INTERFACES BY DEFAULT
mkb: it took me like a week to find
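(The knobs involved, for reference; the interface name is hypothetical, and rp_filter=2 is the "loose" reverse-path mode while 0 disables the check entirely:)
  sysctl -w net.ipv4.conf.all.rp_filter=2
  sysctl -w net.ipv4.conf.eth0.rp_filter=2
  # and the "ARPs for other interfaces" default mkb is complaining about:
  sysctl -w net.ipv4.conf.all.arp_ignore=1     # only answer ARP for addresses on the receiving interface
  sysctl -w net.ipv4.conf.all.arp_announce=2   # prefer a source address local to the outgoing interface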
Guest42160: that whole gentrifier pseudo-professional attitude that experience doesn't count unless it's for an org is b.s
what's an org
mercutio: huh guest?
mkb: they weren't ethernet fortunately so I didn't know that
***: Guest42160 is now known as erratic_
mercutio: erratic_: i'm talking about solutions, not about doing things for big business or something.
fwiw i have bgp at home :)
erratic_: I did some research into anycast several years ago for a HA project I was working on but I didn't have the budget for it
mercutio: but if i was applying for a job i wouldn't say that
mkb: that actually sounds good, and you'll presumably be able to talk intelligently about anycast
mercutio: you can say why you chose not to etc.
also sometimes people like to hear about mistakes people have made
some people try and be perfect and act as if they never make mistakes
mkb: I don't know if this is a networking position where you'd be expected to or just something else and you're trying to show general knowledge
mercutio: but someone who believes they make no mistakes will deal worse with mistakes they do make
erratic_: mkb: thats kinda the point, I'm not a total noob I know more about this than I let on but its hard to express a good starting point so I just wanted to start from the beginning when I said "what can I do with quagga and my /48"
mkb: but very few "programmers" have an accurate idea of how networks work
mercutio: mkb: yeah that always confused me.
but then i have no idea how opengl works
mkb: I remember in school when they tried to teach a class on writing a raytracer. It was an utter failure because nobody had the math involved
mercutio: mkb: actually lots of "networks" people don't seem to know how networks work too
mkb: networks have more general applicability than opengl anyway
mercutio: i did graphics design in high school for like a year?
erratic_: nice
mercutio: that's kind of like ray tracing right
this wasn't using computers :/
but had stuff about perspective and shit
erratic_: open in Firefox http://jsfiddle.net/erratic/n4be8273/11/
its a tilemap engine that I made
mkb: erratic_, as far as playing around goes I'd play with openbsd and its routing domains in virtualbox. it's easier to have all the network access rather than trying to debug some issue between ARP and EC2
erratic_: part of it I followed a tutorial for but the rest (path generation / pathfinding) I had to figure out
mercutio: why do you comment //down in a function called MoveDown?
sorry :)
mkb: mercutio, yeah I think the lack of art training confused people too
erratic_, nice
erratic_: mkb: thank you also fyi playing around is subjective http://paigeadelethompson.blogspot.com/2015/02/my-2014-network.html
mercutio: i'm upgrading my home network to 32 gigabit
once my friend sends me my cards :)
erratic_: though most of that is dead now, my server back in greece has been down for months since the house lost power and my bf can't be bothered to mess with it
mkb: I have a BGP network operated via gif tunnels over DSL lines :) I know what "playing around" is
erratic_: thats awesome :)
mercutio: erratic_: i reckon you'd be better in a small business
err smaller
doing more general stuff
erratic_: I don't know
I'll be happy whereever I am as long as its interesting
and pays me
mercutio: you seem like you're in the area where you'll be most likely comfortable dealing with "complexity"
rather than doing the same things again and again
generally speaking larger businesses tend to specialise more
so it's more about doing similar tasks quickly and efficiently
whereas there's more novelty in smaller businesses.
(which some people find more stressful)
erratic_: you know i have a lot of fun working on my own projects so I think if I can't get enough enjoyment out of work there is always that
mercutio: heh
there is always that
erratic_: but I think at this point I just need something thats consistent
mercutio: ok
erratic_: and I need to stay at a job for awhile
mkb: I'm in a smaller business and programming. I have enough trouble fixing up how other people deal with complexity.
erratic_: so I'm not really arguing with you but its like you said I have a hard time finding people who will take me seriously so I can't be too picky
mercutio: mkb: have you tried fq_codel?
mkb: i.e. they don't and hope it doesn't matter. It usually does.
mercutio: i used to do programming
i tried to get back into it again last night
erratic_: actually shouldn't even say take me seriously, most people seem to regard me as not even worth talking to so
mercutio: then i was googling and i found some site called freelancer.com
mkb: nope
erratic_: programming is fun
mercutio: i found some job making someone's site run quicker
erratic_: although I hate doing it for other people
mkb: I use pf's queuing facility
mercutio: erratic_: yeh that's what i'm like.
erratic_: programmer jobs are aggravating
mercutio: well i don't necessarily mind for other people
erratic_: I've worked in several
mercutio: as long as it's what i want to do
mkb: I could enjoy it if I didn't have to deal with other people's code
mercutio: but invariably it's not.
mkb: this site had db issues :/
mkb: and if I got to rewrite practically every component I come across
mercutio: *cough*
mkb: sql injection?
mercutio: nah
single table
no indexes.
erratic_: the last job I had was programming and it sucked, was .NET C# / VB, porting horrible broken VB code, boring ecommerce code
mercutio: huge table scans.
and sort by rand()
err order by
i got it going way way way quicker.
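(The usual shape of that fix, roughly; the table and column names below are made-up MySQL, not the actual site's schema:)
  -- add an index so the common lookup stops scanning the whole table
  CREATE INDEX idx_posts_created ON posts (created_at);

  -- ORDER BY RAND() sorts every row just to pick one; jumping to a random
  -- id instead touches only a few rows (slightly biased if ids have gaps)
  SELECT p.* FROM posts p
  JOIN (SELECT FLOOR(RAND() * (SELECT MAX(id) FROM posts)) AS rid) r
    ON p.id >= r.rid
  ORDER BY p.id
  LIMIT 1;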
mkb: my current job is converting some code from BDB -> LMDB. Every method I touch gets it's line count at least halved
mercutio: but it still could be faster.
erratic_: lol nice
mkb: erratic_, that's annoying
mercutio: and that's why i don't like programming for other people
mkb: its* why can I never do that right
mercutio: like i reckon web pages should load in 50 microseconds.
or use 50 microseconds of cpu time
mkb: not the way they use JavaScript now
mercutio: yeah javascript alone kills it
but there are heaps of latency interactions etc too.
erratic_: mkb: it was awful, I'll never do it again I only took that job because I had to and I was desperate then things changed and I found myself in a position where I didn't have to worry about work for awhile
mercutio: you really want the whole web site to be within 20 msec of you
it's an area i'm kind of interested in actually
erratic_: and that was good I needed it after what I'd been through plus hating that job
mercutio: like moving application logic to the edge
and having systems that don't rely on a central system
mkb: oh practically nobody can design software
they start writing code and hope it comes out okay
erratic_: heheh
yeah people have different styles and some people can work with others' styles, and some can't do it any way but their own
mkb: mercutio, so how do you handle db consistency?
erratic_: try to remember that
mkb: erratic_, it's not just that. you know fizzbuzz right? most "programmers" can't solve basic problems
erratic_: it helps the ones who are willing to try and interested in learning, also having patience
mercutio: mkb: well that's one of the complex parts.
but usually you don't need to.
you can always have reads so they don't touch and writes so they do
but say you have a cart shared between systems, and you add something to your cart.
and you're still going to the close system, if it can't talk to the remote system it shouldn't matter at that stage.
you can say there should always be consistent stock availability.
but ime stock counts are often wrong anyway
mkb: the cart would be handled client side if you're willing to give up non-JavaScript users (not that you should; there are a few of us)
mercutio: like you can purchase something on amazon only to find out there's no stock.
mkb: honestly stock counts are usually handled when they go to the storeroom
mercutio: so if you're a bit late with finding there's no stock available it doesn't really change things.
mkb: stock tracking is one of those problems that "seems simple"
erratic_: the thing that kills me that I struggle with is in C# when people make a class for everything. This is a common problem with Java too. You can oftentimes elegantly make a class for a lot of things, but sometimes it's just easier and less chaotic to have redundant properties on classes instead of shared nested objects, because you end up with so many files that it's hard to find your way around the source tree.
mercutio: and you can do everything perfectly from a db/systems pov.
and then find there's issue with stock being damaged/lost/misplaced etc.
stolen.
mkb: then someone leaves something a little off the shelf and the cart picks it up and deposits it 10 feet away
later someone finds it and shelves it there instead
mercutio: mkb: but then the box is empty :)
because some other one was damaged, and someone stuck the one from that box with the other one
because they didn't want to do a new order.
yeah there are a lot of complexities.
and a lot of them can get out of scope.
mkb: my parents run a pawn shop without computers. I've seen storeroom problems :)
mercutio: erratic_: what bugs me is when people comment obvious things :)
erratic_: people go way too crazy though i think, an example is you have a class called SoftwareBusiness, extends Business, has a nested property of type OrganizationalContact, extends abstract Contact, class Business has a property called Contact, class OrganizationalContact is a typeof Contact ...
mercutio: because someone taught them to comment regularly.
i have old school type coding though
i use iterators called i.
so for (i = 0; i < ..
mkb: though thinking about how this db would be kept on paper records gives you some clarity I've found
mercutio: doesn't seem off to me
mkb: mercutio, people don't do that now?
erratic_: so do I unless I'm using an IDE like visual studio or monodevelop that completes variable names, then i use "index"
mercutio: whereas the new style is to make up some fancy three word name for it
mkb: it's pretty uncommon now
well if you look at java etc code rather than c code.
mkb: oh god I hate java
mercutio: the way i see it is that code blocks should fit on the screen.
if the code block is too large to fit on screen then it's hard to comprehend
mkb: you have to close over final variables because any other way could be "confusing" then you have mutable objects which for some reason aren't "confusing"
mercutio: but i'm not that opposed to three page functions if they don't have loops that cover multiple pages.
mkb: it's not just line count
mercutio: and having lots of single line procedures..
erratic_: mkb: Ive gotten so used to C# https://github.com/paigeadele/nMVC/blob/master/nMVC/Core%20Classes/HTTP/RouterManager.cs#L69
mkb: this BDB -> LMDB thing I mentioned. I've found a million instances like this: int i = 0; i = findsomevaluefunction(...);
erratic_: I can practically write stuff like that in my sleep
mkb: but when I read the first statement I have to think about why it's set to zero
mercutio: haha
i'm so behind on coding
i'm rewriting transparent tcp proxy.
and i find the socket stuff kind of icky.
erratic_: mercutio: I have something you will like one sec
mkb: C sharp has lambda now?
mercutio, C?
mercutio: mkb: yeh.
erratic_: mercutio: I just wrote this today http://paigeadelethompson.blogspot.com/2015/03/doing-more-with-http-proxies.html Ive been meaning to for months now
mercutio: i've been trying to figure out how to capture syn ack
so like you make a tcp connection out from your computer, over the internet
and a box in the middle does a tcp hijack when it gets the synack back
erratic_: sounds like masscan
kinda
mercutio: preferably it captures the syn, and sends the syn itself
mkb: talking about complexity: there's so much involved with syscalls semantics and most people don't read manpages
mercutio: well basically i want connection refused, etc all to seem normal
so that tcp connection only succeeds if it gets through to the other end
mkb: yeah you need raw sockets for that
mercutio: then the tcp proxy masquerades as your normal ip
mkb: if they even present everything?
mercutio: there'll be a way
i'm just hoping i don't have to use tun/tap or something :/
mkb: that's a way, but it's probably a slow way
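(For what it's worth, the usual way to divert flows to a local proxy on Linux without tun/tap is TPROXY; a rough sketch with made-up port numbers, and it only covers the interception itself, not the SYN-ACK / connection-refused propagation mercutio is after:)
  # divert port-80 flows to a listener on 3129 without NAT, so the proxy
  # still sees the original destination address
  iptables -t mangle -N DIVERT
  iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
  iptables -t mangle -A DIVERT -j MARK --set-mark 1
  iptables -t mangle -A DIVERT -j ACCEPT
  iptables -t mangle -A PREROUTING -p tcp --dport 80 \
      -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
  ip rule add fwmark 1 lookup 100
  ip route add local 0.0.0.0/0 dev lo table 100
  # the proxy's listening socket also needs the IP_TRANSPARENT socket option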
mercutio: well this is meant to be to speed up ~32 megabit connections
there's some other voodoo i want to add into it too :)
mkb: queueing?
mercutio: i'm already using fq_codel
erratic_: hmm
mercutio: the problem is if you download at 4mb/sec
and you're downloading at 4mb/sec atm
and you have new data come in, and that slows down the previous connection, and makes room for the new connection
then everything is switched towards "not overloading connections"
and the bw management etc means it takes longer to get back up to speed
so when that first 4mb/sec connection finishes
the new connection won't immediately go to 4mb/sec
erratic_: QoS stuff is something I definitely need to get my hands dirty with
BryceBot: That's what she said!!
mercutio: so i want to have 2 megabytes of buffer or such :)
erratic_: lol
mercutio: erratic_: fq_codel is nice and automagical.
-: mkb forgets how to tell BryceBot he did a good job
erratic_: yeah I was just reading about it :)
mercutio: it's long haul where it can still not be ideal.
like if you're 300 msec away
brycec: mkb: "BryceBot: yes" or just "twss"
mercutio: just having a massively long queue with http/https separate from normal traffic helped
meant ssh wouldn't bounce around, and you don't really notice "lag" with http/https
add 50 msec to ssh and you'll notice it more than 50ms to http/https
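(Roughly what that separation looks like with htb + fq_codel; the interface and rates are made up, capped a bit under sync speed so the queue stays on this box rather than in the modem:)
  tc qdisc add dev eth0 root handle 1: htb default 20
  tc class add dev eth0 parent 1: classid 1:1 htb rate 20mbit
  tc class add dev eth0 parent 1:1 classid 1:10 htb rate 5mbit  ceil 20mbit   # ssh / interactive
  tc class add dev eth0 parent 1:1 classid 1:20 htb rate 15mbit ceil 20mbit   # http/https and the rest
  tc qdisc add dev eth0 parent 1:10 fq_codel
  tc qdisc add dev eth0 parent 1:20 fq_codel
  tc filter add dev eth0 parent 1: protocol ip u32 match ip dport 22 0xffff flowid 1:10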
erratic_: yeah
when I was in greece I was actually using google voice over my vpn
mkb: I use ssh over AT&T u-verse sometimes. It's horrible
erratic_: I didn't really have any complaints
mercutio: i have graphs for my connection
mkb: as far as I can tell AT&T just cuts your connection off for five seconds once a minute
mercutio: fq_codel is amazing :)
BryceBot: That's what she said!!
erratic_: I gotta get off here for a bit guys, I'll catch yall later this evening if you're around
mercutio: well for latency, throughput could still go up :)
hmm i have 0.1 msec jitter on my vdsl
erratic_: mercutio: I'm kinda interested in your proxy will ask later
mkb: mercutio, does it eventually get up to 32 mbit?
mercutio: mkb: does what?
mkb: the throughput on this connection? I don't understand what would keep it down
mercutio: mkb: oh my connection can easily do full bwe
bw
what i'm talking about is that when you've got some usage on your connection, and are transferring files remotely.
and you stop your usage, tcp/ip takes ages to recover.
and faster connections usually transfer faster, and can respond a little quicker
and buffer a little
in order to "keep the pipe always full"
i mean it's kind of academic
http://pastebin.com/xnnQQfiR
it's not like it gets terrible speeds, but it could be better :)
mkb: hmm
mercutio: and if it can be better and transparent...
that to me is awesome :)
also my connection is more like 34 megabit
but yeah there is a long and complicated plan surrounding this idea
where if you have bgp you have data coming in at multiple locations
and can terminate closer to that point you can get speed up even further
mkb: to which point?
mercutio: more data is sent when you ack, so if you ack closer to the destination
mkb: ah
mercutio: your speed goes up quicker, and you respond quicker to packet loss etc.
and then you just have to relay back to the source
so if you even then relay to a faster system that relays it to the bottleneck
then you have a higher chance of a good connection
damn i'm letting out all of my secrets :)
so then if you have multiple ways to relay traffic, then you can have traffic take different paths depending on network health
mkb: that's usually done with MPLS I thought
mercutio: everything kind of ties together.
yes
but it's also pretty dumb normally
mkb: but something has to detect that state and yeah
mercutio: so if you have two ways to send data you can send traffic down both paths
so if you're using ssh or something, the data volume isn't very high, but not receiving data really sucks
but if one path has 25 msec higher ping than the other
and you send down both
and the quicker one drops the packet
the second one will have the data
and you'll just get it 25 msec later.
that kind of thing is done on fibre rings sometimes.
like they'll send in both directions around a ring
and one direction is quicker than the other..
but it means you don't even have to check if it's up or anything
mkb: I haven't thought about routing based on port number
mercutio: you just default to sending both ways.
mkb: that's a good idea though
mercutio: yeh basically there are expensive ways to do things
and to do things better
but if you want to be cheap, and have a working solution, the easiest thing to do is to send down "two paths"
all data.
and obviously if you're sending bulk data that needs to be sequential you probably won't do that
but say you've got 8 paths
you can send down 8 paths with raid :)
sequential has shit loads of issues though
mkb: I was thinking more about you have one link with low latency and low bandwidth and one link with high latency and high bandwidth: send SSH/RTP through low latency and everything else through high
mercutio: like different speed connections etc.
yeah
it tends to be low latency / high bandwidth / high cost
vs high latency low bandwidth low cost
now days.
mkb: why is sequential an issue? TCP handles it?
mercutio: terribly
that's why people are ultra careful with load balancing atm
it really doesn't handle out of order etc well
if you take away the sequential requirement and send different parts of the file
like bittorrent
then you can speed up builk transfers
and so if you can do something like scp that takes away the sequential requirement..
mkb: globus
http://toolkit.globus.org/toolkit/docs/4.2/4.2.1/data/gridftp/user/
We've got a 100Gbit link at work I'm supposed to test that with
mercutio: yeah i've looked at that
i've played with udt
i was going to use udt.
udt is interesting.
it has high startup cost.
and a few other annoying things
i decided to just do my own in the end.
but yeh with scp you want low priority / medium priority / high priority
you want low priority for huge backups over slow links that you don't want to get in the way
you want high priority if you're doing something like pushing dns updates
and medium priority for transferring medium-sized files around.
mkb: dns updates are small enough that it doesn't matter?
mercutio: but yeah i want to do a few complicated things with routing through intermediate boxes.
mkb: but I understand what you mean
mercutio: and having static authentication stuff.
what happens if you want to push dns updates to a dns server that is having 50% packet loss
like to increase the ttl because people are hammering the site and it's getting congested (say)
mkb: only on the updates? it would lose dns requests too
mercutio: yeah but raising ttl can still reduce impact if it's not ddos
mkb: yeah
mercutio: i know it's kind of a weak example in some ways
but i dunno about you, but when i make a dns change i want "instant" feedback that it's propagated.
and so if it can do the update 200 msec quicker that matters to me.
mkb: so you're exploiting the fact that there's extra bandwidth between the link and the server (link isn't as big as that ethernet cable I mean)
mercutio: so yeah the other thing i want to look at is forward error correction too.
well some paths may have issues and others not
like you have comcast and level3
oops
cogent and level3
and cogent is experiencing issues and level3 isn't
and your normal link goes over cogent
but you have another host that can connect over level3
mkb: 200 msec? that's going to be dwarfed by the time it takes to notice and think about the connection
mercutio: and you don't get packet loss to that other host, so you bounce via that other host.
maybe.
this is why i don't like coding for other people :)
mkb: heh
mercutio: i don't want to have to justify myself, i just want things to work "better" :)
ok
let's shift back to ssh
mkb: but I think 200 msec is optimistic if you've got 50% packet loss
mercutio: so your ssh connection goes over cogent and cogent has loss
and level3 doesn't
so it automatically routes via the level3 host.
mkb: yeah I understood that part
mercutio: well yeah it depends on your rtt to the remote host etc too.
the thing is i want this stuff to happen semi-automagically.
so you might be able to view some status web page
but it's handling all this routing for you
but the cool thing is that tcp/ip handles things a lot better when you're close to the destination
so even if you're going back to normal tcp/ip closer to the destination, and you just have automagic stuff between hosts that you control, you can still make a world of difference.
so yeah this is an idea from years ago.
mkb: hmm
mercutio: and i managed to get a proof of concept going with squid and web pages.
where squid would direct close to a destination web site.
and then send traffic back
so you had a squid proxy, that then connected to another squid proxy
and it'd do a geo ip lookup in order to decide where to route it to.
but then you find some things like google, well all of google says it's in mountain view, ca.
so doing it with bgp would be better.
mkb: I imagine people like akamai and cloudflare have big databases they make of "actual" network locations
mercutio: but then how do you get lots of bgp entry points :)
mkb: ping time is more relevant than geo location anyway
mercutio: true lots of things aren't pingable though
and so you have to look at tcp latency
but yeah there are some smarts that can be done.
akamai has terrible intelligence
same with cloudflare.
cloudflare is just anycast.
akamai holds bgp views of its path to the source.
i got my system working well for web browsing
i got a list of sites to test.
i'd look through the logs and find which sites loaded slowly.
mkb: maybe they should
mercutio: or which parts of what sites.
i found a few interesting things out.
mkb: so there's a tunnel between squids?
mercutio: akamai often contributed to far higher page load times.
basically akamai's cache miss performance was terrible, it was way worse than going direct.
so it may take 50 msec sometimes, and 2000 msec other times.
there wasn't even a tunnel
just an ip acl
and squid using it as a parent proxy
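(i.e. something along these lines in the local squid.conf; the hostname and port are made up, and the remote end just ACLs the local box's IP rather than using a tunnel:)
  cache_peer remote-proxy.example.net parent 3128 0 no-query default
  never_direct allow all     # always go via the parent rather than direct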
so yeah akamai was one of the biggest annoyances.
as you didn't know where to send stuff
mkb: okay the squid ips were special and your network knew to send over different links?
mercutio: you can kind of guess where the origin site might be
and shift it closer to the origin.
oh it wasn't doing any intelligence
i just chose hosts that i had good connections to
and dropped them if the network sucked.
mkb: right
mercutio: but it got annoying
because it couldn't get rid of a host if it was misperforming
mkb: I don't understand why this improved things
mercutio: and heaps of "cheap vps's" had terrible networks.
arp's a lot better than most :)
ok
there are a few reasons it improves things
mkb: how is going to the squid on the other side improving things
mercutio: partially it was because i did this years ago
mkb: you're picking different routes than the network would have?
mercutio: and i was doing the initial window size of 10 packets
when linux didn't have it by default yet.
mkb: cheap vpses are terrible
mercutio: so it'd send 14.6k of data initially
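(That tweak is just a route attribute; the gateway and interface below are hypothetical, and current kernels already default to an initial window of 10 segments:)
  ip route change default via 192.0.2.1 dev eth0 initcwnd 10 initrwnd 10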
partially it was because quite a few sites use nt etc
err "windows server"
and have terrible network stacks.
partially it was because especially in the uk some providers upstreams were way faster than others to back here.
the uk is kind of special like that.
like it used to be that ovh's network really sucked to here.
and quite a few people hosted stuff on ovh
err and hetzner.de
i think ovh has got a lot better to here actually
i think they improved their US routing when they did their canada expansion
err and the other thing i was doing was i was setting a bandwidth limit
and overshooting bandwidth can lead to loss
mkb: yeah
mercutio: and on short amounts of data transfer things aren't recovered from quickly
so i used htb and capped it at my adsl sync speed :)
or a little higher
i think it was capped at like 20 megabit
when i had 21 megabit sync adsl
but adsl has high overhead etc.
anyway with all of this work i got average page load time down from around 1400 msec to around 1200 msec.
mkb: nice
mercutio: but if you just look at an average like that you can be like you were before, and say 200 msec doesn't matter.
but i benchmarked on other isp's etc too.
and one isp was 1600 msec, the other one was 1800 msec.
but even then, that's just averages.
what really was interesting, was looking at the benchmark run
and getting other people to run the benchmark.
like someone tried running it without adblock.
and they were at like 2600 msec.
mkb: why squid over using bgp or some other routing protocol to change routing?
mercutio: and i ran it on another isp that i used to find was kind of weird with web performance
and it'd just hang part way through the test.
who was i going to get a bgp feed through?
mkb: ooh I can imagine ads (aside from being big images and full of javascript if not flash) going through a worse route
mercutio: seriously, i tried to get bgp first.
mkb: oh that's a problem
mercutio, AT&T's stuff is like that. I think their router runs some custom TCP stack that gets confused if there's too much going on and drops connections
mercutio: hmm, i've had bgp with arp since 2012.
but yeah this was pre 2012
and yeah i asked about getting full table with them back then :)
back then you couldn't.
so yeah, anyway i seem to have blabbered on for ages.
but akamai was one of the slow parts
BryceBot: That's what she said!!
mercutio: the other was adblock made a huge diff
mkb: no it's interesting
mercutio: but what i found really interesting, is one of the slowest sites on my test was in the same city as me.
mkb: I've just used a hosts file for ages and that seems to improve things
mercutio: and one of the fastest sites wasn't that close to me.
and a lot of web site performance had to do with the origin web site.
and processing delays on forums etc could easily be over 200 msec.
this was pre everyone including bloody facebook links
you know if facebook goes down now that a lot of random web sites will get slow?
the other thing is i got a few other people to try it
mkb: I don't have javascript turned on :)
mercutio: and one of the things people seemed to notice was that it felt like it "delayed a little" then showed everything at once.
mkb: psychology has a big impact here
mercutio: now i don't know how much delay there really was.
but pages definitely seemed less progressive.
and one of the problems i used to experience with packet loss is pages would "stutter" when loading
and not only did that stutter go away, but web browsers delay before showing pages seemed to kick in
and it'd kind of have more ready.
and that's partially because it was speeding up things like images.
so it'd reduce reflows etc.
so it felt quite different
but the benchmarks would often not be that different
and that's because it shifted a lot of "useful" stuff earlier.
but often there was some stupid slow annoying thing before the page was finished loading.
and often it was lame stuff like tracking
and so a nice benchmark to me would be one that doesn't wait for the whole page to load
but just for "enough" to load.
mkb: hmm
mercutio: also it made me really anal about doing minor tweaks/changes :)
mkb: network time? I guess you need the browser to parse the HTML and make all the requests
mercutio: because i didn't have enough time to dedicate to getting to the next step.
i was working on this before there was a big earthquake in my city.
and i moved cities etc.
also the other thing i noticed is that it matters so much more if something takes 2 seconds instead of 4 seconds, compared to 1400 msec instead of 1600 msec
there's a lot of thresholds involved.
but it helped me want to change the way i measure performance
to fail/pass
like "good enough", "not good enough"
there's this network testing in this country which tests the performance of isp's compared to maximum line speed.
and it measures "peak time" speed vs "maximum speed"
the problem is it pretends to load web pages, but it does it all sequentially.
mkb: I always hate single-number network tests
mercutio: it does bandwidth testing but it looks at the "fastest" bandwidth chunks
speedtest.net is bad like that too
it also tries to measure "idle" connections.
it's a huge peeve with me.
to me, all testing should be done all the time.
if a connection is busy, and you test and the internet goes badly, then the isp sucks.
it's like if someone is using skype, and you do web browsing and their skype has issues
that means the isp is bad!
mkb: or the router
mercutio: well isp's normally provide the router
i don't care if you get 90% of your speed in peak times.
when you can't have multiple concurrent users.
and it's kind of just a mindset thing
it's like "muscle cars"
mkb: crap ones but that's true
mercutio: sure they may have high horsepower
but that doesn't mean that you'd want to take them rallying
mkb: I told you about AT&T's router's inability to handle more than a few TCP connections at once
mercutio: yeah it sounds disgusting
mkb: they compound the problem by demanding that you use their router (which does some encryption so you're stuck)
mercutio: god
that sucks
mkb: but muscle cars in a rally race are fun to watch on youtube :)
mercutio: hahaha
true
but yeah it's funny how everything ties in together kind of
mkb: you can fix it a lot with PPTP or something
mercutio: like i want to test network health passively too
mkb: since it only sees the one connection
mercutio: yeah i did pptp with the isp that had 1800 ms
to use my proxy
err and pptp without using proxy
and pptp alone helped things
that isp had bad tcp windows and a bad transparent proxy
err a transparent proxy that had bad tcp window sizes
well when i was more into this idea
i was trying to think how i could make money off it
and how much people would pay for faster browsing
and i figured only like $5/month
and that i probably couldn't make any money off it
mkb: you could eat akamai and cloudflare's lunch if they're as bad as you say
mercutio: so it was mostly an academic exercise.
cloudflare is free and huge
akamai has good marketing
akamai looks impressive on paper
the problem with akamai is there's so many akamai caches that don't share cache, that cache misses are way too common
if you have fewer caches with more content and better connections to end servers
actually cloudflare could be a lot better if they had a faster connection from their proxies to a closer end point that connected to the source
it's all ntt though
well ime
i haven't looked that hard
so if you have a connection to ntt you should be good for cloudflare.
mkb: no tiered cache? just there or go to origin?
mercutio: yeah they don't tier
but they don't have that many locations
so if you have a busy site and they have 12 locations
mkb: I meant akamai
mercutio: it'll pull from each of those locations
oh akamai may have rudimentary tiering
it's a bit of a mess.
there's heaps of different akamais.
akamai streaming or something is really bad here.
mkb: I didn't realize the situation was so bad
I guess that's why Google and Amazon do it themselves
mercutio: well it depends what your expectations are
i mean my expectation is that every web site should load in less than 2 seconds total.
and i'd rather 1 second.
and akamai easily pushes sites over 3 seconds.
oh amazon is terrible here too
most amazon stuff is east coast US
mkb: so am I so it works out well for me
hmm
mercutio: curl --compressed http://www.amazon.com/ > /dev/null 0.00s user 0.00s system 0% cpu 2.300 total
curl -x arp.meh.net.nz:3128 --compressed http://www.amazon.com/ > /dev/null 0.00s user 0.00s system 0% cpu 2.262 total
i have a proxy on there heh
oh the other thing is that the proxies would use keep alive to stay connected to the remote proxies.
yeah you'd think someone as big as amazon could bring it closer to the user.
mkb: they like only having a few big datacenters it seems
if EC2 is any indication
mercutio: west coast ec2 isn't that popular
mkb: not that they couldn't colo proxies anywhere
mercutio: most things are east coast.
and west coast is seattle
which is kind of bad from here :(
what time does it take for you with curl?
mkb: 1.83, 1.74, 1.87
it's very variable
mercutio: curl --compressed http://www.amazon.com/ > /dev/null 0.01s user 0.01s system 0% cpu 1.472 total
that's from dallas on vultr
it'll be their site
because that's ~36 msec away
mkb: my curl doesn't suppress the progress bar...
mercutio: try curl -q
oh i just cut and paste the time line
and it's zsh so it's single line time
so yeah, that's the other issue, sites that have slow backends :)
time curl --compressed https://typekit.com/ > /dev/null
try this site
mkb: it's not -q..
mercutio: oh
-s ?
so typekit.com is 1 second from here
and ping is ~10 msec lower than amazon
mkb: 0.19, 0.13, 0.18, 0.33 and still going after >5 seconds...
-: mkb doesn't know what happened there
mercutio: and that site was always fast.
try browsing the site
it's even faster than it used to be.
mkb: now that's significantly slower
too many images
mercutio: so it looks totally different than it used to :)
damnit
my fast site got slower :)
mkb: an openbsd desktop isn't known for that kind of speed though
mercutio: it was one of the sites someone said is slow for them sometimes.
the other one like that was xda developers
which was weird, because it had a very close ip to where one of my proxies was :)
ahh it's moved
it was on steadfast.
but it meant i got to test it from very close :)
mkb: how do you separate complaints into network and client issues?
mercutio: what do you mean
mkb: well that supposes something
I'm guessing you work or worked for an ISP?
mercutio: you mean if someone says a site is slow for them?
yeah i work for an isp.
mkb: and got complaints when the stuff was slow. yeah? sometimes it's a problem with their end or the site
mercutio: hardly anyone complains about speed.
mkb: if it's the site maybe you can fix it but some people have a shitton of viruses installed
mercutio: yeah if someone complains about speed it's probably that they've got a virus or are uploading too much.
or have wireless issues.
i only really get involved if can't access a site or something
hardly anyone complains to isp's about speed except gamers.
that's why some isp's say they hate gamers.
mkb: I'd complain about latency for SSH if I thought it would help any
mercutio: hmm
mkb: most likely I'd get stuck trying to explain to the customer support drone what SSH is
mercutio: yeah i wouldn't complain about latency to at&t
i'd just tunnel it
mkb: and most likely they'd eventually tell me SSH is unsupported
mercutio: heh
actually i seem to remember complaining about ssh latency once.
i'm trying to remember why
mkb: I used to complain when their network would drop but I learned it doesn't help
mercutio: nah escapes me.
i've used heaps of different isp's.
mkb: DSLAM was across the street
mercutio: that can cause issues.
some modems hate dslams being too close
they get overloaded.
mkb: hmm
mercutio: i know paradoxical :)
it's like when someone's shouting in your ear
and it's harder to hear what they're saying than if they shout from further away
mkb: it's been better since we moved
mercutio: the obvious answer is to reduce the volume.
mkb: except the phone service
mercutio: umm you know what can help though
if you hit that again
try adding a phone extension cable
even a 10 metre (30 feet) cable
can make a difference.
i've heard of it helping, seriously.
mkb: wow
mercutio: i dunno if you've heard of people saying not to use extension cables
but lots of things go around where people get "general wisdom" that doesn't always apply.
also i think vdsl copes better with short lines.
compared to adsl.
mkb: there's lots of general wisdom that's based on nothing more than what someone made up once
u-verse is that and I think the network is fine except their damn router
mercutio: well extension cables can reduce speed by a megabit or more.
mkb: I have plain old fashioned adsl though
mercutio: i had a friend who was using them
there was terrible routing.
there was high latency too
but i dunno if that was just the routing or interleaving.
vdsl is better :)
mkb: I wouldn't be surprised. I meant that the dropping connections was the router
mercutio: what's your next hop latency like?
mkb: 1. hmrtr.b 0.0% 9 0.7 3.2 0.7 19.8 6.3
2. adsl-74-177-71-1.gsp.bellsouth.n 0.0% 9 16.0 16.8 7.3 80.6 24.0
3. 72.157.40.72 0.0% 9 27.5 41.4 17.6 155.9 44.9
4. 12.81.44.64 11.1% 9 18.7 22.2 17.4 28.2 4.8
mercutio: hmm
bloody icmp deprioritisation
it's hard to tell
but 7.3 is fine
then it jumps up by like 10 msec.
and it seriously looks like you have jitter.
mkb: a lot of it
mercutio: where's that to?
mkb: 4.2.2.1 but the near end so it's always like that
mercutio: i have low jitter to hop 2
so i don't think it is deprioritisation :)
i've got 0.5 msec jitter to hop 2
mkb: it's wireless though, that probably contributes
after 100 packets even to the local router it's best 0.6 and worst 1.8
19.8*
mercutio: yeah
the router sucks :)
mkb: oh the router's great
well I should say
mercutio: rtt min/avg/max/mdev = 0.605/0.704/1.820/0.115 ms, ipg/ewma 0.738/0.678 ms
hmm
that's a flood ping
mkb: the router is great. the wifi part may not be
mercutio: but that's the same hahaha
mkb: the actual router is an openbsd machine
mercutio: oh hangon max of 1.8
or average of 1.8
i compare min to avg
mkb: still trouble round-trip min/avg/max/std-dev = 6.340/6.861/17.765/0.655 ms
but at the wrong time
round-trip min/avg/max/std-dev = 6.352/34.895/122.308/9.227 ms
to the nexthop
mercutio: the diff between min/avg is fine
hmm the next hop maybe has deprioritisation to you but not me
weird
i'm down to 0.3 msec jitter now
oh no i'm not
3.1 msec jitter :)
err 3.3 msec
but yeah if you worry too much about jitter you'd go crazy
jitter is much less of an issue than packet loss on wan links normally
mkb: I'm just tired of this game where latency goes up to 5000ms for two minutes intermittently
mercutio: as much as i like fq_codel, packet loss sucks.
well yeah that's serious buffer bloat.
but yeah that's the kind of thing i think would be cool to test for
and would go in the "fail" category
and be counted as downtime.
if only i could get funding for developing such things haha
the problem is to really identify problem locations/areas/etc you need to have lots of users.
so you kind of want tests that can run on windows as an app
mkb: the router is a better place for it
mercutio: and have enough users to rule out local issues.
mkb: at least from the ISPs perspective
mercutio: not from the users pov.
if their "internet they use" is slow
mkb: I mean I'd rather let them run code on their router than install it on my desktop
mercutio: it doesn't really matter if it's the wireless or the isp
but if you want to be isp agnostic.
mkb: if it's the ISP that's running the test I meant
mercutio: would you rather have an extra box or not?
it shouldn't be the isp that runs the test.
it should be independent
like the testing here favours the cable isp here
cos the cable isp has burst
mkb: I was thinking about an ISP trying to identify problems with their own network
mercutio: but it's too little burst to be useful.
mkb: just enough for the test :)
mercutio: oh i'm thinking of trying to map the state of the internet around the world.
and identify things like local level3 congestion issues :)
i'm fascinated by reading about the US's congestion issues :)
and peering disputes
it's a lot simpler here.
it's complicated enough in the US that i don't /know/ the situation
mkb: it's getting late here
all this sounds interesting though
mercutio: heh
mkb: goodnight
mercutio: 'night
***: Guest27982 has quit IRC (Ping timeout: 256 seconds)
qbit has joined #arpnetworks
qbit is now known as Guest33555
up_the_irons: damn scrollback looks very interesting yet i'll never have time to read it all