[01:39] *** dwarren has quit IRC (Quit: leaving)
[01:39] *** gizmoguy has joined #arpnetworks
[01:40] *** dwarren has joined #arpnetworks
[02:02] *** LT has joined #arpnetworks
[04:17] *** pyvpx_ has joined #arpnetworks
[04:18] *** hoggworm_ has joined #arpnetworks
[04:18] *** m0unds__ has joined #arpnetworks
[04:19] *** jcv_ has joined #arpnetworks
[04:21] *** Seju has joined #arpnetworks
[04:21] *** bitslip_ has joined #arpnetworks
[04:26] *** daca1 has joined #arpnetworks
[04:26] *** Seji has quit IRC (*.net *.split)
[04:31] *** daca has quit IRC (*.net *.split)
[04:31] *** m0unds_ has quit IRC (*.net *.split)
[04:31] *** jcv has quit IRC (*.net *.split)
[04:31] *** hoggworm has quit IRC (*.net *.split)
[04:31] *** pyvpx has quit IRC (*.net *.split)
[04:31] *** bitslip has quit IRC (*.net *.split)
[04:31] *** hive-mind has quit IRC (*.net *.split)
[04:31] *** dj_goku has quit IRC (*.net *.split)
[04:31] *** tooth has quit IRC (*.net *.split)
[04:31] *** mnathani has quit IRC (*.net *.split)
[04:31] *** d^_^b has quit IRC (*.net *.split)
[04:31] *** relrod has quit IRC (*.net *.split)
[04:34] *** hive-mind has joined #arpnetworks
[04:34] *** dj_goku has joined #arpnetworks
[04:34] *** tooth has joined #arpnetworks
[04:34] *** mnathani has joined #arpnetworks
[04:34] *** d^_^b has joined #arpnetworks
[04:34] *** relrod has joined #arpnetworks
[04:35] *** hive-mind has quit IRC (Max SendQ exceeded)
[04:36] *** hive-mind has joined #arpnetworks
[05:22] *** pyvpx_ is now known as pyvpx
[06:09] *** daca1 is now known as DaCa
[06:12] *** gizmoguy has quit IRC (Ping timeout: 252 seconds)
[06:15] *** gizmoguy has joined #arpnetworks
[06:17] *** jcv_ has quit IRC (Quit: leaving)
[06:18] *** jcv has joined #arpnetworks
[06:33] *** gizmoguy has quit IRC (Ping timeout: 256 seconds)
[06:34] *** gizmoguy has joined #arpnetworks
[10:29] *** LT has quit IRC (Quit: Leaving)
[13:33] is it just me or is freenode doing a bunch of netsplits / rejoins recently
[14:16] it does seem to be the case
[14:16] but it's more that freenode was surprisingly reliable for a while
[14:29] *** cloudkitsch has joined #arpnetworks
[14:47] *** cloudkitsch has quit IRC (Quit: ZNC - http://znc.sourceforge.net)
[16:13] *** qbit has quit IRC (Ping timeout: 245 seconds)
[16:26] *** medum has quit IRC (Ping timeout: 245 seconds)
[16:43] *** qbit has joined #arpnetworks
[16:43] *** qbit is now known as Guest27982
[18:45] *** erratic has joined #arpnetworks
[18:45] *** erratic is now known as Guest42160
[18:46] up_the_irons: I'm looking to learn some BGP stuff before Tuesday. I was wondering if there's anything I can do with this /48 that I have that I could set up with quagga?
[18:46] is it obvious I dunno what I'm talking about? Not the first time, trying to learn
[18:47] I guess I'd need to register an AS number or something
[18:49] Guest42160: what are you trying to learn?
[18:50] how to set up / admin bgp
[18:50] a) i wouldn't really recommend quagga
[18:50] b) most bgp setups are really simple, or really complicated; there's not a lot of in between
[18:51] yeah I don't have any cisco hardware
[18:51] c) bgp is usually used by people using multiple upstream providers for increased reliability or performance.
[18:51] you don't need to use cisco, but as someone who's used multiple open source routing platforms i'd recommend openbgpd or bird over quagga.
[18:52] ah ok
[18:52] hmm
[18:52] well that's a start :)
[18:52] i used quagga a long time ago before bird or openbgpd existed.
[18:52] net-misc/bird
[18:52] and it was better than zebra.
[18:52] but zebra was terrible.
[18:53] and quagga was still terrible.
[18:53] and the last i looked it's still not really that wonderful.
[18:53] you should be able to get a looking glass connection if you want to get a view of the internet.
[18:53] I set it up the other night but I have nothing to really configure with it
[18:53] but I checked out the zebra shell and stuff
[18:53] but that doesn't necessarily help you learn.
[18:53] no
[18:53] indeed not
[18:53] stay far away from quagga and zebra
[18:53] you can use private asn's to make your own little mini network.
[18:54] ah cool
[18:54] mkb: well i didn't want to cause offense... :)
[18:54] it's terrible when someone just finds something new and interesting to play with and someone comes along and says it's terrible!
[18:54] it tends to make people stop listening :)
[18:54] well true
[18:54] yeah I mean I just figure I've got this whole /48, I figure I ought to be able to do something kinda neat with BGP and that
[18:55] but I'm just guessing
[18:55] a /48 isn't that much space.
[18:55] you can't advertise any smaller
[18:55] it's probably easiest to play with virtualbox
[18:55] you kind of need at least a /48 to do anycast.
[18:55] it's an incomprehensible # of addresses lol
[18:55] and you need multiple locations to do much with bgp really
[18:55] mercutio, you can on a private network which is what he has anyway
[18:55] mkb: well true, but then anycast doesn't help :)
[18:56] ah yeah now I remember
[18:56] I wonder if I could use an EC2 instance along with my arpnetworks vps
[18:56] and set up anycast with it
[18:56] last time I played with a private BGP network with some friends one guy had zebra, and that's where my hatred of it and linux routing came from
[18:56] Guest, not on a public IP
[18:57] Guest42160: what's this "by tuesday" you talk of?
[18:57] I just want to have some experience to speak of by Tuesday because I'm interviewing for a job
[18:57] my hatred came from quagga dying and keeping routes in the table.
[18:58] I don't think it's a huge priority for them but I want to cover all of the bases
[18:58] guest: it depends on the workplace, but lots of workplaces don't really get excited by people playing with things at home.
[18:58] mercutio, did you know Linux refuses packets that come addressed to you with a source from some link that the routing table wouldn't send packets to?
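The private-ASN suggestion above is worth pinning down: RFC 6996 reserves 64512–65534 (16-bit) and 4200000000–4294967294 (32-bit) for private use, so a lab network can peer without registering anything. A minimal sketch (the function name is mine, not from the conversation):

```python
# Private-use ASN ranges per RFC 6996 -- usable for a mini BGP lab
# network like the one suggested above, no registration needed.
def is_private_asn(asn: int) -> bool:
    """True if `asn` is reserved for private use (RFC 6996)."""
    return 64512 <= asn <= 65534 or 4200000000 <= asn <= 4294967294

# e.g. two lab routers could peer as AS 64512 and AS 64513
```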
[18:58] because I'm tired of screwing around and I want to get back to work. It's amazon so I figure i'd better not squander any opportunity to beef up
[18:59] and so if you can't connect it to a solution you provided for somebody it can seem like you like to waste time doing "non productive" things.
[18:59] i look at things as getting experience/exposure in different areas helps divergent thinking.
[18:59] mercutio: yeah I am familiar with that attitude and I'm happy to tell anybody who doesn't regard experience as experience to go sodomize themself
[18:59] fuckin pisses me off
[18:59] but employers aren't necessarily like that :)
[19:00] guest: careful about your language when interviewing too.
[19:00] I'm sure Amazon will like his use of EC2 at least
[19:00] cool calm and collected.
[19:00] I know, I just had to vent there for a second
[19:00] all good :)
[19:00] what kind of job at amazon is it?
[19:00] mkb: yes i did know that actually
[19:00] mkb: I HATE IT
[19:01] mkb: it's rp_filter that does it.
[19:01] mkb: AND AT THE SAME TIME IT'LL ARP FOR OTHER INTERFACES BY DEFAULT
[19:01] it took me like a week to find
[19:01] that whole gentrifier pseudo professional attitude about experience doesn't count unless it's for an org is b.s.
[19:01] what's an org
[19:01] huh guest?
[19:01] they weren't ethernet fortunately so I didn't know that
[19:02] *** Guest42160 is now known as erratic_
[19:02] erratic_: i'm talking about solutions, not about doing things for big business or something.
[19:03] fwiw i have bgp at home :)
[19:03] I did some research into anycast several years ago for a HA project I was working on but I didn't have the budget for it
[19:03] but if i was applying for a job i wouldn't say that
[19:04] that actually sounds good, and you'll presumably be able to talk intelligently about anycast
[19:04] you can say why you chose not to etc.
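The rp_filter behaviour complained about above is strict reverse-path filtering: Linux (with `rp_filter=1`) drops a packet if the route back to its source address would not use the interface it arrived on. A toy model, with an invented routing table, just to make the rule concrete:

```python
import ipaddress

# Toy model of strict reverse-path filtering (Linux rp_filter=1):
# accept a packet only if the route back to its source address goes
# out the interface it arrived on. Routing table is illustrative.
routes = {
    ipaddress.ip_network("10.0.0.0/24"): "eth0",
    ipaddress.ip_network("192.168.1.0/24"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "eth0",   # default route
}

def egress_iface(src: str) -> str:
    """Longest-prefix match of `src` against the toy routing table."""
    addr = ipaddress.ip_address(src)
    best = max((n for n in routes if addr in n), key=lambda n: n.prefixlen)
    return routes[best]

def rpf_accept(src: str, arrived_on: str) -> bool:
    return egress_iface(src) == arrived_on
```

So a packet sourced from 192.168.1.5 arriving on eth0 is silently dropped, which is exactly the "week to find" failure mode described.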
[19:04] also sometimes people like to hear about mistakes people have made
[19:04] some people try and be perfect and act as if they never make mistakes
[19:05] I don't know if this is a networking position where you'd be expected to or just something else and you're trying to show general knowledge
[19:05] but someone who believes they make no mistakes will deal worse with mistakes they do make
[19:05] mkb: that's kinda the point, I'm not a total noob, I know more about this than I let on, but it's hard to express a good starting point so I just wanted to start from the beginning when I said "what can I do with quagga and my /48"
[19:05] but very few "programmers" have an accurate idea of how networks work
[19:05] mkb: yeah that always confused me.
[19:06] but then i have no idea how opengl works
[19:07] I remember in school when they tried to teach a class on writing a raytracer. It was an utter failure because nobody had the math involved
[19:07] mkb: actually lots of "networks" people don't seem to know how networks work either
[19:07] networks have more general applicability than opengl anyway
[19:07] i did graphics design in high school for like a year?
[19:07] nice
[19:07] that's kind of like ray tracing right
[19:08] this wasn't using computers :/
[19:08] but had stuff about perspective and shit
[19:08] open in Firefox http://jsfiddle.net/erratic/n4be8273/11/
[19:08] it's a tilemap engine that I made
[19:08] erratic_, as far as playing around goes I'd play with openbsd and its routing domains in virtualbox. it's easier to have all the network access rather than trying to debug some issue between ARP and EC2
[19:08] part of it I followed a tutorial for but the rest (path generation / pathfinding) I had to figure out
[19:09] why do you comment //down in a function called MoveDown?
[19:09] sorry :)
[19:09] mercutio, yeah I think the lack of art training confused people too
[19:09] erratic_, nice
[19:09] mkb: thank you. also fyi playing around is subjective http://paigeadelethompson.blogspot.com/2015/02/my-2014-network.html
[19:10] i'm upgrading my home network to 32gigabit
[19:10] once my friend sends me my cards :)
[19:10] though most of that is dead now, my server back in greece has been down for months since the house lost power and my bf can't be bothered to mess with it
[19:10] I have a BGP network operated via gif tunnels over DSL lines :) I know what "playing around" is
[19:11] that's awesome :)
[19:11] erratic_: i reckon you'd be better in a small business
[19:11] err smaller
[19:11] doing more general stuff
[19:11] I don't know
[19:11] I'll be happy wherever I am as long as it's interesting
[19:11] and pays me
[19:11] you seem like you're in the area where you'll be most likely comfortable dealing with "complexity"
[19:11] rather than doing the same things again and again
[19:12] generally speaking larger businesses tend to specialise more
[19:12] so it's more about doing similar tasks quickly and efficiently
[19:12] whereas there's more novelty in smaller businesses.
[19:12] (which some people find more stressful)
[19:12] you know i have a lot of fun working on my own projects so I think if I can't get enough enjoyment out of work there is always that
[19:12] heh
[19:12] there is always that
[19:12] but I think at this point I just need something that's consistent
[19:13] ok
[19:13] and I need to stay at a job for a while
[19:13] I'm in a smaller business and programming. I have enough trouble fixing up how other people deal with complexity.
[19:13] so I'm not really arguing with you but it's like you said, I have a hard time finding people who will take me seriously so I can't be too picky
[19:13] mkb: have you tried fq_codel?
[19:13] i.e. they don't and hope it doesn't matter. It usually does.
[19:13] i used to do programming
[19:14] i tried to get back into it again last night
[19:14] actually I shouldn't even say take me seriously, most people seem to regard me as not even worth talking to so
[19:14] then i was googling and i found some site called freelancer.com
[19:14] nope
[19:14] programming is fun
[19:14] i found some job making someone's site run quicker
[19:14] although I hate doing it for other people
[19:14] I use pf's queuing facility
[19:14] erratic_: yeah that's what i'm like.
[19:15] programmer jobs are aggravating
[19:15] well i don't necessarily mind for other people
[19:15] I've worked in several
[19:15] as long as it's what i want to do
[19:15] I could enjoy it if I didn't have to deal with other people's code
[19:15] but invariably it's not.
[19:15] mkb: this site had db issues :/
[19:15] and if I got to rewrite practically every component I come across
[19:15] *cough*
[19:15] sql injection?
[19:15] nah
[19:15] single table
[19:15] no indexes.
[19:15] the last job I had was programming and it sucked, was .NET C# / VB, porting horrible broken VB code, boring ecommerce code
[19:16] huge table scans.
[19:16] and sort by rand()
[19:16] err order by
[19:16] i got it going way way way quicker.
[19:16] my current job is converting some code from BDB -> LMDB. Every method I touch gets its line count at least halved
[19:16] but it still could be faster.
[19:16] lol nice
[19:16] erratic_, that's annoying
[19:16] and that's why i don't like programming for other people
[19:17] it's* why can I never do that right
[19:17] like i reckon web pages should load in 50 microseconds.
[19:17] or use 50 microseconds of cpu time
[19:17] not the way they use JavaScript now
[19:17] yeah javascript alone kills it
[19:17] but there are heaps of latency interactions etc too.
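The "no indexes, huge table scans, ORDER BY RAND()" combination above is a classic: `ORDER BY RANDOM()` forces the database to materialize and sort every row on each request, while picking a random primary key is one indexed lookup. A sketch with sqlite (table and names invented; the dense-id assumption is noted in the comment):

```python
import random
import sqlite3

# Sketch of the ORDER BY RANDOM() problem: the naive query scans and
# sorts the whole table per request; picking a random primary key is
# a single indexed lookup instead.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
db.executemany("INSERT INTO items (name) VALUES (?)",
               [(f"item{i}",) for i in range(1000)])

# slow: full scan + sort of all rows
slow = db.execute("SELECT name FROM items ORDER BY RANDOM() LIMIT 1").fetchone()

# fast: one lookup by primary key (assumes ids are dense; a real
# table with deleted rows needs a retry loop or a nearest-id query)
max_id = db.execute("SELECT MAX(id) FROM items").fetchone()[0]
fast = db.execute("SELECT name FROM items WHERE id = ?",
                  (random.randint(1, max_id),)).fetchone()
```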
[19:18] mkb: it was awful, I'll never do it again. I only took that job because I had to and I was desperate; then things changed and I found myself in a position where I didn't have to worry about work for a while
[19:18] you really want the whole web site to be within 20 msec of you
[19:18] it's an area i'm kind of interested in actually
[19:18] and that was good, I needed it after what I'd been through plus hating that job
[19:18] like moving application logic to the edge
[19:18] and having systems that don't rely on a central system
[19:18] oh practically nobody can design software
[19:19] they start writing code and hope it comes out okay
[19:21] heheh
[19:22] yeah people have different styles and some people can do better with others and can't do with all any other way but one
[19:22] mercutio, so how do you handle db consistency?
[19:22] try to remember that
[19:22] erratic_, it's not just that. you know fizzbuzz right? most "programmers" can't solve basic problems
[19:23] it helps the ones who are willing to try and interested in learning, also having patience
[19:23] mkb: well that's one of the complex parts.
[19:23] but usually you don't need to.
[19:24] you can always have reads so they don't touch and writes so they do
[19:24] but say you have a cart shared between systems, and you add something to your cart.
[19:24] and you're still going to the close system; if it can't talk to the remote system it shouldn't matter at that stage.
[19:24] you can say there should always be consistent stock availability.
[19:24] but ime stock counts are often wrong anyway
[19:25] the cart would be handled client side if you're willing to give up non-JavaScript users (not that you should; there are a few of us)
[19:25] like you can purchase something on amazon only to find out there's no stock.
[19:25] honestly stock counts are usually handled when they go to the storeroom
[19:25] so if you're a bit late with finding there's no stock available it doesn't really change things.
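For reference, the FizzBuzz screen mkb mentions above is just: multiples of 3 print "Fizz", multiples of 5 print "Buzz", multiples of both print "FizzBuzz", everything else prints the number.

```python
# The classic FizzBuzz interview screen referenced above.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# the usual task: print("\n".join(fizzbuzz(i) for i in range(1, 101)))
```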
[19:25] mkb: stock tracking is one of those problems that "seems simple"
[19:25] the thing that kills me that I struggle with is in C# when people make a class for everything. This is a common problem with Java too. You can often times elegantly make a class for a lot of things but sometimes it's just easier and less chaotic to have redundant properties on classes instead of shared nested objects, because you end up with so many files that it's hard to find your way around the source tree.
[19:25] and you can do everything perfectly from a db/systems pov.
[19:26] and then find there's an issue with stock being damaged/lost/misplaced etc.
[19:26] stolen.
[19:26] then someone leaves some a little off the shelf and the cart picks it up and deposits it 10 feet away
[19:26] later someone finds it and shelves it there instead
[19:26] mkb: but then the box is empty :)
[19:27] because some other one was damaged, and someone stuck the one from that box with the other one
[19:27] because they didn't want to do a new order.
[19:27] yeah there are a lot of complexities.
[19:27] and a lot of them can get out of scope.
[19:27] my parents run a pawn shop without computers. I've seen storeroom problems :)
[19:27] erratic_: what bugs me is when people comment obvious things :)
[19:28] people go way too crazy though i think, an example is you have a class called SoftwareBusiness, extends Business, has a nested property of type OrganizationalContact, extends abstract Contact, class Business has a property called Contact, class OrganizationalContact is a typeof Contact ...
[19:28] because someone taught them to comment regularly.
[19:28] i have old school type coding though
[19:28] i use iterators called i.
[19:28] so for (i = 0; i < ..
[19:28] though thinking about how this db would be kept on paper records gives you some clarity I've found
[19:28] doesn't seem off to me
[19:28] mercutio, people don't do that now?
[19:28] so do I unless I'm using an IDE like visual studio or monodevelop that completes variable names, then i use "index"
[19:28] whereas the new style is to make up some fancy three word name for it
[19:29] mkb: it's pretty uncommon now
[19:29] well if you look at java etc code rather than c code.
[19:29] oh god I hate java
[19:29] the way i see it is that code blocks should fit on the screen.
[19:29] if the code block is too large to fit on screen then it's hard to comprehend
[19:29] you have to close over final variables because any other way could be "confusing", then you have mutable objects which for some reason aren't "confusing"
[19:30] but i'm not that opposed to three page functions if they don't have loops that cover multiple pages.
[19:30] it's not just line count
[19:30] and having lots of single line procedures..
[19:30] mkb: I've gotten so used to C# https://github.com/paigeadele/nMVC/blob/master/nMVC/Core%20Classes/HTTP/RouterManager.cs#L69
[19:30] this BDB -> LMDB thing I mentioned. I've found a million instances like this: int i = 0; i = findsomevaluefunction(...);
[19:31] I can practically write stuff like that in my sleep
[19:31] but when I read the first statement I have to think about why it's set to zero
[19:31] haha
[19:31] i'm so behind on coding
[19:31] i'm rewriting a transparent tcp proxy.
[19:31] and i find the socket stuff kind of icky.
[19:31] mercutio: I have something you will like one sec
[19:31] C sharp has lambda now?
[19:32] mercutio, C?
[19:32] mkb: yeah.
[19:32] mercutio: I just wrote this today http://paigeadelethompson.blogspot.com/2015/03/doing-more-with-http-proxies.html I've been meaning to for months now
[19:32] i've been trying to figure out how to capture syn ack
[19:32] so like you make a tcp connection out from your computer, over the internet
[19:32] and a box in the middle does a tcp hijack when it gets the synack back
[19:33] sounds like masscan
[19:33] kinda
[19:33] preferably it captures the syn, and sends the syn itself
[19:33] talking about complexity: there's so much involved with syscall semantics and most people don't read manpages
[19:33] well basically i want connection refused, etc all to seem normal
[19:33] so that tcp connection only succeeds if it gets through to the other end
[19:33] yeah you need raw sockets for that
[19:33] then the tcp proxy masquerades as your normal ip
[19:33] if they even present everything?
[19:34] there'll be a way
[19:34] i'm just hoping i don't have to use tun/tap or something :/
[19:34] that's a way, but it's probably a slow way
[19:34] well this is meant to be to speed up ~32 megabit connections
[19:35] there's some other voodoo i want to add into it too :)
[19:35] queueing?
[19:35] i'm already using fq_codel
[19:35] hmm
[19:35] the problem is if you download at 4mb/sec
[19:35] and you're downloading at 4mb/sec atm
[19:36] and you have new data come in, and that slows down the previous connection, and makes room for the new connection
[19:36] then everything is switched towards "not overloading connections"
[19:36] and the bw management etc means it takes longer to get back up to speed
[19:36] so when that first 4mb/sec connection finishes
[19:36] the new connection won't immediately go to 4mb/sec
[19:36] QoS stuff is something I definitely need to get my hands dirty with
[19:36] That's what she said!!
[19:37] so i want to have 2 megabytes of buffer or such :)
[19:37] lol
[19:37] erratic_: fq_codel is nice and automagical.
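Whatever capture mechanism the "hijack on SYN-ACK" idea above ends up using (raw sockets, tun/tap, pcap), the first step is recognizing the flags in a raw TCP header. A self-contained sketch, with field offsets per RFC 793 (the capture path itself is out of scope here):

```python
import struct

# Sketch: recognizing SYN / SYN-ACK in a raw TCP header, the first
# step for the middlebox idea above. How the bytes are captured
# (raw socket, tun/tap, pcap) is a separate problem.
SYN, ACK = 0x02, 0x10

def tcp_flags(segment: bytes) -> int:
    # flags live in byte 13 of the fixed 20-byte TCP header
    return segment[13]

def is_synack(segment: bytes) -> bool:
    f = tcp_flags(segment)
    return f & SYN != 0 and f & ACK != 0

# build a minimal header for demonstration: src/dst port, seq, ack,
# data offset 5 (<<4), flags, window, checksum, urgent pointer
synack = struct.pack("!HHIIBBHHH", 1234, 80, 0, 1, 5 << 4, SYN | ACK,
                     65535, 0, 0)
```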
[19:37] * mkb forgets how to tell BryceBot he did a good job
[19:37] yeah I was just reading about it :)
[19:37] it's long haul where it can still not be ideal.
[19:37] like if you're 300 msec away
[19:37] mkb: "BryceBot: yes" or just "twss"
[19:38] just having a massively long queue with http/https separate from normal traffic helped
[19:39] meant ssh wouldn't bounce around, and you don't really notice "lag" with http/https
[19:39] add 50 msec to ssh and you'll notice it more than 50ms to http/https
[19:39] yeah
[19:39] when I was in greece I was actually using google voice over my vpn
[19:39] I use ssh over AT&T u-verse sometimes. It's horrible
[19:39] I didn't really have any complaints
[19:40] i have graphs for my connection
[19:40] as far as I can tell AT&T just cuts your connection off for five seconds once a minute
[19:40] fq_codel is amazing :)
[19:40] That's what she said!!
[19:40] I gotta get off here for a bit guys, I'll catch yall later this evening if you're around
[19:40] well for latency, throughput could still go up :)
[19:41] hmm i have 0.1 msec jitter on my vdsl
[19:41] mercutio: I'm kinda interested in your proxy, will ask later
[19:41] mercutio, does it eventually get up to 32 mbit?
[19:41] mkb: does what?
[19:41] the throughput on this connection? I don't understand what would keep it down
[19:42] mkb: oh my connection can easily do full bw
[19:42] what i'm talking about is that when you've got some usage on your connection, and are transferring files remotely.
[19:42] and you stop your usage, tcp/ip takes ages to recover.
[19:43] and faster connections usually transfer faster, and can respond a little quicker
[19:43] and buffer a little
[19:43] in order to "keep the pipe always full"
[19:43] i mean it's kind of academic
[19:43] http://pastebin.com/xnnQQfiR
[19:44] it's not like it gets terrible speeds, but it could be better :)
[19:44] hmm
[19:46] and if it can be better and transparent...
[19:46] that to me is awesome :)
[19:47] also my connection is more like 34megabit
[19:47] but yeah there is a long and complicated plan surrounding this idea
[19:48] where if you have bgp you have data coming in in multiple locations
[19:48] and can terminate closer to that point you can get speed up even further
[19:49] to which point?
[19:49] more data is sent when you ack, so if you ack closer to the destination
[19:49] ah
[19:49] your speed goes up quicker, and you respond quicker to packet loss etc.
[19:49] and then you just have to relay back to the source
[19:49] so if you even then relay to a faster system that relays it to the bottleneck
[19:50] then you have a higher chance of a good connection
[19:50] damn i'm letting out all of my secrets :)
[19:50] so then if you have multiple ways to relay traffic, then you can have traffic take different paths depending on network health
[19:50] that's usually done with MPLS I thought
[19:51] everything kind of ties together.
[19:51] yes
[19:51] but it's also pretty dumb normally
[19:51] but something has to detect that state and yeah
[19:51] so if you have two ways to send data you can send traffic down both paths
[19:52] so if you're using ssh or something, the data volume isn't very high, but not receiving data really sucks
[19:52] but if one path has 25 msec higher ping than the other
[19:52] and you send down both
[19:52] and the quicker one drops the packet
[19:52] the second one will have the data
[19:52] and you'll just get it 25 msec later.
[19:53] that kind of thing is done on fibre rings sometimes.
[19:53] like they'll send in both directions around a ring
[19:53] and one direction is quicker than the other..
[19:53] but it means you don't even have to check if it's up or anything
[19:53] I haven't thought about routing based on port number
[19:53] you just default to sending both ways.
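The "send down both paths" idea above is easy to model: duplicate every packet on a fast path and a slower one, and the arrival time is that of the earliest surviving copy, so a loss on the fast path costs only the latency difference, not a retransmit. A sketch with invented numbers matching the 25 msec example:

```python
# Sketch of path duplication: each packet goes down both a fast path
# and a slower one; arrival time is the earliest surviving copy.
# Latencies are illustrative (25 msec apart, as in the example above).
def arrival_ms(fast_lost: bool, fast_ms: float = 10.0,
               slow_ms: float = 35.0) -> float:
    times = [] if fast_lost else [fast_ms]
    times.append(slow_ms)   # slow path assumed loss-free here
    return min(times)

# normally the copy on the fast path wins:
# arrival_ms(False) -> 10.0
# a fast-path drop just means the slow copy arrives 25 msec later:
# arrival_ms(True) -> 35.0
```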
[19:53] that's a good idea though
[19:53] yeah basically there are expensive ways to do things
[19:54] and to do things better
[19:54] but if you want to be cheap, and have a working solution, the easiest thing to do is to send down "two paths"
[19:54] all data.
[19:54] and obviously if you're sending bulk data that needs to be sequential you probably won't do that
[19:55] but say you've got 8 paths
[19:55] you can send down 8 paths with raid :)
[19:55] sequential has shitloads of issues though
[19:55] I was thinking more about you have one link with low latency and low bandwidth and one link with high latency and bandwidth: send SSH/RTP through low latency and everything else through high
[19:55] like different speed connections etc.
[19:55] yeah
[19:56] it tends to be low latency / high bandwidth / high cost
[19:56] vs high latency low bandwidth low cost
[19:56] nowadays.
[19:56] why is sequential an issue? TCP handles it?
[19:56] terribly
[19:56] that's why people are ultra careful with load balancing atm
[19:56] it really doesn't handle out of order etc well
[19:57] if you take away the sequential requirement and send different parts of the file
[19:57] like bittorrent
[19:57] then you can speed up bulk transfers
[19:57] and so if you can do something like scp that takes away the sequential requirement..
[19:57] globus
[19:58] http://toolkit.globus.org/toolkit/docs/4.2/4.2.1/data/gridftp/user/
[19:58] We've got a 100Gbit link at work I'm supposed to test that with
[19:59] yeah i've looked at that
[19:59] i've played with udt
[19:59] i was going to use udt.
[19:59] udt is interesting.
[19:59] it has high startup cost.
[19:59] and a few other annoying things
[19:59] i decided to just do my own in the end.
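Dropping the sequential requirement, bittorrent-style, reduces to a simple pattern: split the payload into indexed chunks, send each chunk down whichever path is free, and reassemble by index so out-of-order arrival doesn't matter. A minimal sketch (chunk size and shuffle are illustrative):

```python
# Sketch of non-sequential bulk transfer: chunks carry an index, so
# they can arrive in any order over any number of paths and still be
# reassembled, unlike an in-order TCP byte stream.
def split_chunks(data, size):
    return {i // size: data[i:i + size] for i in range(0, len(data), size)}

def reassemble(chunks):
    return b"".join(chunks[i] for i in sorted(chunks))

payload = bytes(range(256)) * 4
chunks = split_chunks(payload, 100)

# simulate out-of-order arrival over multiple paths
arrived = dict(reversed(list(chunks.items())))
assert reassemble(arrived) == payload
```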
[20:00] but yeah with scp you want low priority / medium priority / high priority
[20:00] you want low priority for huge backups over slow links that you don't want to get in the way
[20:00] you want high priority if you're doing something like pushing dns updates
[20:00] and medium priority for transferring medium files around.
[20:00] dns updates are small enough that it doesn't matter?
[20:00] but yeah i want to do a few complicated things with routing through intermediate boxes.
[20:00] but I understand what you mean
[20:00] and having static authentication stuff.
[20:01] what happens if you want to push dns updates to a dns server that is having 50% packet loss
[20:01] like to increase the ttl because people are hammering the site and it's getting congested (say)
[20:01] only on the updates? it would lose dns requests too
[20:01] yeah but raising ttl can still reduce impact if it's not ddos
[20:02] yeah
[20:02] i know it's kind of a weak example in some ways
[20:02] but i dunno about you, but when i make a dns change i want "instant" feedback that it's propagated.
[20:02] and so if it can do the update 200 msec quicker that matters to me.
[20:02] so you're exploiting the fact that there's extra bandwidth between the link and the server (link isn't as big as that ethernet cable I mean)
[20:02] so yeah the other thing i want to look at is forward error correction too.
[20:03] well some paths may have issues and others not
[20:03] like you have comcast and level3
[20:03] oops
[20:03] cogent and level3
[20:03] and cogent is experiencing issues and level3 isn't
[20:03] and your normal link goes over cogent
[20:03] but you have another host that can connect over level3
[20:03] 200 msec? that's going to be dwarfed by the time it takes to notice and think about the connection
[20:03] and you don't get packet loss to that other host, so you bounce via that other host.
[20:03] maybe.
[20:03] this is why i don't like coding for other people :)
[20:04] heh
[20:04] i don't want to have to justify myself, i just want things to work "better" :)
[20:04] ok
[20:04] let's shift back to ssh
[20:04] but I think 200 msec is optimistic if you've got 50% packet loss
[20:04] so your ssh connection goes over cogent and cogent has loss
[20:04] and level3 doesn't
[20:04] so it automatically routes via the level3 host.
[20:05] yeah I understood that part
[20:05] well yeah it depends on your rtt to the remote host etc too.
[20:05] the thing is i want this stuff to happen semi-automagically.
[20:05] so you might be able to view some status web page
[20:05] but it's handling all this routing for you
[20:05] but the cool thing is that tcp/ip handles things a lot better when you're close to the destination
[20:06] so even if you're going back to normal tcp/ip closer to the destination, and you just have automagic stuff between hosts that you control, you can still make a world of difference.
[20:06] so yeah this is an idea from years ago.
[20:06] hmm
[20:06] and i managed to get a proof of concept going with squid and web pages.
[20:06] where squid would direct close to a destination web site.
[20:07] and then send traffic back
[20:07] so you had a squid proxy, that then connected to another squid proxy
[20:07] and it'd do a geo ip lookup in order to decide where to route it to.
[20:07] but then you find some things like google, well all of google says it's in mountain view, ca.
[20:08] so doing it with bgp would be better.
[20:08] I imagine people like akamai and cloudflare have big databases they make of "actual" network locations
[20:08] but then how do you get lots of bgp entry points :)
[20:08] ping time is more relevant than geo location anyway
[20:08] true, lots of things aren't pingable though
[20:08] and so you have to look at tcp latency
[20:08] but yeah there are some smarts that can be done.
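The "bounce via the host without loss" logic above (prefer the clean level3 relay over the lossy direct cogent path, even at higher rtt) is just a two-key comparison over probe results. A sketch with invented probe data:

```python
# Sketch of relay selection for the cogent/level3 example above:
# given (loss fraction, rtt ms) per candidate path, prefer loss-free
# paths first, then lowest rtt. Probe data is invented.
def pick_relay(probes):
    # tuples compare element-wise, so loss dominates rtt
    return min(probes, key=lambda r: probes[r])

probes = {
    "direct-cogent": (0.50, 40.0),    # usual path, heavy loss
    "via-level3-host": (0.00, 55.0),  # clean but slightly longer
}
# pick_relay(probes) -> "via-level3-host"
```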
[20:09] akamai has terrible intelligence
[20:09] same with cloudflare.
[20:09] cloudflare is just anycast.
[20:09] akamai holds bgp views of its path to the source.
[20:09] i got my system working well for web browsing
[20:09] i got a list of sites to test.
[20:09] i'd look through the logs and find which sites loaded slowly.
[20:09] maybe they should
[20:09] or which parts of what sites.
[20:10] i found a few interesting things out.
[20:10] so there's a tunnel between squids?
[20:10] akamai often contributed to far higher page load times.
[20:10] basically akamai's miss cache performance was terrible, it was way worse than going direct.
[20:10] so it may take 50 msec sometimes, and 2000 msec other times.
[20:10] there wasn't even a tunnel
[20:10] just an ip acl
[20:10] and squid using it as a parent proxy
[20:11] so yeah akamai was one of the biggest annoyances.
[20:11] as you didn't know where to send stuff
[20:11] okay, the squid ips were special and your network knew to send over different links?
[20:11] you can kind of guess where the origin site might be
[20:11] and shift it closer to the origin.
[20:11] oh it wasn't doing any intelligence
[20:11] i just chose hosts that i had good connections to
[20:11] and dropped them if the network sucked.
[20:11] right
[20:11] but it got annoying
[20:12] because it couldn't get rid of a host if it was misperforming
[20:12] I don't understand why this improved things
[20:12] and heaps of "cheap vps's" had terrible networks.
[20:12] arp's a lot better than most :)
[20:12] ok
[20:12] there are a few reasons it improves things
[20:12] how is going to the squid on the other side improving things
[20:12] partially it was because i did this years ago
[20:12] you're picking different routes than the network would have?
[20:12] and i was doing the initial window size of 10 packets
[20:13] when linux didn't have it by default yet.
[20:13] cheap vpses are terrible [20:13] so it'd send 14.6k of data initially [20:13] partially it was because quite a few sites use nt etc [20:13] err "windows server" [20:13] and have terrible network stacks. [20:13] partially it was because especially in the uk some providers' upstreams were way faster than others to back here. [20:14] the uk is kind of special like that. [20:14] like it used to be that ovh's network really sucked to here. [20:14] and quite a few people hosted stuff on ovh [20:14] err and hetzner.de [20:15] i think ovh has got a lot better to here actually [20:15] i think they improved their US routing when they did their canada expansion [20:15] err and the other thing i was doing was i was setting a bandwidth limit [20:15] and overshooting bandwidth can lead to loss [20:16] yeah [20:16] and on short amounts of data transfer things aren't recovered from quickly [20:16] so i used htb and capped it at my adsl sync speed :) [20:16] or a little higher [20:16] i think it was capped at like 20 megabit [20:16] when i had 21 megabit sync adsl [20:16] but adsl has high overhead etc. [20:16] anyway with all of this work i got average page load time down from around 1400 msec to around 1200 msec. [20:17] nice [20:17] but if you just look at an average like that you can be like you were before, and say 200 msec doesn't matter. [20:17] but i benchmarked on other isp's etc too. [20:17] and one isp was 1600 msec, the other one was 1800 msec. [20:17] but even then, that's just averages. [20:18] what really was interesting, was looking at the benchmark run [20:18] and getting other people to run the benchmark. [20:18] like someone tried running it without adblock. [20:18] and they were at like 2600 msec. [20:18] why squid over using bgp or some other routing protocol to change routing? [20:18] and i ran it on another isp that i used to find was kind of weird with web performance [20:19] and it'd just hang part way through the test.
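The two tweaks mentioned here — an initial congestion window of 10 and an HTB cap just under ADSL sync speed — can be sketched like this. The interface, gateway, and rate are made-up examples, not the actual config from the conversation:

```shell
# initcwnd 10 means roughly ten full-size segments in the first flight:
# 10 x 1460-byte MSS = 14600 bytes, the "14.6k" figure above
echo $((10 * 1460))

# setting it per-route, before linux raised the default
# (hypothetical gateway/interface; needs root):
#   ip route change default via 192.168.1.1 dev eth0 initcwnd 10

# HTB cap slightly under the 21 megabit sync speed so queuing happens
# locally instead of in the DSLAM (again a sketch; needs root):
#   tc qdisc add dev eth0 root handle 1: htb default 1
#   tc class add dev eth0 parent 1: classid 1:1 htb rate 20mbit
```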
[20:19] who was i going to get a bgp feed through? [20:19] ooh I can imagine ads (aside from being big images and full of javascript if not flash) going through a worse route [20:19] seriously, i tried to get bgp first. [20:19] oh that's a problem [20:20] mercutio, AT&T's stuff is like that. I think their router runs some custom TCP stack that gets confused if there's too much going on and drops connections [20:20] hmm, i've had bgp with arp since 2012. [20:20] but yeah this was pre 2012 [20:21] and yeah i asked about getting full table with them back then :) [20:21] back then you couldn't. [20:22] so yeah, anyway i seem to have blabbered on for ages. [20:22] but akamai was one of the slow parts [20:22] That's what she said!! [20:22] the other was adblock made a huge diff [20:22] no it's interesting [20:22] but what i found really interesting, is one of the slowest sites on my test was in the same city as me. [20:22] I've just used a hosts file for ages and that seems to improve things [20:23] and one of the fastest sites wasn't that close to me. [20:23] and a lot of web site performance had to do with the origin web site. [20:23] and processing delays on forums etc could easily be over 200 msec. [20:23] this was pre everyone including bloody facebook links [20:23] you know if facebook goes down now that a lot of random web sites will get slow? [20:24] the other thing is i got a few other people to try it [20:24] I don't have javascript turned on :) [20:24] and one of the things people seemed to notice was that it felt like it "delayed a little" then showed everything at once. [20:24] psychology has a big impact here [20:24] now i don't know how much delay there really was. [20:25] but pages definitely seemed less progressive.
[20:25] and one of the problems i used to experience with packet loss is pages would "stutter" when loading [20:25] and not only did that stutter go away, but web browsers' delay before showing pages seemed to kick in [20:25] and it'd kind of have more ready. [20:25] and that's partially because it was speeding up things like images. [20:26] so it'd reduce reflows etc. [20:26] so it felt quite different [20:26] but the benchmarks would often not be that different [20:26] and that's because it shifted a lot of "useful" stuff earlier. [20:26] but often there was some stupid slow annoying thing before the page was finished loading. [20:26] and often it was lame stuff like tracking [20:27] and so a nice benchmark to me would be one that doesn't wait for the whole page to load [20:27] but waited for "enough" to load. [20:27] hmm [20:28] also it made me really anal about doing minor tweaks/changes :) [20:28] network time? I guess you need the browser to parse the HTML and make all the requests [20:28] because i didn't have enough time to dedicate to get to the next step. [20:28] i was working on this before there was a big earthquake in my city. [20:28] and i moved cities etc. [20:29] also the other thing i noticed is that it matters so much more if something takes 2 seconds instead of 4 seconds, compared to 1400 msec instead of 1600 msec [20:29] there's a lot of thresholds involved. [20:30] but it helped me want to change the way i measure performance [20:30] to fail/pass [20:30] like "good enough", "not good enough" [20:30] there's this network testing in this country which tests the performance of isp's compared to maximum line speed. [20:30] and it measures "peak time" speed vs "maximum speed" [20:31] the problem is it pretends to load web pages, but it does it all sequentially.
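The fail/pass idea above — classify each measurement against a "good enough" threshold instead of averaging — can be sketched in a few lines of shell. The 2000 ms threshold matches the "every site under 2 seconds" expectation stated later in the log; the sample times are invented:

```shell
# classify page-load times as pass/fail against a "good enough"
# threshold rather than reporting one average number
for ms in 1350 1900 4200; do
  if [ "$ms" -le 2000 ]; then
    echo "$ms ms: pass"
  else
    echo "$ms ms: fail"
  fi
done
```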
[20:31] I always hate single-number network tests [20:31] it does bandwidth testing but it looks at the "fastest" bandwidth chunks [20:31] speedtest.net is bad like that too [20:31] it also tries to measure "idle" connections. [20:31] it's a huge peeve with me. [20:31] to me, all testing should be done all the time. [20:32] if a connection is busy, and you test and the internet goes badly, then the isp sucks. [20:32] it's like if someone is using skype, and you do web browsing and their skype has issues [20:32] that means the isp is bad! [20:32] or the router [20:32] well isp's normally provide the router [20:32] i don't care if you get 90% of your speed in peak times. [20:33] when you can't have multiple concurrent users. [20:33] and it's kind of just a mindset thing [20:33] it's like "muscle cars" [20:33] crap ones but that's true [20:33] sure they may have high horsepower [20:34] but that doesn't mean that you'd want to take them rallying [20:34] I told you about AT&T's router's inability to handle more than a few TCP connections at once [20:34] yeah it sounds disgusting [20:34] they compound the problem by demanding that you use their router (which does some encryption so you're stuck) [20:34] god [20:34] that sucks [20:34] but muscle cars in a rally race are fun to watch on youtube :) [20:34] hahaha [20:34] true [20:34] but yeah it's funny how everything ties in together kind of [20:35] you can fix it a lot with PPTP or something [20:35] like i want to test network health passively too [20:35] since it only sees the one connection [20:35] yeah i did pptp with the isp that had 1800 ms [20:35] to use my proxy [20:35] err and pptp without using proxy [20:36] and pptp alone helped things [20:36] that isp had bad tcp windows and a bad transparent proxy [20:36] err a transparent proxy that had bad tcp window sizes [20:36] well when i was more into this idea [20:36] i was trying to think how i could make money off it [20:36] and how much people would pay for faster
browsing [20:37] and i figured only like $5/month [20:37] and that i probably couldn't make any money off it [20:37] you could eat akamai and cloudflare's lunch if they're as bad as you say [20:37] so it was mostly an academic exercise. [20:37] cloudflare is free and huge [20:37] akamai has good marketing [20:37] akamai looks impressive on paper [20:37] the problem with akamai is there's so many akamai caches that don't share cache, that cache misses are way too common [20:38] if you have less caches with more content and better connections to end servers [20:38] actually cloudflare could be a lot better if they had faster connection from their proxies to a closer end point that connected to the source [20:38] it's all ntt though [20:38] well ime [20:38] i haven't looked that hard [20:38] so if you have a connection to ntt you should be good for cloudflare. [20:39] no tiered cache? just there or go to origin? [20:40] yeah they don't tier [20:41] but they don't have that many locations [20:41] so if you have a busy site and they have 12 locations [20:41] I meant akamai [20:41] it'll pull from each of those locations [20:41] oh akamai may have rudimentary tiering [20:41] it's a bit of a mess. [20:41] there's heaps of different akamais. [20:41] akamai streaming or something is really bad here. [20:41] I didn't realize the situation was so bad [20:42] I guess that's why Google and Amazon do it themselves [20:42] well it depends what your expectations are [20:42] i mean my expectation is that every web site should load in less than 2 seconds total. [20:42] and i'd rather 1 second. [20:42] and akamai easily pushes sites over 3 seconds.
[20:42] oh amazon is terrible here too [20:42] most amazon stuff is east coast US [20:43] so am I so it works out well for me [20:43] hmm [20:43] curl --compressed http://www.amazon.com/ > /dev/null 0.00s user 0.00s system 0% cpu 2.300 total [20:43] curl -x arp.meh.net.nz:3128 --compressed http://www.amazon.com/ > /dev/null 0.00s user 0.00s system 0% cpu 2.262 total [20:43] i have a proxy on there heh [20:43] oh the other thing is that the proxies would use keep alive to stay connected to the remote proxies. [20:44] yeah you'd think someone as big as amazon could bring it closer to the user. [20:44] they like only having a few big datacenters it seems [20:44] if EC2 is any indication [20:44] west coast ec2 isn't that popular [20:44] not that they couldn't colo proxies anywhere [20:44] most things are east coast. [20:44] and west coast is seattle [20:44] which is kind of bad from here :( [20:45] what time does it take for you with curl? [20:45] 1.83, 1.74, 1.87 [20:45] it's very variable [20:45] curl --compressed http://www.amazon.com/ > /dev/null 0.01s user 0.01s system 0% cpu 1.472 total [20:45] that's from dallas on vultr [20:46] it'll be their site [20:46] because that's ~36 msec away [20:46] my curl doesn't suppress the progress bar... [20:46] try curl -q [20:46] oh i just cut and paste the time line [20:46] and it's zsh so it's single line time [20:47] so yeah, that's the other issue, sites that have slow backends :) [20:47] time curl --compressed https://typekit.com/ > /dev/null [20:47] try this site [20:47] it's not -q.. [20:48] oh [20:48] -s ? [20:48] so typekit.com is 1 second from here [20:48] and ping is ~10 msec lower than amazon [20:48] 0.19, 0.13, 0.18, 0.33 and still going after >5 seconds... [20:48] * mkb doesn't know what happened there [20:48] and that site was always fast. [20:49] try browsing the site [20:49] it's even faster than it used to be.
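For the curl timing runs above: the flag being hunted for is `-s` (silent), not `-q`, and curl's `-w` option can print its own timing breakdown, which avoids wrapping the command in `time` at all. A sketch using the same example URL as the log:

```shell
# -s silences the progress meter; -w prints curl's timing variables
# (DNS lookup, TCP connect, total) after the transfer completes
curl -s --compressed -o /dev/null \
     -w 'dns=%{time_namelookup}s connect=%{time_connect}s total=%{time_total}s\n' \
     http://www.amazon.com/
```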
[20:49] now that's significantly slower [20:49] too many images [20:49] so it looks totally different than it used to :) [20:50] damnit [20:50] my fast site got slower :) [20:50] an openbsd desktop isn't known for that kind of speed though [20:50] it was one of the sites someone said is slow for them sometimes. [20:50] the other one like that was xda developers [20:51] which was weird, because it had a very close ip to where one of my proxies was :) [20:51] ahh it's moved [20:51] it was on steadfast. [20:51] but it meant i got to test it from very close :) [20:51] how do you separate complaints into network and client issues? [20:52] what do you mean [20:52] well that supposes something [20:52] I'm guessing you work or worked for an ISP? [20:52] you mean if someone says a site is slow for them? [20:52] yeah i work for an isp. [20:52] and got complaints when the stuff was slow. yeah? sometimes it's a problem with their end or the site [20:53] hardly anyone complains about speed. [20:53] if it's the site maybe you can fix it but some people have a shitton of viruses installed [20:53] yeah if someone complains about speed it's probably that they've got a virus or are uploading too much. [20:53] or have wireless issues. [20:53] i only really get involved if someone can't access a site or something [20:54] hardly anyone complains to isp's about speed except gamers. [20:54] that's why some isp's say they hate gamers. [20:54] I'd complain about latency for SSH if I thought it would help any [20:54] hmm [20:54] most likely I'd get stuck trying to explain to the customer support drone what SSH is [20:54] yeah i wouldn't complain about latency to at&t [20:54] i'd just tunnel it [20:55] and most likely they'd eventually tell me SSH is unsupported [20:55] heh [20:55] actually i seem to remember complaining about ssh latency once. [20:55] i'm trying to remember why [20:55] I used to complain when their network would drop but I learned it doesn't help [20:55] nah escapes me.
[20:56] i've used heaps of different isp's. [20:56] DSLAM was across the street [20:56] that can cause issues. [20:56] some modems hate dslams being too close [20:56] they get overloaded. [20:56] hmm [20:56] i know, paradoxical :) [20:56] it's like when someone's shouting in your ear [20:57] and it's harder to hear what they're saying than if they shout from further away [20:57] it's been better since we moved [20:57] the obvious answer is to reduce the volume. [20:57] except the phone service [20:57] umm you know what can help though [20:57] if you hit that again [20:57] try adding a phone extension cable [20:57] even a 10 metre (30 feet) cable [20:57] can make a difference. [20:58] i've heard of it helping, seriously. [20:58] wow [20:58] i dunno if you've heard of people saying not to use extension cables [20:58] but lots of things go around where people get "general wisdom" that doesn't always apply. [20:59] also i think vdsl copes better with short lines. [20:59] compared to adsl. [20:59] there's lots of general wisdom that's based on nothing more than what someone made up once [20:59] u-verse is that and I think the network is fine except their damn router [20:59] well extension cables can reduce speed by a megabit or more. [20:59] I have plain old fashioned adsl though [21:00] i had a friend who was using them [21:00] there was terrible routing. [21:00] there was high latency too [21:00] but i dunno if that was just the routing or interleaving. [21:00] vdsl is better :) [21:00] I wouldn't be surprised. I meant that the dropping connections was the router [21:01] what's your next hop latency like? [21:01] 1. hmrtr.b 0.0% 9 0.7 3.2 0.7 19.8 6.3 [21:01] 2. adsl-74-177-71-1.gsp.bellsouth.n 0.0% 9 16.0 16.8 7.3 80.6 24.0 [21:01] 3. 72.157.40.72 0.0% 9 27.5 41.4 17.6 155.9 44.9 [21:01] 4.
12.81.44.64 11.1% 9 18.7 22.2 17.4 28.2 4.8 [21:01] hmm [21:01] bloody icmp deprioritisation [21:01] it's hard to tell [21:01] but 7.3 is fine [21:02] then it jumps up by like 10 msec. [21:02] and it seriously looks like you have jitter. [21:02] a lot of it [21:02] where's that to? [21:02] 4.2.2.1 but the near end so it's always like that [21:02] i have low jitter to hop 2 [21:03] so i don't think it is deprioritisation :) [21:03] i've got 0.5 msec jitter to hop 2 [21:03] it's wireless though, that probably contributes [21:03] after 100 packets even to the local router it's best 0.6 and worst 1.8 [21:03] 19.8* [21:03] yeah [21:03] the router sucks :) [21:04] oh the router's great [21:04] well I should say [21:04] rtt min/avg/max/mdev = 0.605/0.704/1.820/0.115 ms, ipg/ewma 0.738/0.678 ms [21:04] hmm [21:04] that's a flood ping [21:04] the router is great. the wifi part may not be [21:04] but that's the same hahaha [21:04] the actual router is an openbsd machine [21:05] oh hangon max of 1.8 [21:05] or average of 1.8 [21:05] i compare min to avg [21:07] still trouble round-trip min/avg/max/std-dev = 6.340/6.861/17.765/0.655 ms [21:07] but at the wrong time [21:07] round-trip min/avg/max/std-dev = 6.352/34.895/122.308/9.227 ms [21:07] to the nexthop [21:07] the diff between min/avg is fine [21:07] hmm the next hop maybe has deprioritisation to you but not me [21:07] weird [21:08] i'm down to 0.3 msec jitter now [21:08] oh no i'm not [21:08] 3.1 msec jitter :) [21:08] err 3.3 msec [21:08] but yeah if you worry too much about jitter you'd go crazy [21:09] jitter is much less of an issue than packet loss on wan links normally [21:09] I'm just tired of this game where latency goes up to 5000ms for two minutes intermittently [21:09] as much as i like fq_codel, packet loss sucks. [21:09] well yeah that's serious buffer bloat.
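The "compare min to avg" jitter estimate used above can be pulled straight out of ping's summary line with awk. The summary line here is the one quoted in the log (OpenBSD-style `min/avg/max/std-dev`); the same split works for Linux's `min/avg/max/mdev` format:

```shell
# estimate jitter as avg - min from ping's summary line; the trailing
# std-dev field is ping's own deviation figure
line='round-trip min/avg/max/std-dev = 6.352/34.895/122.308/9.227 ms'
echo "$line" | awk -F' = ' '{split($2, t, "/"); printf "jitter %.3f ms\n", t[2]-t[1]}'
# prints: jitter 28.543 ms
```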
[21:10] but yeah that's the kind of thing i think would be cool to test for [21:10] and would go in the "fail" category [21:10] and be counted as downtime. [21:11] if only i could get funding for developing such things haha [21:12] the problem is to really identify problem locations/areas/etc you need to have lots of users. [21:12] so you kind of want tests that can run on windows as an app [21:12] the router is a better place for it [21:12] and have enough users to rule out local issues. [21:12] at least from the ISP's perspective [21:13] not from the users pov. [21:13] if their "internet they use" is slow [21:13] I mean I'd rather let them run code on their router than install it on my desktop [21:13] it doesn't really matter if it's the wireless or the isp [21:13] but if you want to be isp agnostic. [21:13] if it's the ISP that's running the test I meant [21:13] would you rather have an extra box or not? [21:13] it shouldn't be the isp that runs the test. [21:13] it should be independent [21:14] like the testing here favours the cable isp here [21:14] cos the cable isp has burst [21:14] I was thinking about an ISP trying to identify problems with their own network [21:14] but it's too little burst to be useful. [21:14] just enough for the test :) [21:14] oh i'm thinking of trying to map the state of the internet around the world. [21:14] and identify things like local level3 congestion issues :) [21:15] i'm fascinated by reading about the US's congestion issues :) [21:15] and peering disputes [21:15] it's a lot simpler here. [21:15] it's complicated enough in the US that i don't /know/ the situation [21:16] it's getting late here [21:17] all this sounds interesting though [21:18] heh [21:19] goodnight [21:20] 'night [21:43] *** Guest27982 has quit IRC (Ping timeout: 256 seconds) [21:48] *** qbit has joined #arpnetworks [21:48] *** qbit is now known as Guest33555 [23:48] damn scrollback looks very interesting yet i'll never have time to read it all