well as an end user it wasn't much work :) with windows i'd rather use l2tp/ipsec, but with linux openvpn is easy i'm glad it was easy to set up for you :) as request some weeks ago: http://support.arpnetworks.com/kb/main/is-there-a-firewall-filter-rate-limit-or-similar-device-applied-to-my-traffic *requested let me know if anything isn't clear I guess I can remove that rule from my rule set. up_the_irons: your openbsd 5.4 iso on http://mirrors.arpnetworks.com/ISO_Library/ is timestamped before openbsd 5.4 came out? by like 4 months actually 5.3 is early too. i imagine something has a grossly wrong time somewhere :) if the iso timestamp is early, it is because the iso is created prior to the actual release; media gets burned, factories generate the CDs and stickers, then finally release day arrives. the official timestamp of 5.4 is: -rw-r--r-- 1 root wheel 243050496 Jul 30 15:38 5.4/amd64/install54.iso so it seems that arpnetworks has simply preserved the timestamp from the mirror site the image was retrieved from Yeah, releases get tagged 2-3 months before actual release date. A couple weeks for testing, and a few archs take months to build packages. VAX... toddf: oh real that's months prior though? i suppose it can take a while to press i suppose it's 3 months not four i was thinking 7=july, 11=november but it's 30 july to 1st november i was also looking at how 5.2 said feb, .. but 5.1 is actually more cent than 5.2 s/cent/recent/ but 5.1 is actually more recent than 5.2 oh wow s/^o/O/ Oh wow i had to try that :) s/mercutio/Mercutio/ so, it's just on the output, not the nick. all i did was rsync them, i swear ;) yeh well toddf's explanation works btw up_the_irons have you considered using unbound for recursive dns? i have i assume it'd be one more thing to maintain cos you'd have to keep resolving working with the old dns for people using it currently. in fact, unbound *is* running on 208.79.88.9 oh? actually you don't host dns for people do you? so i suppose you could remap your authoritative hmm i was using .7 i'm not atm for some reason oh, 89.9 is the secondary normally? oh, nah it does say 88.9 i'd just looked at host -t ns arpnetworks.com not close enough hmm, 88.9 resolves to ns2 which resolves to 89.9 and 88.7 appears to be faster than 88.9, probably cos most people are using the primary just run your own locally ;) i haven't run my own recursive or authoritative nameservers in years m0unds: :( missing out nah i think authoritative is more important to run yourself than recursive recursive you can just cache locally if you do lots of dns yeah, recursive if you're an ISP or have lots of machines in one place i like to pretend fairies answer queries and never give it a second thought and authoritative if you need control of your zone recursive if you have more than one transit provider is more where i'd say it starts whether or not you're an isp well, multiple recursive :) i don't reckon wireless ISPs need their own recursive dns rather than just a cache if they are single-homed. well single-homed transit, with peering recursive is still good. I've seen ISPs go horribly wrong when they have split upstreams and their recursive server on a different upstream to their customers TBH I think everyone should run their own recursive what do you mean a different upstream? so say you have business customers and resi customers and a cheap upstream and a more reliable upstream do you mean having non-PI space and having some IPs on one transit provider, and the others on another transit provider?
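(for reference, on the unbound tangent above: a minimal unbound.conf sketch for a recursive-only resolver; the interface and allowed ranges here are made-up examples, not arpnetworks' actual setup)

    server:
        interface: 127.0.0.1
        # example ranges only: allow localhost plus one customer block
        access-control: 127.0.0.0/8 allow
        access-control: 208.79.88.0/24 allow
        hide-identity: yes
        hide-version: yes

unbound is recursive-only out of the box, so it stays out of the way of whatever is serving the authoritative zones.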
shove business customers on the reliable one falling back to the cheap one and vice versa so PI space, right. and you get to use the bandwidth on both transits or at least dual advertising yip yeah google does some really funky stuff when you mess that up oh right i get yah you mean like it'll push stuff over one provider or the other but the DNS picks something for the other isp and then the route sucks for the other isp? yeah so, all the google CDN stuff is based on the IP it thinks the resolver is coming from i think that's generally not too bad as long as you send outgoing via the best path as google will choose an isp that's good for the connecting IP so when you do a lookup to google.com the CDN goes, alright you have a GGC node at X IP nearby and returns that and then you try connect to that IP from the client machine, and the GGC goes, hold on you're not using my transit yeah you have to dual advertise routes at least and falls you back yeah, that's why you need a dual advertisement yeah even if you have like much longer as-path length yup it'll still let you use it at least from what i've seen you can actually use other ISPs to transit to upstream ggc mmm, but it would have been far more efficient to use the GGC node on the upstream you're connected to :) on the alternate transit provider and it'll work no, not really cos generally speaking it'll still come down the link of the transit provider that you're connected to because most transit providers will prefer transit routes over learned routes but it'll suck from a load balancing point of view the GGC cache stuff is nasty anyway :/ also depends on how you're doing your forward routes problem is it's so much traffic so it matters. yeah, I notice a fault on GGC about once a week :/ yeah i'd advocate doing best path forward path regardless. I've got some fantastic graphs if you give me one second graphmaster gizmoguy 1 .. 2 .. 3 google have been recently sending us to pretty much the furthest location in their network i've had that issue it was amsterdam second class citizens oh look yay and only on some videos they're doing it again today http://amp.wand.net.nz/graph/rrd-smokeping/23235/1384215383/1384388183 i'm in new zealand oh hai oh so are you? me too :) was it amsterdam? google wouldn't tell me and I couldn't work out from traceroutes oh that doesn't really tell you where you're going necessarily but the biggest spike I see on the graph is 600ms do your videos actually go there? oh actually I don't really care about that that shows me local (not an ISP :)) if you look at tcpdump or such when viewing videos and trace to those IPs it can be kind of random where it sends you yeah for sure when I send those graphs to google but i've seen that go to amsterdam and the IPs we were hitting via australia they said I couldn't have got much further down their network with a horrible route :) yeah GGC is an interesting beast do you host a GGC box? nope ah yup i work for a small isp don't have ggc sweet, where 'bouts in NZ? but have dual upstreams i'm in auckland ah cool not too far from me hamiltron here (small world) heh i was from chch but moved after the earthquakes fair enough i haven't actually had youtube performance issues recently what was wrong with old zealand?
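(a rough sketch of the dual-advertisement being described above, in Cisco IOS style; the AS numbers and prefix are documentation examples: announce the same prefix to both transits, prepending on the backup so the primary path wins but the GGC node on either upstream can still reach you)

    router bgp 64512
     network 203.0.113.0 mask 255.255.255.0
     neighbor 192.0.2.1 remote-as 64496
     ! backup transit: same prefix, deliberately longer as-path
     neighbor 198.51.100.1 remote-as 64497
     neighbor 198.51.100.1 route-map PREPEND-OUT out
    !
    route-map PREPEND-OUT permit 10
     set as-path prepend 64512 64512 64512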
hey what http://www.youtube.com/my_speed check out global on that m0unds: i think it's in the netherlands actually http://en.wikipedia.org/wiki/Zeeland Zeeland :: Zeeland (Dutch pronunciation: [ˈzeːlɑnt] ( listen), Zeelandic: Zeêland), also called Zealand in English, is the westernmost province of the Netherlands. The province, located in the south-west of the country, consists of a number of islands (hence its name, meaning "sea-land") and a strip bordering Belgium. Its capital is Middelburg. With a population of about 380,000, its area is about 2,930 km², of which almost 1,140... hamilton representing at 6.99mbit gizmoguy: look at the global graph though the grey line large spike? lol at google thinking i'm in Ottawa yeah upwards interesting i wonder what google did then look at the new zealand graph so it's not new zealand boosting the global speeds. it seems to be getting worse not better why are the average speeds showing a time related trend weird dutch google have said it's higher in weekends err lower in weekends i can't remember which lower speed? something about people having faster or slower net at home/work interesting i can't remember now ah, fascinating it probably varies by country if people have faster net at work or home my work tubez a lot faster than my home tubez in korea i imagine it's faster at home for instance cos fibre so common I'm still failing in my quest to get 10gig to my desk but probably "shared" connection at work and non shared at home i have infiniband gizmoguy: gotta work harder at it hah! nice giving > 10 gigabit it's cheap do it I got close to 10gig at my desk it's not 10 gigabit internet though :/ found some spare fibre pairs found a 10gig internet provider i got nowhere near 10gig anywhere near me it does like 1.4 gigabytes/sec nfs just needed to run some patch leads and buy an optic well, i guess i could walk in the other room and look at 10gig interconnects but meh hahaha but I got lazy gizmoguy: 10 gigabit international in new zealand? only 1 gig international :( going up soon i've found it interesting looking at people's expectations of international performance in new zealand. it wasn't that long ago people were happy with 2 megabit/sec international which i thought was a bit off but hey but as you start going up you end up getting less and less improvement but like with UFB coming etc they'll soon figure out that single threaded 100 megabit isn't really likely to work out that often to anywhere further than australia well 1 gig to states 1 gig to syd though that's going up shortly hell, i was only getting 100 megabit between chicago and arp both on gigabit UFB will be interesting to see rolled out i don't know anyone with ufb yet you're in the UFF area right? hamilton/tauranga/christchurch are so far ahead of the rest of the country in UFB yeah I work closely with two UFF RSPs one in hamilton one in tauranga ahh ok. what do you think of UFF? in waikato we actually have 2 cities fully rolled out with fibre UFF are the worst my area of auckland isn't in the UFB thing they contract out everything but there's fibre on my street huawei build the network what latency you been seeing? they contract out to another company to oversee the design of the network latency is pretty good is it getting under 1 msec yet? no way well i was seeing 1.5 huawei, oh boy I haven't tested with someone i know but they don't have fibre to their house actually they're not in a ufb area either it's meant to be about 80% of new zealand isn't it?
the fun ones with UFF are the funky ways they handle their vlans across the network m0unds: huawei is big in nz sort of oh i read something about that gizmo i got confused. yeah it's using lots of huawei technologies big because of cost, or because of some other factor? just curious - the guys i know who have had to work with huawei gear hated it cost probably does NZ have import taxes on stuff? yup does it depend on hw cost or where it's imported from? m0unds: new zealand is cheap too m0unds: hw cost i think ah, ok internet is expensive in new zealand here's another good graph mercutio http://wand.net.nz/smokeping/?displaymode=n;start=2013-11-04%2013:44;end=now;target=Off-net.google google.com likes to move around a lot which is strange considering we have 2 GGC nodes upstream of us gah i give up :) ~30ms away is SYD, and ~50ms away is japan on trying to find email haha no worries email is hard uhh is that from christchurch? nah, hamilton 50 msec seems rather high to google yes, yes it does we get taken to japan sometimes i wonder if it's going to melbourne yeah i've seen that before i have seen apple.com go to malaysia too do you work at hamilton uni? I don't even bother tracking that one :) yeah, I'm at the network research group at waikato uni ahh cool. i thought it was interesting that hamilton had network research stuff going on i want to do a bit of research myself in a way :) mmm we do a lot of random stuff which is quite fun cool. doing a heap of openflow/SDN stuff right now you know how there's a standard for specifying maximum link bandwidth in a path? but every hop would have to be able to diminish the bandwidth if necessary? so it needs router support so it went nowhere as do most standards err proposed standards i think well, IETF drafts I should say :) i want to know how much benefit that has in the real world. does PMTUD not do enough for you? cos like on adsl connections, if you get the transmitting host to rate limit, single threaded throughput can often be significantly higher internationally. it's not about mtu it's about not overflowing buffers/queues. Ohh right in between and losing heaps of packets. sorry with you now and then having to retransmit. so like if you have a 20 megabit dsl connection 20 megabit sync rate and do a single threaded download, you'll probably only get 10 megabit but if you cap the speed at 16 megabit you'll probably get 12 megabit if you increase the download size, then it matters less in a way because tcp/ip adapts. but lots of downloads are small and cubic is pretty good at ramping up and tbh i care more about the speed of downloading 10mb than 200mb normally generally speaking for smaller downloads i'm more likely to be wanting it "now" and larger ones i can go make coffee or whatever yeah, tcp is fun times gah a general protocol meant to support speeds of 1k up to 100s of gbit so i try downloading my 10mb file from arp and it's going slow atm well cubic is pretty good TCP slow start \o/ fwiw i downtune for infiniband the same way i downtune for gigabit like i set my maximum window sizes down and for some reason something's been going around for a while advocating massive window sizes and saying that tcp/ip will deal but often it actually diminishes performance being set too high static: well cubic takes over after slow start and actually works pretty well basically it looks at time between packets and uhh something else i don't think it's simple packet pairs though but it's not ack 16 times 32 more packets.
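(a sketch of the "cap below sync rate" trick above, using linux tc; the interface name and numbers are just examples: shape egress to 16 megabit on a 20 megabit line so the queue builds where tcp can see it, instead of overflowing buffers somewhere in the path)

    # token bucket filter on the sending host's interface
    tc qdisc add dev eth0 root tbf rate 16mbit burst 32kb latency 400ms
    # watch the counters to see it working
    tc -s qdisc show dev eth0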
oh, also, linux is sneaky, and acks every packet for short connections. for the first 16 packets. SACK helps out a lot sack doesn't end up acking enough packets. for my liking wireless throughput is limited with tcp versus udp quite significantly. because wireless is half duplex. wireless is the worst :) yeah but wireless also buffers randomly etc not showing it to the applications or giving feedback speaking as someone who used to work for a company that build embedded wireless devices s/build/built so if it's going to do that anyway, you may as well just throw more packets at it, and not worry about having the acks in a timely fashion speaking as someone who used to work for a company that built embedded wireless devices closely synced. and you may as well ack less often mercutio: got a masters student working on 802.11ad at the moment that stuff is funky "WiGig" what's 802.11ad? is it like ac but better? 60ghz wireless ahh ok does it use more forward error correction? it gets most of its awesome from beam forming ahh it's only built for short lengths i thought that was just a buzzword like in a room for example that wasn't really significantly implemented so with wigig you usually have an antenna array like all around your device? and then with phase shifting your array you can point/focus the 60ghz more closely to your device you can make a flat square one I believe what happens if you have a wireless 802.11ad ball and you throw it around the room I suspect it doesn't work very well it's more for stationary devices ahh as every time you move it needs to recalculate the beam forming stuff the example given in the 802.11ad docs "Imagine you're at an airport just about to board, but you want a bluray on your phone to watch on the plane" just walk up to the movie transmitter pole, buy your movie and have it in a few seconds hey i had a better idea high bandwidth plug in ports for phones like network ! it's also "widely" used for wireless laptop dongles dell has a wireless dongle you can buy so you sort of just place your laptop near it and you get vga/dvi/usb etc i suppose it is kind of cool. infiniband supports iommu video cards support iommu i was thinking it'd be cool to copy straight from video card to infiniband before for remote display i think video card iommu support is pretty new though and mostly focused on opencl but i've always liked the idea that you can basically map memory over network i've not played too much with infiniband as in, i've never played with infiniband :( my cards cost $75 USD i think it was for four dual 20 gigabit plus $45 USD shipping or something the next fun toy i'm hoping to have a play with is some infinera kit cables are about US $15 or something i think 5TB/s on a single fibre pair so it's way cheaper than getting 10 gigabit ethernet and it can do about 15.8 gigabit/sec with single threaded iperf * Disclaimer 3 racks worth of infinera multiplexing gear required at each end of cable with 600k window size what do you use to switch infiniband? haha i don't it's back to back ah right you can just use opensm as a subnet manager on linux also the cables are thick. i'm using copper. and samba doesn't do rdma on linux and samba's normal code path doesn't seem to work well limiting throughput to 400mb/sec or so with like 100% cpu on samba process err 400 megabytes/sec but nfs will go faster.
even without rdma oh and that's using recent i7 cpus at each end and ssd on both sides but yeah, i wonder if samba 4 will speed things up but if 10 gigabit ethernet becomes common people will be like "i can't get my full ssd speeds over ethernet" lol let's see what's the fastest I can do over 10gig HTTP i really wish 2.5 gigabit ethernet came around try cachefly? http://cachefly.cachefly.net/100mb.test maybe a little small but it's australia Fedora-19-x86_64-DVD.iso Length: 4444913664 (4.1G) [application/octet-stream] 4,444,913,664 779M/s in 6.9s nice where's that to? AKL <--> WLG nice I'll try AKL <--> CHC do you feel like running a random binary? depends on the binary :P i've been working on a small curl-like program would be curious how it compared :) I'll run it locally on my test 10gig network rather than the prod 10gig NZ network :) http://202.49.71.58:24/microcurl you can su to nobody or such it doesn't support output file names yet only just started adding getopt support :/ but it'll output to stdout just > /dev/null oh and you need to include http:// atm would be curious to see how it compares with the "time" command so just microcurl http://file > /dev/null? time microcurl http://file > /dev/null ? yeah err http:/// > /dev/null you can include port too i usually use port 24 to bypass any transparent proxies not because i'm against transparent proxies but so i can test speeds these two machines are directly connected yeah well technically there's a switch in the middle i found that it's about half as much cpu on haswell as curl hopefully not proxying :) on infiniband it doesn't listen to http_proxy atm and doesn't even support -x yet it does use some low granularity timer that is only supported in recent linux i haven't checked how it handles that not working yet (that's for showing the speed, which i don't care about granularity with) grabbing an ISO from a mirror only getting 111MB/s :( to test with heh local gigabit yeah :( i've been looking at cpu utilisation over the network i was first looking at speed over localhost but even with infiniband speed should be the same. well other than it starts up faster than curl Hmm what have I cocked up here which is like 6 msec difference or something 72 to 78 msec or such 80MB/s over 10gig that's a little slow and curl does that too? that's with wget is the file too big for memory? wget uses http_proxy by default you haven't got that set right? locally wgetting on the same machine i get 900MB/s that's slow :/ spindles it'll be as fast as it can read from disk 100 200M 100 200M 0 0 2317M 0 --:--:-- --:--:-- --:--:-- 2325M you should output to null :) oh right i keep assuming cached speed yeah 100 200M 100 200M 0 0 889M 0 --:--:-- --:--:-- --:--:-- 892M I have to read it from somewhere first :) i got that the first time ok what is going on here gigabit needs no tweaking locally normally it starts fast what does iperf say? and trails off to 80mbit oh do you have broadcom ethernet? intel 10gig cards hmm screw off with broadcom i hit something weird like that with broadcom :/ they're engineering samples though still shouldn't matter in normal adaptive coalescing mode? I was playing with coalescing last time I was logged into these machines actually it's probably simpler than that. ahh try ethtool -c3 ? err ethtool -c rx-usecs 3 that's what the gigabit stuff defaults to but i dunno if the 10 gigabit stuff is the same I have to be careful with the coalescing stuff how come ?
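(for anyone following the coalescing tangent, the knobs look roughly like this, with eth0 as a stand-in; note it's lowercase -c to show and capital -C to set, as gets corrected just below)

    ethtool -c eth0                  # show current interrupt coalescing settings
    ethtool -C eth0 rx-usecs 3       # interrupt at most ~3us after an rx packet
    ethtool -C eth0 adaptive-rx on   # or let the driver tune it, where supported

lower rx-usecs means more interrupts and more cpu but lower latency; batching them up is how you drop cpu usage at high packet rates.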
last time I played I managed to generate a DIV_BY_ZERO in the intel driver err it's -C and the linux kernel gets unhappy when you do that hahaha i played around with coalescing and dropped cpu usage by heaps on broadcom core2duo I just don't even touch broadcom these days heh they don't let you do as much in hardware as the intel nics it's my colo box, i've got a colo box at sky tower i could stick an intel ct adapter in but it seems to go ok for some reason hp like to use broadcom alright 7gbit in iperf TCP interesting 1mbit in iperf UDP what yeah... what on earth was I doing on this machine last time I was logged in history :) fwiw infiniband sucks for udp is it 1 megabit sending speed? or 1 megabit throughput? oh wait cos if it's only sending at 1 megabit I think it's just iperf being retarded did you add -b 2g or such? :) if I stop the iperf server the client can still connect and reports i'm getting 1mbit/s oh even though you tcpdump and there's 0 traffic firewall on port 5001? yeah will be stupid iperf yeah the udp testing is pretty broken it's also bad if you overshoot it can't receive the end result back because it gets packet loss I wonder if I have any rate limiting in here i would rather see it do a bit of return traffic to make sure it's still ok to test too because it gets through the first 2GB at full speed and then it drops to 100mbit tc -s qdisc won't be tc going to /dev/null ? oh with iperf possibly iptables or apache nah this is wget / apache now yeah going to /dev/null ahh lulz u needz nginx lol omg node.js man BAREMETAL!!! just install lighttpd on an alternative port? or nginx :/ ok, so these machines run no firewalls i use lighttpd possibly I should fix that lighttpd is faster with static files heh and no TC wonder how apache on this box is configured crapache lulz lulz i never see lighttpd having bad performance :/ tbh I didn't even know there was apache on here so I never set it up :) i feel brain cells dying just typing that for some reason it's way more computationally expensive in linux to receive files than send m0unds: :) well actually it's cos linux can't do zerocopy receive gizmo, maybe you should use a 2gb file? :) i only use a 200mb file for testing normally alright we solved the issue what was it ? I put a 2.7gb file on instead :) haha these are just dev/test boxes not a clue where the rate limit is yah right wget gives me: 828M/s in 3.8s with how much cpu usage? i wonder why you're not getting full speeds wget maxes at 45% with sandy bridge era cpu? probably a mixture of slow spindles/not running 9k mtus/a crappy dell switch in the middle nah these will be older model name : Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz 5600 series? oh wow that old ok ;) :) why fix what's not broken? considering i get 20% cpu usage on gigabit with e3110 is that with tcp timestamps enabled or disabled?
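(on the iperf udp confusion above: iperf 2 defaults udp tests to about 1 mbit/s unless you pass -b, and the client happily reports its own send rate even when the server's summary never makes it back. roughly, with a made-up address:)

    iperf -s -u                    # server side
    iperf -c 192.0.2.10 -u -b 2g   # client: ask for 2 gbit/s of udp
                                   # (older iperf builds may want -b 2000m)

if the reverse path is firewalled or lossy the final report gets lost, which matches the "server stopped but client still reports 1mbit" behaviour described here.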
I think i broke your program probably written -1200776 kbytes (24.455); average: -49101.4584 it's probably the kernel version not liking the timer Linux marvin 2.6.32-5-amd64 #1 SMP Fri May 10 08:43:19 UTC 2013 x86_64 GNU/Linux CLOCK_MONOTONIC_COARSE oh look another machine I haven't upgraded to wheezy it should still be accurate using the "time" command it starts off ok and then it pauses and jumps into the negative umm it was introduced in linux 2.6.23 oops linux 2.6.32 which is what you're using oh that's interesting written -1166052 kbytes (25.291); average: -46105.4135 real 0m26.686s user 0m0.004s sys 0m6.476s umm you know i bet it's a signed 32 bit integer issue haha yeah that's what it feels like it looks like it but it's a 64 bit binary but yeah it looks exactly like that and i use a 200mb test file ok :) so 6.476s cpu what was wget? 3.8s oh 3.8s system cpu? real 0m3.565s user 0m0.184s sys 0m1.628s that's for wget oh hangon so it's being WAYYYY slower yeah i wonder why i'll try with a smaller file % time ./microcurl/microcurl http://192.168.50.254:24/200m > /dev/null written 204800 kbytes (0.153); average: 1338562.092 ./microcurl/microcurl http://192.168.50.254:24/200m > /dev/null 0.00s user 0.08s system 50% cpu 0.152 total hmm that's slower than before oh i shouldn't load it over nfs probably alright, debian cd1 this time hmm same diff ~700mb wget: real 0m0.776s user 0m0.028s sys 0m0.364s written 654432 kbytes (1.689); average: 387467.140 real 0m2.303s user 0m0.000s sys 0m2.040s wow # time ./microcurl http://202.49.71.58:24/testfile.zip http protocol. written 204800 kbytes (0.160); average: 1280000.000 ./microcurl http://202.49.71.58:24/testfile.zip 0.00s user 0.16s system 98% cpu 0.163 total hmm oh oops from wrong computer # time ./microcurl http://202.49.71.58:24/testfile.zip http protocol. written 204800 kbytes (1.799); average: 113841.023 ./microcurl http://202.49.71.58:24/testfile.zip 0.01s user 0.55s system 31% cpu 1.799 total wget http://202.49.71.58:24/testfile.zip 0.07s user 0.61s system 38% cpu 1.784 total that's how it compares for core2duo with the old version for me with gigabit (the old version always writes to /tmp/testwrite which needs to be on tmpfs) ./newmicrocurl http://202.49.71.58:24/testfile.zip > /dev/null 0.00s user 0.57s system 31% cpu 1.798 total and that's the new version so the new version is behaving the same what cpu usage difference do you get to my url? how big is testfile? 200 megabytes it doesn't have 10 gigabit ethernet but it has gigabit to APE cpu usage up to 10% or so it's not actually a zip file it's /dev/urandom but I occasionally see it spike to 98% for your process so maybe that's why it's slow weird i wonder why because i'm testing on e3110 what kind of speed you getting? e3110 is basically the ecc version of core2duo hold up, just heading out for some tea originally i found a slowdown with adsl cos i was checking the time too often / outputting to the screen too much but i fixed that fixed the 32bit bug :/ by not linking against musl oh actually no it is still there the bloody integer should be 64 bit ok now i fixed that it seems that int is still 32 bits on 64 bit architectures. if i strace wget and my microcurl, wget does reads of 8k and two writes of 4k, and microcurl does reads of 7300 and writes of 7300 bytes, but sometimes 14600 where 7300 is 1460*5 so if wget is using less cpu i must guess that it's due to getting fewer packets in a burst and doing read/write of a single packet or such mercutio: uint64_t :) oh is "long" wrong?
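(a minimal sketch of the bug being diagnosed here: a plain int is still 32 bits on 64-bit linux, so a byte counter wraps negative a bit past 2 GiB, which is exactly the "written -1200776 kbytes" symptom; a uint64_t counter fixes it)

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void) {
        int32_t  bad  = 0;   /* stand-in for a plain int byte counter */
        uint64_t good = 0;   /* what the counter should be */

        /* simulate pulling a 2.7GB file in 7300-byte reads */
        for (uint64_t rx = 0; rx < 2700000000ULL; rx += 7300) {
            bad  += 7300;    /* overflows past 2^31-1 and wraps negative in practice */
            good += 7300;
        }
        printf("bad:  %" PRId32 " kbytes\n", bad / 1024);
        printf("good: %" PRIu64 " kbytes\n", good / 1024);
        return 0;
    }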
long works ok too i haven't done much coding recently but uint64_t is how to get a 64 bit integer don't use int i am also using %zu in printf i assumed int was 64 bit now nah, backwards compat yo well i see that now :) oh interesting most of my programming was 15 years ago not a lot has changed since then though. an integer is probably defined as a 32bit number somewhere in the C standard no doubt but 64 bit is one of the things that changed :) i updated the url err the code at the url but it's purely for showing accurately i'm still curious why it's using so much cpu :/ oh, and i stalked you on linkedin :/ :) add me if you want ok go to NZNOG? nope i was thinking of going next year lame cos of SDN etc :/ come on down but it's expensive for flights and accommodation and so forth true and i doubt my work would pay work pays for everything for me :) yeah if work paid for everything i probably wouldn't think twice about it but yeah, it's a great place to have an informal chat with your upstreams over a few beers but really it's on the interesting side rather than the useful side few (read: a lot of beer) heh I find the networking more useful than the talks the talks are all live streamed anyway yeah the talks themselves looked mostly boring by richard @ r2 i see it more as a social thing yeah it is i actually watched some wand thing and it wasn't nearly technical enough but it did suggest you were using old cheap hardware and it seems you still are :) lol remember what the talk was on? umm it had a few things we can go a lot more technical offline one of them was about monitoring i think ? but talks are usually fairly informational and debugging problems? yeah it'll be AMP our "active measurement platform" ahh ok basically we have nodes in most ISPs in NZ i want to do my own network monitoring stuff and do DNS/ICMP/Traceroute/HTTP testing i've got many ideas :) and we publish them all to http://erg.cms.waikato.ac.nz/amp like that you should be able to estimate bandwidth without using up lots of bandwidth though all that code is ancient and icky so we have a government grant to rewrite it all but lots of ideas can still take a long time to actually put into effect :) and we're adding event/anomaly detection well i got interested when i emailed truenet truenet suck balls and their responses were like OMG exactly like they're single threading wget we actually went for that contract but we were too expensive you need multiple tcp connections to represent web page downloads but at least our results would have meant something but you also need to start them at realistic times rather than just 8 at once or such they also were recently doing all their bandwidth tests on 200kb files loaded off trademe.co.nz which means you want to download, do parallel requests, request new data when you see the actual urls etc they've now moved to 1MB files because of UFB i do most of my bandwidth tests on 200kb files i just confirm with 10mb files 200k often shows if a connection is good or bad but they report on stuff like average connection speed :/ but anyway, the way i see it is there's thresholds it doesn't really matter if it's 10% faster or 10% slower but if it takes 2 seconds or 3 seconds to display a web page it's significant yeah but average to where? i also heard that there's some messy routing stuff involved i've got a few vps's with test downloads so we do full mesh testing international where we can ahh interesting so do you have many locations?
each node to every other node we don't do bandwidth tests at the moment though just ping? because amp has been around 10 years or so what i really want to test is tcp latency and people didn't like us using "all" their bandwidth which is kind of messy like ping != tcp latency necessarily I think we have around 30 sites or so ahh ok so you'd like our HTTP test yeah i think i saw it once you have some kind of traceroute from many hosts thing? or something some kind of network path thing yup http://erg.wand.net.nz/amp/graph.php?src=ampz-auckland&dst=ampz-fx-aknnr i think http can behave differently i think a lot of things though i'd like more real world data :) scroll to the 'path analysis' graph that's a graph i saw some uhh something that showed all the different upstreams and i think something else could be TR http://tr.meta.net.nz/tr.php maybe this is unrelated one of our researchers wrote it for fun ahh ok perry well it's cool he works for the google now ahh ok ahh yeah it shows the image afterwards my most active URL on wand.net.nz is still this: of the different upstreams http://wand.net.nz/~perry/max_download.php oh god it kinda became the industry standard TCP calculator heh can that be updated? i hate that tool hahaha he wrote it in a night i hate it when people argue about tcp limitations when it's all old data yeah but i mean cubic is VERY VERY common now and bic is around a tiny bit and windows has new stuff that i haven't really played with and linux has all these improvements i think they even decreased the initial timeout down from 3 seconds back when BUBA was common in new zealand i looked into some of those things a little because there was so much packet loss that it was interesting to see how badly things dealt with it and what could be done to improve things and decreasing that 3 second timeout makes a very significant difference on ~5% loss links that said, i put it down myself before linux did :/ :) cos i was looking at openbsd source code for tcp/ip and i adjusted some stuff then i thought that was easy so i tried doing the same on linux and i'm like omg i swear, if you want to look at how tcp/ip works in the kernel it's easier to understand with openbsd than linux by a LARGE margin and it also compiles faster :/ well it helps that openbsd doesn't have cubic etc it just has newreno but yeah are you looking at doing your own truenet type testing? the other big problem i saw with truenet is that they only want to test idle connections and i think it's way better to just test all the time, and expect that there's noise like if i'm playing a game, and someone downloads a 1mb file, and my game gets laggy, i'd rather that be reported and different aqm on different connections can influence positively/negatively we are effectively replacing truenet yeah we have an ISP partner who wants to drop all our software on embedded linux boards and put them on customers' internet connections cool shouldn't it be isp agnostic though? so we partner with local ISPs for testing the stuff we write oh right so the isp provides the hw but you provide the software?
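(for reference, the max_download calculator above boils down to roughly window / rtt for a single tcp stream:)

    max throughput ≈ window size / round-trip time
    e.g. a 64 KB window at 50 ms rtt:
    65536 bytes * 8 / 0.050 s ≈ 10.5 mbit/s

the bit it doesn't model is loss recovery, which is exactly where newreno vs cubic makes the old numbers misleading.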
it's more the data they provide us as writing network monitoring software with a network is hard :) err without a network* with or without a network is hard :) lol yeah especially without customers yeah the more i thought about it the more complicated it got but i was planning on trying to do something myself i figure you should be able to stick something on something like openwrt pretty easily but that you could get wider uptake by making it work on a windows desktop we have a wandboard i'm looking to get our monitoring software running on heh we tried that how'd it go? http://nettest.wand.net.nz/ nobody really ran it so we kinda abandoned it is your stuff open source? some of it is the stuff MBIE pays for, we don't typically release the source code for haha why am i trying to download the windows version on linux oh i have wine :) :) all stuff we develop inhouse is open source Libtrace is our most popular tool we write cool is it bsd licensed? http://research.wand.net.nz/ I think we standardise on GPL ahh have a look at scamper too i dunno if lenny is the most recent or not scamper is awesome libprotoident is cool too but it'll probably work i imagine we can identify ~140 different applications purely on 4 bytes of the packet oh it's running now what haha http://research.wand.net.nz/software/libprotoident.php nice is that 4 initial bytes? or within each packet? "Unlike many techniques that require capturing the entire packet payload, only the first four bytes of payload sent in each direction, the size of the first payload-bearing packet in each direction and the TCP or UDP port numbers for the flow are used by libprotoident" ahh so first four hmm one of the other things we do at wand is take a lot of network traffic dumps cept most ISPs don't like giving you full payload so we standardised on the first 4 bytes to keep it anonymous cool i like that and no chance of password leaks etc also it's easier to store :) yes is it very compressible? but libprotoident lets us look at what protocols people are using umm so it's full tcp header http://wand.net.nz/wits/waikato/8/ with 4 bytes data? download some traces and try it out :) Snapping Method Packets truncated four bytes after the end of the transport header, except for DNS dude try xz compression i'm not downloading this direct to home :/ come on heh we split them into small tarballs for ya what 6 gb gz? yeah on adsl :) lame I got vdsl at home oh what it's ftp protocol of the future hmmm it's downloading slow too do you not have v6 at home? :O We rate limit ipv4 access to 'encourage' other researchers who want access to our datasets to move to ipv6 i had v6 at home oh i'm not downloading to home hangon i'll go via ipv6 i find ipv4 works better :/ i suppose you get to be impractical :) haha yup! 30 megabytes/sec now 40 50 60 55 it ramped up slow 80 i wonder if it's disk :) 0 6203M 0 27.1M 0 0 1184k 0 1:29:24 0:00:23 1:29:01 1216k^C 72 6203M 72 4476M 0 0 66.5M 0 0:01:33 0:01:07 0:00:26 84.0M kind of a big difference between the two :) haha, one v4 one v6? yeah i thought something was broken is that gigabit international?
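(to illustrate the first-four-bytes idea from above: a toy in the spirit of libprotoident, where these signatures are illustrative only; the real library also weighs the first payload sizes and the port numbers)

    #include <stdint.h>
    #include <string.h>

    /* classify a flow from the first four payload bytes in one direction */
    static const char *classify(const uint8_t p[4]) {
        if (memcmp(p, "GET ", 4) == 0 || memcmp(p, "HTTP", 4) == 0)
            return "http";
        if (memcmp(p, "SSH-", 4) == 0)
            return "ssh";
        if (p[0] == 0x16 && p[1] == 0x03)   /* TLS handshake record header */
            return "tls";
        return "unknown";
    }

four bytes is enough to separate a surprising number of protocols, and it's little enough that the traces stay anonymous and small.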
and hangon, that's a faster speed than your core2duo got curl -O ftp://wits.cs.waikato.ac.nz/waikato/8/20111104-000000-0.gz 5.20s user 26.06s system 33% cpu 1:32.05 total oh hangon, you had 800 megabit, not 80mb/sec didn't you y'know if i didn't have to pull that file back home i wouldn't really complain about you using gz :) hh halghl, yhu hal 800 megabut, lht 80mb/sec lull't yhu what tracestats: error while loading shared libraries: libtrace.so.3: wrong ELF class: ELFCLASS32 hmmmm oh fixed i hate how ubuntu doesn't include /usr/local/lib in the search path by default but that's a really bizarre error message for such ohh i have a 32 bit version of that library installed by that test client thingy boy, making serial cables is fun old school :) oh i can just use tcpdump rather than get a 6gb dump yea, we use them for control surfaces but we have no spares, so i used the cable kits provided with the controllers and made 4 9-25 null modem cables out of cat 6 hahaha right home time I think see you suckers later later man i'm going home to build a network for a lan this weekend 7 juniper switches :P EX3200s? heh you just have 7 juniper switches lying around? i wish i did, i just have a stack of 2960s mercutio: heh yup i hate 2960s 6x EX2200s nowhere near as much as i do and 1x EX4200 figured i'd guess middle of the road :) j switches are so nice the 2200s are pretty basic, but i'm just using them as the access network yeah the EX4200 is a dream to work on i wouldn't mind 7 ex4200s and i have an M320 for the lulz in storage i need to get a 2200 or 3200 for home 2200c if you want to get the small one yeah i'd probably shoot for 24 ports for home? yep prewired house, lots of devices i'm using 4 gigabit ports at home and 4 10/100 i'm using 13 err that's how many ports i have at least and 3 10/100 on my SRX210HE and one of them connects the two together i do have an unused managed switch but i dunno how you'd end up with 24 devices at home :) ahh ok, so 24 makes sense yeah, more than 12 if it was less, i'd go for 12 for sure because $$$ hahahaha heh you could always go dual 8 port switches :/ nooooo haha one managed one unmanaged nooooo i thought you were trying to be cheap not that cheap heh bang for the buck yeah just get a cheap 24 port switch :/ i have a cheap 24 port switch go cheaper than juniper? oh yeah you have a 2960? not at home err 2960s at work 2960g but 2960s would work too 2960 takes up lots of power mine are WS-C2960S-48FPS-L so, ~900W with full POE load it'll be like 150w or more without poe :/ yeah hence lots of power :) they're also loud, and hot and crappy i hate cisco gear get something with energy efficient ethernet for home maybe?
so do i GET A CLOUD CORE ROUTER (kidding) hahahaha i need to have hard-disk storage at home again i've been living off just ssd mercutio: you mean like this - http://routerboard.com/CCR1036-12G-4S but i'm like where am i going to put this packet capture heh static: yes you get to run a beta OS yep i have a routerboard as my home router hell, if you buy the middle of the road RB1100AH you end up with two or more ports that just don't work correctly and they'll tell you as much at least that includes a switch m0unds cloud core router doesn't even have a switch that's being generous every port goes through the cpu hahaha you get two switches i think with different mtu limits etc the lower end routerboards are perfect for home use tbh i had an rb450g and it died a hero's death (bad caps, bad thermal mgmt on the cpu) compared to the other garbage on the market sure xz is so damn slow to compress --- % 523.0 MiB / 2693.7 MiB = 0.194 2.1 MiB/s 21:32 i think it's about 1/6th of the way through? so like two hours or something it was half an hour with -1 and still 50% better than gzip ratio err 33% better \m/ my machine from last year's lan still boots good grief! I have never had so much backlog to catch up on in this channel you don't type /clear enough I could just ignore it... and I am. It overran my buffer :p (And I never type /clear when I'm afk :P) brycec: sorry me and mercutio worked out we live an hour from each other haha no worries mate :) up_the_irons: Have you considered licensing your automation and various tools for Virtual Machine provisioning to your competition? gizmoguy: i'm still trying to get libtrace to work :/ it doesn't like my lack of encapsulation i found i had an old dump of my own connection around being the geek i am :/ reading from file ben, link-type NULL (BSD loopback) but it's in that format ben: tcpdump capture file (little-endian) - version 2.4 (No link-layer encapsulation, capture length 128) i suppose i should at least be happy there's source so i can try and figure out how it works :) mnathani: why should his competition have them? also it probably would mean he'd end up with heaps of questions and even less spare time mercutio: report it to me tomorrow and I'll raise a bug for you :) ahh real i'm busy drinking and configuring linux those don't go together do they they sure do i thought i should be able to figure out how to do it :) they go very well together i even managed to partition a disk correctly! well, ask me tomorrow if i configured it correctly it's the only way to fly cos it looks like openbsd loopback, but with host byte order rather than network byte order from what i gather mercutio: Just a thought for alternate means of revenue generation, since the code has already been written and tested mnathani: have you seen the solusvm forum? gizmoguy: i forgot to ask, are you on untappd? gizmoguy: you remembered to use gdt? m0unds: oh man, I should join everyone else I know does it mercutio: gpt?
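(back on the xz tangent above, the trade being weighed is roughly this per level; the filename is made up)

    time gzip -9 -c trace.pcap > trace.pcap.gz   # baseline ratio, quick
    time xz -1 -c trace.pcap > trace.pcap.xz     # ~33% smaller than gzip here, still tolerable speed
    # the default xz -6 squeezes harder but can take hours on multi-GB traces;
    # -T0 (xz >= 5.2) spreads the work across cores, which helps a lot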
partition table guid partition table it seems use gdisk to do it doesn't have the silly 4 partition limit or the extended partition hack I use parted dunno if that does it or not ahh parted does support it yeah it does if you have a modern computer you can use uefi too oh it doesn't like infiniband either :) unless i broke it yeah doesn't like ib I'm not sure we've ever tested with IB :) yeah it's nice how there's so many different formats :/ like it probably works on freebsd on ethernet it should work on freebsd we test on lots of different platforms osx support too :P it's ppp on freebsd though yeah it works on ethernet although dunno why EHLO is Invalid_SMTP it actually seems to work with most stuff i want to make graphs i think ;)
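(and to round off the partition-table bit, the two roads mentioned; the device name is a placeholder, and this will destroy whatever is on /dev/sdX)

    # gdisk: interactive and gpt-native
    gdisk /dev/sdX

    # parted: scriptable, also handles gpt fine
    parted -s /dev/sdX mklabel gpt
    parted -s /dev/sdX mkpart primary ext4 1MiB 100%

either way you get past the msdos four-primary-partition limit, and you're set up for uefi booting if the machine supports it.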