[00:24] well as an end user it wasn't much work :)
[00:25] with windows i'd rather use l2tp/ipsec, but with linux openvpn is easy
[00:31] i'm glad it was easy to set up for you :)
[01:51] as request some weeks ago: http://support.arpnetworks.com/kb/main/is-there-a-firewall-filter-rate-limit-or-similar-device-applied-to-my-traffic
[01:51] *requested
[01:51] let me know if anything isn't clear
[02:04] I guess I can remove that rule from my rule set.
[05:21] *** jpalmer has joined #arpnetworks
[05:21] *** jpalmer has quit IRC (Changing host)
[05:21] *** jpalmer has joined #arpnetworks
[07:49] *** tburke has quit IRC (Read error: Connection reset by peer)
[11:08] up_the_irons: your openbsd 5.4 iso on http://mirrors.arpnetworks.com/ISO_Library/ is timestamped before openbsd 5.4 came out?
[11:08] by like 4 months
[11:09] actually 5.3 is early too. i imagine something has a grossly wrong time somewhere :)
[12:38] if the iso timestamp is early, it is because the iso is created prior to the actual release; media gets burned, factories generate the cd's and stickers, then finally release day arrives.
[12:38] the official timestamp of 5.4 is: -rw-r--r-- 1 root wheel 243050496 Jul 30 15:38 5.4/amd64/install54.iso
[12:39] so it seems that arpnetworks has simply preserved the timestamp from the mirror site the image was retrieved from
[12:54] Yeah, releases get tagged 2-3 months before actual release date.
[12:54] A couple weeks for testing, and a few archs take months to build packages.
[12:57] VAX...
[13:39] toddf: oh real
[13:39] that's months prior though?
[13:39] i suppose it can take a while to press
[13:40] i suppose it's 3 months not four
[13:40] i was thinking 7=july, 11=november
[13:40] but it's 30 july to 1st november
[13:41] i was also looking at how 5.2 said feb, ..
[13:41] but 5.1 is actually more cent than 5.2
[13:41] s/cent/recent/
[13:41] but 5.1 is actually more recent than 5.2
[13:42] oh wow
[13:42] s/^o/O/
[13:42] Oh wow
[13:43] i had to try that :)
[13:47] s/mercutio/Mercutio/
[13:47] so, it's just on the output, not the nick.
[14:02] all i did was rsync them, i swear ;)
[14:05] yeh
[14:05] well toddf's explanation works
[14:06] btw up_the_irons have you considered using unbound for recursive dns?
[14:07] i have
[14:07] i assume it'd be one more thing to maintain
[14:07] cos you'd have to keep resolution working on the old dns for people using it currently.
[14:08] in fact, unbound *is* running on 208.79.88.9
[14:08] oh?
[14:08] actually you don't host dns for people do you?
[14:08] so i suppose you could remap your authoritative
[14:08] hmm i was using .7
[14:08] i'm not atm for some reason
[14:09] oh, 89.9 is the secondary normally?
[14:10] oh, nah it does say 88.9
[14:10] i'd just looked at host -t ns arpnetworks.com not closely enough
[14:11] hmm, 88.9 resolves to ns2 which resolves to 89.9
[14:14] and 88.7 appears to be faster than 88.9, probably cos most people are using the primary
[15:51] just run your own locally ;)
[16:03] *** ThalinVien has quit IRC (Quit: leaving)
[16:07] i haven't run my own recursive or authoritative ns' in years
[16:07] m0unds: :(
[16:07] missing out
[16:07] nah
[16:07] i think authoritative is more important to run yourself than recursive
[16:08] recursive you can just cache locally if you do lots of dns
[16:08] yeah, recursive if you're an ISP or have lots of machines in one place
[16:08] i like to pretend fairies answer queries and never give it a second thought
[16:08] and authoritative if you need control of your zone
[16:08] recursive if you have more than one transit provider
[16:08] is more where i'd say it starts
[16:08] whether or not isp
[16:09] well, multiple recursive :)
[16:09] i don't reckon wireless isp's need their own recursive dns rather than just cache if they are single-homed.
[16:09] well, single-homed transit; with peering, recursive is still good.
[16:10] I've seen ISPs go horribly wrong when they have split upstreams and their recursive server on a different upstream to their customers
[16:10] TBH I think everyone should run their own recursive
[16:10] what do you mean a different upstream?
[16:10] so say you have business customers and resi customers
[16:11] and a cheap upstream and a more reliable upstream
[16:11] do you mean having non PI space and having some IP on one transit provider, and the other on another transit provider?
[16:11] shove business customers on the reliable one falling back to the cheap one
[16:11] and vice versa
[16:11] so PI space, right.
[16:11] and you get to use the bandwidth on both transits
[16:11] or at least dual advertising
[16:11] yip
[16:11] yeah
[16:11] google does some really funky stuff when you mess that up
[16:12] oh right i get yah
[16:12] you mean like it'll push stuff over one provider or the other
[16:12] but the DNS picks something for the other isp
[16:12] and then the route sucks for the other isp?
[16:12] yeah so, all the google CDN stuff is based on the IP it thinks the resolver is coming from
[16:12] i think that's generally not too bad as long as you send outgoing via the best path
[16:13] as google will choose an isp that's good for the connecting IP
[16:13] so when you do a lookup to google.com the CDN goes, alright you have a GGC node at X IP nearby and returns that
[16:13] and then you try to connect to that IP from the client machine, and the GGC goes, hold on you're not using my transit
[16:13] yeah you have to dual advertise routes at least
[16:13] and falls you back
[16:13] yeah, that's why you need a dual advertisement
[16:13] yeah
[16:13] even if you have like much longer as-path length
[16:13] yup
[16:13] it'll still let you use it
[16:14] at least from what i've seen
[16:14] you can actually use other isp's to transit to upstream ggc
[16:14] mmm, but it would have been far more efficient to use the GGC node on the upstream you're connected to :)
[16:14] on the alternate transit provider and it'll work
[16:14] no, not really
[16:14] cos generally speaking it'll still come down the transit provider that you're connected to
[16:14] 's link
[16:15] because most transit providers will prefer transit routes over learned routes
[16:15] but
[16:15] it'll suck from a load balancing point of view
[16:15] the GGC cache stuff is nasty anyway :/
[16:15] also depends on how you're doing your forward routes
[16:15] problem is it's so much traffic
[16:15] so it matters.
[16:15] yeah, I notice a fault on GGC about once a week :/
[16:15] yeah i'd advocate doing best path forward path regardless.
[16:16] I've got some fantastic graphs if you give me one second
[16:16] graphmaster gizmoguy
[16:16] 1 .. 2 .. 3
[16:16] google have been recently sending us to pretty much the furthest location in their network
[16:17] i've had that issue
[16:17] it was amsterdam
[16:17] second class citizens
[16:17] oh look yay
[16:17] and only on some videos
[16:17] they're doing it again today
[16:17] http://amp.wand.net.nz/graph/rrd-smokeping/23235/1384215383/1384388183
[16:17] i'm in new zealand
[16:17] oh hai
[16:17] oh so are you?
[16:17] me too :)
[16:17] was it amsterdam?
[16:17] google wouldn't tell me
[16:17] and I couldn't work out from traceroutes
[16:17] oh that doesn't really tell you where you're going
[16:17] necessarily
[16:18] but the biggest spike I see on the graph is 600ms
[16:18] do your videos actually go there?
[16:18] oh
[16:18] actually
[16:18] I don't really care about that
[16:18] that shows me local
[16:18] (not an ISP :))
[16:18] if you look at tcpdump or such when viewing videos
[16:18] and trace to those ip's
[16:18] it can be kind of random where it sends you
[16:18] yeah for sure
[16:18] when I send those graphs to google
[16:18] but i've seen that go to amsterdam
[16:19] and the IPs we were hitting
[16:19] via australia
[16:19] they said I couldn't have got much further down their network
[16:19] with horrible route
[16:19] :)
[16:19] yeah GGC is an interesting beast
[16:19] do you host a GGC box?
[16:19] nope
[16:19] ah yup
[16:19] i work for small isp
[16:19] don't have ggc
[16:19] sweet, where 'bouts in NZ?
[16:19] but have dual upstreams
[16:19] i'm in auckland
[16:19] ah cool
[16:19] not too far from me
[16:20] hamiltron here
[16:20] (small world)
[16:20] heh
[16:20] i was from chch
[16:20] but moved after the earthquakes
[16:20] fair enough
[16:20] i haven't actually had youtube performance issues recently
[16:20] what was wrong with old zealand?
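The GGC behaviour described above comes down to the CDN mapping clients by the source address of the recursive resolver that asked, not by the client itself. A minimal sketch of the client side of that, just a plain getaddrinfo() lookup whose answers are whatever the locally configured resolver hands back (the hostname is only an example):

    /* resolve.c - print the A/AAAA records the local recursive resolver returns.
     * Which addresses come back for a CDN name depends on which resolver asked,
     * which is why a resolver on the "wrong" upstream skews GGC selection. */
    #include <stdio.h>
    #include <string.h>
    #include <netdb.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(int argc, char **argv)
    {
        const char *host = argc > 1 ? argv[1] : "google.com";  /* example name */
        struct addrinfo hints, *res, *p;
        char buf[INET6_ADDRSTRLEN];

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;      /* both v4 and v6 */
        hints.ai_socktype = SOCK_STREAM;

        int err = getaddrinfo(host, NULL, &hints, &res);
        if (err) {
            fprintf(stderr, "%s: %s\n", host, gai_strerror(err));
            return 1;
        }
        for (p = res; p; p = p->ai_next) {
            void *addr = p->ai_family == AF_INET
                ? (void *)&((struct sockaddr_in *)p->ai_addr)->sin_addr
                : (void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
            inet_ntop(p->ai_family, addr, buf, sizeof(buf));
            printf("%s\n", buf);
        }
        freeaddrinfo(res);
        return 0;
    }

Run it against the same name through resolvers sitting on different upstreams and the cache nodes returned will generally differ, which is exactly the mismatch being described above.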
[16:20] hey what
[16:21] http://www.youtube.com/my_speed
[16:21] check out global on that
[16:21] m0unds: i think it's in the netherlands actually
[16:21] http://en.wikipedia.org/wiki/Zeeland
[16:21] Zeeland :: Zeeland (Dutch pronunciation: [ˈzeːlɑnt] ( listen), Zeelandic: Zeêland), also called Zealand in English, is the westernmost province of the Netherlands. The province, located in the south-west of the country, consists of a number of islands (hence its name, meaning "sea-land") and a strip bordering Belgium. Its capital is Middelburg. With a population of about 380,000, its area is about 2,930 km², of which almost 1,140...
[16:21] hamilton representing at 6.99mbit
[16:21] gizmoguy: look at the global graph though
[16:21] the grey line
[16:21] large spike?
[16:21] lol at google thinking im in Ottawa
[16:22] yeah
[16:22] upwards
[16:22] interesting
[16:22] i wonder what google did
[16:22] then look at the new zealand graph
[16:22] so it's not new zealand boosting the global speeds.
[16:22] it seems to be getting worse not better
[16:22] why
[16:22] are the average speeds
[16:23] showing a time related trend
[16:23] weird dutch
[16:23] google have said it's higher in weekends
[16:23] err lower in weekends
[16:23] i can't remember which
[16:23] lower speed?
[16:23] something about people having faster or slower net at home/work
[16:23] interesting
[16:23] i can't remember now
[16:23] ah, fascinating
[16:23] it probably varies by country if people have faster net at work or home
[16:23] my work tubez a lot faster than my home tubez
[16:23] in korea i imagine it's faster at home for instance
[16:23] cos fibre so common
[16:24] I'm still failing in my quest to get 10gig to my desk
[16:24] but probably "shared" connection at work and non shared at home
[16:24] i have infiniband
[16:24] gizmoguy: gotta work harder at it
[16:24] hah!
[16:24] nice
[16:24] giving > 10 gigabit
[16:24] it's cheap
[16:24] do it
[16:24] I got close to 10gig at my desk
[16:24] it's not 10 gigabit internet though :/
[16:24] found some spare fibre pairs
[16:24] found a 10gig internet provider
[16:24] i got nowhere near 10gig anywhere near me
[16:24] it does like 1.4 gigabytes/sec nfs
[16:24] just needed to run some patch leads and buy an optic
[16:24] well, i guess i could walk in the other room and look at 10gig interconnects but meh
[16:25] hahaha
[16:25] but I got lazy
[16:25] gizmoguy: 10 gigabit international in new zealand?
[16:25] only 1 gig international :(
[16:25] going up soon
[16:25] i've found it interesting looking at people's expectations of international performance in new zealand.
[16:26] it wasn't that long ago people were happy with 2 megabit/sec international
[16:26] which i thought was a bit off but hey
[16:26] but as you start going up you end up getting less and less improvement
[16:27] but like with UFB coming etc
[16:27] they'll soon figure out that single threaded 100 megabit isn't really likely to work out that often
[16:27] to anywhere further than australia
[16:28] well 1 gig to states 1 gig to syd
[16:28] though that's going up shortly
[16:28] hell, i was only getting 100 megabit between chicago and arp
[16:28] both on gigabit
[16:28] UFB will be interesting to see rolled out
[16:28] i don't know anyone with ufb yet
[16:29] you're in the UFF area right?
[16:29] hamilton/tauranga/christchurch are so far ahead of the rest of the country in UFB
[16:29] yeah
[16:29] I work closely with two UFF RSPs
[16:29] one in hamilton one in tauranga
[16:29] ahh ok.
[16:29] what do you think of UFF?
[16:29] in waikato we actually have 2 cities fully rolled out with fibre
[16:29] UFF are the worst
[16:29] my area of auckland isn't in the UFB thing
[16:29] they contract out everything
[16:29] but there's fibre on my street
[16:29] huawei build the network
[16:30] what latency you been seeing?
[16:30] they contract out to another company to oversee the design of the network
[16:30] latency is pretty good
[16:30] is it getting under 1 msec yet?
[16:30] no way
[16:30] well
[16:30] i was seeing 1.5
[16:30] huawei, oh boy
[16:30] I haven't tested
[16:30] with someone i know
[16:30] but they don't have fibre to their house
[16:30] actually they're not in a ufb area either
[16:31] it's meant to be about 80% of new zealand isn't it?
[16:31] the fun ones with UFF
[16:31] are the funky ways they handle their vlans across the network
[16:31] m0unds: huawei is big in nz
[16:31] sort of
[16:31] oh i read something about that gizmo
[16:31] i got confused.
[16:31] yeah
[16:32] it's using lots of huawei technologies
[16:33] big because of cost, or because of some other factor?
[16:33] just curious - the guys i know who have had to work with huawei gear hated it
[16:34] cost probably
[16:34] does NZ have import taxes on stuff?
[16:34] yup
[16:34] *** ThalinVien has joined #arpnetworks
[16:35] does it depend on hw cost or where it's imported from?
[16:35] m0unds: new zealand is cheap too
[16:35] m0unds: hw cost i think
[16:35] ah, ok
[16:43] internet is expensive in new zealand
[16:45] here's another good graph mercutio
[16:45] http://wand.net.nz/smokeping/?displaymode=n;start=2013-11-04%2013:44;end=now;target=Off-net.google
[16:46] google.com likes to move around a lot
[16:46] which is strange considering we have 2 GGC nodes upstream of us
[16:46] gah i give up :)
[16:46] ~30ms away is SYD, and ~50ms away is japan
[16:46] on trying to find email
[16:46] haha no worries
[16:46] email is hard
[16:46] * mercutio makes mental note to organise his email better
[16:47] uhh
[16:47] is that from christchurch?
[16:47] nah, hamilton
[16:47] 50 msec seems rather high to google
[16:47] yes, yes it does
[16:47] we get taken to japan sometimes
[16:47] i wonder if it's going to melbourne
[16:47] yeah i seen that before
[16:48] i have seen apple.com go to malaysia too
[16:48] do you work in hamilton uni?
[16:48] I don't even bother tracking that one :)
[16:48] yeah, I'm at the network research group at waikato uni
[16:48] ahh cool.
[16:48] i thought it was interesting that hamilton had network research stuff going on
[16:49] i want to do a bit of research myself in a way :)
[16:49] mmm we do a lot of random stuff
[16:49] which is quite fun
[16:49] cool.
[16:49] doing a heap of openflow/SDN stuff right now
[16:49] you know how there's a standard for specifying maximum link bandwidth in a path?
[16:49] but every hop would have to be able to diminish the bandwidth if necessary?
[16:49] so it needs router support so it went nowhere
[16:50] as do most standards
[16:50] err proposed standard i think
[16:50] well, IETF drafts I should say :)
[16:50] i want to know how much benefit that has in the real world.
[16:50] does PMTUD not do enough for you?
[16:50] cos like on adsl connections, if you get the transmitting host to rate limit
[16:50] single threaded throughput can often be significantly higher internationally.
[16:50] it's not about mtu
[16:50] it's about not overflowing buffers/queues.
[16:51] Ohh right
[16:51] in between
[16:51] and losing heaps of packets.
[16:51] sorry, with you now
[16:51] and then having to retransmit.
[16:51] so like if you have a 20 megabit dsl connection
[16:51] 20 megabit sync rate
[16:51] and do a single threaded download, you'll probably only get 10 megabit
[16:51] but if you cap the speed at 16 megabit you'll probably get 12 megabit
[16:52] if you increase the download size, then it matters less in a way
[16:52] because tcp/ip adapts.
[16:52] but lots of downloads are small
[16:52] and cubic is pretty good at ramping up
[16:52] and tbh i care more about the speed of downloading 10mb than 200mb normally
[16:52] generally speaking for smaller downloads i'm more likely to be wanting it "now" and larger ones i can go make coffee or whatever
[16:53] yeah, tcp is fun times
[16:53] gah
[16:53] a general protocol meant to support speeds of 1k up to 100s of gbit
[16:53] so i try downloading my 10mb file from arp and it's going slow atm
[16:53] well cubic is pretty good
[16:54] TCP slow start \o/
[16:54] fwiw i downtune for infiniband
[16:54] the same way i downtune for gigabit
[16:54] like i set my maximum window sizes down
[16:54] and for some reason something's been going around for a while advocating massive window sizes
[16:54] and saying that tcp/ip will deal
[16:54] but often it actually diminishes performance being set too high
[16:55] static: well cubic takes over with slow start
[16:55] o
[16:55] and actually works pretty well
[16:55] basically it looks at time between packets
[16:55] and uhh
[16:55] something else
[16:55] i don't think it's simple packet pairs though
[16:56] but it's not ack 16 times 32 more packets.
[16:56] oh, also, linux is sneaky, and acks every packet for short connections.
[16:56] for the first 16 packets.
[16:57] SACK helps out a lot
[16:57] sack doesn't end up acking enough packets.
[16:57] for my liking
[16:58] wireless throughput is limited with tcp versus udp quite significantly.
[16:58] because wireless is half duplex.
[16:58] wireless is the worst :)
[16:58] yeah but wireless also buffers randomly etc
[16:58] not showing it to the applications
[16:58] or giving feedback
[16:58] speaking as someone who used to work for a company that build embedded wireless devices
[16:58] s/build/built
[16:58] so if it's going to do that anyway, you may as well just throw more packets at it, and not worry about having the acks in a timely fashion
[16:58] speaking as someone who used to work for a company that built embedded wireless devices
[16:58] closely synced.
[16:59] and you may as well ack less often
[16:59] mercutio: got a masters student working on 802.11ad at the moment
[16:59] that stuff is funky
[16:59] "WiGig"
[16:59] what's 802.11ad?
[16:59] is it like ac but better?
[16:59] 60ghz wireless
[16:59] ahh ok
[16:59] does it use more forward error correction?
[17:00] it gets most of its awesome from beam forming
[17:00] ahh
[17:00] it's only built for short lengths
[17:00] i thought that was just a buzzword
[17:00] like in a room for example
[17:00] that wasn't really significantly implemented
[17:00] so with wigig
[17:00] you usually have an antenna array
[17:00] like all around your device?
[17:00] and then with phase shifting your array you can point/focus the 60ghz more closely to your device
[17:01] you can make a flat square one I believe
[17:01] wat happens if you have a wireless 802.11ad ball
[17:01] and you throw it around the room
[17:01] I suspect it doesn't work very well
[17:01] it's more for stationary devices
[17:01] ahh
[17:01] as every time you move it needs to recalculate the beam forming stuff
[17:01] the example given in the 802.11ad docs
[17:02] "Imagine you're at an airport just about to board, but you want a bluray on your phone to watch on the plane"
[17:02] just walk up to the movie transmitter pole, buy your movie and have it in a few seconds
[17:02] hey i had a better idea
[17:02] high bandwidth plug in ports for phones
[17:02] like network !
[17:02] it's also "widely" used for wireless laptop dongles
[17:02] dell has a wireless dongle you can buy
[17:03] so you sort of just place your laptop near it
[17:03] and you get vga/dvi/usb etc
[17:03] i suppose it is kind of cool.
[17:03] infiniband supports iommu
[17:03] video cards support iommu
[17:03] i was thinking it'd be cool to copy straight from video card to infiniband before
[17:03] for remote display
[17:04] i think video card iommu support is pretty new though
[17:04] and mostly focused on opencl
[17:04] but i've always liked the idea that you can basically map memory over network
[17:04] i've not played too much with infiniband
[17:04] as in, i've never played with infiniband :(
[17:04] my cards cost $75 USD
[17:04] i think it was
[17:04] for four
[17:04] dual 20 gigabit
[17:05] plus $45 USD shipping
[17:05] or something
[17:05] the next fun toy i'm hoping to have a play with is some infinera kit
[17:05] cables are about US $15 or something i think
[17:05] 5TB/s on a single fibre pair
[17:05] so it's way cheaper than getting 10 gigabit ethernet
[17:05] and it can do about 15.8 gigabit/sec
[17:05] with single threaded iperf
[17:05] * Disclaimer 3 racks worth of infinera multiplexing gear required at each end of cable
[17:05] with 600k window size
[17:06] what do you use to switch infiniband?
[17:06] hahha
[17:06] i don't
[17:06] it's back to back
[17:06] ah right
[17:06] you can just use opensm as a subnet manager on linux
[17:06] also the cables are thick.
[17:06] i'm using copper.
[17:07] and samba doesn't do rdma
[17:07] on linux
[17:07] and samba's normal code doesn't seem to work well
[17:07] limiting throughput to 400mb/sec or so
[17:08] with like 100% cpu on samba process
[17:08] err 400 megabytes/sec
[17:08] but nfs will go faster.
[17:08] even without rdma
[17:08] oh and that's using recent i7 cpus
[17:08] at each end
[17:08] and ssd on both sides
[17:09] but yeah, i wonder if samba 4 will speed things up
[17:09] but if 10 gigabit ethernet becomes common people will be like "i can't get my full ssd speeds over ethernet"
[17:09] lol
[17:10] let's see what's the fastest I can do over 10gig HTTP
[17:10] i really wish 2.5 gigabit ethernet came around
[17:10] try cachefly?
[17:10] http://cachefly.cachefly.net/100mb.test
[17:10] maybe a little small
[17:10] but it's australia
[17:11] Fedora-19-x86_64-DVD.iso Length: 4444913664 (4.1G) [application/octet-stream]
[17:11] 4,444,913,664 779M/s in 6.9s
[17:11] nice
[17:11] where's that to?
[17:11] AKL <--> WLG
[17:11] nice
[17:11] I'll try AKL <--> CHC
[17:11] do you feel like running a random binary?
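Picking up the window-size tuning mentioned earlier (the 600k window on the infiniband link, and capping a DSL download so the line's queue doesn't overflow): a single TCP stream is bounded to roughly window / RTT, so shrinking SO_RCVBUF before connecting is one crude receiver-side way to rate-limit a download. A rough sketch, not taken from microcurl; the host, port and 64 KB figure are placeholders, and note Linux roughly doubles whatever value you set and stops auto-tuning that socket.

    /* capped_connect.c - open a TCP connection with a deliberately small receive
     * buffer, which caps the advertised window and therefore the transfer rate
     * to roughly rcvbuf / RTT.  Sketch only; host/port are placeholders. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int capped_connect(const char *host, const char *port, int rcvbuf_bytes)
    {
        struct addrinfo hints, *res;
        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, port, &hints, &res) != 0)
            return -1;

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0) { freeaddrinfo(res); return -1; }

        /* must be set before connect() so the window scale negotiated in the
         * SYN reflects the smaller buffer; the kernel doubles this value */
        if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF,
                       &rcvbuf_bytes, sizeof(rcvbuf_bytes)) < 0)
            perror("setsockopt(SO_RCVBUF)");

        if (connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
            perror("connect");
            close(fd);
            fd = -1;
        }
        freeaddrinfo(res);
        return fd;
    }

    int main(void)
    {
        /* e.g. 64 KB of window at 30 ms RTT is only ~17 Mbit/s, however fast the link */
        int fd = capped_connect("example.net", "80", 64 * 1024);
        if (fd >= 0) close(fd);
        return 0;
    }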
[17:12] depends on the binary :P
[17:12] i been working on a small curl like program
[17:12] would be curious how it compared :)
[17:12] I'll run it locally on my test 10gig network
[17:12] rather than the prod 10gig NZ network :)
[17:13] http://202.49.71.58:24/microcurl
[17:13] you can su to nobody or such
[17:13] it doesn't support output file name yet
[17:13] only just started adding getopt support :/
[17:13] but it'll output to stdout
[17:13] just > /dev/null
[17:13] oh and you need to include http:// atm
[17:14] would be curious to see how it compares with the "time" command
[17:15] so just microcurl http://file > /dev/null?
[17:15] time microcurl http://file > /dev/null ?
[17:15] yeah
[17:15] err http:/// > /dev/null
[17:15] you can include port too
[17:15] i usually use port 24 to bypass any transparent proxies
[17:16] not because i'm against transparent proxies
[17:16] but so i can test speeds
[17:16] these two machines are directly connected
[17:16] yeah
[17:16] well technically there's a switch in the middle
[17:16] i found that it's about half as much cpu on haswell
[17:16] as curl
[17:16] hopefully not proxying :)
[17:16] on infiniband
[17:16] it doesn't listen to http_proxy atm
[17:17] and doesn't even support -x yet
[17:17] it does use some low granularity timer that is only supported in recent linux
[17:17] i haven't checked how it handles that not working yet
[17:17] (that's for showing the speed, which i don't care about granularity with)
[17:19] grabbing an ISO from a mirror
[17:19] only getting 111MB/s :(
[17:19] to test with
[17:19] heh
[17:19] local gigabit
[17:20] yeah :(
[17:20] i been looking at cpu utilisation over network
[17:20] i was first looking at speed over localhost
[17:20] but even with infiniband speed should be the same.
[17:20] well other than it starts up faster than curl
[17:20] Hmm what have I cocked up here
[17:21] which is like 6 msec difference or something
[17:21] 72 to 78 msec or such
[17:21] 80MB/s over 10gig
[17:21] that's a little slow
[17:21] and curl does that too?
[17:21] that's with wget
[17:21] is the file too big for memory?
[17:21] wget uses http_proxy
[17:21] by default
[17:21] you haven't got that set right?
[17:21] locally wgetting on the same machine i get 900MB/s
[17:22] that's slow
[17:22] :/
[17:22] spindles
[17:22] it'll be as fast as it can read from disk
[17:22] 100 200M 100 200M 0 0 2317M 0 --:--:-- --:--:-- --:--:-- 2325M
[17:22] you should output to null :)
[17:22] oh right
[17:22] i keep assuming cached speed
[17:22] yeah
[17:22] 100 200M 100 200M 0 0 889M 0 --:--:-- --:--:-- --:--:-- 892M
[17:22] I have to read it from somewhere first :)
[17:22] i got that the first time
[17:23] ok what is going on here
[17:23] gigabit needs no tweaking locally normally
[17:23] it starts fast
[17:23] what does iperf say?
[17:23] and trails off to 80mbit
[17:23] oh
[17:23] do you have broadcom ethernet?
[17:23] intel 10gig cards
[17:23] hmm
[17:23] screw off with broadcom
[17:23] i hit something weird like that with broadcom
[17:23] :/
[17:24] they're engineering samples though
[17:24] still shouldn't matter
[17:24] in normal adaptive coalescing mode?
[17:24] I was playing with coalescing last time I was logged into these machines
[17:24] actually it's probably simpler than that.
[17:24] ahh
[17:25] try ethtool -c3 ?
[17:25] err
[17:25] ethtool -c rx-usecs 3
[17:25] that's what the gigabit stuff defaults to
[17:25] but i dunno if the 10 gigabit stuff is the same
[17:25] I have to be careful with the coalescing stuff
[17:25] how come ?
[17:25] last time I played I managed to generate a DIV_BY_ZERO in the intel driver
[17:26] err it's -C
[17:26] and the linux kernel gets unhappy when you do that
[17:26] hahaha
[17:26] i played around with coalescing and dropped cpu usage by heaps
[17:26] on broadcom core2duo
[17:26] I just don't even touch broadcom these days
[17:26] heh
[17:26] they don't let you do as much in hardware as the intel nics
[17:26] it's my colo box
[17:27] it's
[17:27] i've got a colo box at sky tower
[17:28] i could stick intel ct adapter in
[17:28] but it seems to go ok
[17:28] for some reason hp like to use broadcom
[17:28] alright
[17:28] 7gbit in iperf TCP
[17:29] interesting
[17:29] 1mbit in iperf UDP
[17:29] what
[17:29] yeah...
[17:29] what on earth was I doing on this machine last time I was logged in
[17:29] history :)
[17:29] fwiw infiniband sucks for udp
[17:31] is it 1 megabit sending speed?
[17:31] or 1 megabit throughput?
[17:31] * up_the_irons likes all the network talk
[17:31] oh wait
[17:31] cos if it's only sending at 1 megabit
[17:31] I think it's just iperf being retarded
[17:31] did you add -b 2g
[17:31] or such? :)
[17:31] if I stop the iperf server
[17:32] the client can still connect and reports i'm getting 1mbit/s
[17:32] oh
[17:32] even though you tcpdump and there's 0 traffic
[17:32] firewall on port 5001?
[17:32] yeah will be
[17:32] stupid iperf
[17:32] yeah the udp testing is pretty broken
[17:32] it's also bad if you overshoot
[17:32] it can't receive the end result back
[17:32] because it gets packet loss
[17:33] I wonder if I have any rate limiting in here
[17:33] i would rather see it do a bit of return traffic to make sure it's still ok to test too
[17:33] because it gets through the first 2GB at full speed
[17:33] and then it drops to 100mbit
[17:33] tc -s qdisc
[17:33] won't be tc
[17:33] going to /dev/null ?
[17:33] oh with iperf
[17:33] possibly iptables or apache
[17:33] nah this is wget / apache now
[17:33] yeah going to /dev/null
[17:33] ahh
[17:34] lulz u needz nginx lol omg
[17:34] node.js man
[17:34] BAREMETAL!!!
[17:34] just install lighttpd on an alternative port?
[17:34] or nginx :/
[17:34] ok, so these machines run no firewalls
[17:34] i use lighttpd
[17:34] possibly I should fix that
[17:34] lighttpd is faster with static files
[17:34] heh
[17:34] and no TC
[17:34] wonder how apache on this box is configured
[17:35] crapache lulz lulz
[17:35] i never see lighttpd having bad performance :/
[17:35] tbh I didn't even know there was apache on here so I never set it up :)
[17:35] i feel brain cells dying just typing that
[17:35] for some reason it's way more computationally expensive in linux to receive files than send
[17:35] m0unds: :)
[17:35] well actually it's cos linux can't do zerocopy receive
[17:37] gizmo, maybe you should use a 2gb file? :)
[17:37] i only use 200mb file for testing normally
[17:37] alright
[17:37] we solved the issue
[17:37] what was it ?
[17:38] I put a 2.7gb file on instead :)
[17:38] haha
[17:38] these are just dev/test boxes
[17:38] not a clue where the rate limit is
[17:38] yah
[17:38] right
[17:38] wget gives me: 828M/s in 3.8s
[17:38] with how much cpu usage?
[17:39] i wonder why you're not getting full speeds
[17:39] wget maxes at 45%
[17:39] with sandy bridge era cpu?
[17:39] probably a mixture of slow spindles/not running 9k mtus/a crappy dell switch in the middle
[17:39] nah these will be older
[17:39] model name : Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz
[17:39] 5600 series?
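On the rx-usecs exchange above: the shell command to set it is `ethtool -C eth0 rx-usecs 3` (capital C, as corrected above), and the same knob is reachable from C via the ETHTOOL_GCOALESCE/ETHTOOL_SCOALESCE ioctls. A Linux-only sketch; "eth0" and the value 3 are just the numbers from the discussion, it needs root, and drivers may reject or round the request (or want adaptive coalescing turned off first):

    /* coalesce.c - read and set rx-usecs on a NIC via the ethtool ioctl,
     * equivalent to "ethtool -C eth0 rx-usecs 3".  Linux only; needs root. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(void)
    {
        const char *ifname = "eth0";          /* example interface */
        struct ifreq ifr;
        struct ethtool_coalesce ec;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ifr.ifr_data = (char *)&ec;

        /* read the current settings first */
        memset(&ec, 0, sizeof(ec));
        ec.cmd = ETHTOOL_GCOALESCE;
        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) { perror("ETHTOOL_GCOALESCE"); return 1; }
        printf("%s rx-usecs was %u\n", ifname, ec.rx_coalesce_usecs);

        /* lower the interrupt coalescing delay, trading CPU for latency */
        ec.rx_coalesce_usecs = 3;
        ec.cmd = ETHTOOL_SCOALESCE;
        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) { perror("ETHTOOL_SCOALESCE"); return 1; }

        close(fd);
        return 0;
    }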
[17:39] oh
[17:40] wow
[17:40] that old ok ;)
[17:40] :)
[17:40] why fix what's not broken?
[17:40] considering i get 20% cpu usage on gigabit with e3110
[17:40] is that with tcp timestamps enabled or disabled?
[17:40] I think i broke your program
[17:41] probably
[17:41] written -1200776 kbytes (24.455); average: -49101.4584
[17:41] it's probably the kernel version
[17:41] doesn't like the timer
[17:41] Linux marvin 2.6.32-5-amd64 #1 SMP Fri May 10 08:43:19 UTC 2013 x86_64 GNU/Linux
[17:41] CLOCK_MONOTONIC_COARSE
[17:41] oh look another machine I haven't upgraded to wheezy
[17:42] it should still be accurate using the "time" command
[17:43] it starts off ok
[17:43] and then it pauses and jumps into the negative
[17:43] umm
[17:43] it was introduced in linux 2.6.23
[17:43] oops
[17:43] linux 2.6.32
[17:43] which is what you're using
[17:43] oh that's interesting
[17:43] written -1166052 kbytes (25.291); average: -46105.4135
[17:43] real 0m26.686s
[17:43] user 0m0.004s
[17:43] sys 0m6.476s
[17:43] umm
[17:43] you know
[17:44] i bet it's a signed 32 bit integer
[17:44] issue
[17:44] haha yeah
[17:44] that's what it feels like
[17:44] it looks like it
[17:44] but it's 64 bit binary
[17:44] but yeah it looks exactly like that
[17:44] and i use 200mb test file
[17:44] ok
[17:44] :)
[17:44] so 6.476s cpu
[17:44] what was wget?
[17:44] 3.8s
[17:44] oh
[17:44] 3.8s system cpu?
[17:45] real 0m3.565s
[17:45] user 0m0.184s
[17:45] sys 0m1.628s
[17:45] that's for wget
[17:45] oh hangon
[17:45] so it's being WAYYYY slower
[17:45] yeah
[17:45] i wonder why
[17:45] i'll try with a smaller file
[17:45] % time ./microcurl/microcurl http://192.168.50.254:24/200m > /dev/null
[17:45] written 204800 kbytes (0.153); average: 1338562.092
[17:45] ./microcurl/microcurl http://192.168.50.254:24/200m > /dev/null 0.00s user 0.08s system 50% cpu 0.152 total
[17:46] hmm that's slower than before
[17:46] oh i shouldn't load it over nfs probably
[17:46] alright, debian cd1 this time
[17:46] hhmm same diff
[17:46] ~700mb
[17:46] wget: real 0m0.776s user 0m0.028s sys 0m0.364s
[17:47] written 654432 kbytes (1.689); average: 387467.140
[17:47] real 0m2.303s
[17:47] user 0m0.000s
[17:47] sys 0m2.040s
[17:47] wow
[17:48] # time ./microcurl http://202.49.71.58:24/testfile.zip
[17:48] http protocol.
[17:48] written 204800 kbytes (0.160); average: 1280000.000
[17:48] ./microcurl http://202.49.71.58:24/testfile.zip 0.00s user 0.16s system 98% cpu 0.163 total
[17:48] hmm
[17:48] oh oops from wrong computer
[17:49] # time ./microcurl http://202.49.71.58:24/testfile.zip
[17:49] http protocol.
[17:49] written 204800 kbytes (1.799); average: 113841.023
[17:49] ./microcurl http://202.49.71.58:24/testfile.zip 0.01s user 0.55s system 31% cpu 1.799 total
[17:49] wget http://202.49.71.58:24/testfile.zip 0.07s user 0.61s system 38% cpu 1.784 total
[17:49] that's how it compares for core2duo with the old version for me
[17:49] with gigabit
[17:50] (the old version always writes to /tmp/testwrite which needs to be on tmpfs)
[17:50] ./newmicrocurl http://202.49.71.58:24/testfile.zip > /dev/null 0.00s user 0.57s system 31% cpu 1.798 total
[17:50] and that's new version
[17:50] so new version behaving the same
[17:51] what cpu usage difference do you get to my url?
[17:51] how big is testfile?
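On the CLOCK_MONOTONIC_COARSE question above ("i haven't checked how it handles that not working yet"): one cheap approach is to probe the coarse clock once and fall back to plain CLOCK_MONOTONIC on kernels older than 2.6.32 that reject it. A small sketch, not the actual microcurl code:

    /* now_ms.c - millisecond timestamps from the coarse monotonic clock,
     * falling back to the normal monotonic clock on kernels that lack it.
     * CLOCK_MONOTONIC_COARSE is cheaper per call but only tick-granular,
     * which is fine for progress/rate display. */
    #include <stdio.h>
    #include <time.h>
    #include <stdint.h>

    static clockid_t pick_clock(void)
    {
        struct timespec ts;
    #ifdef CLOCK_MONOTONIC_COARSE
        if (clock_gettime(CLOCK_MONOTONIC_COARSE, &ts) == 0)
            return CLOCK_MONOTONIC_COARSE;   /* Linux >= 2.6.32 */
    #endif
        return CLOCK_MONOTONIC;              /* always-available fallback */
    }

    static uint64_t now_ms(clockid_t clk)
    {
        struct timespec ts;
        clock_gettime(clk, &ts);
        return (uint64_t)ts.tv_sec * 1000 + (uint64_t)(ts.tv_nsec / 1000000);
    }

    int main(void)
    {
        clockid_t clk = pick_clock();
        uint64_t t0 = now_ms(clk);
        /* ... do the transfer ... */
        printf("elapsed: %llu ms\n", (unsigned long long)(now_ms(clk) - t0));
        return 0;
    }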
[17:51] 200 megabytes
[17:51] it doesn't have 10 gigabit ethernet
[17:51] but it has gigabit to APE
[17:52] cpu usage up to 10% or so
[17:52] it's not actually a zip file it's /dev/urandom
[17:52] but I occasionally see it spike to 98% for your process
[17:52] so maybe that's why it's slow
[17:52] weird
[17:52] i wonder why
[17:52] because i'm testing on e3110
[17:53] what kind of speed you getting?
[17:53] e3110 is basically ecc version of core2duo
[17:53] hold up, just heading out for some tea
[17:53] orig i found a slow down with adsl cos i was checking time too often / outputting to the screen too much
[17:53] but i fixed that
[18:00] fixed the 32bit bug :/
[18:01] by not linking against musl
[18:01] oh actually no
[18:01] it is still there
[18:01] it's bloody 64 bit
[18:01] integer should be 64 bit
[18:02] ok now i fixed that
[18:03] it seems that int is 32 bits on 64 bit architectures still.
[18:14] if i strace wget and my microcurl, wget does read of 8k and two writes of 4k, and microcurl does reads of 7300 and writes of 7300 bytes, but sometimes 14600
[18:15] where 7300 is 1460*5
[18:15] so if wget is using less cpu i must guess that it's due to getting less packets in a burst
[18:16] and doing read/write of single packet or such
[18:28] *** r0ni has joined #arpnetworks
[18:34] mercutio: uint64_t :)
[18:34] oh
[18:34] is "long" wrong?
[18:34] long works ok too
[18:34] i haven't done much coding recently
[18:34] but uint64 is how to get a 64bit integer
[18:34] don't use int
[18:34] i am also using %uz in printf
[18:34] i assumed int was 64 bit now
[18:34] nah, backwards compat yo
[18:34] well i see that now :)
[18:34] oh interesting
[18:35] most of my programming was 15 years ago
[18:35] not a lot has changed since then though.
[18:35] an integer is probably defined as a 32bit number somewhere in the C standard no doubt
[18:35] but 64 bit is one of things that changed :)
[18:35] i updated the url
[18:35] err the code at the url
[18:36] but it's purely for showing accurately
[18:36] i'm still curious why it's using so much cpu :/
[18:36] oh, and i stalked you on linkedin :/
[18:36] :)
[18:36] add me if you want
[18:36] ok
[18:36] go to NZNOG?
[18:37] nope
[18:37] i was thinking of going next year
[18:37] lame
[18:37] cos of SDN etc :/
[18:37] come on down
[18:37] but it's expensive for flights
[18:37] and accommodation and so forth
[18:37] true
[18:37] and i doubt my work would pay
[18:37] work pays for everything for me :)
[18:38] yeah if work paid for everything i probably wouldn't think twice about it
[18:38] but yeah, it's a great place to have an informal chat with your upstreams over a few beers
[18:38] but really it's on the interesting side rather than useful side
[18:38] few (read: a lot of beer)
[18:38] heh
[18:38] I find the networking more useful than the talks
[18:38] the talks are all live streamed anyway
[18:38] yeah the talks themselves looked mostly boring
[18:38] by richard @ r2
[18:38] i see it more of a social thing
[18:39] yeah it is
[18:39] i actually watched some wand thing
[18:39] and it wasn't nearly technical enough
[18:39] but it did suggest you were using old cheap hardware
[18:39] and it seems you still are :)
[18:39] lol
[18:39] remember what the talk was on?
[18:39] umm
[18:39] it had a few things
[18:39] we can go a lot more technical offline
[18:39] one of them was about monitoring i think ?
[18:39] but talks are usually fairly informational
[18:39] and debugging problems?
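A minimal sketch of the counter fix discussed above: on 64-bit Linux `int` is still 32 bits, so a byte counter for anything past 2 GB wants an explicit uint64_t, and the matching printf conversion is PRIu64 (%zu is for size_t; %uz isn't a printf conversion at all). The names here are made up and this is not the actual microcurl source; the biggish read buffer is also the easy way to avoid the per-packet read()/write() pairs the strace comparison above showed.

    /* count64.c - copy stdin to stdout counting bytes in a uint64_t, so a
     * transfer past 2 GB can't wrap negative the way a signed int would. */
    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[64 * 1024];           /* one biggish read instead of per-packet reads */
        uint64_t total = 0;            /* NOT int: int is 32 bits even on x86_64 */
        ssize_t n;

        while ((n = read(STDIN_FILENO, buf, sizeof(buf))) > 0) {
            ssize_t off = 0;
            while (off < n) {          /* write() may be short; loop until done */
                ssize_t w = write(STDOUT_FILENO, buf + off, n - off);
                if (w < 0)
                    return 1;
                off += w;
            }
            total += (uint64_t)n;
        }
        fprintf(stderr, "written %" PRIu64 " kbytes\n", total / 1024);
        return n < 0 ? 1 : 0;
    }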
[18:39] yeah it'll be AMP
[18:40] our "active measurement platform"
[18:40] ahh ok
[18:40] basically we have nodes in most ISPs in NZ
[18:40] i want to do my own network monitoring stuff
[18:40] and do DNS/ICMP/Traceroute/HTTP testing
[18:40] i've got many ideas :)
[18:40] and we publish them all to http://erg.cms.waikato.ac.nz/amp
[18:40] like that you should be able to estimate bandwidth without using up lots of bandwidth
[18:40] though all that code is ancient and icky
[18:40] so we have a government grant to rewrite it all
[18:40] but lots of ideas can still take a long time to actually put into effect :)
[18:40] and we're adding event/anomaly detection
[18:41] well i got interested when i emailed truenet
[18:41] truenet suck balls
[18:41] and their responses were like OMG
[18:41] exactly
[18:41] like they're single threading wget
[18:41] we actually went for that contract
[18:41] but we were too expensive
[18:41] you need multiple tcp connections to represent web page downloads
[18:41] but at least our results would have meant something
[18:41] but you also need to start them at realistic times
[18:41] rather than just 8 at once or such
[18:41] they also were recently doing all their bandwidth tests on 200kb files loaded off trademe.co.nz
[18:42] which means you want to download, do parallel requests, request new data when you see the actual urls
[18:42] etc
[18:42] they've now moved to 1MB files because of UFB
[18:42] i do most of my bandwidth tests on 200kb files
[18:42] i just confirm with 10mb files
[18:42] 200k often shows if a connection is good or bad
[18:43] but they report on stuff like average connection speed :/
[18:43] but anyway, like the way i see it is there's thresholds
[18:43] it doesn't really matter if it's 10% faster or 10% slower
[18:43] but if it takes 2 seconds or 3 seconds to display a web page it's significant
[18:43] yeah but average to where?
[18:43] i also heard that there's some messy routing stuff involved
[18:44] i've got a few vps's with test downloads
[18:44] so we do full mesh testing
[18:44] international
[18:44] where we can
[18:44] ahh interesting
[18:44] so do you have many locations?
[18:44] each node to every other node
[18:44] we don't do bandwidth tests at the moment though
[18:44] just ping?
[18:44] because amp has been around 10 years or so
[18:45] what i really want to test is tcp latency
[18:45] and people didn't like us using "all" their bandwidth
[18:45] which is kind of messy
[18:45] like ping != tcp latency necessarily
[18:45] I think we have around 30 sites or so
[18:45] ahh ok
[18:45] so you'd like our HTTP test
[18:45] yeah i think i saw it once
[18:45] you have some kind of traceroute from many hosts thing?
[18:45] or something
[18:45] some kind of network path thing
[18:45] yup
[18:46] http://erg.wand.net.nz/amp/graph.php?src=ampz-auckland&dst=ampz-fx-aknnr
[18:46] i think http can behave differently
[18:46] i think a lot of things though
[18:46] i'd like more real world data :)
[18:46] scroll to the 'path analysis' graph
[18:47] that's a graph
[18:47] i saw some uhh
[18:47] something that showed all the different upstreams
[18:47] and i think something else
[18:47] could be TR
[18:47] http://tr.meta.net.nz/tr.php
[18:47] maybe this is unrelated
[18:47] one of our researchers wrote it
[18:47] for fun
[18:47] ahh ok
[18:47] perry
[18:47] well it's cool
[18:47] he works for the google now
[18:48] ahh ok
[18:48] ahh yeah it shows the image afterwards
[18:48] my most active URL on wand.net.nz is still this:
[18:48] of the different upstreams
[18:48] http://wand.net.nz/~perry/max_download.php
[18:48] oh god
[18:48] it kinda became the industry standard TCP calculator heh
[18:48] can that be updated?
[18:48] i hate that tool
[18:48] hahaha
[18:48] he wrote it in a night
[18:48] i hate it when people argue about tcp limitations
[18:49] when it's all old data
[18:49] yeah but i mean
[18:49] cubic is VERY VERY common now
[18:49] and bic is around a tiny bit
[18:49] and windows has new stuff
[18:49] that i haven't really played with
[18:49] and linux has all these improvements
[18:50] i think they even decreased the initial timeout down from 3 seconds
[18:50] back when BUBA was common in new zealand i looked into some of those things a little
[18:50] because there was so much packet loss that it was interesting to see how badly things dealt with it
[18:50] and what could be done to improve things
[18:50] and decreasing that 3 second timeout makes a very significant difference on ~5% loss links
[18:51] that said, i put it down myself before linux did :/
[18:51] :)
[18:51] cos i was looking at openbsd source code for tcp/ip
[18:52] and i adjusted some stuff
[18:52] then i thought that was easy
[18:52] so i try doing the same on linux
[18:52] and i'm like omg
[18:52] i swear, if you want to look at how tcp/ip works in the kernel it's easier to understand with openbsd than linux by a LARGE margin
[18:52] and it also compiles faster :/
[18:53] well it helps that openbsd doesn't have cubic etc
[18:53] it just has newreno
[18:53] but yeah are you looking at doing your own truenet type testing?
[18:54] the other big problem i saw with truenet is that they only want to test idle connections
[18:54] and i think it's way better to just test all the time, and expect that there's noise
[18:57] like if i'm playing a game, and someone downloads a 1mb file, and my game gets laggy, i'd rather that be reported
[18:58] and different aqm on different connections can influence positively/negatively
[19:20] we are effectively replacing truenet yeah
[19:20] we have an ISP partner who wants to drop all our software on embedded linux boards and put them on customers' internet
[19:20] cool
[19:21] shouldn't it be isp agnostic though?
[19:21] so we partner with local ISPs for testing the stuff we write
[19:21] oh right
[19:22] so the isp provides the hw
[19:22] but you provide the software?
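Since max_download.php came up: the number that calculator (and any TCP throughput calculator) is built around is the plain window/RTT bound, loss and congestion control aside. A trivial worked version with made-up example figures:

    /* bdp.c - the window/RTT arithmetic behind TCP throughput calculators. */
    #include <stdio.h>

    int main(void)
    {
        double rtt_s     = 0.150;        /* e.g. 150 ms NZ -> US */
        double window_b  = 64 * 1024;    /* e.g. a 64 KB receive window */
        double link_mbit = 100.0;        /* e.g. a 100 Mbit UFB plan */

        /* max single-stream rate with that window */
        double max_mbit = window_b * 8.0 / rtt_s / 1e6;
        /* window needed to actually fill the link (the bandwidth-delay product) */
        double bdp_kb = link_mbit * 1e6 * rtt_s / 8.0 / 1024.0;

        printf("64 KB window at 150 ms  -> at most %.1f Mbit/s\n", max_mbit);   /* ~3.5 */
        printf("100 Mbit at 150 ms needs ~%.0f KB of window\n", bdp_kb);        /* ~1831 */
        return 0;
    }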
[19:22] it's more the data they provide us
[19:22] as writing network monitoring software with a network is hard :)
[19:22] err
[19:22] without a network*
[19:22] with or without a network is hard :)
[19:22] lol
[19:22] yeah especially without customers
[19:22] yeah the more i thought about it the more complicated it got
[19:22] but i was planning on trying to do something myself
[19:23] i figure you should be able to stick something on something like openwrt pretty easily
[19:23] but that you could get wider uptake by making it work on a windows desktop
[19:23] we have a wandboard i'm looking to get our monitoring software running on
[19:23] heh
[19:23] we tried that
[19:23] how'd it go?
[19:23] http://nettest.wand.net.nz/
[19:23] nobody really ran it
[19:23] so we kinda abandoned it
[19:24] is your stuff open source?
[19:24] some of it is
[19:24] the stuff NBIE pays for, we don't typically release the source code for
[19:24] haha
[19:24] why am i trying to download the windows version on linux
[19:24] oh i have wine :)
[19:24] :)
[19:24] all stuff we develop inhouse is open source
[19:24] Libtrace is our most popular tool we write
[19:24] cool
[19:25] is it bsd licensed?
[19:25] http://research.wand.net.nz/
[19:25] I think we standardise on GPL
[19:25] ahh
[19:25] have a look at scamper too
[19:25] i dunno if lenny is the most recent or not
[19:25] scamper is awesome
[19:25] libprotoident is cool too
[19:25] but it'll probably work i imagine
[19:26] we can identify ~140 different applications purely on 4 bytes of the packet
[19:26] oh it's running now what haha
[19:26] http://research.wand.net.nz/software/libprotoident.php
[19:26] nice
[19:26] that is 4 initial bytes?
[19:26] or within each packet?
[19:26] "Unlike many techniques that require capturing the entire packet payload, only the first four bytes of payload sent in each direction, the size of the first payload-bearing packet in each direction and the TCP or UDP port numbers for the flow are used by libprotoident"
[19:27] ahh so first four
[19:27] hmm
[19:27] one of the other things we do at wand is take a lot of network traffic dumps
[19:27] cept most ISPs don't like giving you full payload
[19:27] so we standardised on the first 4 bytes
[19:27] to keep it anonymous
[19:27] cool
[19:27] i like that
[19:27] and no chance of password leaks etc
[19:27] also it's easier to store :)
[19:28] yes
[19:28] is it very compressible?
[19:28] but libprotoident lets us look at what protocols people are using
[19:28] umm
[19:28] so it's full tcp header
[19:28] http://wand.net.nz/wits/waikato/8/
[19:28] with 4 bytes data?
[19:28] download some traces and try it out :)
[19:29] Snapping Method Packets truncated four bytes after the end of the transport header, except for DNS
[19:29] dude
[19:29] try xz compression
[19:29] i'm not downloading this direct to hme :/
[19:29] home
[19:29] come on
[19:29] heh
[19:29] we split them into small tarballs for ya
[19:29] what
[19:29] 6 gb gz?
[19:30] yeah
[19:30] on adsl :)
[19:30] lame
[19:30] I got vdsl at home
[19:30] oh what it's ftp
[19:31] protocol of the future
[19:31] hmmm
[19:31] it's downloading slow too
[19:31] do you not have v6 at home? :O
[19:31] We rate limit ipv4 access
[19:32] to 'encourage' other researchers who want access to our datasets to move to ipv6
[19:32] i had v6 at home
[19:32] oh
[19:32] i'm not downloading to home
[19:32] hangon i'll go via ipv6
[19:32] i find ipv4 works better :/
[19:34] i suppose you get to be impractical :)
[19:34] haha yup!
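On the "ping != tcp latency" point from the monitoring discussion above: one cheap proxy is to time the three-way handshake itself with a blocking connect(), which exercises the path and filtering that TCP traffic actually sees, unlike ICMP. A rough single-sample sketch; the host and port are placeholders and a real monitor would repeat the measurement and track failures:

    /* tcping.c - measure TCP handshake time to a host:port as a rough
     * "tcp latency" figure, as opposed to ICMP ping. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <time.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int main(int argc, char **argv)
    {
        const char *host = argc > 2 ? argv[1] : "example.net";  /* placeholder */
        const char *port = argc > 2 ? argv[2] : "80";
        struct addrinfo hints, *res;
        struct timespec t0, t1;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, port, &hints, &res) != 0) {
            fprintf(stderr, "resolve failed\n");
            return 1;
        }

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        clock_gettime(CLOCK_MONOTONIC, &t0);
        int rc = connect(fd, res->ai_addr, res->ai_addrlen);   /* blocks for the handshake */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        if (rc == 0) {
            double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
            printf("%s:%s handshake %.2f ms\n", host, port, ms);
        } else {
            perror("connect");
        }
        close(fd);
        freeaddrinfo(res);
        return rc == 0 ? 0 : 1;
    }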
[19:34] 30megabytes/sec now
[19:34] 40
[19:34] 50
[19:34] 60
[19:35] 55
[19:35] it ramped up slow
[19:35] 80
[19:35] i wonder if it's disk :)
[19:35] 0 6203M 0 27.1M 0 0 1184k 0 1:29:24 0:00:23 1:29:01 1216k^C
[19:35] 72 6203M 72 4476M 0 0 66.5M 0 0:01:33 0:01:07 0:00:26 84.0M
[19:36] kind of a big difference between the two :)
[19:36] haha, one v4 one v6?
[19:37] yeah
[19:37] i thought something was broken
[19:38] is that gigabit international?
[19:38] and hangon, that's faster speed than your core2duo got
[19:38] curl -O ftp://wits.cs.waikato.ac.nz/waikato/8/20111104-000000-0.gz 5.20s user 26.06s system 33% cpu 1:32.05 total
[19:39] oh hangon, you had 800 megabit, not 80mb/sec didn't you
[19:43] y'know if i didn't have to pull that file back home i wouldn't really complain about you using gz :)
[19:43] hh halghl, yhu hal 800 megabut, lht 80mb/sec lull't yhu
[19:44] what
[19:49] tracestats: error while loading shared libraries: libtrace.so.3: wrong ELF class: ELFCLASS32
[19:49] hmmmm
[19:50] oh fixed
[19:50] i hate how ubuntu doesn't include /usr/local/lib in search path by default
[19:50] but that's a really bizarre error message for such
[19:50] ohh
[19:51] i have a 32 bit version of that library installed by that test client thingy
[19:58] boy, making serial cables is fun
[20:01] old school :)
[20:01] oh
[20:01] i can just use tcpdump
[20:01] rather than get a 6gb dump
[20:08] yea, we use them for control surfaces
[20:08] but we have no spares, so i used the cable kits provided with the controllers and made 4 9-25 null modem cables out of cat 6
[20:08] hahaha
[20:11] right home time I think
[20:11] see you suckers later
[20:12] later man
[20:12] i'm going home to build a network for a lan this weekend
[20:12] 7 juniper switches :P
[20:13] EX3200s?
[20:14] heh
[20:14] you just have 7 juniper switches lying around?
[20:14] * m0unds wants to play with J gear
[20:15] i wish i did, i just have a stack of 2960s
[20:15] mercutio: heh yup
[20:15] i hate 2960s
[20:15] 6x EX2200s
[20:15] nowhere near as much as i do
[20:15] and 1x EX4200
[20:15] figured i'd guess middle of the road :)
[20:15] j switches are so nice
[20:15] the 2200s are pretty basic, but i'm just using it as the access network
[20:15] yeah
[20:15] the EX4200 is a dream to work on
[20:16] i wouldn't mind 7 ex4200s
[20:16] and i have an M320 for the lulz in storage
[20:16] i need to get a 2200 or 3200 for home
[20:16] 2200c if you want to get the small one
[20:16] yeah
[20:16] i'd probably shoot for 24 ports
[20:16] for home?
[20:16] yep
[20:16] prewired house, lots of devices
[20:16] i'm using 4 gigabit ports at home
[20:16] and 4 10/100
[20:16] i'm using 13
[20:17] err that's how many ports i have at least
[20:17] and 3 10/100 on my SRX210HE
[20:17] and one of them connects the two together
[20:17] i do have an unused managed switch
[20:17] but i dunno how you'd end up with 24 devices at home :)
[20:17] ahh ok, so 24 makes sense
[20:17] yeah, more than 12
[20:18] if it was less, i'd go for 12
[20:18] for sure
[20:18] because $$$
[20:18] hahahaha
[20:18] heh
[20:18] you could always go dual 8 port switches :/
[20:18] nooooo
[20:18] haha
[20:18] one managed one unmanaged
[20:18] nooooo
[20:18] i thought you were trying to be cheap
[20:18] not that cheap
[20:18] heh
[20:19] bang for the buck
[20:19] yeah
[20:19] just get a cheap 24 port switch :/
[20:19] i have a cheap 24 port switch
[20:19] go cheaper than juniper?
[20:19] oh
[20:19] yeah
[20:19] you have a 29600?
[20:19] not at home
[20:19] err 2960s
[20:19] at work
[20:19] 2960g
[20:19] but 2960s would work too
[20:20] 2960 takes up lots of power
[20:20] mine are WS-C2960S-48FPS-L
[20:20] so, ~900W with full POE load
[20:20] it'll be like 150w or more without poe :/
[20:20] yeah
[20:20] hence lots of power :)
[20:20] they're also loud, and hot
[20:20] and crappy
[20:20] i hate cisco gear
[20:20] get something with energy efficient ethernet for home maybe?
[20:21] so do i
[20:21] GET A CLOUD CORE ROUTER
[20:21] (kidding)
[20:21] hahahaha
[20:21] i need to have hard-disk storage at home again
[20:22] i've been living off just ssd
[20:22] mercutio: you mean like this - http://routerboard.com/CCR1036-12G-4S
[20:22] but i'm like where am i going to put this packet capture heh
[20:22] static: yes
[20:23] you get to run a beta OS
[20:23] yep
[20:24] i have a routerboard as my home router
[20:24] hell, if you buy the middle of the road RB1100AH you end up with two or more ports that just don't work correctly
[20:24] and they'll tell you as much
[20:24] at least that includes a switch m0unds
[20:24] cloud core router doesn't even have a switch
[20:24] that's being generous
[20:24] every port goes through the cpu
[20:24] hahaha
[20:25] you get two switches i think
[20:25] with different mtu limits etc
[20:27] the lower end routerboards are perfect for home use tbh
[20:27] i had an rb450g and it died a hero's death (bad caps, bad thermal mgmt on the cpu)
[20:27] compared to the other garbage on the market
[20:27] sure
[20:34] xz is so damn slow to compress
[20:35] --- % 523.0 MiB / 2693.7 MiB = 0.194 2.1 MiB/s 21:32
[20:35] i think it's about 1/6th of the way through?
[20:35] so like two hours or something
[20:35] it was half an hour with -1
[20:35] and still 50% better than gzip ratio
[20:35] err 33% better
[21:05] *** ese has joined #arpnetworks
[21:20] \m/ my machine from last year's lan still boots
[21:26] good grief! I have never had so much backlog to catch up upon in this channel
[21:29] you don't type /clear enough
[21:35] I could just ignore it... and I am. It overran my buffer :p
[21:35] (And I never type /clear when I'm afk :P)
[21:35] brycec: sorry
[21:36] me and mercutio worked out we live an hour from each other
[21:36] haha no worries mate :)
[21:41] up_the_irons: Have you considered licensing your automation and various tools for Virtual Machine provisioning to your competition?
[21:41] gizmoguy:
[21:41] i'm still trying to get libtrace to work :/
[21:41] it doesn't like my lack of encapsulation
[21:42] i found i had an old dump of my own connection around
[21:42] being the geek i am :/
[21:42] reading from file ben, link-type NULL (BSD loopback)
[21:42] but it's in that format
[21:43] ben: tcpdump capture file (little-endian) - version 2.4 (No link-layer encapsulation, capture length 128)
[21:44] i suppose i should at least be happy there's source so i can try and figure out how it works :)
[21:44] mnathani: why should his competition have them?
[21:44] also it probably would mean he'd end up with heaps of questions
[21:44] and even less spare time
[21:47] mercutio: report it to me tomorrow and I'll raise a bug for you :)
[21:47] ahh real
[21:47] i'm busy drinking and configuring linux
[21:47] those don't go together do they
[21:48] they sure do
[21:48] i thought i should be able to figure out how to do it :)
[21:48] they go very well together
[21:48] i even managed to partition a disk correctly!
[21:48] well, ask me tomorrow if i configured it correctly
[21:48] it's the only way to fly
[21:48] cos it looks like openbsd loopback, but with host byte order rather than network byte order from what i gather
[21:48] mercutio: Just a thought for alternate means of revenue generation, since the code has already been written and tested
[21:49] mnathani: have you seen the solusvm forum?
[21:49] gizmoguy: i forgot to ask, are you on untappd?
[21:49] gizmoguy: you remembered to use gdt?
[21:49] m0unds: oh man, I should join
[21:49] everyone else I know does it
[21:50] mercutio: gpt?
[21:51] partition table
[21:51] guid partition table it seems
[21:51] use gdisk to do it
[21:51] doesn't have the silly 4 partition limit
[21:51] or extended partition hack
[21:52] I use parted
[21:52] dunno if that does it or not
[21:52] ahh parted does support it
[21:52] yeah it does
[21:52] if you have a modern computer you can use uefi too
[22:02] *** ese has quit IRC (Ping timeout: 245 seconds)
[22:04] oh it doesn't like infiniband either :)
[22:05] unless i broke it
[22:06] yeah doesn't like ib
[22:06] I'm not sure we've ever tested with IB :)
[22:06] yeah
[22:07] it's nice how there's so many different formats :/
[22:07] like it probably works on freebsd on ethernet
[22:08] it should work on freebsd
[22:08] we test on lots of different platforms
[22:08] osx support too :P
[22:08] it's ppp on freebsd though
[22:09] yeah it works on ethernet
[22:09] although dunno why EHLO is Invalid_SMTP
[22:11] it actually seems to work with most stuff
[22:12] i want to make graphs i think ;)
[22:29] *** ThalinVien has quit IRC (*.net *.split)
[22:29] *** tabthorpe has quit IRC (*.net *.split)
[22:51] *** r0ni has quit IRC (Quit: Textual IRC Client: www.textualapp.com)
[23:37] *** ese has joined #arpnetworks
[23:43] *** ese has quit IRC (Ping timeout: 272 seconds)
[23:55] *** ese has joined #arpnetworks
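For the BSD-loopback byte-order puzzle above: DLT_NULL frames start with a 4-byte address-family word written in the host byte order of the machine that captured the trace, so a reader has to be prepared to swap it. A sketch using plain libpcap rather than libtrace (this is not the libtrace API, just an illustration of the format):

    /* nullhdr.c - read a DLT_NULL (BSD loopback) capture and decode the 4-byte
     * AF_* family word, which is host byte order for whoever wrote the file. */
    #include <stdio.h>
    #include <string.h>
    #include <pcap/pcap.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s file.pcap\n", argv[0]); return 1; }
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *p = pcap_open_offline(argv[1], errbuf);
        if (!p) { fprintf(stderr, "%s\n", errbuf); return 1; }

        if (pcap_datalink(p) != DLT_NULL) {
            fprintf(stderr, "not a DLT_NULL capture\n");
            pcap_close(p);
            return 1;
        }

        struct pcap_pkthdr *hdr;
        const unsigned char *data;
        while (pcap_next_ex(p, &hdr, &data) == 1) {
            if (hdr->caplen < 4)
                continue;
            unsigned int family;
            memcpy(&family, data, 4);
            /* a trace written on a machine with the other endianness makes the
             * value look huge; byte-swap it rather than giving up */
            if (family > 0xffff)
                family = ((family & 0xff) << 24) | ((family & 0xff00) << 8) |
                         ((family >> 8) & 0xff00) | (family >> 24);
            printf("family %u (2 = AF_INET on most systems)\n", family);
        }
        pcap_close(p);
        return 0;
    }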