[00:19] I think this was the thing where the connection between ARP and his Comcast line was slow... [00:19] over IPv6 [00:20] we determined that it could have been HE [00:21] http://irclogger.arpnetworks.com/irclogger_log/arpnetworks?date=2014-12-11,Thu&sel=34#l30 [00:23] ^ that's the testing we did [00:24] *** kevr_ has quit IRC (Quit: ZNC - http://znc.in) [00:25] *** kevr has joined #arpnetworks [00:38] ah i c [00:39] is that ntt->verizon ipv4 congestion still happening? [00:41] i wonder if he sends a lot of traffic to comcast [00:41] considering they both do a lot of ipv6 [00:45] it may just be a congested link [00:47] which means it may not get fixed until next year [00:47] very little NTT -> Verizon congestion shows up on smokeping now [00:47] cool. [00:47] it was kind of disturbing that happened for so long [00:47] That's what she said!! [00:47] yea [00:48] well not as disturbing as comcast.. [00:48] it was replaced with Amazon EC2 -> Verizon congestion :P [00:48] err cogent [00:48] damnit, i got the two mixed up. [00:48] heh. [00:48] amazon is lame :) [00:48] i used to have terrible speeds here from amazon east coast. [00:48] which seems to be where most stuff is [00:49] looks like the Verizon <-> Comcast congestion is gone too? [00:49] i have no way to test that [00:49] but it'd depend on city too i imagine [00:49] http://unixcube.org/who/acf/tmp/comcast-net.png [00:50] what's that [00:50] uhh [00:50] an ad from Comcast [00:50] afaik it's legit [00:50] probably they want their TWC merger to go through... [00:50] congestion is such a complicated issue [00:51] netflix kind of highlighted it. [00:51] That's what she said!! [00:51] but it's happened many times over the years. [00:52] i kind of like the fact that cogent let it happen in a way. [00:52] cos it brought it to attention more.
[00:52] I think it was also a stupid thing for them to do [00:52] but at&t, verizon, comcast etc are all evil [00:52] also they still don't IPv6 peer with HE [00:53] it's fine as long as your isp isn't using cogent :) [00:53] I wonder how much of that congestion we had been seeing there was from Netflix [00:53] it's complicated. [00:54] but after private peering normal cogent stuff got better [00:54] also I believe NTT was a transit provider for Netflix [00:54] but who's to know how much the traffic hurt their network internally [00:54] Comcast's network? [00:55] i haven't been following US news, what's happening now with regards to government net neutrality? [00:55] nah cogent's network [00:55] they were doing qos [00:55] oh that's right [00:55] I think they were doing that just for peering though [00:55] not because of their backbone [00:56] the real solution is to run links at < 50% utilisation [00:56] and upgrade them when they're nearing 50% [00:56] so that you can handle failures, and spikes. [00:56] That's what she said!! [00:56] right. but cogent was being their usual asshole self and didn't want to negotiate that [00:56] probably not both at once :) [00:56] well it should be funded by both sides equally [00:57] well, I guess a lot of people were in that boat this time [00:57] i think it's reasonable to not carry traffic a long distance. [00:57] the only way the net is going to work well is if large companies can't charge smaller companies to send them data. [00:57] or charge unreasonable fees at least. [00:58] otherwise only large companies can exist, and bandwidth prices go up, and bandwidth is seen as being in short supply and valuable. [00:58] which means conservation of bandwidth and slowing economy. [00:58] that has only happened with eyeball networks so far [00:58] lots of competition for transit still [00:59] well verizon, at&t, comcast etc are all just as bad as each other.
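[editor's note] The "run links at < 50% utilisation, upgrade as they near 50%" rule mentioned above is easy to sketch. A minimal illustration in Python; the link names and traffic figures are invented, not from the conversation:

```python
# Sketch of the "upgrade before 50%" capacity-planning rule: keep every
# link below half its capacity so a single failure or a traffic spike
# can be absorbed without congestion. All numbers are hypothetical.

UPGRADE_THRESHOLD = 0.50  # headroom for failover and bursts

def needs_upgrade(peak_bps: float, capacity_bps: float) -> bool:
    """True when peak utilisation is at or past 50% of link capacity."""
    return peak_bps / capacity_bps >= UPGRADE_THRESHOLD

# made-up links: (observed peak bits/sec, capacity bits/sec)
links = {
    "lax-sjc-10g": (6.2e9, 10e9),  # past 50%: schedule an upgrade
    "lax-dfw-10g": (3.1e9, 10e9),  # comfortable headroom
}

for name, (peak, cap) in links.items():
    print(name, "upgrade" if needs_upgrade(peak, cap) else "ok")
```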
[00:59] yea, they're the bad ones [00:59] I don't know if it's completely unreasonable to request payment for traffic though [00:59] for carrying traffic sure [00:59] it follows the "sender pays" model we've always had [00:59] like if a user is in new york, and a sender is in los angeles, it's reasonable to expect them to carry the traffic to new york. [01:00] right [01:00] or to pay someone else to carry it to new york. [01:01] but if a user is in east/south side of canada, it's not unreasonable to want to dump it in new york too [01:01] and expect the isp to pick it up from there. [01:01] but that's where it gets tricky. [01:01] how many places should you offload data. [01:02] like amazon don't pay to send data to new zealand, they offload it in the US... [01:03] interestingly enough, microsoft pay to send data to new zealand, and offload it in NZ.. [01:03] and microsoft have free peering [01:03] so I suppose it depends on the source / destination of the traffic? [01:03] billing would have to change a lot.. [01:04] well basically senders should send data as close to the end user as possible in an ideal world. [01:04] but you can't expect them to be in every little city. [01:04] yea [01:04] but if you're doing more than 500 megabit in a city, it's not unreasonable to offload data there. [01:05] and to make that work well, you need to reduce the cost of mpls links, and remote data centre ports etc. [01:05] and i think that's where things are slowly moving towards. [01:06] so you think things could resolve themselves over time? [01:06] not necessarily. [01:06] if you can charge $10,000 for a gigabit, or charge $0 for a gigabit, what would you rather? [01:06] right. but that's if the telecom monopolies control all the links [01:07] otherwise the pricing should become competitive [01:07] but if you want to send 16 megabit video to 1000 users, it's reasonable to have your own connection into that city. [01:07] so how to ensure people are being "reasonable"?
[01:07] and it's up to you if you want to do it as 10 gigabit, or gigabit with smarts for uplink capacity to the city. [01:07] by encouraging "local peering" [01:08] isp's like verizon/comcast should have to advertise ip's within 100 miles or something. [01:08] and provide free peering. [01:08] ah I see [01:08] distance is just the easiest metric. [01:09] i heard somewhere that cogent were being extra nasty about where they were offloading data [01:09] and i've heard of other companies doing similar. [01:09] "extra nasty"? [01:09] "unreasonable" [01:09] ah, so like they pick up the traffic from a customer in LAX, then offload it onto Verizon in LAX? [01:09] like if you want to dump lots of data, it's good to do it somewhere like dallas, california, new york, illinois, florida etc. [01:09] where there are major interconnects already in place. [01:10] why is that nasty though? [01:10] That's what she said!! [01:10] it's not so reasonable to dump lots of traffic in kansas or such. [01:10] oh yes [01:10] because in smaller areas there's less infrastructure [01:10] or less central points. [01:10] why would they want to do that though? [01:10] I can't imagine it would be cheaper [01:10] kansas is a backup route anyway i think [01:10] because netflix can host their shit anywhere. [01:10] but yeah it's not cheaper. [01:11] it's just more rude. [01:11] so they're paying more to be rude [01:11] well i think it's more complicated than that [01:11] I would imagine.. [01:12] apparently President Obama wants to regulate consumer Internet access as a telecom service [01:12] that's the latest plan here [01:12] some target like 500 megabit is just a good way to make sure there are more interconnects. [01:12] but basically there should be shit loads of interconnects, and transit providers giving cheap transit to those locations. [01:13] what city are you in?
[01:13] Orcutt, CA [01:13] tiny city between LAX and SFO [01:13] Verizon goes via LAX, Comcast goes via SJC [01:13] so santa maria is the nearest location? [01:13] yea [01:14] so santa maria looks like somewhere an interconnect should happen [01:14] how would that happen? [01:14] there is no datacenter or anything... [01:14] hmm checking population [01:14] so yeah it's a bit over 100k [01:14] there is a private DC nearby in San Luis Obispo, CA [01:14] here it happens in telephone exchanges. [01:14] That's what she said!! [01:15] Verizon DSL traffic all gets tunneled to the router in SLO [01:15] but Verizon doesn't peer there [01:15] dsl is legacy [01:15] what happens when you get fibre [01:15] I think there aren't transit providers there actually [01:15] I think Verizon stopped rolling out fiber [01:15] too expensive, they didn't really care anyway [01:16] http://fibrebuild.fibrenet.it/en/fibre-net-a-successful-completion-for-the-restoration-of-santa-maria-assunta-abbey-in-san-gimignano/ [01:16] oh [01:16] wrong fibre [01:16] :P [01:16] verizon should terminate ports closer :/ [01:17] 100,000 population is enough to terminate ports locally [01:17] change is probably very difficult for them [01:17] so yeah it means they need to provision bng's locally. [01:17] yeah. [01:17] they're big, and have old things [01:17] I think they might do static routes all the way to LA [01:17] but yeah ok ideally they should be terminating on bng's closer. [01:17] where is SLO? [01:17] bng? [01:18] pppoe termination [01:18] or ppp [01:18] as well as dhcp etc. [01:18] 40 miles from here [01:18] opposite direction of LA though [01:18] http://www.cisco.com/c/en/us/td/docs/routers/asr9000/software/asr9k_r4-3/bng/configuration/guide/b_bng_cg43xasr9k/b_bng_cg43asr9k_chapter_01.html [01:18] still not that bad. [01:18] ok that doesn't make it easier. [01:18] hmm [01:18] 40 miles i suppose that's not too bad then.
[01:19] ok so say SLO should have interconnects then [01:19] yea [01:19] they even have a DC, etc... [01:19] are there other providers other than verizon? [01:19] AT&T [01:19] so verizon and at&t should peer there too. [01:19] but I believe that datacenter leases lines to LAX and SJC [01:19] and traffic shouldn't have to go via sjc/lax [01:19] no real providers have presences there [01:19] oh also Comcast [01:20] yea. that would be good [01:20] and anyone else that wants to peer [01:20] I'd guess adding a ton of peering points like that would vastly increase their network complexity [01:20] and if netflix wants to send data to users it should host something there [01:20] like their own cache [01:20] possibly adding more points of failure? [01:20] it's not really more complex. [01:21] it's more things to go wrong, but also more redundancy [01:21] I suppose [01:21] certainly Verizon would have to change a lot of their crap [01:21] if users stay connected and fibre is severed they can still talk to each other [01:21] but yeah this is ideal speaking [01:21] rather than money speaking [01:22] chances are that gigabit peering would be fine for the location atm [01:22] but it'd be nicer to provision 10 gigabit switches, and get some provider to pay for them :/ [01:22] err like cisco or juniper or something [01:23] but yeah even with isps like at&t, verizon etc, local traffic would probably be fine on gigabit atm [01:23] until users get faster connections [01:23] https://www.google.com/maps/@36.3807012,-119.2606125,7z/data=!4m2!6m1!1szKX6_3rmHouE.k3rceCRgcUQs [01:23] can you see that? [01:23] but as connections go up to gigabit etc, it'd be nice to be able to shift data quickly between providers.
[01:23] that's my Comcast -> ARP path [01:23] hmm i don't see a path [01:24] you may have to do copy link url [01:24] it seemed to get rid of the end bits hmm [01:24] and changes to maps.google.com [01:24] https://www.google.com/maps/d/edit?mid=zKX6_3rmHouE.k3rceCRgcUQs [01:24] that should work [01:25] you can see San Luis Obispo on there too [01:25] my Verizon traffic goes to there, then straight to LAX [01:25] but the latencies are about equal, because of the DSL FEC stuff [01:26] also Verizon DSL has huge queues [01:26] rtt goes up to 10 seconds when downloading a file [01:27] netflix's idea is that you host your own google caches. [01:27] err own netflix caches. [01:27] using your power etc? [01:27] it's better to have something that covers whoever wants to send you data [01:27] and they pay for transit to the location [01:28] we don't really have buffer bloat issues in this country, it's weird. [01:28] because a lot of people are running short queues, and it can impact international performance. [01:28] I think google has done that with YouTube for a while [01:28] yeah that worked [01:29] is there no cable from santa maria to santa barbara to santa clarita etc. [01:29] I'm sure there is [01:29] maybe it's not Comcast's though [01:29] or their routing just sucks [01:29] there is a cable owned by Level3 that goes that way.. [01:30] so yeah taking a wide stab in the dark, modesto, fresno, santa maria, santa barbara, santa clarita, lancaster etc should all have interconnections. [01:30] i wonder what lancaster's population is like [01:30] 156,633 [01:30] in 2010 [01:30] so yeah, big enough [01:31] that certainly would be good [01:31] new zealand has actually got quite good peering in general. [01:31] and the US has quite bad peering :) [01:31] yea [01:31] although a friend of mine runs a wireless isp and there's no peering in his city.
[01:32] but it's like 50k people or something [01:32] with some surrounding towns [01:32] hmm 74k [01:33] no public data centres etc, closest is telephone exchange [01:33] I think Verizon, AT&T, etc... make peering in telephone exchanges way too expensive [01:33] yeah. [01:33] well those are already there. [01:33] there is a law that says CLECs have to rent you space in their COs [01:33] and that's where government could step in. [01:33] it's expensive here too. [01:33] but your stuff has to be 48v powered [01:34] but people have connections in there anyway. [01:34] that's not a bad thing. [01:34] and it's super expensive, and all your equipment has to be approved by them [01:34] they probably have dslams there anyway? [01:34] they do [01:34] so yeah, as well as a dslam, adding a bng there. [01:35] I got to visit a US CO once [01:35] those telephone switches are huge [01:35] apparently juniper are adding bng functionality to mx80s. [01:36] I wonder how much motivation the CLECs have to do any sort of network improvement [01:36] at least here it can take ages to get stuff permitted to be installed too. [01:36] I'd guess most of their revenue comes from cell backhaul, MPLS, etc... [01:37] *ILEC [01:39] hard to know. [01:39] that reminds me [01:39] when I was having that Verizon packet loss issue [01:39] with the NTT peering [01:40] there was also internal Verizon packet loss [01:40] this country is surprising me with how much network improvements have been happening [01:40] apparently 200 megabit is the new target. [01:40] it wasn't that long ago that 20 megabit was rare. [01:40] Comcast is doing OK here [01:40] also Google is trying to compete [01:40] oh is it you i have been smokepinging for ages? [01:41] idk. my IP probably has changed :P [01:41] heh [01:41] maybe [01:41] but it'll be someone else nearby :/ [01:41] yea [01:41] I actually got Verizon to fix their things [01:41] by calling them [01:41] hmm snloca [01:42] formerly my Verizon IP [01:42] cool.
[01:42] snloca == San Luis Obispo [01:42] yeah. [01:42] so yeah it was stable for me for ages :) [01:42] but i think my uplink was via verizon [01:43] oh it's via level3 now [01:44] so i think level3 are passing to verizon in los angeles [01:45] and that seems long enough distance wise, that if two people had fibre connecting locally would be faster. [01:45] err would be noticeably faster. [01:45] that dsl interleaving sucks :/ [01:46] yea [01:47] I tried to get them to turn it off once.. they didn't [01:47] why wouldn't they? [01:48] "not supported" [01:48] lame [01:48] is it vdsl or adsl? [01:48] adsl [01:49] no excuses then :) [01:49] does vdsl have the same interleaving? [01:49] they don't offer vdsl here [01:49] and is it fixed delay, or depending on sync rate? [01:49] I'm close enough to the CO to be able to get it though [01:50] that sucks. [01:50] i have vdsl. [01:50] I'm not sure about that [01:50] I've never been able to see my modem stats [01:50] but I can't remember it ever changing [01:50] but adsl here shifted from fixed interleaving delay to a kind of adaptive one [01:50] and if you reduced your snr margin you could reduce delay. [01:50] hmm interesting [01:50] oh you can't see your modem stats. [01:50] I think it's fixed here [01:51] i've been snr tweaking adsl for years. [01:51] most people here have a modem with integrated router. I don't [01:51] normal sync rates are around 16 to 18 megabit here [01:51] but with snr tweaking can be 21 to 23. [01:51] how do you accomplish that? [01:51] on broadcom chipsets you just telnet into modem and do "adsl configure --snr 65480 or such" [01:52] if you have snr margin of 12 [01:52] and it'll go to like 3 or 4 or something. [01:52] or low normal numbers to do less than 6db diff [01:52] what? how do you adjust that in software? [01:52] it's like the lower the better [01:52] adjusting tx power?
[01:52] unless you go to like 65000+ :) [01:53] it doesn't adjust power [01:53] like i have adsl as well as vdsl [01:53] http://pastebin.com/i8MqgwDF [01:54] so that training margin of 8 means it's less of a tweak than i used to do [01:54] http://pastebin.com/gzEaEvpp [01:54] and it ends up with sync of 22 megabit'ish. [01:55] oh I see [01:55] with no interleaving. [01:55] adjusting "training margin" enables faster line rates [01:55] what kind of latency do you get? [01:55] yeh. [01:55] about 10 msec on adsl and 5 msec on vdsl [01:55] oh nice [01:56] looks like I get 30 ms with interleaving [01:56] i want lower :) [01:56] I get 8 ms on DOCSIS [01:56] i used to get 20 msec with interleaving. [01:56] docsis2? [01:57] i think a lot of that 8 msec is getting to los angeles or san jose [01:57] no [01:57] it's the jitter that hurts with docsis though [01:57] just to the local node [01:57] that seems high. [01:58] ummm... [01:58] also my vdsl is screwed atm because i have bad wiring [01:58] "Cable Modem Status Offline" [01:58] and it added 8 msec interleaving downstream :( [01:58] after fixing cable. [01:58] looks like DOCSIS 3 [01:58] docsis3 is actually a lot better than docsis2 [01:59] 1. c-67-180-12-1.hsd1.ca.comcast.ne 0.0% 10 9.0 10.5 7.9 22.7 4.8 [01:59] 7.9 ms best [02:00] you have two lines? [02:00] Verizon and Comcast [02:00] my Verizon line is cheap, and bundled with phone [02:00] but it sucks [02:01] so it's like backup? [02:01] yea [02:01] I don't think I've ever seen it completely fail [02:02] I've seen the Comcast one fail several times actually [02:02] heh [02:02] there's only one cable network in new zealand, and it used to fail heaps. [02:02] but have much lower delay than adsl. [02:02] until the interleaving stuff got sorted. [02:03] that's how it is here now [02:03] then it was kind of worse to my mind [02:03] but it's not *too* much failure.. only like four times a year, for a couple of hours :P [02:03] That's what she said!!
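[editor's note] The odd-looking `adsl configure --snr 65480` value discussed above makes more sense if the argument is a 16-bit field: values near 65536 wrap around to small negative numbers in two's complement, which would explain why "low normal numbers" and "65000+" behave so differently. This decoding is purely my assumption, not documented Broadcom behaviour:

```python
# Hypothetical decoding of the argument to Broadcom's
# `adsl configure --snr N` command. Assumption (NOT documented
# behaviour): N is treated as a 16-bit value, so numbers near 65536
# wrap to negative SNR-margin offsets in two's complement, while small
# positive values act as a direct (reduced) margin setting.

def as_signed16(n: int) -> int:
    """Interpret an unsigned 16-bit value as signed two's complement."""
    return n - 0x10000 if n >= 0x8000 else n

print(as_signed16(65480))  # -56: a negative tweak, not a huge positive one
print(as_signed16(100))    # 100: small values pass through unchanged
```

If this reading is right, "go to like 65000+" is just a way of entering a negative offset on a command line that only accepts unsigned numbers.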
[02:03] then more stuff shifted to docsis 3 [02:03] and prices came down [02:03] and i think it's ok now. [02:03] they also had nasty transparent proxies [02:03] that would severely limit download speeds. [02:04] like 100 megabit connections that would do 300k/sec to the UK [02:04] oh super [02:04] yeah i did some tests with iperf for someone. [02:05] err with someone [02:05] and udp was fine. [02:05] Comcast tried injecting TCP FINs for BitTorrent connections once [02:05] people got really pissed [02:05] but tcp/ip was shit. [02:05] but http was even worse. [02:05] i think that was partially queuing issues [02:06] i care more than i should about such things. [02:06] but on connections with short queues, often artificially capping the speed you send at them will speed up downloads. [02:06] especially on rate limited faster connections. [02:06] I got that working on my Verizon DSL line actually [02:06] like if you have 200 megabit connection sold at 30 megabit etc. [02:07] well that was to work around delays? [02:07] work around the huge queues [02:07] yeah. [02:07] here queues are less than 20 msec normally [02:07] like uhh [02:08] axel -a to local server jumps it up from 13.6 msec to 26.7 msec [02:08] not too bad [02:08] and 10 to 28 on adsl.
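[editor's note] The latency jumps in this exchange (13.6 ms to 26.7 ms under an `axel -a` download, or the 10-second RTTs on Verizon DSL earlier) are just queueing delay: queued bytes divided by line rate. A quick sketch with illustrative numbers, none of them measurements from the log:

```python
# Bufferbloat arithmetic: the extra delay a full send queue adds is
# (queued bytes * 8) / link rate. All figures below are made up to
# show the scale, not taken from the conversation.

def queue_delay_ms(queued_bytes: float, link_bps: float) -> float:
    """Extra one-way delay (ms) contributed by a standing queue."""
    return queued_bytes * 8 / link_bps * 1000

# ~50 KB queued on a 30 Mbit/s link adds about 13 ms,
# the scale of the RTT jump seen when saturating a short-queue link.
print(round(queue_delay_ms(50_000, 30e6), 1))

# whereas a 10-second RTT on a ~3 Mbit/s DSL line implies
# megabytes of buffering in the DSLAM/modem.
print(round(queue_delay_ms(3_750_000, 3e6)))
```

This is also why capping your send rate slightly below the line rate speeds things up: the queue never fills, so the delay term stays near zero.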
[02:08] actually adsl was sometimes a bit lower than that too [02:09] yeah it's not too bad except that it can stall transfers and give loss easily [04:25] *** RandalSchwartz has quit IRC (*.net *.split) [04:25] *** jpalmer has quit IRC (*.net *.split) [04:25] *** Hien_ has quit IRC (*.net *.split) [04:25] *** staticsafe has quit IRC (*.net *.split) [04:25] *** up_the_irons has quit IRC (*.net *.split) [04:25] *** NiTeMaRe has quit IRC (*.net *.split) [04:25] *** tellnes has quit IRC (*.net *.split) [04:25] *** meingtsla has quit IRC (*.net *.split) [04:25] *** twobithacker has quit IRC (*.net *.split) [04:25] *** hazardous has quit IRC (*.net *.split) [04:25] *** eryc has quit IRC (*.net *.split) [04:25] *** JC_Denton has quit IRC (*.net *.split) [04:25] *** medum has quit IRC (*.net *.split) [04:25] *** anisfarhana has quit IRC (*.net *.split) [04:25] *** pjs has quit IRC (*.net *.split) [04:25] *** mike-burns has quit IRC (*.net *.split) [04:25] *** dwarren has quit IRC (*.net *.split) [04:25] *** jlgaddis has quit IRC (*.net *.split) [04:26] *** RandalSchwartz has joined #arpnetworks [04:26] *** jpalmer has joined #arpnetworks [04:26] *** Hien_ has joined #arpnetworks [04:26] *** staticsafe has joined #arpnetworks [04:26] *** up_the_irons has joined #arpnetworks [04:26] *** tellnes has joined #arpnetworks [04:26] *** meingtsla has joined #arpnetworks [04:26] *** twobithacker has joined #arpnetworks [04:26] *** sinisalo.freenode.net sets mode: +o up_the_irons [04:27] *** hazardous has joined #arpnetworks [04:27] *** eryc has joined #arpnetworks [04:27] *** JC_Denton has joined #arpnetworks [04:27] *** medum has joined #arpnetworks [04:27] *** anisfarhana has joined #arpnetworks [04:27] *** pjs has joined #arpnetworks [04:29] *** mike-burns has joined #arpnetworks [04:29] *** dwarren has joined #arpnetworks [04:29] *** jlgaddis has joined #arpnetworks [04:29] *** sinisalo.freenode.net sets mode: +o mike-burns [04:29] *** NiTeMaRe has joined #arpnetworks [05:46] 
up_the_irons: https://github.com/iojs/build/issues/1#issuecomment-67944799 <-- re the iojs build stuff we talked about a while back [05:46] dunno if that's enough to justify it for ya :D [05:54] up_the_irons: Did you do something? Because now throughput is just as fast on v6/v4. Or maybe it's a time of day thing? [10:49] *** awyeah has quit IRC (Quit: ZNC - http://znc.in) [10:51] *** awyeah has joined #arpnetworks [11:14] mhoran: i wouldn't be surprised if it's time of day related. [11:15] Slower now, but not as slow as last night / last time we looked into it. [12:00] well it is xmas. [12:00] i imagine less business users using the net at least. [12:01] i dunno if it was business or residential peak [12:07] setup smokeping with curl :) [12:27] Where in the world is it already xmas??? 11:59:49 mercutio | well it is xmas. [12:27] it's "xmas period" [12:27] it's xmas eve here? [12:27] lots of people don't work today here at least. [12:33] it's just another work day for me. [12:33] stonehenge doesn't pay sick days or holidays. :) [12:33] or vacation [13:50] *** awyeah has quit IRC (Quit: ZNC - http://znc.in) [13:52] *** awyeah has joined #arpnetworks [13:59] The life of the self-employed, eh? [14:31] are you working christmas day too? [14:38] http://updog.pw/ [14:41] well, part of it, yes. [14:41] as an atheist, christmas doesn't really mean much [14:42] except "lots of things closed needlessly" :) [14:43] I don't see why, as an atheist, you'd eschew a holiday and a joyful time. Sure, it has "Christ" in the name, but in this day and age that's pretty much the extent of religious value in Xmas [14:43] Why not celebrate yule and the winter harvest instead? :) [14:44] Maybe festivus! [14:44] Oh crap, that's today! [14:45] And you got me nothing? You scoundrel! [14:45] Time for the airing of grievances! [14:45] * brycec names his socks "grievance" [14:46] which you wear on your feets of strength? :) [14:46] lol [14:47] I'm not touching that with an aluminum pole. 
:) [14:49] twss [14:49] Okay! twss! 'I'm not touching that with an aluminum pole. :)' [14:50] heh [14:50] usually it overtags [14:50] I wonder why it undertagged that one [14:51] What's really odd is that training the filter actually lowered the bayesian score. (0.92451488735822 -> 0.9150846827481) [14:53] even weirder [14:54] * RandalSchwartz transfers from the sky club to the departure gate... [16:33] mhoran: i didn't do anything [16:38] qbit: interesting thread [16:39] "also the build bot server identifiers have the provider names in them and they get referenced and seen quite a lot" [16:39] qbit: ^^ what does he mean by that? I mean, who sees it? [16:39] qbit: remind me of the machine size you would need [16:39] nodejs nerds [16:40] dual cpu with 2g ram should work [17:06] i can probably swing that, but after the holidays. if it uses too much proc IO / disk, we'll have to talk about a different solution (maybe cheap dedi) [17:16] lol [17:16] Anyone that looks at the nodejs version, I guess... [17:16] But it's definitely less visible than, say, the hostname of the build machine in the system uname [17:16] ^ which gets printed on boot, on login, etc [17:18] yeah [17:19] (In fact, it doesn't even print for node --version, so I have no idea where that "glory" is supposed to be seen) [17:19] But rest assured up_the_irons, you'll be FAMOUS [17:19] LOL [17:20] maybe the build server status is reported in some IRC / HipChat / Slack channel [17:21] At my last job, it was very, very frequently told to us by [potential] customers that they're going to buy a hundred units a month (etc), and unsurprisingly they never really did. Consequently, I have become extraordinarily jaded when it comes to lines like that. 
[17:23] I wonder if the FreeBSD people would let you donate the use of a dedi as a build machine [17:23] like brycec was saying, those hostnames really do appear places [17:23] I'm under the impression that OpenBSD and FreeBSD are very well-equipped when it comes to the x86 architectures. [17:24] the netbsd automatic build site is hosted from Columbia U [17:24] and has really crappy connectivity [17:25] at least NetBSD cross compiles everything. so it's all done on amd64 [17:25] [17:26] :P [17:26] What you see as a strength, the OpenBSD project sees as a weakness re:cross-compiling. if an architecture cannot self-host, it's unsupported. [17:26] And I tend to agree [17:26] the problem is that everything takes forever to compile on vax [17:27] no argument there :p [17:27] But at least you don't have compiler issues [17:27] http://www.openbsd.org/images/rack2009.jpg [17:27] *all* the architectures [17:27] * up_the_irons pets his vax machine [17:27] (as of 2009) [17:27] (things have changed since then, new ones, some retired, etc) [17:29] Also, that image is missing zaurus [17:29] lol [17:29] oh shit, nevermind it's there [17:29] just tiny [17:30] mtr -4 vax.openbsd.org [17:30] I think I can ping the vax? [17:35] HA! There are two architectures I don't see in the photo (that would have been supported in 2009) - mac68k and m88k [17:37] maybe they use other 68k machines to build them? [17:38] What other 68k machines are in that photo? [17:38] (To be clear, I'm simply disproving "17:26:55 acf_ | *all* the architectures") [17:38] oh wow you're right [17:39] I wonder if they're in some other place [17:39] or if they build them on qemu or something [17:39] oh wow you're right [17:39] twss [17:39] Okay! twss! 'oh wow you're right' [17:39] If they can only build on qemu, then there's no reason to support the arch :p [17:39] I suspect those build machines are simply "elsewhere" [23:55] openbsd supports 68k?
[23:55] my amiga is so slow with unix [23:55] it reminds me of sun3s [23:56] i can't actually remember which i stuck on it, it may have been old openbsd. openbsd doesn't support 68k anymore afaik at least not amiga [23:56] oh mac68k.