[00:44] In fact there are many comparisons out there, mercutio [00:46] I didn't need something as complex and all-managing as Chef or Puppet. I just needed a convenient way to do shit on multiple hosts (eg: apt-get upgrades) and ansible fulfills that role (with parallelisation, etc) and it was very simple to set up [00:46] get on up [00:46] and DANCE [00:46] * BryceBot dances :D-< [00:46] * BryceBot dances :D|-< [00:46] * BryceBot dances :D/-< [00:46] shutup BryceBot [00:46] NOYOUSHUTUP brycec [00:47] It's true that Ansible has a feature similar to Chef/Puppet recipes, the Ansible Playbook, but I don't have deployments large enough for that to be useful. (I'm 99% maintaining servers, ones and twos of various tasks - not exactly rolling out new deployments) [01:17] brycec: maybe if you're looking for comparisons for that [01:18] brycec: i just thought it'd be nifty if some blog i could read every now and then informed me of such things :) [01:18] i'm a bit apprehensive about automation as it makes simple things complicated, and leaves complicated things complicated. [01:19] i can understand wanting to do it if needing to for a number of servers though. [01:20] a friend was mentioning ubuntu's autoupdate thing before, and how he wanted to use it because he never gets around to doing updates. [01:21] i have no idea how safe/reliable it is though, as i've never touched it myself. [01:58] *** JC_Denton has quit IRC (Ping timeout: 240 seconds) [01:59] *** JC_Denton has joined #arpnetworks [02:00] *** JC_Denton is now known as Guest61953 [02:01] *** LT has joined #arpnetworks [02:08] We use puppet quite heavily at work. We've got something approaching 100 machines (both virtual and physical) under puppet control. For us it's the ability to, for example, go from the requirement to build a new customer-facing DNS resolver to it being live and hooked in to BGP in about 10 minutes' time. [02:10] What we need to do next is make better use of packages, i.e. 
building our own for deployment of software onto machines in a repeatable way. At the moment we have puppet checking out some internal git and svn repos and building code. [02:13] Another thing we need is "lifecycle management" to keep machines up to date with packages. Just putting "apt-get update;apt-get upgrade" in cron is too blunt a tool, but 100 machines is too many for our team to be able to ssh in to each one and do things manually each time there is an openssl update. [02:13] get on up [02:13] and DANCE [02:13] * BryceBot dances :D-< [02:13] * BryceBot dances :D|-< [02:13] * BryceBot dances :D/-< [02:21] plett: you can put apt-get update; apt-get -d upgrade in [02:21] get on up [02:21] and DANCE [02:21] * BryceBot dances :D-< [02:21] * BryceBot dances :D|-< [02:21] * BryceBot dances :D/-< [02:21] where -d is download only [02:21] then you can do apt-get install split across all the machines. [02:22] i generally don't think 100 machines is too many to do manually because you still want to test it manually anyway. [02:23] also the ssl thing was more complicated because you have to edit apache config to disable ssl 3 [02:24] but yeah, i can understand wanting to automate package updates on 100 servers. virtualisation tends to encourage more hosts. [02:31] mercutio: What I think we need is the ability to group servers, and update packages on one test member of that group and confirm that it all still works, then approve the same updates to other members of the same group [02:31] sounds good [02:31] I think RedHat's SpaceWalk project does that, but I've not had the time to look at it [02:31] basically staging server. [02:32] i think the whole area could be improved radically [02:32] it seems things went backwards to a degree [02:32] like it used to be that people would often have /usr read only nfs mounted. [02:32] and shared between machines, then /usr/local with local changes. 
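plett's grouping idea plus mercutio's download-only tip amount to a three-step flow. A minimal sketch, with hypothetical host names and a `run` wrapper that only prints the command it would execute over ssh, so it is safe to trace on any machine:

```shell
# Staged package updates: pre-download everywhere, upgrade one test
# member, then approve the rest. Host names are invented for illustration.
TEST_HOST="sa-test"
LIVE_HOSTS="sa-1 sa-2 sa-3"

# run() just echoes the ssh command instead of executing it
run() { echo "ssh $1 -- $2"; }

# 1. download-only upgrade on every member (-d fetches but installs nothing)
for h in $TEST_HOST $LIVE_HOSTS; do
  run "$h" "apt-get update && apt-get -d -y upgrade"
done

# 2. actually upgrade the test member and confirm it all still works
run "$TEST_HOST" "apt-get -y upgrade"

# 3. once the test member looks good, approve the same upgrade elsewhere
for h in $LIVE_HOSTS; do
  run "$h" "apt-get -y upgrade"
done
```

The one-off fan-out step is also what ansible's ad-hoc mode (with its apt module) handles, which is the convenience brycec describes above.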
[02:33] i think in some ways if the "OS" can't even modify its basic self working stuff and it's all read only and externally controlled that could be good [02:33] yeah [02:33] Yeah. With all our servers, we might have 3 or 4 live members (say spamassassin boxes for email filtering) and one test box which we try out config changes on, we would just use that to test package updates on [02:33] rumpkernels that are readonly :) [02:33] mercutio: It sounds like you are describing Docker [02:34] ahh i haven't seen that. [02:34] see i need a blog to inform me of these things :) [02:34] Docker is interesting. Fundamentally, it's a chroot on steroids for Linux, kind of like FreeBSD jails [02:34] interesting. [02:35] i played with chroots a while ago [02:35] it takes a bit of maintenance [02:36] As well as a chroot for file isolation, it can have its own IP addresses distinct from the host it is on, can be limited in disk/network/memory etc via cgroups and all that stuff [02:36] i used to use linux vserver [02:37] But that is just how it works (and most of that comes directly from Linux Containers), what Docker has built on top of that is layers of abstraction [02:37] yeah i think linux itself got updated quite a lot [02:37] i haven't actually played with cgroups yet [02:37] but i think there's some automatic stuff [02:37] disk throttling is complicated. [02:38] Say you wanted to build a spamassassin server as a Docker app. You would take a base image of your chosen OS which had already been packaged as a docker app, spin up a VM, install whatever packages and generic config you need to make spamassassin work, then snapshot it and distribute it [02:39] i haven't tried dedicated spamassassin vm's [02:39] i've always mixed postfix and spamassassin. [02:39] The clever bit is that your additional bits are stuck on top of the base image, kind of like unionfs [02:40] the complicated bit was pushing user lists to other boxes [02:40] so it can drop mail for unknown users. 
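The base-image-plus-your-additions flow plett describes maps onto a Dockerfile. This is an illustrative sketch only, not from the discussion; the base image, package, and config file names are all assumptions:

```dockerfile
# layer 1: a base OS image somebody has already packaged
FROM debian:stable

# layer 2: your additions, stacked on top unionfs-style
RUN apt-get update && apt-get install -y spamassassin

# layer 3: generic config baked into the image (local.cf is hypothetical)
COPY local.cf /etc/spamassassin/local.cf

# run the spamassassin daemon (flags omitted; see spamd(1))
CMD ["spamd"]
```

Each instruction becomes its own layer, which is what makes the base-image update and rollback behaviour discussed below possible.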
[02:40] So when the base OS image gets an update, you pull down the new image and your spamassassin dock automatically has the new files in it [02:41] the other neat thing about docker is it's all versioned - if an update breaks something you can just pull the previous version to roll back [02:41] lt: now that is cool. [02:41] Yes. That too [02:41] unless spamassassin updates its database format or such [02:42] the underlying chroot/container bit isn't that exciting, but the toolkit they've built on top of it is quite neat [02:43] i so can't keep up with all of these things [02:43] maybe i should make a note of stuff to read up on :) [02:43] Same here. Docker is still on my list of things that look cool and which I should play with at some point [02:43] docker, puppet, ansible [02:44] like 15 years ago (wow that long) i used to read freshmeat [02:44] and it'd tell me about all kinds of new software. [02:44] there was some other linux blog too that went away [02:45] I think Docker might be good enough for us to be able to kill off a load of single purpose VMs and turn them into bundled apps all running on one machine [02:45] Fewer things to administrate, etc [02:45] yeah [02:45] i actually started running more things on the same vm's again [02:45] cos it was getting hell, and there's better security nowadays generally [02:46] virus filtering is probably on the side of "less sure about" though. [02:46] err virus/spam [02:46] as there's so many different programs that can all look at mail [02:46] The Docker guys still don't recommend using it for where you need security isolation between docks, do they? [02:46] i use spamassassin, dkim, razor, dcc, etc. [02:54] some people say the same about virtualisation [02:54] because cpu bugs could allow breaking out or something [02:54] well and there is a much larger attack surface, in general. interfaces/code/whathaveyou [02:54] well yeah* and there. 
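The versioned-rollback point above falls out of image tags. A hypothetical sketch (image, tag, and container names are invented, and this needs a running docker daemon, so it's shown for illustration only):

```shell
docker pull mysite/spamassassin            # an update pulls only the changed layers
docker stop sa && docker rm sa
docker run -d --name sa mysite/spamassassin:1.2   # re-run the previous tag to roll back
```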
[02:55] but I'm one of those nutty openbsd users who doesn't admin scale-sized stuff, or much at all in production. [02:55] i like openbsd [02:55] docker/lxc seems to be performant and popular [02:56] but after I read a thesis and ran a NetBSD kernel in my web browser, locally...I kinda got hooked on this anykernel idea. [02:56] anykernel? [02:57] rumpkernel.org [02:57] cool, added to my to-read list :) [02:57] thanks. [02:57] the :login; paper is a pretty neat read [03:14] I would definitely say that docker isn't a security thing - seems to be very much driven by people who want ease of deployment for webapps with security as a relatively low priority. that said it can't be any worse security than running it all on a single host [03:36] *** staticsafe has quit IRC (Ping timeout: 265 seconds) [03:36] *** xales has quit IRC (Ping timeout: 272 seconds) [03:45] *** meingtsla has quit IRC (Ping timeout: 265 seconds) [03:49] *** meingtsla has joined #arpnetworks [04:02] *** staticsafe has joined #arpnetworks [04:03] *** xales has joined #arpnetworks [06:51] Yeah [09:29] *** vissborg has quit IRC (Max SendQ exceeded) [09:32] *** vissborg has joined #arpnetworks [10:26] *** LT has quit IRC (Quit: Leaving) [11:07] *** forg0tten has quit IRC (Remote host closed the connection) [13:01] *** Guest61953 is now known as JC_Denton [13:04] *** Seju has quit IRC (Ping timeout: 255 seconds) [14:16] did you hear of flocker LT? [14:16] oh he left [14:17] well plett was talking about it too [14:32] mercutio: flocker? No, I've not heard of that [14:34] Their website seems short on details though [14:35] It's a haproxy equivalent frontend to direct network connections to the right docker instance? And ZFS (on Linux?) for the image storage? [15:02] *** Seji has joined #arpnetworks [15:33] it seems they want to do database and stuff migration etc [15:34] i found it off zfs yes :) [15:34] https://clusterhq.com/ [15:35] i was trying to find out about the new zfs on linux features first. 
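One of the newer ZFS-on-Linux features in question is bookmarks: an incremental send can start from a bookmark, so the sender no longer needs to keep the old snapshot around. A hypothetical sketch (pool, dataset, and host names are invented; requires a real zpool, so illustration only):

```shell
zfs snapshot tank/data@monday
zfs send tank/data@monday | ssh backup zfs recv store/data
zfs bookmark tank/data@monday tank/data#monday   # keep only the send point
zfs destroy tank/data@monday                     # old snapshot can go locally
zfs snapshot tank/data@tuesday
zfs send -i tank/data#monday tank/data@tuesday | ssh backup zfs recv store/data
```

The receiving side still holds full snapshots, so snapshot history can live on the storage/backup server rather than the sender.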
[15:35] there have been a few recent zfs changes with things like faster zfs sends etc. [15:36] and bookmarks, where you don't have to keep old snapshots around and can do your snapshotting on a storage/backup server [15:36] *** sga0_ has joined #arpnetworks [15:36] snapshotting tends to lead to fragmentation, but now that ssds are getting so much cheaper/common etc you can have a hard-disk system with snapshots. [15:38] *** sga0_ has quit IRC (Read error: Connection reset by peer) [15:38] *** sga0__ has quit IRC (Ping timeout: 255 seconds) [16:12] *** sga0 has joined #arpnetworks [16:14] apparently the word around the net is that facebook is down [16:16] Loads for me over curl [16:16] ipv6 even [16:16] heh [16:16] and in my browser [16:16] you have no account? [16:16] up_the_irons: you around? [16:16] my browser wasn't convenient, mercutio [16:16] weird, loads for me in curl [16:16] oh it's back now [16:17] (I have an account, it loads fine) [16:17] facebook is down? [16:18] it seems to be back again [16:18] to a lot of people facebook is the net :) [16:18] facebook is using spdy apparently [16:18] "OH GOD THE INTERNET IS DOWN!!!" [16:19] it still goes slow for me though [16:19] oh up_the_irons are you using spdy yet? [16:19] now that you force https [16:19] may as well spdy [16:19] http://spdycheck.org/ [16:19] http://spdycheck.org/#facebook.com [16:19] mercutio: not using spdy [16:20] yeah spdycheck suggests not [16:20] There's much more to spdy than just being https though, it's all about ordering the load of resources and such [16:20] brycec: yeh but it also goes over https [16:20] so if using https already it seems a logical step [16:20] I know https is a requirement :p [16:20] i mean if you need to get a cert and all [16:20] it's a lot more work [16:21] i've been reading about mod_pagespeed etc [16:21] *** Guest8160 has quit IRC (Ping timeout: 244 seconds) [16:21] apparently it can do mobile versions of sites automatically too [16:21] I looked into this before... 
basically implementing SPDY without actually optimising the site is completely pointless [16:21] as well as doing a whole lot of google optimisations [16:21] oh [16:22] did you look into mod_pagespeed? [16:22] https://developers.google.com/speed/pagespeed/module [16:22] I might have - it was a couple years ago [16:22] apparently it's available as nginx and apache modules [16:23] http://nginx.org/en/docs/http/ngx_http_spdy_module.html [16:23] it can do things like use webp versions of images [16:23] i don't have a proper web site or ssl cert [16:23] i just have directories and stuff with files [16:24] for linking etc. [16:24] spdy enabled! that was easy... [16:24] free ssl certs ftw :) [16:24] (startcom being my preference) [16:26] i think chrome has changed something, because web sites in general all seem to load pretty quick now [16:26] i used to find https etc sites were slow [16:27] can you get a free wildcard [16:27] i'm using ip's atm hah [16:27] but i don't have to [16:27] and subdomains. [16:27] free wildcard? I'm not aware of any, but I've heard of some dirt-cheap ones ($7 USD) [16:27] $7US/year? [16:27] i suppose i could handle that. [16:28] to play with spdy. [16:29] (I'd have to poke someone to get the name... something like dirtycheapssl or the like) [16:29] That's what she said!! [16:29] BryceBot: no [16:29] Oh, okay... I'm sorry. '(I'd have to poke someone to get the name... something like dirtycheapssl or the like)' [16:29] haha [16:29] lowendtalk discussion [16:29] this should tell me [16:30] $7 wildcard per year is not happening lol [16:30] oh old discussion [16:30] i hate ssl tax [16:30] well cert tax [16:30] i'm seeing $45 yearly for an AlphaSSL wildcard [16:31] too much [16:31] i might just load a cert into my browser/hosts. 
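For reference, the nginx module linked above ("spdy enabled! that was easy") comes down to one extra word on the listen directive. A minimal sketch, with placeholder certificate paths:

```nginx
server {
    # spdy rides on top of ssl, hence the cert requirement discussed above
    listen 443 ssl spdy;
    ssl_certificate     /etc/ssl/certs/example.pem;
    ssl_certificate_key /etc/ssl/private/example.key;
}
```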
[16:32] i suppose i could get lots of non wildcard [16:33] *** Guest8160 has joined #arpnetworks [16:35] i wish spdy could work without https [16:48] *** Seji has quit IRC (Ping timeout: 240 seconds) [16:50] *** Seji has joined #arpnetworks [17:36] *** derjur has joined #arpnetworks [18:51] hmmm... is there a self-serve for PTR records? [18:55] yeah [18:55] in the control panel [19:20] had a water line break above my pantry today. good stuff. [19:22] ouch [19:24] mercutio: so it is! thank you [19:24] yeah, can't get a plumber out til thursday [19:24] ouch [19:24] so you turned water off? [19:24] yep [19:25] damn that sucks [19:25] is it a holiday there or something [19:25] it's a slow-ish leak [19:25] nah, just busy [19:25] maybe try a diff one? [19:25] this was the shortest [19:25] called 6 different ones [19:25] haha [19:29] oh :( [19:37] need parallel plumbers! [19:40] And parallel plumbing! [19:46] my plumbing isn't webscale :( [19:48] heh [19:48] plumbing problems are pretty common really [19:54] EPIPE [19:54] haha [19:55] i didn't notice the leak til i opened the pantry to feed my cats and their food had water in it [19:56] they were sulking til i was able to get the water shut off and poke holes in the ceiling so i could run to the store and get more food [20:04] the real problem there is "cats" :) [21:11] why's that a problem? [22:40] *** dj_goku has quit IRC (Remote host closed the connection) [22:41] *** dj_goku has joined #arpnetworks [22:41] *** dj_goku has quit IRC (Changing host) [22:41] *** dj_goku has joined #arpnetworks [23:37] *** dj_goku has quit IRC (Ping timeout: 265 seconds) [23:46] *** dj_goku has joined #arpnetworks