In fact there are many comparisons out there, mercutio
I didn't need something as complex and all-managing as Chef or Puppet. I just needed a convenient way to do shit on multiple hosts (eg: apt-get upgrades) and ansible fulfills that role (with parallelisation, etc) and it was very simple to set up
get on up and DANCE
shutup BryceBot
NOYOUSHUTUP brycec
It's true that Ansible has a feature similar to Chef/Puppet recipes, the Ansible Playbook, but I don't have deployments large enough for that to be useful. (I'm 99% maintaining servers, ones and twos of various tasks - not exactly rolling out new deployments)
brycec: maybe if you're looking for comparisons for that
brycec: i just thought it'd be nifty if some blog i could read every now and then informed me of such things :)
i'm a bit apprehensive about automation as it makes simple things complicated, and leaves complicated things complicated.
i can understand wanting to do it if needing to for a number of servers though.
a friend was mentioning ubuntu's autoupdate thing before, and how he wanted to use it because he never gets around to doing updates.
i have no idea how safe/reliable it is though, as i've never touched it myself.
We use puppet quite heavily at work. We've got something approaching 100 machines (both virtual and physical) under puppet control.
For us it's the ability to, for example, go from the requirement to build a new customer-facing DNS resolver to it being live and hooked in to BGP in about 10 minutes.
What we need to do next is make better use of packages, i.e. building our own for deployment of software onto machines in a repeatable way. At the moment we have puppet checking out some internal git and svn repos and building code.
Another thing we need is "lifecycle management" to keep machines up to date with packages. Just putting "apt-get update; apt-get upgrade" in cron is too blunt a tool, but 100 machines is too many for our team to ssh in to each one and do things manually every time there is an openssl update.
get on up and DANCE
plett: you can put apt-get update; apt-get -d upgrade in
get on up and DANCE
where -d is download only
then you can do apt-get install split across all the machines.
i generally don't think 100 machines is too many to do manually, because you still want to test it manually anyway.
also the ssl thing was more complicated because you have to edit apache config to disable ssl 3
but yeah, i can understand wanting to automate package updates on 100 servers.
virtualisation tends to encourage more hosts.
mercutio: What I think we need is the ability to group servers, and update packages on one test member of that group and confirm that it all still works, then approve the same updates to other members of the same group
sounds good
I think RedHat's SpaceWalk project does that, but I've not had the time to look at it
basically a staging server.
i think the whole area could be improved radically
it seems things went backwards to a degree
like it used to be that people would often have /usr read-only nfs mounted.
and shared between machines, then /usr/local with local changes.
i think in some ways if the "OS" can't even modify its basic self
working stuff, and it's all read only and externally controlled, that could be good
yeah
Yeah.
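As a rough sketch, the staged update flow plett and mercutio describe above could look something like this as ad-hoc ansible runs (the "mailservers" group and the hostnames are made up, and it assumes root ssh access and an inventory already in place):

    # fetch the updates everywhere in parallel, but don't install yet (-d = download only)
    ansible mailservers -u root -f 20 -m shell -a 'apt-get update && apt-get -y -d upgrade'
    # install on one test member of the group and confirm it still works
    ansible mail-test1 -u root -m shell -a 'apt-get -y upgrade'
    # once happy, approve the same updates for the rest of the group
    ansible mailservers -u root -m shell -a 'apt-get -y upgrade'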
With all our servers, we might have 3 or 4 live members (say spamassassin boxes for email filtering) and one test box which we try out config changes on. We would just use that to test package updates on
rumpkernels that are readonly :)
mercutio: It sounds like you are describing Docker
ahh i haven't seen that.
see i need a blog to inform me of these things :)
Docker is interesting. Fundamentally, it's a chroot on steroids for Linux, kind of like FreeBSD jails
interesting.
i played with chroots a while ago
it takes a bit of maintenance
As well as a chroot for file isolation, it can have its own IP addresses distinct from the host it is on, and can be limited in disk/network/memory etc via cgroups and all that stuff
i used to use linux vserver
But that is just how it works (and most of that comes directly from Linux Containers); what Docker has built on top of that is layers of abstraction
yeah i think linux itself got updated quite a lot
i haven't actually played with cgroups yet
but i think there's some automatic stuff
disk throttling is complicated.
Say you wanted to build a spamassassin server as a Docker app. You would take a base image of your chosen OS which had already been packaged as a docker app, spin up a VM, install whatever packages and generic config you need to make spamassassin work, then snapshot it and distribute it
i haven't tried dedicated spamassassin vm's
i've always mixed postfix and spamassassin.
The clever bit is that your additional bits are stuck on top of the base image, kind of like unionfs
the complicated bit was pushing user lists to other boxes so it can drop mail for unknown users.
So when the base OS image gets an update, you pull down the new image and your spamassassin dock automatically has the new files in it
the other neat thing about docker is it's all versioned - if an update breaks something you can just pull the previous version to roll back
lt: now that is cool.
Yes. That too
unless spamassassin updates its database format or such
the underlying chroot/container bit isn't that exciting, but the toolkit they've built on top of it is quite neat
i so can't keep up with all of these things
maybe i should make a note of stuff to read up on :)
Same here. Docker is still on my list of things that look cool and which I should play with at some point
docker, puppet, ansible
like 15 years ago (wow, that long) i used to read freshmeat and it'd tell me about all kinds of new software.
there was some other linux blog too that went away
I think Docker might be good enough for us to be able to kill off a load of single-purpose VMs and turn them into bundled apps all running on one machine
Fewer things to administrate, etc
yeah
i actually started running more things on the same vm's again
cos it was getting hellish, and there's better security nowadays generally
virus filtering is probably on the side of "less sure about" though.
err virus/spam
as there's so many different programs that can all look at mail
The Docker guys still don't recommend using it where you need security isolation between docks, do they?
i use spamassassin, dkim, razor, dcc, etc.
some people say the same about virtualisation
because cpu bugs could allow breaking out or something
well and there is a much larger attack surface, in general. interfaces/code/whathaveyou
well yeah*
and there.
but I'm one of those nutty openbsd users who doesn't admin scale-sized stuff, or much at all in production.
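A loose sketch of the build-and-rollback flow plett and lt describe, using the docker command line (the image and container names are invented, and a Dockerfile would be the more usual way to script the build step):

    # build: put spamassassin on top of a stock base image, then snapshot the result as a new layer
    docker run --name sa-build debian:wheezy sh -c 'apt-get update && apt-get -y install spamassassin'
    docker commit sa-build example/spamassassin:1.0
    # deploy: the container shares the base image, plus just the spamassassin layer on top
    docker run -d --name sa1 example/spamassassin:1.0 spamd
    # rollback: if a newer tag (say :1.1) breaks something, remove it and start the old tag again
    docker rm -f sa1
    docker run -d --name sa1 example/spamassassin:1.0 spamd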
i like openbsd
docker/lxc seems to be performant and popular
but after I read a thesis and ran a NetBSD kernel in my web browser, locally... I kinda got hooked on this anykernel idea.
anykernel?
rumpkernel.org
cool
added to my to-read list :) thanks.
the :login; paper is a pretty neat read
I would definitely say that docker isn't a security thing - it seems to be very much driven by people who want ease of deployment for webapps, with security as a relatively low priority.
that said, it can't be any worse security than running it all on a single host
Yeah
did you hear of flocker LT?
oh he left
well plett was talking about it too
mercutio: flocker? No, I've not heard of that
Their website seems short on details though
It's a haproxy-equivalent frontend to direct network connections to the right docker instance? And ZFS (on Linux?) for the image storage?
it seems they want to do database and stuff migration etc
i found it off zfs yes :)
https://clusterhq.com/
i was trying to find out about the new zfs on linux features first.
there have been a few recent zfs changes, with things like faster zfs sends etc.
and bookmarks, where you don't have to keep old snapshots around and can do your snapshotting on a storage/backup server
snapshotting tends to lead to fragmentation, but now that ssds are getting so much cheaper/more common etc you can have a hard-disk system with snapshots.
apparently the word around the net is that facebook is down
Loads for me over curl, ipv6 even
heh
and in my browser
you have no account?
up_the_irons: you around?
my browser wasn't convenient, mercutio
weird
loads for me in curl
oh it's back now
(I have an account, it loads fine)
facebook is down?
it seems to be back again
to a lot of people
facebook is the net :)
facebook is using spdy apparently
"OH GOD THE INTERNET IS DOWN!!!"
it still goes slow for me though
oh up_the_irons are you using spdy yet?
now that you force https you may as well spdy
http://spdycheck.org/
http://spdycheck.org/#facebook.com
mercutio: not using spdy
yeah spdycheck suggests not
There's much more to spdy than just being https though, it's all about ordering the load of resources and such
brycec: yeh but it also goes over https
so if using https already it seems a logical step
I know https is a requirement :p
i mean if you need to get a cert and all it's a lot more work
i've been reading about mod_pagespeed etc
apparently it can do mobile versions of sites automatically too
I looked into this before... basically implementing SPDY without actually optimising the site is completely pointless
as well as doing a whole lot of google optimisations
oh did you look into mod_pagespeed?
https://developers.google.com/speed/pagespeed/module
I might have - it was a couple years ago
apparently it's available as nginx and apache modules
http://nginx.org/en/docs/http/ngx_http_spdy_module.html
it can do things like use webp versions of images
i don't have a proper web site or ssl cert
i just have directories and stuff with files for linking etc.
spdy enabled! that was easy...
free ssl certs ftw :) (startcom being my preference)
i think chrome has changed something, because web sites in general all seem to load pretty quick now
i used to find https etc sites were slow
can you get a free wildcard
i'm using ip's atm hah
but i don't have to
and subdomains.
free wildcard? I'm not aware of any, but I've heard of some dirt-cheap ones ($7 USD)
$7US/year? i suppose i could handle that.
to play with spdy.
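Going back to the zfs bookmarks mentioned above, the flow is roughly this (pool, dataset, and host names are made up; bookmarks need a recent zfs on linux, and this assumes ssh access to the backup box):

    # snapshot, send it to the backup box, then keep only a lightweight bookmark locally
    zfs snapshot tank/data@monday
    zfs send tank/data@monday | ssh backup zfs recv store/data
    zfs bookmark tank/data@monday tank/data#monday
    zfs destroy tank/data@monday
    # later, an incremental send can start from the bookmark instead of an old snapshot
    zfs snapshot tank/data@tuesday
    zfs send -i tank/data#monday tank/data@tuesday | ssh backup zfs recv store/data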
(I'd have to poke someone to get the name... something like dirtycheapssl or the like)
That's what she said!!
BryceBot: no
Oh, okay... I'm sorry. '(I'd have to poke someone to get the name... something like dirtycheapssl or the like)'
haha
lowendtalk discussion
this should tell me
$7 wildcard per year is not happening lol
oh, old discussion
i hate ssl tax
well, cert tax
i'm seeing $45 yearly for an AlphaSSL wildcard
too much
i might just load a cert into my browser/hosts.
i suppose i could get lots of non-wildcard
i wish spdy could work without https
hmmm... is there a self-serve for PTR records?
yeah in the control panel
had a water line break above my pantry today. good stuff.
ouch
mercutio: so it is! thank you
yeah, can't get a plumber out til thursday
ouch
so you turned the water off?
yep
damn that sucks
is it a holiday there or something
it's a slow-ish leak
nah, just busy
maybe try a diff one?
this was the shortest
called 6 different ones
haha
oh :(
need parallel plumbers!
And parallel plumbing!
my plumbing isn't webscale :(
heh
plumbing problems are pretty common really
EPIPE
haha
i didn't notice the leak til i opened the pantry to feed my cats and their food had water in it
they were sulking til i was able to get the water shut off and poke holes in the ceiling, so i could run to the store and get more food
the real problem there is "cats" :)
why's that a problem?
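On the earlier PTR record question: however the control panel sets it, the result can be checked from a shell (the address below is just a documentation example):

    # look up the PTR (reverse DNS) record for an address
    dig +short -x 192.0.2.10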