mnathani: you may want to try just using an HTTP proxy. if you do multiple installs it'll easily cache the bulk of files, and won't waste bandwidth doing heaps of updates of packages you don't use.
mercutio: would squid be the best option for the HTTP proxy?
I've used apt-cacher-ng in the past. It's targeted at apt (Debian, Ubuntu, etc.) obviously, but it also supports/caches yum/RPMs
mnathani: squid is an easy option
(In my environments, I was supporting both)
i use trafficserver myself, but trafficserver uses raw partitions, whereas squid can just use disk space as it requires.
does apt-cacher-ng run as server code on another box, or is it client software?
i transparently proxy anyone who uses dhcp :/
(acng for that matter also just uses the filesystem like normal)
mnathani: It's a proxy server
apt-cacher-ng may work fine, never tried it.
just set the CentOS boxes to use ip.address:3142 as a proxy
Never tried transparent proxying with it though.
i set explicit proxies in places too.
well it's useful if you download some archive in one place then want it in another place. you can just download it again and it'll go uber fast, rather than having to scp it
does transparent proxying dhcp users require a specific dhcp option to be used?
nope
It wouldn't be transparent if the client had to be configured to do it
(but it does require a specific network setup)
it just means you transparently proxy whatever range of IPs you're giving out over dhcp
what 'server' or 'router' takes care of the transparent proxy, or deciding what IPs to proxy?
The router/gateway would
i have a linux box that does that, and runs the dhcp server and acts as gateway for nat. it's used as a desktop too though
does it have 2 physical NICs?
Most likely by a pf or iptables rule that redirects all connections from a given range of IPs to a port on the proxy, whether that be the same system or a separate server.
if you're running dhcp then you can just run the dhcp server on the linux box easily.
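[A minimal sketch of the "use ip.address:3142 as a proxy" step for yum clients. The cache address 10.0.0.5 and the choice of the global yum.conf are assumptions; 3142 is apt-cacher-ng's default listening port.]

```shell
# Run as root on each CentOS box: point yum at the apt-cacher-ng cache
# for every repo at once. 10.0.0.5 is a hypothetical cache host.
echo 'proxy=http://10.0.0.5:3142' >> /etc/yum.conf
```

[A per-repo proxy= line in each /etc/yum.repos.d/*.repo file also works, but the global setting avoids editing every file on every machine.]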
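[A sketch of the iptables rule just described, assuming a Linux gateway doing NAT, a hypothetical DHCP range of 192.168.1.0/24, and a transparent-capable proxy (e.g. squid with an intercept port) listening locally on port 3129 — all of those values are assumptions, not from the log.]

```shell
# On the gateway: intercept outbound HTTP from DHCP clients and hand it
# to the local proxy port. Range and port are placeholder assumptions.
iptables -t nat -A PREROUTING -s 192.168.1.0/24 -p tcp --dport 80 \
    -j REDIRECT --to-ports 3129
```

[The proxy itself must be configured for transparent/intercept operation, since redirected clients send ordinary origin-form requests rather than proxy-style requests.]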
I use apt-cacher-ng, but only for Debian apt repos; I've never tried it with yum clients
mnathani: nah, only one
well it's got infiniband and ethernet, so yeah, it goes in and out the same interface.
separate VLANs?
nope
infiniband for storage?
well, IP over infiniband, and i use it from my windows box. for remote files, and for proxy :/
i have proxy on ssd
hah, it still doesn't seem to really go faster than 30 megabytes/sec often
lots of small files don't really get sped up that quickly.
but even over wireless it tends to go more than 10 MB/sec
i think it's cos there's a mix of cached/uncached..
i'm updating ubuntu on my chromebook atm. it's bloody slow at installing updates.
mercutio: will the proxy work even if different machines request files from different mirrors? to cache the content, I mean
acng handles that transparently. Traditional proxying will not.
That's what she said!!
BryceBot: no
Oh, okay... I'm sorry. 'acng handles that transparently. Traditional proxying will not.'
mnathani_: nope
mnathani_: not unless you have a rewrite rule
i set everything to the same mirrors
That's a bear to do with CentOS since it defaults to using mirrorlists. Either I edit every repo file on each machine, or I set the proxy=
(And I went with proxy, obviously)
brycec: does that thing you use do rewrite?
13:43:55 brycec | acng handles that transparently. Traditional proxying will not.
ahh
It has a text list of mirrors, and all requests matching that list go into a general "centos" folder (or "debian", etc)
(and yeah, that's part of the default config too)
how does it handle missing files on mirrors etc? that being much less of an issue nowadays than it used to be
No idea
with debian i used to repeat entries for when that happens
That's what she said!!
i haven't been doing so recently though
from what I can tell, the client would have to retry from a different mirror.
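[The mirror-merging behaviour described above is driven by Remap lines in acng's config file (typically /etc/apt-cacher-ng/acng.conf). A rough sketch of the idea; the mirror-list filename and remap name are assumptions to be checked against your install's shipped defaults.]

```
# Any request whose URL matches an entry in the centos_mirrors list is
# stored under one shared "centos" cache directory, whichever mirror
# actually served it.
Remap-centos: file:centos_mirrors /centos
```

[This is why two machines pulling the same RPM from different mirrors still hit the same cached copy.]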
The only thing "rewritten" is where the cached data gets stored and pulled from; it doesn't affect the url fetched (mirror used)
oh
https://twitter.com/hashtag/facebookdown