[01:15] *** HighJinx has joined #arpnetworks
[01:16] *** GTAXL has quit IRC (Ping timeout: 264 seconds)
[02:18] *** tigerpaw has quit IRC (Quit: purrr)
[03:34] *** toddf has quit IRC (Ping timeout: 255 seconds)
[03:40] *** DDevine has joined #arpnetworks
[04:25] *** hien has joined #arpnetworks
[04:36] *** hien has quit IRC (Quit: leaving)
[06:06] *** toddf has joined #arpnetworks
[06:06] *** ChanServ sets mode: +o toddf
[08:42] *** koan_ is now known as koan
[08:54] *** DDevine has quit IRC (Ping timeout: 264 seconds)
[08:57] *** Olipro has quit IRC (Remote host closed the connection)
[09:00] *** Guest32912 has joined #arpnetworks
[09:00] *** Guest32912 has quit IRC (Client Quit)
[09:01] *** Olipro_ has joined #arpnetworks
[09:03] *** Olipro_ is now known as Olipro
[13:36] *** tigerpaw has joined #arpnetworks
[13:52] *** tigerpaw- has joined #arpnetworks
[13:54] *** tigerpaw has quit IRC (Ping timeout: 276 seconds)
[14:23] *** nomadlogic has joined #arpnetworks
[16:32] *** tigerpaw- has quit IRC (Quit: rawr?)
[16:33] *** tigerpaw has joined #arpnetworks
[16:35] *** nomadlogic has quit IRC (Ping timeout: 264 seconds)
[16:55] *** nomadlogic has joined #arpnetworks
[19:39] *** DDevine has joined #arpnetworks
[20:54] *** nomadlogic has quit IRC (Ping timeout: 264 seconds)
[21:20] *** ballen has joined #arpnetworks
[21:20] *** ChanServ sets mode: +o ballen
[21:34] *** DDevine has quit IRC (Remote host closed the connection)
[22:05] *** DDevine has joined #arpnetworks
[22:05] http://www.xtranormal.com/watch/7023615/episode-2-all-the-cool-kids-use-ruby?page=2
[22:06] heh.
[22:07] heh
[22:07] it's true
[22:08] Thought it would be appropriate here.
[22:08] :p
[22:12] oi up_the_irons, I'll get the IP addresses sorted out within a few hours.
[22:12] Anyone know of a distributed filesystem with ZFS-like compression?
[22:13] why not ZFS?
[22:14] RandalSchwartz: I need a filesystem that can span multiple servers
[22:15] e.g. Lustre, GPFS, etc.
[22:15] zfs can export NFS
[22:15] well aware
[22:15] and so
[22:16] NFS is actually a somewhat slow protocol, and until 4.1 becomes prevalent there's no way to solve the single-namespace issue or load balancing between storage servers
[22:16] and so
[22:16] I'm looking for something that'll take me to 500TB or so
[22:16] you need to specify why you are disqualifying it
[22:16] no single namespace across multiple machines
[22:16] no automatic load balancing between servers
[22:17] a chatty protocol that wastes bandwidth
[22:17] so you're blindly prejudiced. ok. :)
[22:17] prove to me that it's not, then :-)
[22:17] just as long as we know.
[22:18] I use NFS extensively right now for 200 or so TB of space
[22:18] my main goal is to solve the single-silo effect of having multiple NFS servers
[22:19] * DDevine hates that silo effect.
[22:19] where inevitably one server will be filled more than another and require massive rebalancing
[22:19] and I don't have the luxury of making the application layer take care of load balancing
[22:21] Also RandalSchwartz, I'm not blind in any way on this matter, unless you feel like backing up that statement
[22:28] ballen: I don't know much about this stuff, but why can't you just use 4.1?
[22:28] lack of support
[22:29] mainly on the Solaris side, which is odd since it's their protocol
[22:30] I'm trying to find an enterprise-type solution, similar to Isilon or GPFS, that allows me to use compression, as my data gets a 2-3x compression ratio at gzip-1.
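For reference, this is roughly what the compression ballen describes looks like on a single ZFS host; the pool and dataset names here are hypothetical placeholders:

    # Hypothetical pool/dataset names.
    zfs set compression=gzip-1 tank/genomes   # lightest gzip level ZFS offers
    zfs get compressratio tank/genomes        # reports the achieved ratio on stored data
    zfs set sharenfs=on tank/genomes          # the per-dataset NFS export RandalSchwartz alludes to

The catch, as the conversation makes clear, is that these properties are per-host: nothing here spans servers or balances load between them.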
Also, I have a few SuperMicro 36-drive servers that are purpose-built for ZFS RAIDZ
[22:31] it of course doesn't have to be commercial, OSS is fine as well
[22:31] just a couple of requirements that no one seems to have figured out
[22:31] checking out MooseFS: http://www.moosefs.org/
[22:31] Yeah.
[22:31] which may run on top of ZFS
[22:32] I've also tried GlusterFS, which seems to be a bit more popular, but it fails to run on Sol11
[22:32] What sort of data are you working with?
[22:32] genomic
[22:32] OCFS?
[22:32] looked at its docs
[22:32] it's a Linux-only FS I believe
[22:33] does anyone have enough experience with Btrfs to say that it's production-ready?
[22:33] ballen: Surely NFS would be quick enough for that purpose... You don't really need super quick response times, right?
[22:33] ballen: I don't have enough experience with BTRFS yet, but I haven't been bitten.
[22:34] latency isn't a huge issue, no
[22:34] I think SLES11 had a "technology preview" of BTRFS.
[22:34] But that would be a fairly old version now.
[22:34] but then I run into: if I have 10 NFS servers with 42TB each, that's 10 silos of 42TB
[22:34] pNFS is promising
[22:34] Yeah that sucks.
[22:34] to solve that
[22:35] there was some work in OpenSolaris with it
[22:35] but I can't get anything out of Oracle as to when they'll implement it
[22:35] Is there a real need for ZFS rather than just EXT?
[22:35] So let me describe how this hardware is set up
[22:36] it's 36 drives in a single 4U
[22:36] 2 for the OS in a mirror
[22:36] there are 5 SAS2 controllers
[22:36] each with 2 SAS iPass connectors (i.e. 4 drives each)
[22:36] each controller gets 8 drives, 4 on the last
[22:37] RAIDZ2 is then set up in 8-drive groups
[22:37] across the controllers
[22:37] so that if a controller were lost, the filesystem would keep going
[22:38] so if I moved to EXT3 or a classic FS, I'd have that to contend with
[22:38] note that the hardware design guarantees each drive full SAS2 bandwidth all the way through to the PCIe bus
[22:38] probably overkill
[22:38] but inexpensive
[22:39] Yeah that is a little tricky... I think LVM could manage that though.
[22:39] possibly
[22:39] The performance hit from LVM is fairly minimal.
[22:39] plus I don't want to give up on-the-fly compression
[22:40] otherwise I might as well just buy double the hardware
[22:40] Yeah I've never done on-the-fly compression so I don't know what the go with it is.
[22:40] double the hardware means double the problems and upkeep.
[22:40] well since my data is primarily very large text files, they compress quite well
[22:41] yep, plus double the space and power
[22:41] Yeah excellent compression
[22:41] basically 2.5x avg
[22:41] some datasets are a bit better, at 3.2x
[22:42] so I've been researching, looking at how to leverage ZFS as a storage backend
[22:42] for a GlusterFS-like setup
[22:43] Ideally, Ceph is the option, but it's much, much too immature
[22:43] Yeah on Linux you would have to use BTRFS or Reiser4.
[22:43] Right
[22:44] Reiser4 is mature, though; not sure about support these days.
[22:44] I haven't used BTRFS + compression enough to know how it affects CPU load
[22:44] or rather, how it affects throughput due to extra CPU load
[22:45] Nor how good BTRFS' raid5/6 implementation is.
[22:46] Yeah but you can just do the RAID stuff outside of the FS.
[22:47] that's true
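A minimal sketch of the pool layout ballen describes, with hypothetical Solaris device names (cXtYd0, where X is the controller number): each RAIDZ2 vdev takes two disks from each of four 8-drive controllers, so losing any one controller drops at most two disks per vdev, which RAIDZ2 tolerates.

    # Hypothetical device names; c1..c4 are the four 8-drive controllers.
    # Each raidz2 vdev spans all four controllers, two disks apiece.
    zpool create tank \
      raidz2 c1t0d0 c1t1d0 c2t0d0 c2t1d0 c3t0d0 c3t1d0 c4t0d0 c4t1d0 \
      raidz2 c1t2d0 c1t3d0 c2t2d0 c2t3d0 c3t2d0 c3t3d0 c4t2d0 c4t3d0 \
      raidz2 c1t4d0 c1t5d0 c2t4d0 c2t5d0 c3t4d0 c3t5d0 c4t4d0 c4t5d0 \
      raidz2 c1t6d0 c1t7d0 c2t6d0 c2t7d0 c3t6d0 c3t7d0 c4t6d0 c4t7d0

That accounts for 32 data disks; the fifth controller's 4 drives would cover the OS mirror and spares, though the exact assignment above is an assumption, not ballen's stated configuration.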
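On the BTRFS side of the discussion, compression is a mount option rather than a per-dataset property. A sketch with a hypothetical device, leaving RAID to the layers underneath as suggested above; it says nothing about the CPU-load question, which would need benchmarking:

    # Hypothetical device: an md or LVM volume handles redundancy,
    # btrfs handles only on-the-fly compression.
    mkfs.btrfs /dev/md0
    mount -o compress=zlib /dev/md0 /mnt/data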
[22:54] I have the feeling my ISP doesn't respect DNS TTLs.
[22:54] Which is odd, because overall they have a great setup.
[22:57] that would be odd
[22:59] Eh, maybe it was just a temporary thing. Today they seem to have honored the TTLs perfectly.
[23:41] *** nomadlogic has joined #arpnetworks
[23:45] *** nerdd_ has quit IRC (Ping timeout: 240 seconds)
[23:46] *** nerdd has joined #arpnetworks
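As a footnote to the TTL question above, one quick way to check whether a resolver honors TTLs is to query it twice and watch the TTL field (the second column of the answer) count down; the resolver address and domain below are placeholders:

    # 203.0.113.53 stands in for the ISP's resolver; example.com is a placeholder.
    dig +noall +answer example.com @203.0.113.53
    sleep 30
    dig +noall +answer example.com @203.0.113.53
    # A resolver that honors TTLs returns a smaller TTL on the second
    # query and only re-fetches from upstream once it reaches zero.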