Who | What | When |
---|---|---|
*** | HighJinx has joined #arpnetworks
GTAXL has quit IRC (Ping timeout: 264 seconds) | [01:15] |
............. (idle for 1h2mn) | ||
tigerpaw has quit IRC (Quit: purrr) | [02:18] | |
................ (idle for 1h16mn) | ||
toddf has quit IRC (Ping timeout: 255 seconds) | [03:34] | |
DDevine has joined #arpnetworks | [03:40] | |
.......... (idle for 45mn) | ||
hien has joined #arpnetworks | [04:25] | |
hien has quit IRC (Quit: leaving) | [04:36] | |
................... (idle for 1h30mn) | ||
toddf has joined #arpnetworks
ChanServ sets mode: +o toddf | [06:06] | |
................................ (idle for 2h36mn) | ||
koan_ is now known as koan | [08:42] | |
DDevine has quit IRC (Ping timeout: 264 seconds)
Olipro has quit IRC (Remote host closed the connection)
Guest32912 has joined #arpnetworks
Guest32912 has quit IRC (Client Quit)
Olipro_ has joined #arpnetworks
Olipro_ is now known as Olipro | [08:54] | |
....................................................... (idle for 4h33mn) | ||
tigerpaw has joined #arpnetworks | [13:36] | |
.... (idle for 16mn) | ||
tigerpaw- has joined #arpnetworks
tigerpaw has quit IRC (Ping timeout: 276 seconds) | [13:52] | |
...... (idle for 29mn) | ||
nomadlogic has joined #arpnetworks | [14:23] | |
.......................... (idle for 2h9mn) | ||
tigerpaw- has quit IRC (Quit: rawr?)
tigerpaw has joined #arpnetworks
nomadlogic has quit IRC (Ping timeout: 264 seconds) | [16:32] | |
..... (idle for 20mn) | ||
nomadlogic has joined #arpnetworks | [16:55] | |
................................. (idle for 2h44mn) | ||
DDevine has joined #arpnetworks | [19:39] | |
................ (idle for 1h15mn) | ||
nomadlogic has quit IRC (Ping timeout: 264 seconds) | [20:54] | |
...... (idle for 26mn) | ||
ballen has joined #arpnetworks
ChanServ sets mode: +o ballen | [21:20] | |
DDevine has quit IRC (Remote host closed the connection) | [21:34] | |
....... (idle for 31mn) | ||
DDevine has joined #arpnetworks | [22:05] | |
DDevine | http://www.xtranormal.com/watch/7023615/episode-2-all-the-cool-kids-use-ruby?page=2 | [22:05] |
mike-burns | heh. | [22:06] |
RandalSchwartz | heh
it's true | [22:07] |
DDevine | Thought it would be appropriate here. :p
oi up_the_irons, I'll get the IP addresses sorted out within a few hours. | [22:08] |
ballen | Anyone know of a distributed filesystem with ZFS-like compression? | [22:12] |
RandalSchwartz | why not ZFS? | [22:13] |
ballen | RandalSchwartz: I need a filesystem that can span multiple servers
i.e. Lustre, GPFS, etc | [22:14] |
RandalSchwartz | zfs can export NFS | [22:15] |
ballen | well aware | [22:15] |
RandalSchwartz | and so | [22:15] |
ballen | NFS is actually somewhat of a slow protocol, plus until 4.1 becomes prevalent there's no way to solve the single-namespace issue or load-balance between storage servers
and so I'm looking for something that'll take me to 500TB or so | [22:16] |
RandalSchwartz | you need to specify why you are disqualifying it | [22:16] |
ballen | No single namespace between multiple machines
no automatic load balancing between servers
a chatty protocol that wastes bandwidth | [22:16] |
RandalSchwartz | so you're blindly prejudiced. ok. :) | [22:17] |
ballen | prove to me that it's not then :-) | [22:17] |
RandalSchwartz | just as long as we know. | [22:17] |
ballen | I use NFS extensively right now for 200 or so TB of space
my main goal is to solve the single silo effect, of having multiple NFS servers | [22:18] |
DDevine | DDevine hates that silo effect. | [22:19] |
ballen | where inevitably one server will be filled more than another and require massive rebalancing
and I don't have the luxury of making the application layer take care of load balancing
Also RandalSchwartz, I'm not blind in any way on this matter, unless you feel like backing up that statement | [22:19] |
DDevine | ballen: I don't know much about this stuff, but why can't you just use 4.1? | [22:28] |
ballen | lack of support
mainly on the Solaris side, which is odd since it's their protocol
I'm trying to find an enterprise-type solution, similar to Isilon or GPFS, that allows me to use compression, as my data gets a 2-3x compression ratio at GZIP-1
Also I have a few SuperMicro 36-drive servers that are purpose-built for ZFS RAIDZ
it of course doesn't have to be commercial, OSS is fine as well
just a couple of requirements that no one seems to have figured out
checking out MooseFS: http://www.moosefs.org/ | [22:28] |
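As a rough illustration of the 2-3x GZIP-1 figure quoted above, here is a minimal sketch compressing synthetic FASTA-like text at gzip level 1. The data is random ACGT, not a real genome, so the exact ratio on actual genomic datasets will differ:

```python
import gzip
import random

# Synthetic FASTA-like text: random ACGT in 70-character lines. A 4-letter
# alphabet carries roughly 2 bits of entropy per byte, so even gzip's
# fastest level gets a healthy ratio; real genomic data varies.
random.seed(0)
seq = "".join(random.choice("ACGT") for _ in range(1_000_000))
raw = "\n".join(seq[i:i + 70] for i in range(0, len(seq), 70)).encode()

compressed = gzip.compress(raw, compresslevel=1)  # GZIP-1, as in the chat
ratio = len(raw) / len(compressed)
print(f"{len(raw)} -> {len(compressed)} bytes ({ratio:.1f}x)")
```

On more repetitive real-world text the ratio only improves, which is the whole argument for transparent compression here.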
DDevine | Yeah. | [22:31] |
ballen | which may run on top of ZFS
I've also tried GlusterFS which seems to be a bit more popular, but fails to run on Sol11 | [22:31] |
DDevine | What sort of data are you working with? | [22:32] |
ballen | genomic | [22:32] |
DDevine | OCFS? | [22:32] |
ballen | looked at its docs
it's a Linux-only FS, I believe
does anyone have enough experience with Btrfs to say that it's production-ready? | [22:32] |
DDevine | ballen: Surely NFS would be quick enough for that purpose... You don't really need super quick response times, right?
ballen: I don't have enough experience with BTRFS yet, but I haven't been bitten. | [22:33] |
ballen | latency isn't a huge issue no | [22:34] |
DDevine | I think SLES11 had a "technology preview" of BTRFS.
But that would be a fairly old version now. | [22:34] |
ballen | but then I fall into: if I have 10 NFS servers with 42TB each, that's 10 silos of 42TB
pNFS is promising | [22:34] |
DDevine | Yeah that sucks. | [22:34] |
ballen | to solve that
there was some work in OpenSolaris with it, but I can't get anything out of Oracle as to when they'll implement it | [22:34] |
DDevine | Is there a real need for ZFS rather than just EXT? | [22:35] |
ballen | So let me describe how this hardware is set up
it's 36 drives in a single 4U
2 for OS in a mirror
there are 5 SAS2 controllers, each with 2 SAS iPass connectors (i.e. 4 drives each)
each controller gets 8 drives, 4 on the last
RAIDZ2 is then set up in 8-drive groups across the controllers, so that if a controller was lost the filesystem would keep going
so if I moved to EXT3 or a classic FS, I'd have that to contend with
note that the hardware design guarantees each drive full SAS2 bandwidth all the way through to the PCIe bus
probably overkill but inexpensive | [22:35] |
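The controller-spanning RAIDZ2 idea can be sketched numerically. This minimal model (controller and drive labels are hypothetical, and the fifth controller's slots for the OS mirror are left out) deals 32 data drives from 4 fully populated controllers round-robin into four 8-drive groups, so a single controller failure removes at most 2 drives from any group, exactly what double parity can absorb:

```python
from collections import Counter

# Model of the layout described above: 4 controllers x 8 data drives,
# dealt round-robin into 8-drive RAIDZ2 groups (vdevs).
CONTROLLERS, DRIVES_PER_CTRL, GROUPS = 4, 8, 4
drives = [(c, d) for c in range(CONTROLLERS) for d in range(DRIVES_PER_CTRL)]

groups = [[] for _ in range(GROUPS)]
for i, drive in enumerate(drives):
    groups[i % GROUPS].append(drive)  # deal drives across groups

# If one controller dies, each group loses at most this many drives:
worst = max(Counter(c for c, _ in g).most_common(1)[0][1] for g in groups)
print("drives lost per group on a controller failure:", worst)  # prints 2
```

Since RAIDZ2 tolerates 2 failed drives per vdev, the pool survives any single controller loss, which matches the design goal stated in the chat.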
DDevine | Yeah that is a little tricky... I think LVM could manage that though. | [22:39] |
ballen | possibly | [22:39] |
DDevine | The performance hit from LVM is fairly minimal. | [22:39] |
ballen | plus I don't want to give up on-the-fly compression
otherwise I might as well just buy double the hardware | [22:39] |
DDevine | Yeah, I've never done on-the-fly compression so I don't know what the go is with it.
double the hardware means double the problems and upkeep. | [22:40] |
ballen | well, since my data is primarily very large text files, they compress quite well
yep, plus double the space and power | [22:40] |
DDevine | Yeah excellent compression | [22:41] |
ballen | basically 2.5x avg
some datasets are a bit better at 3.2x
so I've been researching how to leverage ZFS as a storage backend for a GlusterFS-like setup
Ideally, Ceph is the option, but it's much too immature | [22:41] |
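The "buy double the hardware" tradeoff falls out of simple arithmetic. The 42TB-per-server and 2.5x figures come from the chat; the 10-server fleet is illustrative:

```python
# Effective capacity with transparent compression vs. buying raw space.
servers, raw_tb_each = 10, 42   # illustrative fleet of the 36-drive boxes
ratio = 2.5                     # average compression ratio from the chat

raw_tb = servers * raw_tb_each
effective_tb = raw_tb * ratio
extra_servers = servers * ratio - servers  # hardware needed without compression

print(f"{raw_tb} TB raw stores ~{effective_tb:.0f} TB of data at {ratio}x")
print(f"without compression you'd need ~{extra_servers:.0f} more servers")
```

At 2.5x, the same racks hold two and a half times the data, which is why losing on-the-fly compression is a dealbreaker in this discussion.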
DDevine | Yeah on Linux you would have to use BTRFS or Reiser4. | [22:43] |
ballen | Right | [22:43] |
DDevine | Reiser4 is mature, though; not sure about support these days. | [22:44] |
ballen | I haven't used BTRFS + compression enough to know how it affects CPU load
or rather, how it affects throughput due to extra CPU load
Nor how good BTRFS's raid5/6 implementation is. | [22:44] |
DDevine | Yeah but you can just do the RAID stuff outside of the FS. | [22:46] |
ballen | thats true | [22:47] |
DDevine | I have the feeling my ISP doesn't respect DNS TTLs.
Which is odd because overall they have a great setup. | [22:54] |
ballen | that would be odd | [22:57] |
DDevine | Eh maybe it was just a temporary thing. Today they seem to have honored the TTLs perfectly. | [22:59] |
......... (idle for 42mn) | ||
*** | nomadlogic has joined #arpnetworks
nerdd_ has quit IRC (Ping timeout: 240 seconds)
nerdd has joined #arpnetworks | [23:41] |