#arpnetworks 2014-06-02,Mon

***grepidemic has quit IRC (Ping timeout: 252 seconds)
grepidemic has joined #arpnetworks
[05:02]
bytbox has quit IRC (Read error: Connection reset by peer) [05:14]
............................ (idle for 2h19mn)
ziyourenxiang has joined #arpnetworks [07:33]
.... (idle for 18mn)
staticsafe|2 has quit IRC (Read error: Connection reset by peer) [07:51]
.... (idle for 19mn)
pseudorandom has joined #arpnetworks
pseudorandom has quit IRC (Client Quit)
pseudorandom has joined #arpnetworks
[08:10]
ziyourenxiang has quit IRC (Quit: ziyourenxiang) [08:28]
..... (idle for 20mn)
staticsafe|2 has joined #arpnetworks [08:48]
.......................................... (idle for 3h25mn)
pjs has joined #arpnetworks [12:13]
................. (idle for 1h22mn)
mnathanihow does freenx compare with spice, RDP and VNC? [13:35]
........ (idle for 36mn)
brycecDoesn't exactly compare... FreeNX is targeted at hosted desktops (i.e. being its own X server and everything) versus VNC which just scrapes the X server.
That said, FreeNX is pretty efficient.
It's also kinda old and muddled by licensing.
brycec used to run a FreeNX session on his home rig and connect to it from his barely-capable work desktop in order to get shit done
[14:11]
***aboutGod has joined #arpnetworks [14:21]
aboutGod has left [14:26]
................................. (idle for 2h41mn)
m0undsone of my fav record labels went out of business and they did a "grab box" of random selections - 30 albums for $40. ended up with 27 releases i didn't already have in physical form. made my day. [17:07]
brycecsuck for them though [17:08]
m0undsthey had some serious financial mismanagement, expanded too much
the money they're making from selling these is going towards funding re-pressing and re-releasing rare stuff with high demand
[17:10]
brycecAh cool [17:10]
m0undsyeah, not sure how they're not in the red doing this particular thing [17:10]
brycecI just assumed they went under due to obsolescence [17:11]
m0undsnah
indie label, catered to a pretty specific set of audiences
they spent a ton of money on art and the "quality" of releases, which cut into their bottom line
and they had a hard time getting stuff out on time because of that too
http://en.wikipedia.org/wiki/Hydra_Head_Records
[17:11]
BryceBotHydra Head Records :: Hydra Head Records is an independent record label which specializes in heavy metal music, founded in New Mexico by Aaron Turner (the frontman of Isis) in 1993. It has two imprints; Hydra Head Noise Industries, which specialises in experimental and noise music, and another entitled Tortuga Records. Hydra Head was founded in 1993 as a distribution company while Turner was still in high school. In 1995, he moved to Boston... [17:12]
m0undsthey moved their digital fulfillment to bandcamp, which is cool because i like getting stuff in FLAC [17:14]
brycecbrycec nods [17:17]
...... (idle for 26mn)
mercutiodo people still use ogg vorbis?
or is it basically mp3 or flac now
back in the day i did a/b testing with wav vs ogg vs mp3 and i couldn't notice the difference between ogg and wav, but could notice mp3
[17:43]
brycecI'm so out of touch with "people" I have no clue...
I know that my car and my home stereo receiver support OGG, which is cool
They also support FLAC, which is cooler.
But 99.9% of my music consumption is streamed from the likes of Spotify (mp3) and Google (whatever format I've uploaded, including flac and ogg)
What I do keep in my collection is, where possible, flac, because lossless.
[17:46]
m0undsi avoid ogg - typically do FLAC, then transcode to m4a for my truck or phone
but i don't keep the transcoded songs on my nas because it's 12TB and only 10% full
haha
[17:51]
mercutio12tb nas? [17:53]
brycec"only 10% full" so you still have PLENTY of room for more [17:53]
m0undsyes [17:53]
mercutio6 3tb drives? [17:53]
m0undsyes [17:53]
brycectank 16.2T 1.76T 14.5T 10% 1.00x ONLINE - [17:53]
mercutioi've only got half of that storage space :) [17:53]
brycec(11 2TB drives on a RAID6) [17:54]
mercutioNAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
raid 7.69T 2.27T 5.42T 29% 1.00x ONLINE -
[17:54]
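Both pool summaries pasted above have the columns of `zpool list` output; a minimal sketch of the command that produces them, using brycec's pool name and figures from the log:

    $ zpool list tank
    NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
    tank  16.2T  1.76T  14.5T  10%  1.00x  ONLINE  -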
m0undsmine's the result of a pricing error [17:54]
mercutioheh i'm still using more space than you [17:54]
m0undsit's an iomega NAS that runs EMC's lifeline linux distro, and was priced $700 instead of $4700 [17:54]
brycecmercutio: I'm currently migrating 4.6TB off my old NAS :P [17:55]
mercutiomine's a hp
dl320g7
[17:55]
brycecMine's homebuilt [17:55]
mercutioit can take 4 hard-disks [17:55]
brycecMine holds 20 *maniacal laughing follows* [17:56]
mercutiobut i dunno, hard-disks will get bigger [17:56]
m0undshahaha [17:56]
mercutio20?
wow
[17:56]
brycecNorco 4020 [17:56]
mercutioi'm actually surprised by how fast it is [17:56]
brycecI tend to plan 5-10 years ahead [17:56]
mercutioit's an i3, but even copying from the raid array back to itself is fast [17:56]
m0undsoh, the list price came down - it's $2999 now for this one, hahah [17:57]
mercutiomost raid systems i see tend to be slow
zfs is kind of cool i reckon
[17:57]
brycec500MB/s from one pool (of SSDs) to the big raid6 (7200RPM) [17:57]
m0undsi would have just done a freenas box if this didn't work [17:57]
mercutiomy ssd pool at home can do over 1000mb/sec :)
with 3 ssds
[17:58]
brycecYep, same here - my desktop has a pair of SSD's raid1'd [17:58]
mercutioi imagine zfs must be moving the parity around [17:58]
brycecit's ludicrous [17:58]
mercutioi was going to do raid 10
but raidz is actually damn good on ssd's
because the seek times are so quick anyway, it doesn't matter
although the root is mdadm raid 10
[17:59]
brycecYep [17:59]
mercutiobut it's tiny anyway
i kind of want to see linux deal better with ssd's
[17:59]
brycec(my "big fast pool" uses 3 SSDs - 1 for cache, and 2 mirrored for log)
I dunno, I see Linux doing SSDs better than anything else
[18:00]
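For anyone wanting to reproduce that layout, a hedged sketch of the zpool commands involved; the pool name tank comes from the log, but the device names sdx/sdy/sdz are placeholders, not brycec's actual disks:

    # add one SSD as an L2ARC (read cache) device
    $ zpool add tank cache /dev/sdx
    # add two SSDs as a mirrored separate intent log (SLOG)
    $ zpool add tank log mirror /dev/sdy /dev/sdz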
mercutioall the ssd's out atm really need you to pile on high request depths to get good performance, but most linux stuff goes sequentially, which is faster for hard-disks... [18:00]
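One way to see the queue-depth effect mercutio is describing is to benchmark the same device at two iodepths with fio; this is an illustrative invocation (placeholder device /dev/sdx), not something run in the log:

    # 4k random reads at queue depth 1 vs 32; SSDs typically show a large gap, spinning disks much less
    $ fio --name=qd1  --filename=/dev/sdx --ioengine=libaio --direct=1 --rw=randread --bs=4k --iodepth=1  --runtime=30 --time_based
    $ fio --name=qd32 --filename=/dev/sdx --ioengine=libaio --direct=1 --rw=randread --bs=4k --iodepth=32 --runtime=30 --time_based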
brycecJust set the IO scheduler to noop [18:01]
m0undsyea, noop is best when you have fast flash media [18:01]
mercutiowell the problem is, you want to know in advance everything you're going to request
and push it all off in one go
[18:01]
m0undsnoop or deadline [18:01]
mercutioi use deadline [18:01]
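For context, the scheduler being discussed is set per block device and can be checked or changed at runtime through sysfs; a sketch, with sda as a placeholder device:

    # the bracketed entry is the active scheduler
    $ cat /sys/block/sda/queue/scheduler
    noop deadline [cfq]
    # switch the SSD to noop (or deadline)
    $ echo noop > /sys/block/sda/queue/scheduler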
m0undsi used deadline on android devices because the io was wildly inconsistent
stupid samsung
[18:02]
mercutiobut when i was experimenting i couldn't really tell the difference between cfq and deadline
with zfs you can also improve performance on ssd's by letting it do more outstanding requests
[18:02]
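The "more outstanding requests" knob mercutio mentions is a ZFS per-vdev queue-depth tunable whose exact name varies by platform and version; on ZFS-on-Linux builds of roughly this era it was exposed as a module parameter (later releases split it into per-I/O-class *_max_active parameters). Treat the names below as assumptions to verify against your zfs module:

    # inspect the current per-vdev queue depth limit (parameter name is version-dependent)
    $ cat /sys/module/zfs/parameters/zfs_vdev_max_pending
    # raise it for SSD-backed vdevs via modprobe options, e.g. in /etc/modprobe.d/zfs.conf:
    #   options zfs zfs_vdev_max_pending=32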
bryceciozone highlighted the differences for me [18:02]
mercutiobrycec: that's nothing like real world access though.
i used to use apt-get as a benchmark for disk performance
it was sometime after that that i learned that there are a lot of synchronous blocks in it
which is why it's painfully slow on some ssd's.
[18:03]
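The synchronous behaviour mercutio is describing comes, at least in part, from dpkg fsyncing every unpacked file; if that is the bottleneck, dpkg has a documented escape hatch, shown here as a sketch rather than a recommendation (it trades crash-safety for speed):

    # per-invocation
    $ sudo dpkg --force-unsafe-io -i package.deb
    # or persistently, so apt-get's dpkg calls pick it up too
    $ echo force-unsafe-io | sudo tee /etc/dpkg/dpkg.cfg.d/unsafe-io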
brycec"you want to know in advance everything yo'ure going to request" -- Readahead isn't an issue with SSDs. My RAM is only marginally faster than going to the SSD. [18:04]
mercutiomy ram speed is what 23gb/sec?
my ssd isn't anywhere near that
and latency is a huge difference
if each request takes 1 msec, and you do 1000 requests that's a second.
but if you do 8 of those requests in parallel, it probably won't take 1/8 of a second, but it could very well take 1/4
network latency is actually on average shorter than ssd latency still btw
for a lan
it's not that there's 0 latency, it's just reduced latency, but it doesn't look like sequential latency is going to go down a lot in the short term, while parallel latency is already pretty good.
there's other combined issues, like when using apt-get it'll download all of its files, then extract them bit by bit. if you have a fast network connection, it seems pointless to even write the archive to disk.
but it'll actually wait until it's got all the files before even starting to uncompress.
openbsd for instance can actually read from network/extract at once
i dunno, i still have a dream of network storage being used by everyone. which means things need to work fast even with high latency :)
networks are only just getting fast enough for "cloud" storage though.
and they're not really reliable enough yet.
but yeah, i think hints should be there, like when you run make and it compiles a whole lot of c files, it should hint about the c files at a lower priority.
[18:04]
***avj has quit IRC (Quit: ircII EPIC5-1.1.6 -- Are we there yet?) [18:12]
mercutiosure it may only make things 10% faster on an ssd, but then you run over nfs or something, and it may be 20%. [18:13]
............. (idle for 1h4mn)
***avj has joined #arpnetworks [19:17]
............................ (idle for 2h15mn)
pseudorandom has quit IRC (Quit: Leaving) [21:32]
