***: HighJinx has joined #arpnetworks
Zetas has quit IRC (Ping timeout: 246 seconds)
Zetas has joined #arpnetworks
easymac_ has joined #arpnetworks
easymac has quit IRC (Read error: Connection reset by peer)
easymac_ is now known as easymac
easymac is now known as Guest30789
Guest30789 has quit IRC (Read error: Connection reset by peer)
easymac_ has joined #arpnetworks
easymac_ has quit IRC (Client Quit)
up_the_irons: mnathani: smaller accounts do not get less CPU. every vps has 1 core. The scheduling of cores per vps is determined by the standard Linux scheduler (we don't do anything fancy to it)
toddf: isync eh? i'll have to check that out
plett: I keep meaning to experiment with "multiple master" mail servers for high availability. Two MXs which both deliver into local Maildir folders and then offlineimap/isync/dsync/something_else to keep the Maildirs in sync
***: gdcbjg364 has left
up_the_irons: plett: i'm doing multiple master servers for a client (but not the offlineimap stuff). they haven't implemented the 2nd MX, so i can't comment about it yet.
plett: up_the_irons: How are they intending to do the syncing between servers?
up_the_irons: plett: oh, actually, these are just relay hosts
for spam, virus, etc... filtering
plett: so i guess it doesn't really apply to your setup
plett: I eventually want to have multiple A/AAAA records for my IMAP server and clients can use one or the other without needing to care which one they connect to, and to have the setup survive the failure of one of the servers without losing mail
up_the_irons: plett: that would be sweet
plett: up_the_irons: I can't see any reason why it can't work either. The only complex bit is the IMAP level syncing and whether that is reliable for syncing bi-directional changes
up_the_irons: plett: that's not an "only" ;)
plett: that's probably the most complex part
like, which is the most up-to-date?
but, yeah, if you make it work, please share :)
plett: The only corner cases I can think up are ones where clients are making changes on both servers and both change message flags in some way
But for mail delivery it should work well
up_the_irons: yeah
mike-burns: http://goo.gl/Kvue0 - my favorite slide from this year's OpenBSD PF talk (so far, still reading through).
up_the_irons: in case anyone has issues with a CentOS 6.3 custom install, I just wrote this: http://support.arpnetworks.com/kb/vps/centos-63-experiences-segmentation-faults-and-crashes-during-install
***: dr_jkl has quit IRC (Read error: Operation timed out)
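plett's flag-change corner case is the heart of the bi-directional sync problem. A minimal sketch of the three-way merge that tools like isync/offlineimap effectively perform per message, using a cached snapshot of the last-synced flags to tell which side changed (the function and all names here are hypothetical, not taken from any of the tools mentioned):

```python
def merge_flags(local, remote, cached):
    """Three-way merge of one message's IMAP flag sets from two replicas.

    `cached` is the flag set recorded at the last successful sync.
    A flag newly set on either side is kept; a flag cleared on either
    side since the last sync is dropped.
    """
    added = (local - cached) | (remote - cached)    # set on either side
    removed = (cached - local) | (cached - remote)  # cleared on either side
    return (cached | added) - removed

# One client read the message, the other flagged it: both changes survive.
merged = merge_flags({"\\Seen"}, {"\\Flagged"}, cached=set())
print(sorted(merged))
```

Without the cached snapshot there is no way to distinguish "deleted on A" from "newly arrived on B", which is why both tools keep local state.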
dr_jkl has joined #arpnetworks
up_the_irons: Finally official: https://www.arpnetworks.com/dedicated
or, without the s, http://arpnetworks.com/dedicated
mike-burns: Fancy!
up_the_irons: ok, who's looking at it from Sweden...
-: up_the_irons looks around
josephb: Nice :-)
link to console on wiki doesn't work
also, you might want to explain a bit more about dual uplinks.
both ports in same vlan, must use lacp? separate vlans/subnets? etc
mike-burns: Busted; I looked at it from Sweden...
up_the_irons: i knew it!
josephb: tnx for catching that broken link; fixed
josephb: cool :)
up_the_irons: josephb: "Dual 1 Gbps (GigE) Uplinks within VLAN" or "Dual 1 Gbps (GigE) Uplinks (Same VLAN)" or... ? I have very limited space, but want to explain as much as possible in a few words :)
no restrictions on layer 2 proto, but we recommend active / failover bonding (bonding in Linux, lagg in FreeBSD, ... something else in OpenBSD)
"Dual 1 Gbps (GigE) Uplinks in VLAN"
shorter ;)
josephb: up_the_irons: I think linking it off to a wiki article might be better
"Redundant Uplinks (GigE)"
up_the_irons: josephb: yeah..
josephb: and a link to something where you have space to explain it might work.
but I'm a detail guy and this is a sales page :)
up_the_irons: yeah, that was on my plan actually, but for a bit later
josephb: for what it's worth you could probably leave out the wiki link for console. and just leave it as straight sales pitch
up_the_irons: i tend to find that generates sales inquiry emails and i just have to say "read this". so, when i link it, those emails stop :)
i'd link other things too, but i simply haven't written those articles yet
this is more of a "soft launch" anyway :)
josephb: yeah, for sure. Looks great
up_the_irons: thanks!
toddf: plett: I used to think I wanted to do the resilient mail server approach by having a sql backing store for the mail itself, then use db replication. now I see things like dsync that could easily be extended to sync multiple mail systems, but it still doesn't feel quite right.
isync and offlineimap work because they cache info about the local mail, so it is known if local deleted a message and that can be translated to upstream; flags are updated with sane logic (if someone read the mail somewhere, guess it is read, eh? etc..) .. all via the imap protocol .. one could do similar with physical access, it's simply a question of how much disk io you are willing to spend to sync how much mail? unless you have a very efficient ...
... mechanism, probably at delivery time, you're going to run into scalability issues the larger you scale, imho
plett: toddf: Yeah. Syncing mail via IMAP like this is inherently un-scalable. I wonder how much of Dovecot you'd have to re-implement to be able to use a database backend
toddf: openbsd terminology would be trunk(4) for failover and bonding, you'll have to be clear that it is setup as LACP otherwise bonding would cause issues
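The trunk(4) failover setup toddf refers to is a few lines of OpenBSD hostname.if(5) config. A sketch with illustrative interface names and an RFC 5737 example address (not anyone's actual config):

```
# /etc/hostname.em0 and /etc/hostname.em1 (physical ports, no addresses):
up

# /etc/hostname.trunk0 -- failover mode: em0 carries traffic while it
# has link, em1 takes over if em0 goes down:
trunkproto failover trunkport em0 trunkport em1
inet 192.0.2.10 255.255.255.0
```

Failover mode needs nothing special on the switch side, which is exactly why it's mentioned here in preference to LACP.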
I have several dell systems with ipmi. guess I need to figure out how to access remote power reboot and serial console that way too .. would be nice
in theory you would just do a plugin with a different storage mechanism
up_the_irons: toddf: yeah, i have conserver hooked up with ipmi and can hit "console kvr24" (for example), and get a serial login :)
toddf: any chance I can hit you up for the 'console kvr24 {' section (minus ip address info etc) ?
or is it a 'go read conserver + ipmi in google' type of thing?
up_the_irons: toddf: see pm
toddf: http://dovecot.org/pipermail/dovecot/2006-June/013818.html <-- timo's official answer to the 'can I store email in a sql database with dovecot' .. seems to suggest a shared filesystem is better than a non existent free multimaster database, maybe with postgres sporting multimaster as well as mysql these days (yes? pgpool at least .. or am I dreaming?) maybe this answer would be different today dunno
plett: toddf: Last time I had to do IPMI on a Dell was with an R200. The magic incantation was: ipmitool -E -I lan -H hostname -U root isol activate
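The actual 'console kvr24 {' section went to PM, so here is only a generic sketch of the shape such a conserver.cf exec-console entry can take, driving ipmitool serial-over-LAN; the host address, credentials file, and paths are all placeholders:

```
# conserver.cf sketch (placeholders throughout -- NOT the config from the PM):
console kvr24 {
    master localhost;
    type exec;
    exec "/usr/local/bin/ipmitool -I lanplus -H 192.0.2.24 -U admin -f /etc/conserver/kvr24.passwd sol activate";
}
```

With an entry like this, `console kvr24` attaches to the exec'd ipmitool session, which is what makes the "serial login" workflow possible without any physical serial cabling.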
toddf: http://blog.dovecot.org/2012/02/dovecot-clustering-with-dsync-based.html looks like an official answer to a different question
***: root has quit IRC (Ping timeout: 246 seconds)
root has joined #arpnetworks
heavysixer has joined #arpnetworks
ChanServ sets mode: +o heavysixer
toddf: plett: interesting bit indeed
brycec: "Arp Metal" eh? Time to turn this up to 11! (I'm guessing these will be the retired KVM servers?)
up_the_irons: to clarify a bit, maybe refer to the uplinks as "Dual 1 Gbps (GigE) Uplink ports"? To me that sounds like the server has dual-NICs rather than, somewhere upstream, there are redundant providers.
up_the_irons: Also - what types of RAID are supported by the hardware RAID card? You might add that... you might not, it's up to you.
The prices seem reasonable though :)
Now if only I had $129/mo...
up_the_irons: Also - do any IP addresses come with the dedicated servers?
^ a question sure to be asked, and best answered by the webpage ;)
***: heavysixer has quit IRC (Remote host closed the connection)
twobithacker: hmmm the small one is bigger than my existing colo box
jdoe: toddf: yeah, dsync looks hackish. I thought it was a bit better than cron a script to run as every user.
toddf: sounds like from the above url from plett .. dsync can be run in an automated fashion based on when things change not automatically for every user .. aka a queue of dsync runs .. instead of scheduling the thundering herd ..
jdoe: can it? The wiki page doesn't describe anything like that :/
oh I see, plugins.
toddf: 'do your dirty work with plugins. if we like your plugin enough, we might refine it and include it in the base distro...' is how I read that
jdoe: "modules" is probably more accurate.
it's part of the core package, just not enabled by default.
their sieve support (pigeonhole) is probably more like what you're describing.
***: heavysixer has joined #arpnetworks
ChanServ sets mode: +o heavysixer
dzup has quit IRC (Ping timeout: 256 seconds)
dzup has joined #arpnetworks
dzup has quit IRC (Remote host closed the connection)
Lefty: up_the_irons: when you get a chance, can you look at my beta VM? It's not allowing inbound SSH, but I do have connectivity from the box outwards and there aren't any firewalls loaded
uuid is ad2baab0-f17b-012f-0d9d-525400972102
***: dzup has joined #arpnetworks
niner has joined #arpnetworks
Ehtyar has joined #arpnetworks
up_the_irons: brycec: all good points, thanks!
brycec: they are not the retired KVM servers, btw (none of them are retired actually), but a new set of servers just for this new product line (with newer E3-1240V2 chips)
Lefty: i'm sure your beta VM is fine, must be something in the vps causing trouble
Lefty: up_the_irons: yeah, I don't imagine it's on the kvm server itself
but it's odd
up_the_irons: brycec: "Dual 1 Gbps (GigE) NICs" <-- might be better; i just tried "Ports" but even then it might not be totally clear. Yet, a NIC is almost always associated with a network card on a server.
***: mcc0nnell has joined #arpnetworks
up_the_irons: i'm gonna write up a KB article about the NICs now, cuz it seems to be a source of questions
***: jdeuce has quit IRC (Remote host closed the connection)
up_the_irons: toddf: with trunk(4) failover mode, do you know if OpenBSD sends a gratuitous ARP should the master port change (that is, fail over to the other NIC)?
mercutio: up_the_irons: with trunk you can do active active
well if you're connecting to a switch with lagg
up_the_irons: mercutio: but i don't want active active
mercutio: ok
err lacp
what's the difference between lacp and lagg?
you could probably test to see if it does that arp
up_the_irons: mercutio: of course i can, but i'd rather just ask if someone already knows ;)
it would save time
mercutio: lagg is an interface in FreeBSD. LACP is a protocol.
mercutio: well true
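For reference, the distinction just made maps directly to FreeBSD config: lagg(4) is the interface you create, and the protocol (LACP, failover, etc.) is a property of it. A failover sketch in rc.conf, with illustrative NIC names and DHCP standing in for a real address:

```
# /etc/rc.conf sketch -- failover lagg over two NICs (names illustrative):
ifconfig_em0="up"
ifconfig_em1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto failover laggport em0 laggport em1 DHCP"
```

Swapping `laggproto failover` for `laggproto lacp` is what would require switch-side cooperation.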
ahh that's where i got that from
problem with lacp is a lot of switches won't do lacp between switches
up_the_irons: ah
mercutio: i used to think there was little benefit for redundancy cos of that
but intel cards have had a few transmit hangs over the years
up_the_irons: mercutio: i have one Metal customer using Linux bonding in active-backup mode (same as "failover" in Free/Open BSD) with a lot of success. he did tons of testing, works great.
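An active-backup bond like the one that customer runs can be sketched Debian-style (requires the ifenslave package; interface names and RFC 5737 addresses are placeholders, not the customer's actual config):

```
# /etc/network/interfaces sketch -- active-backup (failover) bonding:
auto bond0
iface bond0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100      # link check interval, ms
    bond-primary eth0    # prefer eth0 whenever it has link
```

Like failover lagg/trunk, active-backup needs no LACP support on the switches, so the two ports can land on two different switches.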
OK guys, let me know if this clarifies things a bit on the dual uplink bullet point:
http://support.arpnetworks.com/kb/dedicated-servers/about-the-dual-1-gbps-gige-nics-on-arp-metal-dedicated-servers
and now that article is linked from the sales page
http://arpnetworks.com/dedicated
***: CRowen is now known as Cn
mcc0nnell has left "Leaving"
mercutio: up_the_irons: over multiple switches?
up_the_irons: mercutio: yes, i don't want switch failure to bring down a box
mercutio: understand
up_the_irons: for VPS', i can engineer it any way i want on the host machine, but for a dedicated box, i need to expose more of the inner workings of that stuff to the customer
mercutio: i wonder if loadbalance works
up_the_irons: theoretically, the other modes should work too, i've just never tested them
mercutio: yeah
up_the_irons: brb
mercutio: i can't really afford a dedicated myself as much as i'd like it :)
***: Lefty has quit IRC (Ping timeout: 255 seconds)
niner has quit IRC (Quit: Leaving)
Lefty has joined #arpnetworks
mnathani: Are the dedicated servers Vmware ESX capable?
***: notion has quit IRC (Ping timeout: 245 seconds)
twobithacker has quit IRC (Ping timeout: 245 seconds)
RandalSchwartz has quit IRC (Ping timeout: 246 seconds)
CaZe has quit IRC (Ping timeout: 240 seconds)
toddf has quit IRC (Ping timeout: 246 seconds)
up_the_irons has quit IRC (Ping timeout: 246 seconds)
bGeorge has quit IRC (Ping timeout: 260 seconds)
mike-burns has quit IRC (Ping timeout: 260 seconds)
mikeputnam has quit IRC (Ping timeout: 240 seconds)
teneightypea has quit IRC (Ping timeout: 272 seconds)
medum has quit IRC (Ping timeout: 256 seconds)
kraigu has quit IRC (Ping timeout: 240 seconds)
medum has joined #arpnetworks
CaZe has joined #arpnetworks
CaZe: Had to turn off ipv6 :/
milki: hiccup!
andol: Yeah, what happened to the arpnetworks IPv6?
***: lazard has quit IRC (Read error: Connection reset by peer)
qbit has quit IRC (Ping timeout: 246 seconds)
lazard has joined #arpnetworks
-: andol is also seeing some IPv4 packet loss on some routes
andol: examples: www.nearlyfreespeech.net, halleck.arrakis.se
jbergstroem: i have packet loss too
***: jpalmer has quit IRC (Ping timeout: 252 seconds)
henderb has quit IRC (Ping timeout: 246 seconds)
milki: :P
***: henderb has joined #arpnetworks
jpalmer has joined #arpnetworks
jbergstroem: i don't share your choice of smiley. i would go for :S atm
milki: hehe
it kind of defeats the purpose of being in the support chan if my connection is from my vps >.<
my, my irc bot timed out again
jbergstroem: i should set up some additional pings on my rrd graphing
22 packets transmitted, 13 received, 40% packet loss, time 21073ms
from my vps
andol: https://halleck.arrakis.se/smokeping/images/arrakis/leto4_last_10800.png
***: rpaulo has joined #arpnetworks
qbit has joined #arpnetworks
rpaulo: hi
ipv6 looks down
milki: andol: colours!
***: up_the_irons has joined #arpnetworks
ChanServ sets mode: +o up_the_irons
notion has joined #arpnetworks
jbergstroem: up_the_irons: seems to be a couple of people in here experiencing packet loss
rpaulo: back up
milki: lol
***: bGeorge has joined #arpnetworks
jbergstroem: oh, its fixed now
***: teneightypea has joined #arpnetworks
milki: yes, seeing as up_the_irons is back in the channel...
up_the_irons: │20:08:11 @up_the_irons │ wow, IPv6 router is slooooooooooooooooooow
***: toddf has joined #arpnetworks
ChanServ sets mode: +o toddf
milki: lol
andol: up_the_irons: IPv4 as well
***: rpaulo has quit IRC (Client Quit)
up_the_irons: https://twitter.com/arpnetworks/status/260941977959415808
jbergstroem: I also experienced IPv4 packet loss
***: twobithacker has joined #arpnetworks
up_the_irons: i didn't see any IPv4 issues though
-: milki doesnt even know how to use ipv6 properly yet
up_the_irons: s3.lax (our IPv6 router) was dropping a lot of packets
***: qbit_ has joined #arpnetworks
staticsafe: interesting, a cyberverse VPS i was monitoring was showing loss as well
***: qbit has quit IRC (Quit: leaving)
qbit_ has quit IRC (Client Quit)
andol: up_the_irons: This is what I am experiencing between leto (arpnetworks) and my other vps halleck - https://halleck.arrakis.se/smokeping/smokeping.cgi?target=arrakis.leto4
***: kraigu has joined #arpnetworks
mikeputnam has joined #arpnetworks
milki: colours!
andol: up_the_irons: Also saw it against www.nearlyfreespeech.net, even if that seems fine now.
***: qbit has joined #arpnetworks
staticsafe: andol: silly question, but by any chance are you running smokeping on FreeBSD9?
andol: staticsafe: Nope, that would be an Ubuntu 12.04
staticsafe: hm ok
am having some issues that I can't get fixed
http://smokeping.asininetech.com/smokeping.cgi?target=_charts like so
jbergstroem: up_the_irons: http://stats.pingdom.com/31n57wtu0gs4/660615
relevant perhaps?
andol: staticsafe: file system permissions issues?
staticsafe: andol: nope, there is a thread about it on the smokeping mailing list
im gonna try to debug it again now
up_the_irons: jbergstroem: yeah maybe. smells to me of a bad path / peer. probably will clear up on its own. i don't see anything out of the ordinary on my end, so, i'm still looking around.
root: up_the_irons do you have 2tb drives available for the dedicateds?
***: mike-burns has joined #arpnetworks
ChanServ sets mode: +o mike-burns
staticsafe: oh wth
www user can write just fine
up_the_irons: root: no, 2tb drives fail a lot
root: anything above 1tb pretty much blows; platter sizes and stuff go above a certain threshold of reliability
root: Hrm. I've been running 8 2TB Seagate Constellation ES drives for a long time with no issues
I would consider moving to arpnetworks for our dedicated, but can't make do with less than 4 x 2TB on a server
in RAID10
up_the_irons: root: man, i have piles of busted seagate drives
stopped using them a while ago
milki: bad luck o.O
root: What kind? Constellation ES?
up_the_irons: $239.99 each, ouch
root: I hate Seagate, but those Constellation ES drives have been rock solid and fast.
staticsafe: http://pastie.org/pastes/5106895/text?key=vaugnxeraqkpybeshs6r5q - can anybody make anything out of this? this is the fcgiwrap process, trying to debug a smokeping issue
up_the_irons: root: how long have you had them?
root: Enterprise drives... what are you using?
About a year and a half, estimating
up_the_irons: not bad, guess those ES are better
root: I wouldn't recommend them to you over HGST enterprise grade drives, but they are not the crap that is Seagate desktop drives.
up_the_irons: root: so yeah, i can't shell out the $239 each for the 2TB drives. we have a batch of 1TB's in stock for the dedicateds. Maybe at some point later we can.
root: Well
Can you do a 8x1TB system?
in RAID10
That could work.
I need 4TB usable in RAID10 though.
Also -- what drives are the 1TB's you have? Enterprise grade, or desktop drives?
http://www.newegg.com/Product/Product.aspx?Item=N82E16822145476 -- That's probably what I'd pick.
up_the_irons: root: depends on availability. I have both WD RE4's and regular Hitachi's
root: regular as in Deskstars?
up_the_irons: yeah
root: Those are great drives, if pre-acquisition
Wonderful drives, I'd take the Hitachis
up_the_irons: they are all pre-acquisition cuz i can't get any more like those ;)
root: Actually have about 12 of them in my house right now, but mostly 2TB deskstars, at least a few 1TBs though
Yeah, it sucks.
up_the_irons: yep
root: I don't know where to turn after all the mergers, will take time to figure out how it all goes
up_the_irons: root: i'm hoping the RE4's are decent
root: Shit, I'd buy those Hitachis off you for more than you paid at this point
up_the_irons: LOL
root: haha
up_the_irons: we need 'em like crazy, hands off! ;)
root: I hear you.
So, is a 8x1TB config plausible?
up_the_irons: root: do you do hardware raid10 or soft?
root: depends on the box
we have two, one of each
SW RAID10 has been good to us, despite all of the frowns it gets
up_the_irons: root: our 1U's are 4x. I'm going to shy away from building custom setups this early in the launch phase (of the dedicateds)
root: 1073741824 bytes (1.1 GB) copied, 4.99374 seconds, 215 MB/s
That's on SW RAID10
with Constellation ES 2TB's
up_the_irons: i'd like to accommodate, but we just have too much going on right now
root: nice and fast :)
root: Yeah, thanks
No big deal
up_the_irons: anyone seeing any more IPv4 issues?
pingdom and my pagerduty seem to have calmed down
jbergstroem: up_the_irons: nope, all good
root: Actually, kind of sucks since you have Deskstars, but the 215MB/s will help me sleep.
up_the_irons: root: you guys storing large files or ...?
root: Nope, just don't oversell.
up_the_irons: jbergstroem: cool
root: VPSs
And shared hosting
up_the_irons: ah ok
you must pack a lot of space into a vps
root: Zero overselling, professional clients, premium service
What do you mean?
up_the_irons: that's nice; good market
root: i mean you must offer a lot of space per VPS
root: Starting at 32GB
up_the_irons: to need 2TB drives min
root: So, yeah
andol: up_the_irons: Completely dead IPv4 wise to my other VPS halleck.arrakis.se
root: Our VPS plans storage goes like this: 32/128/192/256/512/768... and so on
GB
up_the_irons: andol: looks to be NTT problem
7. ae-5.r21.lsanca03.us.bb.gin.ntt.net 0.0% 2 1.0 1.1 1.0 1.2 0.2
root: Actually, no "so on". That's it.
up_the_irons: 8. ae-2.r20.asbnva02.us.bb.gin.ntt.net 0.0% 2 65.4 66.6 65.4 67.8 1.7
9. ???
lol
root: cool
yeah quite big disks
andol: what does traceroute look like *from* halleck.arrakis.se to ARP ?
root: I'd like to get us moved to ARPNetworks eventually. We're only like 30ms away currently. :P
up_the_irons: root: are you not happy with your current host?
root: Depends on what aspect you look at it from
andol: up_the_irons: http://pastie.org/5106931
up_the_irons: i c
root: Wonderful facility, great prices
Poor remote hands
Well, hit and miss remote hands.
As I'm in Florida.
andol: up_the_irons: Also seems to be in that direction the packets are being dropped, because while doing a tcpdump on halleck I do see the icmp packets from leto. They just don't make it back.
root: And our servers have always been on the West coast
We've looked at bringing them home and me running a colo operation, but this city sucks for such.
I'd trust ARPNetworks remote hands more than our current dedicated providers.
Especially in off hours or emergencies.
up_the_irons: andol: it drops at NLayer, so I opened a ticket with GTT just now (they purchased NLayer)
andol: up_the_irons: thanks
up_the_irons: root: the thing is, we don't even have remote hands besides myself making a drive. All our gear is administered remotely (IPMI, serial console, APC power control, the works)
we've tried real hard to make sure physical presence is not needed (and this saves tons of time and money). the only time I go down there is to put in a new box or to replace a drive.
root: Understood
But, regardless, I trust you working on routing issues, hardware replacements, etc more than our current dedicated provider's techs.
The current provider is larger, and has some crappy techs and some good techs. I have had mostly good luck, but it's just that -- luck.
As some of these people shouldn't be touching a server
up_the_irons: LOL
root: Not even from a different galaxy over psychokinesis, much less hands on or remote console.
up_the_irons: i know the feeling, let me find a picture...
root: had to make this sticker once: http://www.flickr.com/photos/51184165@N00/2174345059/in/photostream
taking the kids to bed, bbl
toddf: not that its a big deal nor even a blip on the radar but wouldn't it be cool if arpnetworks had a local freenode irc entry point? ;-) hehe
***: sako has joined #arpnetworks
sako has quit IRC (Changing host)
sako has joined #arpnetworks
sako has quit IRC (Ping timeout: 248 seconds)
sako has joined #arpnetworks
sako has quit IRC (Ping timeout: 265 seconds)
Ehtyar has quit IRC (Quit: Ex-Chat)
sako has joined #arpnetworks
sako has quit IRC (Ping timeout: 246 seconds)