mnathani: smaller accounts do not get less CPU. every vps has 1 core. The scheduling of cores per vps is determined by the standard Linux scheduler (we don't do anything fancy to it)
toddf: isync eh? i'll have to check that out
I keep meaning to experiment with "multiple master" mail servers for high availability. Two MXs which both deliver into local Maildir folders and then offlineimap/isync/dsync/something_else to keep the Maildirs in sync
plett: i'm doing multiple master servers for a client (but not the offlineimap stuff). they haven't implemented the 2nd MX, so i can't comment about it yet.
up_the_irons: How are they intending to do the syncing between servers?
plett: oh, actually, these are just relay hosts for spam, virus, etc... filtering
plett: so i guess it doesn't really apply to your setup
I eventually want to have multiple A/AAAA records for my IMAP server so clients can use one or the other without needing to care which one they connect to, and to have the setup survive the failure of one of the servers without losing mail
plett: that would be sweet
up_the_irons: I can't see any reason why it can't work either. The only complex bit is the IMAP level syncing and whether that is reliable for syncing bi-directional changes
plett: that's not an "only" ;)
plett: that's probably the most complex part
like, which is the most up-to-date?
but, yeah, if you make it work, please share :)
The only corner cases I can think up are ones where clients are making changes on both servers and both change message flags in some way
But for mail delivery it should work well
yeah
http://goo.gl/Kvue0 - my favorite slide from this year's OpenBSD PF talk (so far, still reading through).
in case anyone has issues with a CentOS 6.3 custom install, I just wrote this: http://support.arpnetworks.com/kb/vps/centos-63-experiences-segmentation-faults-and-crashes-during-install
Finally official: https://www.arpnetworks.com/dedicated
or, without the s, http://arpnetworks.com/dedicated
Fancy!
ok, who's looking at it from Sweden...
Nice :-)
link to console on wiki doesn't work
also, you might want to explain a bit more about dual uplinks. both ports in same vlan, must use lacp? separate vlans/subnets? etc
Busted; I looked at it from Sweden...
i knew it!
josephb: tnx for catching that broken link; fixed
cool :)
josephb: "Dual 1 Gbps (GigE) Uplinks within VLAN" or "Dual 1 Gbps (GigE) Uplinks (Same VLAN)" or... ? I have very limited space, but want to explain as much as possible in a few words :)
no restrictions on layer 2 proto, but we recommend active / failover bonding (bonding in Linux, lagg in FreeBSD, ... something else in OpenBSD)
"Dual 1 Gbps (GigE) Uplinks in VLAN"
shorter ;)
up_the_irons: I think linking it off to a wiki article might be better
"Redundant Uplinks (GigE)"
josephb: yeah.. and a link to something where you have space to explain it might work. but I'm a detail guy and this is a sales page :)
yeah, that was in my plan actually, but for a bit later
for what it's worth, you could probably leave out the wiki link for console and just leave it as a straight sales pitch
i tend to find that generates sales inquiry emails and i just have to say "read this". so, when i link it, those emails stop :)
i'd link other things too, but i simply haven't written those articles yet
this is more of a "soft launch" anyway :)
yeah, for sure. Looks great
thanks!
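For reference, a minimal sketch of the active / failover bonding recommended a few lines up, as a Debian-style /etc/network/interfaces fragment (interface names and addresses here are made-up examples, not anything ARP Networks assigns):

    # active-backup bonding across both GigE uplinks (needs the ifenslave package)
    auto bond0
    iface bond0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1
        bond-slaves eth0 eth1      # the two physical uplink ports
        bond-mode active-backup    # one port active; the other takes over on link failure
        bond-miimon 100            # check link state every 100 ms
        bond-primary eth0

Since only one port carries traffic at a time, the switch side needs no LACP support at all.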
plett: I used to think I wanted to do the resilient mail server approach by having a sql backing store for the mail itself, then using db replication to keep the copies in sync. now I see things like dsync that could easily be extended to sync multiple mail systems, but it still doesn't feel quite right.
isync and offlineimap work because they cache info about the local mail, so it is known if local deleted a message and that can be translated to upstream, and flags are updated with sane logic (if someone read the mail somewhere, guess it is read, eh? etc..) .. all via the imap protocol ..
one could do similar with physical access, it's simply a question of how much disk io you are willing to spend to sync how much mail? unless you have a very efficient mechanism, probably at delivery time, you're going to run into scalability issues the larger you scale, imho
toddf: Yeah. Syncing mail via IMAP like this is inherently un-scalable. I wonder how much of Dovecot you'd have to re-implement to be able to use a database backend
openbsd terminology would be trunk(4) for failover and bonding. you'll have to be clear that it is set up as LACP, otherwise bonding would cause issues
I have several dell systems with ipmi. guess I need to figure out how to access remote power reboot and serial console that way too .. would be nice
in theory you would just do a plugin with a different storage mechanism
toddf: yeah, i have conserver hooked up with ipmi and can hit "console kvr24" (for example), and get a serial login :)
any chance I can hit you up for the 'console kvr24 {' section (minus ip address info etc)? or is it a 'go read conserver + ipmi in google' type of thing?
toddf: see pm
http://dovecot.org/pipermail/dovecot/2006-June/013818.html <-- timo's official answer to 'can I store email in a sql database with dovecot' .. seems to suggest a shared filesystem is better than a non-existent free multimaster database. maybe with postgres sporting multimaster, as well as mysql, these days (yes? pgpool at least .. or am I dreaming?) maybe this answer would be different today
dunno
toddf: Last time I had to do IPMI on a Dell was with an R200. The magic incantation was: ipmitool -E -I lan -H hostname -U root isol activate
toddf: http://blog.dovecot.org/2012/02/dovecot-clustering-with-dsync-based.html looks like an official answer to a different question
plett: interesting bit indeed
"Arp Metal" eh? Time to turn this up to 11! (I'm guessing these will be the retired KVM servers?)
up_the_irons: to clarify a bit, maybe refer to the uplinks as "Dual 1 Gbps (GigE) Uplink ports"? To me that sounds like the server has dual NICs rather than, somewhere upstream, there being redundant providers.
up_the_irons: Also - what types of RAID are supported by the hardware RAID card? You might add that... you might not, it's up to you. The prices seem reasonable though :) Now if only I had $129/mo...
up_the_irons: Also - do any IP addresses come with the dedicated servers? ^ a question sure to be asked, and best answered by the webpage ;)
hmmm the small one is bigger than my existing colo box
toddf: yeah, dsync looks hackish. I thought it was a bit better than cronning a script to run as every user.
sounds like, from the above url from plett, dsync can be run in an automated fashion based on when things change, not automatically for every user .. aka a queue of dsync runs .. instead of scheduling the thundering herd ..
can it? The wiki page doesn't describe anything like that :/
oh I see, plugins. 'do your dirty work with plugins; if we like your plugin enough, we might refine it and include it in the base distro...' is how I read that
"modules" is probably more accurate. it's part of the core package, just not enabled by default. their sieve support (pigeonhole) is probably more like what you're describing.
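For the curious, the dsync two-way sync being kicked around above looks roughly like this on Dovecot 2.0 (hostname and username are made up; check the dsync man page for your version, since the syntax later moved under doveadm):

    # mirror one user's mailbox between two servers over ssh
    dsync -u jdoe mirror ssh mailuser@mx2.example.com dsync -u jdoe

Running that from cron for every user is exactly the thundering-herd problem mentioned above; the dsync-based replication in the blog post plett linked triggers syncs as changes happen instead.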
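And for the conserver + IPMI setup, a conserver.cf console entry along these lines should work (host, user, and password are placeholders; it uses conserver's "exec" console type to wrap ipmitool's serial-over-LAN):

    console kvr24 {
        master localhost;
        type exec;
        # older IPMI 1.5 boxes (like that Dell R200) want: -I lan ... isol activate
        exec /usr/bin/ipmitool -I lanplus -H 10.0.0.24 -U admin -P secret sol activate;
    }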
up_the_irons: when you get a chance, can you look at my beta VM? It's not allowing inbound SSH, but I do have connectivity from the box outwards and there aren't any firewalls loaded
uuid is ad2baab0-f17b-012f-0d9d-525400972102
brycec: all good points, thanks!
brycec: they are not the retired KVM servers, btw (none of them are retired actually), but a new set of servers just for this new product line (with newer E3-1240V2 chips)
Lefty: i'm sure your beta VM is fine, must be something in the vps causing trouble
up_the_irons: yeah, i don't imagine it's on the kvm server itself, but it's odd
brycec: "Dual 1 Gbps (GigE) NICs" <-- might be better; i just tried "Ports" but even then it might not be totally clear. Yet, a NIC is almost always associated with a network card on a server.
i'm gonna write up a KB article about the NICs now, cuz it seems to be a source of questions
toddf: with trunk(4) failover mode, do you know if OpenBSD sends a gratuitous ARP should the master port change (that is, fail over to the other NIC)?
up_the_irons: with trunk you can do active active
well if you're connecting to a switch with lagg
mercutio: but i don't want active active
ok
err lacp
what's the difference between lacp and lagg?
you could probably test to see if it does that arp
mercutio: of course i can, but i'd rather just ask if someone already knows ;) it would save time
mercutio: lagg is an interface in FreeBSD. LACP is a protocol.
well true
ahh that's where i got that from
problem with lacp is a lot of switches won't do lacp between switches
ah
i used to think there was little benefit for redundancy cos of that
but intel cards have had a few transmit hangs over the years
mercutio: i have one Metal customer using Linux bonding in active-backup mode (same as "failover" in Free/Open BSD) with a lot of success. he did tons of testing, works great.
OK guys, let me know if this clarifies things a bit on the dual uplink bullet point: http://support.arpnetworks.com/kb/dedicated-servers/about-the-dual-1-gbps-gige-nics-on-arp-metal-dedicated-servers
and now that article is linked from the sales page http://arpnetworks.com/dedicated
up_the_irons: over multiple switches?
mercutio: yes, i don't want switch failure to bring down a box
understand
for VPS', i can engineer it any way i want on the host machine, but for a dedicated box, i need to expose more of the inner workings of that stuff to the customer
i wonder if loadbalance works
theoretically, the other modes should work too, i've just never tested them
yeah
brb
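Picking up the trunk(4)/lagg failover thread, minimal sketches of the BSD equivalents of that Linux active-backup setup (interface names and addresses are made up):

    # FreeBSD /etc/rc.conf
    ifconfig_em0="up"
    ifconfig_em1="up"
    cloned_interfaces="lagg0"
    ifconfig_lagg0="laggproto failover laggport em0 laggport em1 192.0.2.10 netmask 255.255.255.0"

    # OpenBSD /etc/hostname.trunk0
    trunkproto failover trunkport em0 trunkport em1
    inet 192.0.2.10 255.255.255.0

As for the gratuitous ARP question, one way to answer it empirically is to watch "tcpdump -n -e -i trunk0 arp" while pulling the master port's cable and see whether anything gets announced on failover.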
i can't really afford a dedicated myself, as much as i'd like it :)
Are the dedicated servers VMware ESX capable?
Had to turn off ipv6 :/
hiccup!
Yeah, what happened to the arpnetworks IPv6?
examples: www.nearlyfreespeech.net, halleck.arrakis.se
i have packet loss too :P
i don't share your choice of smiley. i would go for :S atm
hehe
it kind of defeats the purpose of being in the support chan if my connection is from my vps >.<
my, my irc bot timed out again
i should set up some additional pings on my rrd graphing
22 packets transmitted, 13 received, 40% packet loss, time 21073ms
from my vps
https://halleck.arrakis.se/smokeping/images/arrakis/leto4_last_10800.png
hi
ipv6 looks down
andol: colours!
up_the_irons: seems to be a couple of people in here experiencing packet loss
back up
lol
oh, it's fixed now
yes, seeing as up_the_irons is back in the channel...
│20:08:11 @up_the_irons │ wow, IPv6 router is slooooooooooooooooooow
lol
up_the_irons: IPv4 as well
https://twitter.com/arpnetworks/status/260941977959415808
I also experienced IPv4 packet loss
i didn't see any IPv4 issues though
s3.lax (our IPv6 router) was dropping a lot of packets
interesting, a cyberverse VPS i was monitoring was showing loss as well
up_the_irons: This is what I am experiencing between leto (arpnetworks) and my other vps halleck - https://halleck.arrakis.se/smokeping/smokeping.cgi?target=arrakis.leto4
colours!
up_the_irons: Also saw it against www.nearlyfreespeech.net, even if that seems fine now.
andol: silly question, but by any chance are you running smokeping on FreeBSD 9?
staticsafe: Nope, that would be an Ubuntu 12.04
hm ok
am having some issues that I can't get fixed
http://smokeping.asininetech.com/smokeping.cgi?target=_charts
like so
up_the_irons: http://stats.pingdom.com/31n57wtu0gs4/660615 relevant perhaps?
staticsafe: file system permissions issues?
andol: nope, there is a thread about it on the smokeping mailing list
im gonna try to debug it again now
jbergstroem: yeah maybe. smells to me of a bad path / peer. probably will clear up on its own. i don't see anything out of the ordinary on my end, so, i'm still looking around.
up_the_irons: do you have 2tb drives available for the dedicateds?
oh wth, www user can write just fine
\root: no, 2tb drives fail a lot
\root: anything above 1tb pretty much blows; platter sizes and stuff go above a certain threshold of reliability
Hrm. I've been running 8 2TB Seagate Constellation ES drives for a long time with no issues
I would consider moving to arpnetworks for our dedicated, but can't make do with less than 4 x 2TB on a server in RAID10
\root: man, i have piles of busted seagate drives
stopped using them a while ago
bad luck o.O
What kind? Constellation ES?
$239.99 each, ouch
I hate Seagate, but those Constellation ES drives have been rock solid and fast.
http://pastie.org/pastes/5106895/text?key=vaugnxeraqkpybeshs6r5q - can anybody make anything out of this? this is the fcgiwrap process, trying to debug a smokeping issue
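(A common way to get that kind of capture of a misbehaving fcgiwrap is to trace it; a sketch, assuming the process is already running, and not necessarily how the pastie above was produced:

    # attach to the running fcgiwrap process and log file access and write activity
    strace -f -tt -e trace=file,write -p "$(pgrep -o fcgiwrap)" -o /tmp/fcgiwrap.trace

A failed open() or an EACCES in the output would point straight at the kind of permissions issue asked about earlier.)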
\root: how long have you had them?
Enterprise drives... what are you using?
About a year and a half, estimating
not bad, guess those ES are better
I wouldn't recommend them to you over HGST enterprise grade drives, but they are not the crap that is Seagate desktop drives.
\root: so yeah, i can't shell out the $239 each for the 2TB drives. we have a batch of 1TB's in stock for the dedicateds. Maybe at some point later we can.
Well
Can you do an 8x1TB system? in RAID10
That could work. I need 4TB usable in RAID10 though.
Also -- what drives are the 1TB's you have? Enterprise grade, or desktop drives?
http://www.newegg.com/Product/Product.aspx?Item=N82E16822145476 -- That's probably what I'd pick.
\root: depends on availability. I have both WD RE4's and regular Hitachi's
regular as in Deskstars?
yeah
Those are great drives, if pre-acquisition
Wonderful drives, I'd take the Hitachis
they are all pre-acquisition
cuz i can't get any more like those ;)
Actually have about 12 of them in my house right now, but mostly 2TB Deskstars, at least a few 1TBs though
Yeah, it sucks.
yep
I don't know where to turn after all the mergers, will take time to figure out how it all goes
\root: i'm hoping the RE4's are decent
Shit, I'd buy those Hitachis off you for more than you paid at this point LOL
haha
we need 'em like crazy, hands off! ;)
I hear you.
So, is an 8x1TB config plausible?
\root: do you do hardware raid10 or soft?
depends on the box
we have two, one of each
SW RAID10 has been good to us, despite all of the frowns it gets
\root: our 1U's are 4x. I'm going to shy away from building custom setups this early in the launch phase (of the dedicateds)
1073741824 bytes (1.1 GB) copied, 4.99374 seconds, 215 MB/s
That's on SW RAID10 with Constellation ES 2TB's
i'd like to accommodate, but we just have too much going on right now
\root: nice and fast :)
Yeah, thanks
No big deal
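(That throughput line is standard dd output; the usual quick sequential-write test looks something like this, with the target path being whatever filesystem sits on the array:

    # write 1 GiB and flush it to disk before reporting the rate
    dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync
    rm /tmp/ddtest

Without the conv=fdatasync, the reported rate mostly measures the page cache rather than the drives.)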
anyone seeing any more IPv4 issues? pingdom and my pagerduty seem to have calmed down
up_the_irons: nope, all good
Actually, kind of sucks since you have Deskstars, but the 215 MB/s will help me sleep.
\root: you guys storing large files or ...?
Nope, just don't oversell.
jbergstroem: cool
VPSs
And shared hosting
ah ok
you must pack a lot of space into a vps
Zero overselling, professional clients, premium service
What do you mean?
that's nice; good market
\root: i mean you must offer a lot of space per VPS
Starting at 32GB
to need 2TB drives min
So, yeah
up_the_irons: Completely dead IPv4-wise to my other VPS halleck.arrakis.se
Our VPS plans' storage goes like this: 32/128/192/256/512/768... and so on GB
andol: looks to be an NTT problem
7. ae-5.r21.lsanca03.us.bb.gin.ntt.net 0.0% 2 1.0 1.1 1.0 1.2 0.2
Actually, no "and so on". That's it.
8. ae-2.r20.asbnva02.us.bb.gin.ntt.net 0.0% 2 65.4 66.6 65.4 67.8 1.7
9. ???
lol
\root: cool
yeah quite big disks
andol: what does traceroute look like *from* halleck.arrakis.se to ARP?
I'd like to get us moved to ARPNetworks eventually. We're only like 30ms away currently. :P
\root: are you not happy with your current host?
Depends on what aspect you look at it from
up_the_irons: http://pastie.org/5106931
i c
Wonderful facility, great prices
Poor remote hands
Well, hit and miss remote hands. As I'm in Florida.
up_the_irons: Also seems to be in that direction the packets are being dropped, because while doing a tcpdump on halleck I do see the icmp packets from leto. They just don't make it back.
And our servers have always been on the West coast
We've looked at bringing them home and me running a colo operation, but this city sucks for such.
I'd trust ARPNetworks remote hands more than our current dedicated provider's. Especially in off hours or emergencies.
andol: it drops at NLayer, so I opened a ticket with GTT just now (they purchased NLayer)
up_the_irons: thanks
\root: the thing is, we don't even have remote hands besides myself making a drive. All our gear is administered remotely (IPMI, serial console, APC power control, the works)
we've tried real hard to make sure physical presence is not needed (and this saves tons of time and money). the only time I go down there is to put in a new box or to replace a drive.
Understood
But, regardless, I trust you working on routing issues, hardware replacements, etc more than our current dedicated provider's techs.
The current provider is larger, and has some crappy techs and some good techs. I have had mostly good luck, but it's just that -- luck.
As some of these people shouldn't be touching a server LOL
Not even from a different galaxy over psychokinesis, much less hands-on or remote console.
i know the feeling, let me find a picture...
\root: had to make this sticker once: http://www.flickr.com/photos/51184165@N00/2174345059/in/photostream
taking the kids to bed, bbl
not that it's a big deal nor even a blip on the radar, but wouldn't it be cool if arpnetworks had a local freenode irc entry point? ;-)
hehe