
Jul 04, 2007

New Datacenter Established

So we installed our first datacenter 'beach-head' last week. It was actually mostly painless, thanks to all the advance prep work put in over the previous months. We have a 'feed & water' hosting contract, so we own all our gear but our host looks after the power and environmentals (including a certain number of tape changes).

Our initial 'beach-head' consisted of a diverse fibre data connection (100Mb), a router, an out-of-band management switch (for the IP-KVM & ILO interfaces), a data switch (separate VLANs for data & SAN traffic), a firewall (even though it's all internal, traffic falls into different security zones to keep the auditors happy) and a domain controller. We'll supplement this with our prod SAN, a bunch of app & database servers, our backup server and tape drive, plus another telco comms circuit.
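
Since those out-of-band interfaces are what you'll depend on when something breaks, it's worth a quick script to confirm they all answer once the racking is done. Here's a minimal sketch (Python, and the addresses are made-up placeholders - substitute your own management IPs):

    #!/usr/bin/env python3
    """Quick reachability check for the out-of-band kit after racking.
    All addresses are placeholders - substitute your own management IPs."""
    import subprocess

    MANAGEMENT_HOSTS = {
        "router":      "10.0.10.1",
        "oob-switch":  "10.0.10.2",
        "data-switch": "10.0.10.3",
        "firewall":    "10.0.10.4",
        "ip-kvm":      "10.0.10.10",
        "dc01-ilo":    "10.0.10.21",
    }

    def is_up(ip):
        """Send one ICMP echo with a two-second timeout (Linux ping flags)."""
        return subprocess.call(
            ["ping", "-c", "1", "-W", "2", ip],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

    if __name__ == "__main__":
        for name, ip in sorted(MANAGEMENT_HOSTS.items()):
            print("%-12s %-15s %s" % (name, ip, "up" if is_up(ip) else "NO RESPONSE"))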

Some interesting tips if you're thinking of shipping gear offsite -

  • If you're in a metro area, diverse fibre is cheap and fast (two leads into the building, coming in from different directions via different circuits).
  • Set up your equipment as if it were offsite - spin off a VLAN to simulate the entire off-site network internally so you can fully test everything.
  • Label up absolutely everything and note down all the interfaces and port connections. Keep track of this information in a spreadsheet or Visio so you can talk to your host site's engineers should they need to troubleshoot anything on your behalf (there's a rough port-map sketch after this list).
  • If you're allowed (many hosts require you to leave your phone, PDA or camera at the door), take a bunch of photos to complement your diagrams.
  • Most datacenters have a colour-code for their cables - make sure you follow it, or specify that they stick to your existing scheme.
  • Your host will have engineers who can rack and cable everything up much more tidily than you could, so leave them to it. As long as you tell them where you want stuff they'll take care of the rest. In fact, get your rack layout to them in advance and they may even have some suggestions about what to put where.
  • Unless you're filthy rich you can run all your management traffic (IP KVM and ILO) through another switch (a good use for all those old non-PoE 10/100Mb Ciscos). Run your server data & SAN traffic through a good non-blocking switch (we went with a Cisco 4948, as a big Catalyst enterprise chassis would have been overkill). Ideally we'd have two switches for redundancy and multi-pathing, but the cost would have been prohibitive and, let's face it, a $10 power supply on a media converter is more likely to die than a $15k switch.
  • IP KVMs are cool and supplement ILO/LOM (Integrated Lights Out/Lights Out Management) - if you move to a totally hands-off approach to server provisioning you can have hardware delivered straight to the datacenter and hooked up to the KVM, then configure the rest remotely. In fact IBM's RSA II ILO card even lets you boot off a file or a remote CD.
  • You can pick up a multi-port serial adaptor fairly cheaply - stick it into your management server and hook up your switch and SAN console ports for an extra level of low-level access.
  • Diesel goes 'stale' - make sure your host cycles their tanks regularly, in addition to running regular generator and UPS tests.
  • Zoning your internal network seems to be popular with the auditors - use different firewall NICs to access different parts of your LAN and lock down the rules. We're starting with a very simple configuration - we've split out our management, data and telco traffic. When we shift our DMZ out there we'll add another zone. We'll also have an inter-datacenter circuit, primarily for SAN replication to our DR/UAT site (due to earthquake risk most NZ datacenters have a presence in a couple of different locations). A recent external security assessment recommended fourteen different zones, which was frankly insane for an organisation our size, so we'll start small.
  • Most hosts will charge by the rack - make sure you think carefully about what you send to the datacenter. It might be a good opportunity to consolidate your servers. If you have lots of blades (or storage arrays) you may get hit up for more $$$ as they really suck down power. As your rack fills the host will take regular measurements of the amount of power you're pulling down - if you exceed the 'draw' for a standard rack you may be charged extra (a rough power-budget sketch follows this list).
  • If you tour the datacenter make sure it has all the good stuff you'd want out of a custom-built server hosting facility - hot & cold aisles (so the hot air from one rack doesn't get sucked into the opposite rack), iso-base earthquake damping (nothing like watching the rack jiggle), raised floors, 2+1 redundancy (two units plus a spare) for power, aircon, adequate filtering, UPS, comms etc.
  • Be sure to go over the financials with a fine-tooth comb - you'll find some variation in price and in what is and isn't included. If you're anything like us, you'll find the host with the simplest pricing scheme is often the best.
  • It's interesting to look for little things that make life easier - for example a separate tape library room off the main server room. This means datacenter operators can do their tape changes without having to go anywhere near the servers themselves (we switched from SCSI to fibre channel to accommodate the 12m cable run from the backup server to the tape drive). Another hosting provider was looking at rack hoods for blade servers to ensure the air flow wasn't dissipated.
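
On the labelling and documentation point above - the spreadsheet is easier to share (and diff) if you also keep it as a plain CSV. A rough sketch of reading it back, in Python; the column names and device names are only suggestions, not any kind of standard:

    #!/usr/bin/env python3
    """Read the cabling/port map from a CSV kept alongside the Visio diagram.
    Suggested columns: device, port, connects_to, remote_port, cable_label, vlan, notes."""
    import csv

    def load_port_map(path):
        """Return one dict per patched cable."""
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def ports_for(port_map, device):
        """Everything patched into a given device - handy when the host's engineer is on the phone."""
        return [row for row in port_map if row["device"] == device]

    if __name__ == "__main__":
        port_map = load_port_map("port_map.csv")
        for row in ports_for(port_map, "data-switch-01"):
            print(row["port"], "->", row["connects_to"], row["remote_port"],
                  "(" + row["cable_label"] + ")")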
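
And on the power point - a back-of-the-envelope budget before you ship saves an awkward conversation later. Every figure below is illustrative only; plug in your own nameplate or measured draws and whatever per-rack allowance your host quotes:

    #!/usr/bin/env python3
    """Rough per-rack power budget - all numbers are illustrative placeholders."""

    RACK_LIMIT_WATTS = 3000  # hypothetical 'standard rack' draw allowed by the host

    gear = [
        # (description, quantity, watts each)
        ("1U app server",      2, 350),
        ("2U database server", 2, 500),
        ("backup server",      1, 400),
        ("tape drive",         1, 150),
        ("data switch",        1, 200),
        ("management switch",  1, 100),
        ("firewall",           1,  80),
    ]

    total = sum(qty * watts for _, qty, watts in gear)
    print("Estimated draw: %d W of %d W allowed (%.0f%%)"
          % (total, RACK_LIMIT_WATTS, 100.0 * total / RACK_LIMIT_WATTS))
    if total > RACK_LIMIT_WATTS:
        print("Over the standard allowance - expect an extra charge or a second rack.")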

Will add updates if anything else of use comes along.

[/tech/datacenter] | [permalink] | [2007.07.04-03:49.00]

Jan 30, 2007

Datacenter Research

There seems to be very little information on picking a datacenter to host your infrastructure servers - plenty of information on colocation, website hosting etc, but not a whole lot to help you pick someone to entrust with your core kit.

If you're interested in a DIY data-center these sites contain some useful information:

* Good guidelines (if a little dated) for Data-center requirements (cooling, power, security, connectivity and staffing/accessibility)

* Data-center Resource Site

* Sun's guide to Planning a Data-center

Some great remote management kit is available too - remote-control your systems via a web browser:

* Raritan 64 Port IP KVM - even allows you to dial in via modem in an emergency when your LAN link dies

* Raritan 20 Port Power Strip

* OpenGear 8 Port Serial Console Server - make out-of-band adjustments to your switches, Unix and SAN gear (a small access sketch follows)
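
If you do end up with a console server, it's handy to be able to script against it. The sketch below assumes the common (but not universal) convention of exposing one raw TCP port per serial line - check your unit's manual; the host address and port numbers here are placeholders:

    #!/usr/bin/env python3
    """Grab a login banner from devices hung off a serial console server.
    Assumes one raw TCP port per serial line; all values are placeholders."""
    import socket

    CONSOLE_SERVER = "10.0.10.50"
    SERIAL_PORTS = {"core-switch": 3001, "san-controller": 3002}

    def read_banner(host, port, timeout=5):
        """Open the raw TCP port, send a carriage return, return whatever comes back."""
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            s.sendall(b"\r\n")
            try:
                return s.recv(4096).decode(errors="replace")
            except socket.timeout:
                return "(no response)"

    if __name__ == "__main__":
        for name, port in SERIAL_PORTS.items():
            print("== %s ==" % name)
            print(read_banner(CONSOLE_SERVER, port))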

[/tech/datacenter] | [permalink] | [2007.01.30-01:23.00]