Jul 12, 2007
Mobile Extension & Teleworker
We put in two key MiTel servers this week - Mobile Extension and Teleworker.
Mobex lets you twin your internal phone extension to any other phone number (usually a mobile phone, but it could be an analog phone) - it's like a fancy phone forward. Essentially the Mobex server creates a conference call between the two phones, so at any time you can transfer or pick up the call on the other twinned phone. Very useful for travelling staff - it also means you can publish a single number on your business card that will reach you wherever you are.
Teleworker lets you remote-boot a VoIP phone from anywhere on the internet - ideal for people working from home or at a co-located site over a broadband connection. The phone itself does the QoS (your PC connects via the phone), so it will always prioritise voice traffic over data if you're in a call. Your phone works exactly the same as an internal extension, you can associate it with any PABX controller and you can even get a local analog breakout module to allow local calls.
Interestingly both Mobex and Teleworker are based on CentOS (a RedHat derivative) and act as appliances - most configuration is done via a web interface. A bit of a departure from MiTel's other add-on application servers, which are primarily Windows based.
[/tech/network] | [permalink] | [2007.07.12-19:01.00]
Jul 04, 2007
New Datacenter Established
So we installed our first datacenter 'beach-head' last week. It was actually mostly painless - due to all the advance prep work put in over the previous months. We have a 'feed & water' hosting contract so we own all our gear but our host looks after the power and environmentals (including a certain number of tape-changes).
Our initial 'beach-head' consisted of a diverse fibre data connection (100Mb), a router, an out-of-band management switch (for the IP-KVM & ILO interfaces), a data switch (separate vlans for data & SAN traffic), a firewall (even though it's all internal - traffic falls into different security zones to keep the auditors happy) and a domain controller. We'll supplement this with our prod SAN, a bunch of app & database servers, our backup server and tape drive, plus another telco comms circuit.
Some interesting tips if you're thinking of shipping gear offsite -
- If you're in a metro area diverse fibre is cheap and fast (two leads into the building coming in from different directions going via different physical circuits).
- Set up your equipment as if it were off-site - spin off a vlan at your existing location to simulate the entire off-site network so you can fully test everything before sending it off-site. That way you won't be changing IP addresses and spending the next few hours re-establishing your connectivity because you missed something.
- Label up absolutely everything and note down all the interfaces and port connections. Keep track of this information in a spreadsheet or Visio diagram so you can talk to your host site's engineers should they need to troubleshoot anything on your behalf.
- If you're allowed (many hosts require you to leave your phone, PDA or camera at the door), take a bunch of photos to complement your diagrams.
- Most datacenters have a colour-code for their cables - make sure you follow it or specify they stick to your existing scheme.
- Your host will have engineers who can rack and cable everything up much more tidily than you could, so leave them to it. As long as you tell them where you want stuff they'll take care of the rest. In fact, get them your rack layout in advance and they may even have some suggestions about what to put where.
- Unless you're filthy rich, run all your management traffic (IP KVM and ILO) through another switch (a good use for all those old non-PoE 10/100Mb Ciscos) and put your server data & SAN traffic through a good non-blocking switch (we went with a Cisco 4948 as a big Catalyst enterprise chassis would have been overkill). Ideally we'd have two switches for redundancy and multi-pathing, but the cost would have been prohibitive and, let's face it, a $10 power supply on a media converter is more likely to die than a $15k switch.
- IP KVMs are cool and supplement ILO/LOM (Integrated Lights Out/Lights Out Management) - if you move to a totally hands-off approach to server provisioning you can get hardware delivered straight to the datacenter and hooked up to the KVM, then configure the rest remotely. In fact IBM's RSA II ILO card even lets you boot off a file or a remote CD.
- You can pick up a multi-port serial adaptor fairly cheaply - stick it into your management server and hook up your switch and SAN console ports for an extra level of low-level access.
- Diesel goes 'stale' - make sure your host cycles their tanks regularly, in addition to running regular generator and UPS tests.
- Don't forget to phase your deployment - start small and allow time to bed down your infrastructure. There's no point throwing lots of critical gear out in the initial push only to discover a crappy patch lead causes you grief after a couple of days - make sure the basics work well before sending application servers offsite!
- Most hosts will charge by the rack - make sure you think carefully about what you send to the datacenter. It might be a good opportunity to consolidate your servers. If you have lots of blades (or storage arrays) you may get hit up for more $$$ as they really suck down power. As your rack fills the host will take regular measurements of the amount of power you're pulling down - if you exceed the 'draw' for a standard rack you may be charged extra.
- If you tour the datacenter make sure it has all the good stuff you'd want out of a custom built server hosting facility - hot & cold aisles (so the hot air from one rack doesn't get sucked into the opposite rack), iso-base earthquake damping (nothing like watching the rack jiggle), raised floors, 2+1 (two units plus a spare) redundancy for power, aircon, adequate filtering, UPS, comms etc.
- Be sure to go over the financials with a fine-tooth comb - you'll find some variation in price and in what is and isn't included. If you're anything like us you'll find the host with the simplest pricing schema is often the best.
- It's interesting to look for little things that make life easier - for example a separate tape library room off the main server room. This means datacenter operators can do their tape changes without having to go anywhere near the servers themselves (we switched from SCSI to fibre channel to accommodate the 12m cable run from the backup server to the tape drive). Another hosting provider was looking at rack hoods for blade servers to ensure the air flow wasn't dissipated.
- Look out for procedural aspects of datacenter operation that may affect how you currently do things. For example, does the datacenter have existing relationships with archive companies so you can cycle your tapes to and from offsite storage? Do they have a relationship with a specialist courier for shipping IT gear? Do they have an acclimatisation period for new gear (some like 12 hours for new kit to adjust to the datacenter temperature & humidity) before they rack it and power it up? Do you need to put contractors on an authorised access list for the site?
- Zoning your internal network seems to be popular with the auditors - use different firewall NICs to access different parts of your LAN and lock down the rules. We're starting with a very simple configuration - we've split out our management, data and telco traffic. When we shift our DMZ out there we'll add another zone. We'll also have an inter-datacenter circuit, primarily for SAN replication to our DR/UAT site (due to earthquake risk most NZ datacenters have a presence in a couple of different locations). A recent external security assessment recommended fourteen different zones, which was frankly insane for an organisation our size, so we'll start small - see the sketch below.
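Just to illustrate the zoning idea (our actual firewall rules are obviously different, and the interface names, addresses and ports below are made up for the example), a minimal three-zone split expressed as Linux iptables rules might look something like:

# eth0 = management zone (ILO/KVM), eth1 = data zone, eth2 = telco zone - illustrative only
iptables -P FORWARD DROP
# let replies flow back for anything we've explicitly allowed
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
# admin hosts in the data zone may reach the ILO/KVM web interfaces in the management zone
iptables -A FORWARD -i eth1 -o eth0 -p tcp --dport 443 -j ACCEPT
# the telco zone may only talk SIP to the PABX in the data zone
iptables -A FORWARD -i eth2 -o eth1 -p udp --dport 5060 -d 10.2.1.10 -j ACCEPT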
Will add updates if anything else of use comes along.
[/tech/datacenter] | [permalink] | [2007.07.04-03:49.00]
Jun 28, 2007
Annotated Blosxom
If you're a fan of Perl or Blosxom you'll appreciate the fully annotated Blosxom script by Rob Reed.
[/tech/perl/blosxom] | [permalink] | [2007.06.28-10:41.00]
Jun 21, 2007
Self-Provisioning IT
One of the annoyances for any IT group is dealing with the trivia of BAU (Business As Usual) tasks. Anything that makes BAU more bearable definitely falls into 'Killer App' territory.
I hardly ever read ComputerWorld but when I do I usually come across at least one decent article and this one in particular caught my eye - Self-provisioning helps Warehouse Stationery save time.
What's cool is that they're utilising a Kiwi company's technology - Activate.
Check out some of the demos - the workflows match up with a number of common operational tasks performed by Helpdesks and Admins everywhere.
What's funny is that we've initiated an in-house MACs (Moves, Adds & Changes) project that will probably end up re-inventing the wheel with respect to some of this stuff, using custom code rather than an off-the-shelf product.
[/tech] | [permalink] | [2007.06.21-00:40.00]
Jun 08, 2007
Another ESX Advantage
Built in clustering & high availability!
Even if you have an application which is CPU hungry (say Exchange) you can drop it into an ESX farm and allocate it resources equivalent to a single physical server - then, if you leverage VMotion and High Availability and the physical host running Exchange fails, the VM can be migrated (manually or automatically) to another physical server with minimal downtime. In the old days, if the server failed catastrophically you potentially needed to re-install and restore from backup (assuming you had spare hardware available).
[/tech/virtual] | [permalink] | [2007.06.08-01:31.00]
Jun 07, 2007
Mac Uptime (Updated 01/06/07)
I finally got around to applying the 10.4.9 update to my MacBook. Just before my reboot the uptime displayed:
17:17 up 65 days, 9:40, 2 users, load averages: 0.21 0.17 0.28
Pretty good stability for a laptop (or a computer of any kind). I could have kept going, as performance seemed the same as it did when I'd last rebooted 65 days earlier.
My MacBook has been the fastest and most stable Mac I've ever owned. Highly recommended if you're in the market for a non-Windows laptop.
[/tech/mac] | [permalink] | [2007.06.07-19:07.00]
May 29, 2007
LINA
Looks interesting - LINA.
With LINA, a single executable written and compiled for Linux can be run with native look and feel on Windows, Mac OS X, and UNIX operating systems.
Check the demo video - it's a bit geeky but you get a feel for how it works.
[/tech/unix] | [permalink] | [2007.05.29-06:37.00]
May 26, 2007
NetApp Volumes & LUNs
A good guide is to place LUNs into Volumes grouped by similar Snapshot or SnapMirror regimes (as this functionality occurs at the Volume level). I think the techy term for grouping LUNs in this way is a 'Consistency Group' - anything you need to get Snap'd together should be kept in the same Volume.
Another thing I picked up is that when allocating space for LUNs, be sure to allocate twice the space you need to allow for Snapshots. This space requirement supersedes the default 20% reserve allocated at the Volume level. For LUN-based Snapshots the agent software on the host itself (eg SnapDrive for Windows or SnapManager for Exchange) manages the Snapshot - it interacts with the SAN to ensure this happens properly, but the SAN itself has no knowledge of what's inside the LUN.
What this means is that if every block in the LUN changes you need at least as much space again for the Snapshot or you'll get a disk-space error. It's unlikely this would occur - one situation where it might is a drive defragment that touches every block.
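As a rough sketch of the sizing (7-mode console commands; the names and sizes are made up) - a 100Gb LUN goes into a 200Gb volume, with the default snapshot reserve zeroed out because the LUN's snapshot headroom comes from the doubled volume size instead:

vol create vol_sql aggr0 200g
snap reserve vol_sql 0
lun create -s 100g -t windows /vol/vol_sql/sql.lun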
[/tech/storage] | [permalink] | [2007.05.26-00:17.00]
May 17, 2007
'Toaster' Mailing List
If you're considering getting a SAN or already have one you should check out the Toasters mailing list. I've searched the interweb for equivalent lists for HDS and EMC but haven't found anything comparable (that isn't actually hosted by the vendor itself).
It's completely independent of NetApp but is an excellent place to ask questions or search for answers in the list archives.
A good overview of the list is here.
[/tech/storage] | [permalink] | [2007.05.17-06:12.00]
May 16, 2007
Solaris + iSCSI + NetApp
When you get down to the nitty gritty of configuring an iSCSI connection there's not actually a nice guide to getting it done. Sure, there are plenty of docs and white-papers on the topic but many of them are either too detailed or not detailed enough (NetApp, Sun and Microsoft are all guilty of making something which should be simple more difficult than it needs to be).
From a Solaris perspective there are a couple of really good guides that fill in the blanks between the Solaris & NetApp documentation:
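As a bare-bones sketch of the flow (assuming the Solaris 10 software initiator and a 7-mode filer; the names, IQNs and addresses below are all made up):

# on the filer - create an igroup for the Solaris host's initiator and map the LUN to it
igroup create -i -t solaris ig_sol01 iqn.1986-03.com.sun:01:sol01
lun map /vol/vol_ora/ora.lun ig_sol01 0
# on the Solaris host - find your IQN, point the initiator at the filer, enable discovery and rescan
iscsiadm list initiator-node
iscsiadm add discovery-address 10.1.1.20:3260
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi
format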
[/tech/storage] | [permalink] | [2007.05.16-23:22.00]
NetApp Backup Idea
A cunning way to do your SAN backups:
Schedule a job to mount your LUNs on the backup server and back up a Snapshot to tape from there. It requires a bit of scripting and tweaking, but it should provide much more flexibility than trying to back up each server individually.
That way you can avoid being reamed by backup software vendors on a per-host basis. You may still opt to do an NTBackup to file for servers and applications, but the databases will reside on the SAN and get backed up to tape.
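One way the nightly job could hang together (a sketch only - it assumes a 7-mode filer and a LUN clone presented to an igroup for the backup server; every name below is made up):

# snapshot the volume, clone the LUN out of the snapshot and map the clone to the backup server
snap create vol_sql nightly
lun clone create /vol/vol_sql/sql.lun.bkp -b /vol/vol_sql/sql.lun nightly
lun map /vol/vol_sql/sql.lun.bkp ig_backupsrv 1
# the backup server rescans its disks, mounts the clone and streams it to tape - then tidy up
lun offline /vol/vol_sql/sql.lun.bkp
lun destroy /vol/vol_sql/sql.lun.bkp
snap delete vol_sql nightly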
[/tech/storage] | [permalink] | [2007.05.16-22:32.00]
Aggregates, Volumes and LUNs
I'm not a storage person so it took me a while to get my head around the terminology. I suspect Sysadmins who host databases get harassed by their DBAs about this stuff on a regular basis and as a result are much more intimately acquainted with it than I am. One feature that helps the Sysadmin stay out of DBA-initiated RAID-config-hell is that DataOnTap (the NetApp OS) only supports RAID 4 or RAID DP (similar to RAID 6) - note that the 'D' in DP is for 'Diagonal' not 'Dual'.
In NetApp land -
An Aggregate is a collection of disks - this is fairly straightforward. One thing to remember is that every aggregate loses disks to parity (one per RAID group for RAID 4, two for RAID DP) - fine if you have multiple shelves of disks or groups of disks of different capacity (eg one aggregate of seven 15k 72Gb disks and another of seven 10k 300Gb disks) but not really needed if you have only a single shelf with all disks the same. I guess there are plenty of reasons you might want different aggregates on a single shelf, but if you're not doing anything fancy you may as well stick with one. It's easy enough to expand an aggregate by adding disks but it's not easy to downsize one.
A Volume is a chunk of space within an aggregate - note that by default the DataOnTap OS lives in vol0 within aggr0. Don't mess with vol0! If you're doing CIFS or NFS sharing (ie NAS-type functionality) then you'd dish up your shares at the volume level. Volumes come in two types - the old-style TradVolume and the newer FlexVolume - unless you have a special requirement you're better off with a FlexVolume, which lets you resize it on the fly.
A LUN is a SCSI term (Logical Unit Number) referencing an individual disk. From a NetApp iSCSI point of view (probably Fibre Channel too) a LUN is a big file that exists in a Volume. From a server perspective the LUN is visible as a raw disk device to make merry with. A LUN can be easily resized (up or down) but be aware that the NetApp has no inherent knowledge of what's inside a LUN - this has implications for Snapshots - if you need to Snap a LUN's contents you'll want SnapDrive and/or SnapManager (depending on the app using the LUN), which act as agents on the server (which does understand the LUN's contents) and coordinate the Snapshot with the NetApp.
In terms of layout we were initially tempted to create a Volume per application (eg Exchange, Oracle, SQL etc) with multiple LUNs inside each volume. We're now looking at a Volume per LUN, as this will give us more flexibility in terms of Snapshots & SnapMirroring (or restores from backup).
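To make the hierarchy concrete, here's roughly what the whole chain looks like from the filer console (a sketch only - 7-mode commands, with the sizes, names and initiator IQN made up):

# 14-disk RAID-DP aggregate, a FlexVolume carved out of it, and a LUN (just a big file) inside that
aggr create aggr1 -t raid_dp 14
vol create vol_exch aggr1 200g
lun create -s 100g -t windows /vol/vol_exch/exch.lun
# present the LUN to the Exchange host's iSCSI initiator
igroup create -i -t windows ig_exch iqn.1991-05.com.microsoft:exch01
lun map /vol/vol_exch/exch.lun ig_exch 0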
[/tech/storage] | [permalink] | [2007.05.16-22:00.00]
Teaming your NetApp NICs
We bit the bullet and bought two NetApp 'toasters' - a FAS270 for our DR/UAT site and a FAS270c ('c' for clustered) for our Prod site.
For 2Tb of storage apiece they were actually pretty cheap - we'll SnapMirror between the two sites, use the SnapManager tools for SQL, Oracle and Exchange, and use iSCSI as our transport medium (Fibre Channel is too expensive and complicated, although we will use it between our backup server and tape drive).
So I'll be collecting some tips here as we put real apps on these systems.
You can team your NICs in a couple of ways - either single or multi mode. Single mode is purely for failover - one connection dies and the other picks up the connection. Multi mode provides failover, link aggregation and load-balancing.
If you have more than two NICs you can do this via the web interface; if you've only got two then you'll have to use the console (obviously you can't reconfigure a NIC if it's already being used for something else; eg if you 'ifconfig e0 down' you'll lose the connectivity you need to configure the trunking).
To create a multi trunk virtual interface called multitrunk1 with e0 and e1 issue the following command on the console:
vif create multi multitrunk1 e0 e1
Then to configure it do the usual:
ifconfig multitrunk1 [ip address] netmask [netmask address]
And you can bring it up or down in the same way as any other interface.
One important point to note: if you do this from the console, be sure to update /etc/rc & /etc/hosts to reflect the vif or you'll lose the interface after a reboot. The web interface does write this info to these files, but it's worth double-checking that the updates have been made.
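For what it's worth, the relevant /etc/rc entries end up looking something like this (the hostname, address and netmask here are made up):

hostname toaster1
vif create multi multitrunk1 e0 e1
ifconfig multitrunk1 `hostname`-multitrunk1 netmask 255.255.255.0
route add default 10.1.1.254 1

With a matching entry in /etc/hosts:

10.1.1.50 toaster1-multitrunk1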
[/tech/storage] | [permalink] | [2007.05.16-21:41.00]
ESX 3 Network Reconfig
From here - Changing the IP address of service console in ESX 3.x:
esxcfg-vswif -a vswif0 -p Service\ Console -i 10.1.1.1 -n 255.255.255.0 -b 10.1.1.255
And don't forget to set the correct gateway in /etc/sysconfig/network or the command to configure the virtual switch interface will hang.
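For reference, the gateway lives in the usual RedHat-style file, so /etc/sysconfig/network ends up looking something like this (the hostname and address are made up):

# /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=esx01.example.local
GATEWAY=10.1.1.254

Then bounce the console networking (or reboot) for it to take effect:

service network restart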
ESX is very cool and they've made it pretty compelling in terms of a step up from the free Server (and older GSX) versions. It includes user ACLs, virtual switching, a more efficient hypervisor (the RedHat 7.2 base upon which ESX is built is stripped to the bare bones) and more granularity in terms of resource allocation. One of the things that isn't made very clear is that if you want to leverage some of the bells & whistles (eg High Availability, VMotion, backup, centralised licensing) you'll need a SAN (or NAS in a pinch) plus another box for the management server - ideally physical, although it could be virtual (obviously you can't do HA or VMotion if the ESX instance hosting the management box dies though!).
[/tech/virtual] | [permalink] | [2007.05.16-21:14.00]
Apr 04, 2007
Switch UPS
Wellington's had some ups and downs with respect to power in the central city over the last few months - just before Christmas half the Terrace was knocked offline for 4 hours, and a few weeks ago we had rolling power spikes for an afternoon.
For a major outage there's not a whole lot you can do other than having a really good UPS on your core servers (or hosting in a datacenter) - the spikes generally aren't a problem for your server room, as the UPS will condition the power.
Most PCs actually handle spikes quite well too - the kicker is that your distributed switches will either reboot or pass on any spike to your Power over Ethernet equipment - which means that as well as the lights dimming, your VoIP phones cut out and reboot (ditto your Wireless Access Points if they also use PoE).
Now you start to think about some UPSs to cover your distributed switching gear (if you have the luxury of structured cabling all the way back to your server room then you're really lucky!).
From an expert (not me!) -
"A general rule of thumb is that no UPS should be loaded more than 70% to 80% of full capacity to minimise the risk of compromising the protection due to unplanned or temporary overloads. In my calculations I have divided the total load by 0.8 to give a 20% headroom. It is then necessary to establish the VA rating of the UPS. Our UPS's suited to this application have a 0.7 output power factor, so the total watts requirement is then divided by 0.7."
So if you have four 24 port Catalyst 3550 switches and a PowerDsine PoE Injector -
Cisco Catalyst 3550-24-PWR: 525W x 4 = 2100W
PowerDsine POE Injector: 525W x 1 = 525W
Total Load: 2625W
Total UPS Watts Requirement with Headroom Allowance (/ 0.8): 3281
Minimum VA rating of UPS (/0.7): 4687
Which equates to a sizeable UPS. A 6kVA unit will last about 15min under full load, but it's primarily there for power conditioning and to buy a little time to cut phones over to another location in the event of a power cut.
On the subject of Power Conditioning versus a UPS - again more expert opinion -
"Power conditioners were commonly used for protecting against brownouts - they would hold the voltage up for a few cycles. However off-line or line-interactive UPS's have now become lower priced than power conditioners and do the job adequately in virtually all cases. An on-line UPS regenerates the AC power so it is always perfect and constant irrespective of the incoming power."
"Most UPS's have spike protection too. However this is minimal and may become exhausted with one or two spikes, and there is no indication of this. If spikes are a special concern then dedicated surge diverters with a good practical surge capacity and low surge let-through voltage plus status indication."
So watch your switches :-)
[/tech/power] | [permalink] | [2007.04.04-00:38.00]