
May 16, 2007

Solaris + iSCSI + NetApp

When you get down to the nitty-gritty of configuring an iSCSI connection there isn't actually a nice guide to getting this done. Sure, there are plenty of docs and white-papers on the topic, but many of them are either too detailed or not detailed enough (NetApp, Sun and Microsoft are all guilty of making something which should be simple more difficult than it needs to be).

From a Solaris perspective there are a couple of really good guides that fill in the blanks between the Solaris & NetApp documentation:

* OpenSolaris and iSCSI: NetApp Makes it Easy

* iSCSI Examples
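
To give a flavour of the Solaris 10 side, the initiator setup boils down to a handful of iscsiadm commands - point the initiator at the filer's iSCSI portal, enable SendTargets discovery and rebuild the device nodes (the filer address below is just a placeholder):

iscsiadm add discovery-address 10.1.1.5:3260
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi
iscsiadm list target

After that the LUN should show up in format like any other disk - assuming, of course, that the LUN has already been created on the filer and mapped to an igroup containing the host's initiator node name.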

[/tech/storage] | [permalink] | [2007.05.16-23:22.00]

NetApp Backup Idea

A cunning way to do your SAN backups:

Schedule a job to mount your LUNs on the backup server and back up a SnapShot to tape from there. It requires a bit of scripting and tweaking, but it should provide much more flexibility than trying to back up each server individually.

That way you can avoid being reamed by backup software vendors on a per-host basis. You may still opt to do an NTBackup to file for servers and applications, but the databases will reside on the SAN and get backed up to tape.
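
Roughly speaking, the filer end of that scheduled job might look something like the following - the volume, LUN and igroup names here are entirely made up, and for application-consistent Snaps you'd let SnapManager drive this instead:

snap create vol_sql pretape
lun clone create /vol/vol_sql/sql_pretape.lun -b /vol/vol_sql/sql.lun pretape
lun map /vol/vol_sql/sql_pretape.lun backupsrv 10

The backup server then mounts the mapped clone, streams it to tape and unmaps/destroys the clone when it's done.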

[/tech/storage] | [permalink] | [2007.05.16-22:32.00]

Aggregates, Volumes and LUNs

I'm not a storage person so it took me a while to get my head around the terminology. I suspect Sysadmins who host databases get harassed about this stuff by their DBAs on a regular basis and as a result are much more intimately acquainted with it than I am. One feature that helps the Sysadmin stay out of DBA-initiated RAID-config-hell is that DataOnTap (the NetApp OS) only supports RAID 4 or RAID DP (similar to RAID 6) - note that the 'D' in DP is for 'Diagonal' not 'Dual'.

In NetApp land -

An Aggregate is a collection of disks - this is fairly straightforward. One thing to remember is that for every aggregate you lose a disk to parity data - fine if you have multiple shelves of disks or groups of disks of different capacities (eg you might make one aggregate of 7 15k 72GB disks and another aggregate of 7 10k 300GB disks) but not really needed if you have only a single shelf with all disks the same. I guess there are plenty of reasons you might want different aggregates on a single shelf, but if you're not doing anything fancy you may as well stick with one. It's easy enough to expand an aggregate by adding disks, but it's not easy to downsize an aggregate.
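
For reference, creating and growing an aggregate from the console is a one-liner each - the aggregate name and disk counts below are just examples:

aggr create aggr1 7
aggr add aggr1 3
aggr status -r aggr1

The last command shows the RAID layout, so you can see where your parity disks went.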

A Volume is a chunk of space within an aggregate - note that by default the DataOnTap OS lives in vol0 within aggr0. Don't mess with vol0! If you're doing CIFS or NFS sharing (eg NAS-type functionality) then you'd dish up your shares at the volume level. Volumes come in two types - the old-style TradVolume and the newer FlexVolume - unless you have a special requirement you're better off with a FlexVolume, which lets you resize it on the fly.
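
Creating and resizing a FlexVolume looks something like this (the volume name, aggregate and sizes are placeholders):

vol create vol_sql aggr1 200g
vol size vol_sql +50g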

A LUN is a SCSI term (Logical Unit Number) for referencing an individual disk. From a NetApp iSCSI point of view (and probably Fibre Channel too) a LUN is a big file that lives in a Volume. From a server perspective the LUN is visible as a raw disk device to make merry with. A LUN can be easily resized (up or down), but be aware that the NetApp has no inherent knowledge of what's inside a LUN - this has implications for SnapShots. If you need to Snap a LUN's contents you'll want SnapDrive and/or SnapManager (depending on the app using the LUN), which acts as an agent on the server (which does understand the LUN's contents) to initiate a consistent Snap on the NetApp.
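
Carving out a LUN and presenting it over iSCSI is also only a few commands - the paths, size, igroup name and initiator node name below are hypothetical:

lun create -s 150g -t windows /vol/vol_sql/sql_data.lun
igroup create -i -t windows sqlsrv1 iqn.1991-05.com.microsoft:sqlsrv1
lun map /vol/vol_sql/sql_data.lun sqlsrv1 0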

In terms of layout we were initially tempted to create a Volume per application (eg Exchange, Oracle, SQL etc) with multiple LUNs inside each volume. We're now looking at a Volume per LUN as this will give us more flexibility in terms of Snapshots & SnapMirroring (or restoring from Backup).

[/tech/storage] | [permalink] | [2007.05.16-22:00.00]

Teaming your NetApp NICs

We bit the bullet and bought two NetApp 'toasters' - a FAS270 for our DR/UAT site and a FAS270c ('c' for clustered) for our Prod site.

For 2TB of storage apiece they were actually pretty cheap - we'll SnapMirror between the two sites, we'll use the SnapManager tools for SQL, Oracle and Exchange, and iSCSI as our transport medium (Fibre Channel is too expensive and complicated, although we will use it between our backup server and tape drive).

So I'll be collecting some tips here as we put real apps on these systems.

You can team your NICs in a couple of ways - either single or multi trunk mode. Single mode is purely for failover - if one connection dies the other will pick up the connection. Multi mode provides failover, link aggregation and load-balancing.

If you have more than two NICs you can do this via the web interface; if you've only got two then you'll have to use the console (obviously you can't reconfigure a NIC if it's already being used for something else; eg if you 'ifconfig e0 down' you'll lose your connectivity to configure the trunking).

To create a multi trunk virtual interface called multitrunk1 with e0 and e1 issue the following command on the console:

vif create multi multitrunk1 e0 e1
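
If all you want is failover, a single-mode vif is presumably created the same way with 'single' in place of 'multi' (the interface name here is made up):

vif create single singletrunk1 e0 e1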

Then to configure it do the usual:

ifconfig multitrunk1 [ip address] netmask [netmask address]

And you can bring it up or down in the same way as any other interface.

One important point to note is that if you do this from the console, be sure to update /etc/rc & /etc/hosts to reflect the vif or you'll lose the interface after a reboot. The web interface does write this info to these files, but it's worth double-checking that the updates have been made.
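
For what it's worth, the relevant lines in /etc/rc would look roughly like this (the hostname and IP are placeholders), with a matching entry for that address in /etc/hosts:

hostname toaster1
vif create multi multitrunk1 e0 e1
ifconfig multitrunk1 10.1.1.10 netmask 255.255.255.0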

[/tech/storage] | [permalink] | [2007.05.16-21:41.00]

ESX 3 Network Reconfig

From here: Changing the IP address of service console in ESX 3.x:

esxcfg-vswif -a vswif0 -p Service\ Console -i 10.1.1.1 -n 255.255.255.0 -b 10.1.1.255

And don't forget to set the correct gateway in /etc/sysconfig/network or the command to configure the virtual switch interface will hang.
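
A minimal /etc/sysconfig/network, assuming the 10.1.1.x addressing from the example above, would look something like:

NETWORKING=yes
HOSTNAME=esx01.example.com
GATEWAY=10.1.1.254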

ESX is very cool and they've made it pretty compelling as a step up from the free Server (and older GSX) versions. It includes user ACLs, virtual switching, a more efficient hypervisor (the RedHat 7.2 upon which ESX is based is stripped to the bare bones) and more granularity in terms of resource allocation. One of the things that isn't made very clear is that if you want to leverage some of the bells & whistles (eg High Availability, VMotion, Backup, centralised licensing) you'll need a SAN (or NAS in a pinch) and another box for the management server - ideally physical, although it could be virtual (obviously you can't do HA or VMotion if the ESX instance hosting the management box dies though!).

[/tech/virtual] | [permalink] | [2007.05.16-21:14.00]