Wednesday, February 16, 2011

Netapp: Upgrading 3040 to 3240 by creating "twin" MetroCluster

This is not any kind of complete configuration guide, just a story about the upgrade process and some information on how we did it.

So the case was that we had a MetroCluster with 3040 heads. The new 3240 heads had come out, and since the 3040 was old enough, we decided to replace it with new ones.

First we thought: "OK, we can just take over one side -> swap the head -> give it back to the new head and we're good to go." Well, after reading some documentation it was pretty clear that you can't do this. I'm not sure of the exact reason, but at least the 3240 has newer NVRAM, and a 3040 and a 3240 can't sync each other's NVRAM (and MetroCluster heads are generally supposed to be pretty much identical hardware, with the same modules and so on). The problem was that we have a whole lot of servers using this storage and (big surprise) our boss didn't like the total-blackout idea, so we had to figure something else out. I wonder how big companies usually do head upgrades in the first place(?)

PUFF and we got an idea.. ;) Just build a new MetroCluster and share the backend switches that the old MetroCluster is already using. We asked NetApp about this, and they said that a twin-fabric MetroCluster is supported (where each MetroCluster has its own backend switches). Twin-fabric MetroClusters sharing the same backend switches should be supported later this year -- or something like that, at least..?

So, a non-supported feature. And the twin-fabric support is only "available" when both MetroClusters use the same head types, so our 3040 + 3240 twin won't be supported. Ever(?) :) One option was to buy new backend switches for the new MetroCluster, but since we didn't need any extra gear, we just wanted to save the money.

Remember: NON-SUPPORTED. Use with caution; if it fails, it fails and nobody will help you :)

To start with, we had two brand-new shelves, so we used those to build both sides up locally (one local shelf per site to install ONTAP and build the root aggregate). This included all the basic setup.

The next step was to build zoning for the backend (it's pretty much recommended anyway, so this was a good thing to do). Basically you have:
  • 4 switches
  • 1 port for ISL (E-port)
  • 1 port for FCVI per switch
  • 1-2 ports per switch used by the NetApp head itself for reaching the shelves
  • X ports per switch for the shelves themselves
To do zoning, you want to separate the FCVI connections from the NetApp heads + shelves (= 4 zones).
No zoning for ISL ports at all.

The Brocade configuration could look something like this:

zonecreate "FCVI_3040", "1,0; 2,0"
(The first number (1) is the domain ID and the second (0) is the port number. The zone is fabric-wide, so it will automatically propagate to the partner switch as well. You have to define the partner's ports in the same zone. You can find domain IDs with the 'switchshow' command.)

zonecreate "FCVI_3240", "1,1; 2,1"
zonecreate "STORAGE_3040", "1,3; 1,4; 2,3; 2,4"
zonecreate "STORAGE_3240", "1,5; 1,6; 2,5; 2,6"
cfgcreate "SAN_FABRIC1", "FCVI_3040; FCVI_3240; STORAGE_3040; STORAGE_3240"
cfgsave
cfgenable "SAN_FABRIC1"

That should do it.
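It doesn't hurt to verify the result on the switch afterwards. These are standard Brocade FOS show commands; the zone name is just the one from the example above:

```
switch1:admin> cfgshow
switch1:admin> zoneshow "FCVI_3040"
switch1:admin> switchshow
```

'cfgshow' shows the defined and effective configurations, 'zoneshow' lists the members of a single zone, and 'switchshow' shows the domain ID and port states (this is also where the domain IDs used in the zone definitions come from).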

When zoning is done, power up both heads and check that they are OK and can see both local and partner disks (e.g. disk show, sysconfig -r, storage show disk -p and so on...). You need to assign both the local disks and the disks you are going to use for mirroring the root aggregate. You can see unassigned disks with the 'disk show -n' command.

To assign a disk, just type: disk assign switch2:5.16 -p 1 -s <systemid>
This assigns port 5, disk 16 (on switch2) to pool 1 with the system ID you specify. You can find the system ID e.g. by typing 'sysconfig'. After that it will automatically assign all the disks in pool 1 (this might take a minute or two).
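Put together, one assignment round could look something like this (a sketch; the disk name is from the example above and the system ID 0151234567 is a made-up placeholder):

```
toaster1> sysconfig
toaster1> disk show -n
toaster1> disk assign switch2:5.16 -p 1 -s 0151234567
toaster1> disk show -v
```

'sysconfig' gives you the system ID of the head that should own the disks, and 'disk show -v' afterwards lets you verify that ownership and pool membership look right.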

When the disks are assigned, check that everything looks OK, e.g. with sysconfig -r:

toaster1> sysconfig -r
Aggregate aggr0 (online, raid_dp, mirrored) (block checksums)
  Plex /aggr0/plex0 (online, normal, active, pool0)
    RAID group /aggr0/plex0/rg0 (normal)

      RAID Disk Device                  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------                  ------------- ---- ---- ---- ----- --------------    --------------
      dparity   switch2:13.16   0d    1   0   FC:A   0  FCAL 15000 560000/1146880000 560208/1147307688
      parity    switch1:13.17   0c    1   1   FC:B   0  FCAL 15000 560000/1146880000 560208/1147307688
      data      switch1:13.18   0c    1   2   FC:B   0  FCAL 15000 560000/1146880000 560208/1147307688
      data      switch1:13.28   0c    1   12  FC:B   0  FCAL 15000 560000/1146880000 560208/1147307688


  Plex /aggr0/plex6 (online, normal, active, pool1)
    RAID group /aggr0/plex6/rg0 (normal)

      RAID Disk Device                  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------                  ------------- ---- ---- ---- ----- --------------    --------------
      dparity   switch1:10.16    0c    1   0   FC:B   1  FCAL 15000 560000/1146880000 560208/1147307688
      parity    switch2:10.17    0d    1   1   FC:A   1  FCAL 15000 560000/1146880000 560208/1147307688
      data      switch2:10.28    0d    1   12  FC:A   1  FCAL 15000 560000/1146880000 560208/1147307688
      data      switch2:10.27    0d    1   11  FC:A   1  FCAL 15000 560000/1146880000 560208/1147307688

And:

toaster1> storage show disk -p
PRIMARY               PORT  SECONDARY             PORT SHELF BAY
--------------------- ----  --------------------- ---- ---------
switch1:10.16   B    switch2:10.16   A     1    0
switch2:10.17   A    switch1:10.17   B     1    1
switch1:10.18   B    switch2:10.18   A     1    2
switch2:10.19   A    switch1:10.19   B     1    3
switch2:10.20   A    switch1:10.20   B     1    4

When everything seems to be OK, you can create the mirror.

Just type: aggr mirror aggr0 

It should do it automatically. You can use 'aggr mirror aggr0 -n' if you want to simulate what the system is going to do (a good thing to check).

And to complete the MetroCluster, just enable clustering (you need the cf and cf_remote licenses to do this).
Type: cf enable

So basically it's all the same as building a normal MetroCluster. Just do the zoning and it should work.

And when the new MetroCluster is up, the plan is simply to migrate servers from the old MetroCluster to the new one. When one shelf (aggregate) is empty, just remove the shelf, plug it into the new MetroCluster, and keep going until the migration is done. Maybe a slow way to upgrade, but at least you can minimize downtime a bit.

Thursday, February 10, 2011

Netapp: registry settings lost after reboot

If you're using a FAS32x0 model and ONTAP 8.0.1, there's a possibility of losing certain registry settings (options).
This affects at least the following options: timed.*, autosupport.* and timezone.


To fix this, you need to do the following steps (at the boot loader, so you need to shut down the head):

  • halt 
  • printenv
    • check whether bootarg.init.wipeclean is set to 'true'
  • if yes, type: unsetenv bootarg.init.wipeclean
  • bye

After that change, it should work fine.


Friday, February 4, 2011

Netapp: How to hot-remove disk shelf

AFAIK, this is a non-supported feature, so don't do this at home.

It's possible to hot-remove a disk shelf without shutting down the entire head or MetroCluster. The documentation says you should shut down all heads before removing a disk shelf. Here are instructions on how to do it (at least one way), if you want to try it. Don't blame me if it fails :) This was done with ONTAP version 7.3.3. If I remember correctly, there was a bug in version 7.3.1.1 which might make your head panic in certain situations, so it may be better to upgrade before doing this.

So, first decide which shelf (or shelves) you want to remove. The first step is to empty that aggregate, so:

Step 1) Offline and destroy all LUNs (if you have any)
Step 2) Offline and destroy all Volumes
Step 3) Offline and destroy your aggregate -> this turns all its disks into spare disks.
You can see this by typing 'vol status -s'.
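With hypothetical names (lun0, vol1 and aggr1 are placeholders, not from our setup), steps 1-3 could look like this on a 7-mode head:

```
toaster1> lun offline /vol/vol1/lun0
toaster1> lun destroy /vol/vol1/lun0
toaster1> vol offline vol1
toaster1> vol destroy vol1
toaster1> aggr offline aggr1
toaster1> aggr destroy aggr1
toaster1> vol status -s
```

After the 'aggr destroy', the disks that belonged to the aggregate should show up as spares in the 'vol status -s' output.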

Now it's getting more interesting.. (frightening) :)

I'm not sure if you have to zero the disks and remove ownership before unplugging a shelf, but it at least helps a bit if you're re-using that shelf.

Step 4) You might want to zero all spares before re-using them:

toaster*> disk zero spares

Step 5) Remove all ownerships from the shelf (example)

Before removing ownership, you might want to disable the disk.auto_assign option (otherwise the head might take ownership back). Just remember to enable it again after the shelf removal.

toaster*> options disk.auto_assign off

Check spare disks:

toaster> vol status -s

Spare disks

RAID Disk       Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
---------       ------  ------------- ---- ---- ---- ----- --------------    --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare           v5.32   v5    ?   ?   FC:B   -  FCAL  N/A  1020/2089984      1027/2104448

If you have a lot of them, be careful here and select the correct ones.

And when you're ready to remove ownership, you need to set advanced privileges.

toaster> priv set advanced
toaster*> disk remove_ownership v5.32
Disk v5.32 will have its ownership removed
Note: Disks may be automatically assigned to this node, since option disk.auto_assign is on.
Volumes must be taken offline. Are all impacted volumes offline(y/n)??

Do this for every disk in the shelf.

WARNING: It's possible to use the * wildcard in the disk ID. It won't ask for any confirmation, so be careful with this one. Example: disk remove_ownership v5.*

Step 6) Now you can unplug all FC cables from the shelf. You can do this at the shelf, at the head, or at the switch.

This will generate a whole bunch of errors and the head itself feels quite bad, but it's going to be OK after 10-15 minutes.

That's it.

Monday, January 10, 2011

Cisco UCS firmware 1.4(1j)

Cisco just released a new version of UCS, which apparently fixes only this. Of course a nice feature if you can still get into UCSM after the upgrade.. :)

After activating the UCS Manger Software during upgrade from versions prior to 1.3(1p) you no longer lose the ability to log into the UCS Manager GUI. Upgrading to version 1.3(1p) before continuing to version 1.4(1j) is not necessary. (CSCtl22248)

Release notes here: http://www.cisco.com/en/US/docs/unified_computing/ucs/release/notes/OL_24086.html

Wednesday, January 5, 2011

Cisco UCS firmware v1.4

Cisco recently announced a new version of their UCS firmware (1.4(1)).

A few picks from the new features. I haven't installed this firmware yet, so I have no personal experience with these features. Hopefully soon!


  • Support for C-Series servers, as well as the B230 server
    • The C-Series machines are Cisco's rack servers, so now they can be connected to the same fabric and managed from a single place
  • Maintenance Policies
    • You can set a policy for what happens when server settings are changed, so you no longer have to wonder whether the server will reboot itself without any warning. The options are: immediate, user-ack, timer-automatic. A pretty obvious feature that was missing from earlier versions.
  • SAN port channeling (Port-Channeling)
    • E.g. two FC ports can be combined into one, which allows faster convergence in a failure situation
  • Support for 20 chassis per "cluster"
  • PVLAN support
  • SPAN support (helps with troubleshooting)
  • Support for 1024 VLANs (512 in the old version)
  • Software packaging with server bundles
    • This makes it possible to introduce new server models without upgrading the entire system
  • Fabric sync
    • The 6100 switches sync MAC addresses with each other
  • SNMP GET support for all components
There were other features as well, but these were perhaps the most interesting ones for me.


First VMUG meeting

Let's start the new year and try again to keep this blog more active.

On 25.1.2011 the first Finnish VMUG meeting takes place at TDC Oy's premises at Mechelininkatu 1 A, starting at 5 p.m. Everyone is welcome!

You can register here:
http://campaign.vmware.com/usergroup/invites/Finland_1-25-11.html

Tuesday, August 24, 2010

Cisco UCS: manual failover

If for some reason you want to change which of the interconnect modules is the primary, here is how to do it manually. This can come up e.g. when you upgrade their firmware.

Connect to the current primary switch (CLI). The following command shows which one is the primary.


ucs-B# show cluster state 
Cluster Id: 0xd4324e30a60c11df-0xa87300059b73f684


A: UP, SUBORDINATE
B: UP, PRIMARY


HA READY

After that, switch A is made the primary:

ucs1-B# connect local-mgmt 
Cisco UCS 6100 Series Fabric Interconnect

TAC support: http://www.cisco.com/tac

Copyright (c) 2009, Cisco Systems, Inc. All rights reserved.

The copyrights to certain works contained herein are owned by
other third parties and are used and distributed under license.
Some parts of this software may be covered under the GNU Public
License or the GNU Lesser General Public License. A copy of 
each such license is available at
http://www.gnu.org/licenses/gpl.html and
http://www.gnu.org/licenses/lgpl.html

ucs-B(local-mgmt)# cluster lead a 
Cluster Id: 0xd4324e30a60c11df-0xa87300059b73f684

And that's all. After this you need to restart the GUI.

'show cluster state' will look like this for a while:

ucs-B(local-mgmt)# show cluster  state 
Cluster Id: 0xd4324e30a60c11df-0xa87300059b73f684

B: UP, SUBORDINATE, (Management services: SWITCHOVER IN PROGRESS)
A: UP, PRIMARY

HA NOT READY
Management services: switchover in progress on local Fabric Interconnect

The failover takes maybe a minute or so to complete.