Tuesday, 25 June 2019

Brocade : Zone creation steps

Zone creation:

* First, log in to the switch and find the online WWNs.
* Log in to one switch in the fabric.

Switch-01:admin> nodefind 21:00:00:1b:32:0e:0d:db
Remote:
    Type Pid    COS     PortName                NodeName
    N    698400;      3;21:00:00:1b:32:0e:0d:db;20:00:00:1b:32:0e:0d:db;
        FC4s: IPFC FCP
        Fabric Port Name: 20:84:00:05:1e:36:61:10
        Permanent Port Name: 21:00:00:1b:32:0e:0d:db
        Device type: Physical Initiator
        Port Index: 132
        Share Area: Yes
        Device Shared in Other AD: No
        Redirect: No
        Partial: No
    Aliases: HBA1_P1

* Find out the storage alias names and select the port that needs to be zoned.
* After finding the port details, the storage alias, and the test server alias, create the zone with those aliases.

Creating the zone:

""If alias is not created please use below steps to create alias:

alicreate "HBA1_P1","21:00:00:e0:8b:1d:f9:03"  (Creating alias)

alishow HBA1_P1 (To show the alias)    ""


zonecreate "HBA1_P1_Storage_5A","HBA1_P1;Storage_5A"      (Create the zone with the server and storage aliases)

cfgadd "configswitch1","HBA1_P1_Storage_5A"      (Add the newly created zone to the current configuration)

cfgsave      (Save the configuration)

cfgenable "configswitch1"      (Enable the configuration)
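
As a quick check after enabling the configuration, the new zone and the effective configuration can be verified. This is a minimal sketch using the zone and configuration names from the example above; adjust them for your fabric:

zoneshow "HBA1_P1_Storage_5A"      (Display the members of the new zone)

cfgactvshow      (Display the effective configuration and confirm the new zone is included)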

Monday, 10 June 2019

Isilon Scale-out NAS Introduction

The Isilon scale-out NAS storage platform combines modular hardware with unified software to tackle unstructured data. Powered by the OneFS operating system, a cluster delivers a scalable pool of storage with a global namespace.

The unified software platform provides centralized web-based and command-line administration to manage the following features:

  • A cluster that runs a distributed file system
  • Scale-out nodes that add capacity and performance
  • Storage options that manage files and tiering
  • Flexible data protection and high availability
  • Software modules that control costs and optimize resources

Sunday, 16 October 2016

VSP G1000 ARCHITECTURE

Here I discuss the major components of the VSP G1000, its architectural differences from the VSP, and its features.
Controller Chassis Components:
  • Front End Directors (FED/CHA)
  • Virtual Storage Directors (VSD/MPB)
  • Cache Path Control Adapter (CPC)
  • Cache Memory Backup (BKM)
  • Back-End Directors (BED/DKA)
  • Service Processor (SVP)
  • Cooling Fans
  • AC-DC Power Supply
* All of the controller boards within a system are connected through the HiStar-E network of paths.
Major change in the controller design is the CPC:
  • New Cache Path Control Adapter: combines the VSP crossbar switch (ESW) and cache memory adapter functionality.
  • Virtual Storage Director pairs: up to four per controller and eight per subsystem.
VSP G1000 Specifications:
  • Max cache size: 2 TB
  • Max internal disks: 2,304 x 2.5-inch disks (2.7 PB max) or 1,152 x 3.5-inch disks (4.5 PB max)
  • Max VSD pairs & cores: 8 VSD pairs, 128 cores total (processor type: eight-core Xeon E5)
  • Max FED ports: 192 FC
* With the VSP G1000, HDS introduced a new operating system, the Hitachi Storage Virtualization Operating System (the VSP ran BOS/BOS V).


Tuesday, 13 September 2016

Hi-Track Monitor:


  • It is installed on the Service Processor (SVP).
  • It is a service and remote maintenance tool for Hitachi arrays.
  • It monitors the operation of the VSP storage arrays at all times:
                • Collects hardware status and error data.
                • Transmits this data through a modem to the Hitachi Data Systems Support Center.
                • The Support Center analyzes the data and implements corrective action as needed.
  • Hi-Track Monitor FTP transport is available and greatly reduces the time needed to transfer dumps to the HDS Support Center.

Friday, 15 July 2016

                             BCV Sync Script


Creating a BCV sync script for a new host/server.

Once P-VOL creation is complete, we create S-VOLs for the respective production volumes.

Once the copy completes, we keep the pair in the split state. We then take an incremental or full copy of the P-VOL, depending on the priority of the host.

That is why we create a sync script and schedule it; once created, it automatically starts and stops the copy between those devices.

• First we have to create the HORCM (Hitachi Online Remote Copy) instances.

If you want to use Hitachi's local replication (ShadowImage), you need two HORCM instances on the recovery site. The instance ID for the local replication must be +1 of the one used for the replicated LUNs (LDEVs, as Hitachi calls them); so if you used HORCM10 as the instance for the replicated LUNs, you must use HORCM11 for the local replication, or running test failovers will not work.

• Once the services have been installed, you must create the horcmX.conf files and place them in C:\HORCM\etc; again, X is the HORCM instance ID.
              horcm00     11000/udp    # horcm00
              horcm101    11001/udp    # horcm101

              (The names of the services must correspond to the names of the config files.)

              These service entries are registered in the services file (/etc/services on UNIX, %SystemRoot%\system32\drivers\etc\services on Windows).

The simplest input for horcm00.conf:
          HORCM_MON
          #ip_address                   service                          poll(10ms)   timeout(10ms)
          hostname or ip-address *      name-of-service-registered **    1000         3000

          HORCM_CMD
          #dev_name
          \\.\PhysicalDrive2  (or \\.\Volume{GUID})

          HORCM_LDEV
          #dev_group      dev_name      Serial#      CU:LDEV(LDEV#)      MU#

          HORCM_INST
          #dev_group      ip_address      service

   * dev_group = host group name.
   * Paste the name of your management host or its IP address.
   ** Paste the name of the service registered for the instance (/etc/horcm*.conf on UNIX or C:\winnt\horcm*.conf on Windows); details below.
   * HORCM_CMD = the command device ID, in ctd format.
     (The command device is dedicated to CCI communications and should not be used by any other application.)
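
For illustration, a filled-in horcm00.conf for the production (P-VOL) side might look like the example below; the serial number, LDEV IDs, device group name ORA_BCV and command device drive number are purely hypothetical placeholders:

          HORCM_MON
          #ip_address        service     poll(10ms)    timeout(10ms)
          localhost          horcm00     1000          3000

          HORCM_CMD
          #dev_name
          \\.\PhysicalDrive2

          HORCM_LDEV
          #dev_group      dev_name       Serial#      CU:LDEV(LDEV#)     MU#
          ORA_BCV         ora_data1      356789       01:00              0

          HORCM_INST
          #dev_group      ip_address     service
          ORA_BCV         localhost      horcm101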

  
 The simplest input for horcm101.conf:
          HORCM_MON
          #ip_address                   service                          poll(10ms)   timeout(10ms)
          hostname or ip-address *      name-of-service-registered **    1000         3000

          HORCM_CMD
          #dev_name
          \\.\PhysicalDrive2  (or \\.\Volume{GUID})

          HORCM_LDEV
          #dev_group      dev_name      Serial#      CU:LDEV(LDEV#)      MU#

          HORCM_INST
          #dev_group      ip_address      service
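
And a matching hypothetical horcm101.conf for the S-VOL (BCV) side; the LDEV IDs here are those of the S-VOLs, and HORCM_INST points back at the horcm00 service:

          HORCM_MON
          #ip_address        service     poll(10ms)    timeout(10ms)
          localhost          horcm101    1000          3000

          HORCM_CMD
          #dev_name
          \\.\PhysicalDrive2

          HORCM_LDEV
          #dev_group      dev_name       Serial#      CU:LDEV(LDEV#)     MU#
          ORA_BCV         ora_data1      356789       01:10

          HORCM_INST
          #dev_group      ip_address     service
          ORA_BCV         localhost      horcm00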

              
• After this initial configuration you can run CCI, but with some basic "scripting".

cat horcm00.conf      (example: view the configuration file)

Testing:
# horcmstart.sh 00      (start the instance)

• The second step is to create the configuration files, start the instance, and check whether it comes up. If it does not, check the HORCM log to see exactly where the error occurs.

cd /HORCM      (log files are located under this directory)

• Once horcmstart is successful, add the remaining P-VOLs and S-VOLs.
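
The new pairs can then be created and checked with the standard CCI pair commands; a hedged example, reusing the hypothetical ORA_BCV group and instance 00 from the configuration sketches above:

export HORCMINST=00      (select the instance that owns the P-VOLs)
paircreate -g ORA_BCV -vl      (create the ShadowImage pairs; -vl makes the local instance's LDEVs the P-VOLs)
pairdisplay -g ORA_BCV -fcx      (check pair status and copy progress)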
• After that we have to go to the Oracle home location and add the new host details in tnsnames.ora.

            cd /u01 (or /u02, /u03, /u04)/app/oracle/product/network/admin

• After that, add the given Oracle SID, password, and control file backup location path.
                     cd    /u01/app/oracle/script/.passwd

• Then, finally, create or edit the sync script with the new HORCM instances, host groups, and Oracle SID.
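
A minimal sketch of such a sync script is shown below, assuming the hypothetical ORA_BCV group and HORCM instances from the examples above; the instance number, timeouts, Oracle SID and the sqlplus invocation are all illustrative and must be adapted (the real script would also back up the control file to the location configured above):

#!/bin/sh
# bcv_sync.sh - illustrative BCV (ShadowImage) sync script sketch.
# Assumes it runs as the oracle OS user with ORACLE_HOME/PATH already set.

export HORCMINST=00        # CCI instance that owns the P-VOLs
GROUP=ORA_BCV              # ShadowImage device group from horcm00.conf/horcm101.conf
export ORACLE_SID=ORCL

# 1. Resync the split S-VOLs with the P-VOLs (incremental copy)
pairresync -g $GROUP
pairevtwait -g $GROUP -s pair -t 3600      # wait until the pairs reach PAIR state

# 2. Put the database into hot backup mode before splitting
echo "ALTER DATABASE BEGIN BACKUP;" | sqlplus -s / as sysdba

# 3. Split the pairs to freeze a consistent point-in-time copy on the S-VOLs
pairsplit -g $GROUP
pairevtwait -g $GROUP -s psus -t 3600      # wait until the split (PSUS) completes

# 4. Take the database out of hot backup mode
echo "ALTER DATABASE END BACKUP;" | sqlplus -s / as sysdba

# 5. Record the final pair status
pairdisplay -g $GROUP -fcx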

=========================****===============================


Saturday, 2 July 2016

SYMMETRIX CLONE OPERATION



Steps for SYM CLONE operation:


·              Create a device group
symdg create DGNAME  -type regular

·              Add production devices to device group
symld -g DGNAME    add dev 00C3:00DB
symld -g DGNAME add dev 022B:03F3

·              Check information about the device group
symdg show DGNAME

·              Add clone devices to device group
symld -g DGNAME add dev 1A03:1A1B -tgt
symld -g DGNAME add dev  1AD3:1C9B -tgt

·              Check information about the device group
symdg show DGNAME

·              Create a clone copy session
symclone -g DGNAME create -exact -differential -precopy -tgt

(For additional allocation)

symclone -g DGNAME create -differential -precopy -tgt DEV063 SYM ld TRG063

-precopy      : starts copying tracks in the background
-differential : allows subsequent cloning to the same target, i.e. incremental copies

·              Verify the clone session
symclone -g DGNAME verify

·              Query the clone copy session
symclone -g DGNAME query

·              Put the Oracle database in hot back-up mode (see the SQL*Plus sketch at the end of this post)

·              Activate the clone session
symclone -g DGNAME activate -tgt

·              Verify the clone session
symclone -g DGNAME verify

·              Query the clone copy session
symclone -g DGNAME query

·              Bring the Oracle database out of hot back-up mode

·              Verify the clone session
symclone -g DGNAME verify -copied

Check the status: all of the devices in group 'DGNAME' should be in the 'Copied' state.

·              Query the clone copy session
symclone -g DGNAME query
Check for the copied state of the device pair

·              Confirm the clone devices are visible on the back-up host. Do the required configuration on the back-up host as needed, and initiate the back-up.

·              Confirm back-up completion.

Do the required configuration on the back-up host.

·              Recreate the clone session
symclone -g DGNAME recreate -precopy -tgt

Tracks changed since the last activate action will be copied over to the target device, i.e. the incremental changes.

·              Verify the clone session
symclone -g DGNAME verify

·              Query the clone copy session
symclone -g DGNAME query
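
For the two hot back-up steps above, the work on the database host is typically just a pair of SQL*Plus statements run around the activate. This is an illustrative sketch only, assuming Oracle 10g or later and a local sysdba login:

sqlplus -s / as sysdba <<'EOF'
ALTER DATABASE BEGIN BACKUP;    -- run just before "symclone ... activate -tgt"
EOF

symclone -g DGNAME activate -tgt

sqlplus -s / as sysdba <<'EOF'
ALTER DATABASE END BACKUP;      -- run once the activate has completed
EOF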