Sunday 16 October 2016

                                                               VSP G1000 ARCHITECTURE
                Here I discuss the major components of the VSP G1000, its architectural differences from the VSP, and its key features.
Controller Chassis Components:
Front End Directors (FED/CHA)
Virtual Storage Directors (VSD/MPB)
Cache Path Control Adapter (CPC)
Cache Memory Backup (BKM)
Back-End Directors (BED/DKA)
Service Processor (SVP)
Cooling Fans
AC-DC Power Supply
*      All of the controller boards within a system are connected through the HiStar-E network of paths.
Major change in the controller design is the CPC:
•  New Cache Path Control Adapter – combines the VSP crossbar switch (ESW) and cache memory adapter functionality.
•  Virtual Storage Director pairs – up to four per controller chassis and eight per subsystem.
VSP G1000 Specifications:
Max Cache Size: 2 TB
Maximum Internal Disks: 2,304 x 2.5-inch disks (2.7 PB max)
                        1,152 x 3.5-inch disks (4.5 PB max)
Max VSD Pairs & Cores: 8 VSD pairs and 128 cores in total (processor type – eight-core Xeon E5)
Max FED Ports: 192 FC
*      With the VSP G1000, HDS introduced a new operating system, the Hitachi Storage Virtualization Operating System (SVOS); the VSP ran BOS/BOS V.


Tuesday 13 September 2016

Hi-Track Monitor:


  • It is installed on the Service Processor (SVP).
  • It is a service and remote-maintenance tool for Hitachi arrays.
  • It monitors the operation of the VSP storage arrays at all times:
                • Collects hardware status and error data.
                • Transmits this data through a modem to the Hitachi Data Systems Support Center.
                • The Support Center analyzes the data and implements corrective action as needed.
  • Hi-Track Monitor FTP transport is also available and greatly reduces the time taken to transfer dumps to the Support Center (HDS).

Friday 15 July 2016

                             BCV Sync Script


Creating a BCV sync script for a new host/server.

After the P-VOLs are created, we create S-VOLs for the corresponding production volumes.

Once the initial copy completes, the pair is kept in split state, but we still have to take an incremental or full copy of the P-VOL depending on the priority of the host.

That is why we create a sync script and schedule it; once in place, it automatically starts and stops the copy between those devices.

•  First we have to create the HORCM (Hitachi Open Remote Copy Manager) instances.

If you want to use Hitachi local replication (ShadowImage), you need two HORCM instances. On the recovery side, the instance ID used for local replication (ShadowImage) must be +1 of the instance that holds the replicated LUNs (LDEVs, as Hitachi calls them). So if you used HORCM10 as the instance for the replicated LUNs, you must use HORCM11 for the local replication, otherwise running test failovers will not work.

•  Once the services have been installed you must create the horcmX.conf files and place them in C:\HORCM\etc, where X is the HORCM instance ID. Each instance is registered as a service, for example:
              horcm00     11000/udp    # horcm00
              horcm101    11001/udp    # horcm101

              (The names of the services must correspond to the names of the configuration files.)

               /etc/services – where the HORCM instance services are registered (UNIX).

The simplest input for horcm00.conf:

          ********************** horcm00.conf **********************
          HORCM_MON
          #ip_address                service                        poll(10ms)  timeout(10ms)
          hostname or ip-address *   name-of-service-registered **  1000        3000

          HORCM_CMD
          #dev_name
          \\.\PhysicalDrive2 or \\.\Volume{GUID}

          HORCM_DEV
          #dev_group                 dev_name       Serial#   CU:LDEV(LDEV#)   MU#

          HORCM_INST
          #dev_group                 ip_address     service

   * dev_group = host group name.
   * ip_address = the name of your management host or its IP address.
   ** service = the name of the service registered for this instance; it must match the horcm*.conf file name (/etc/horcm*.conf on UNIX, C:\winnt\horcm*.conf on Windows) – details below.
   * HORCM_CMD = the command device (in ctd format on UNIX, \\.\PhysicalDriveN on Windows).
     (The command device is dedicated to CCI communications and should not be used by any other application.)
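
For illustration, a hypothetical filled-in horcm00.conf for the P-VOL side; the host name (mgmthost01), array serial number (53039), LDEV numbers and device group (ORA_DG) are placeholders, not values from a real configuration:

          HORCM_MON
          #ip_address     service     poll(10ms)  timeout(10ms)
          mgmthost01      horcm00     1000        3000

          HORCM_CMD
          #dev_name
          \\.\PhysicalDrive2

          HORCM_DEV
          #dev_group      dev_name    Serial#   CU:LDEV(LDEV#)   MU#
          ORA_DG          ora_data1   53039     01:00            0
          ORA_DG          ora_data2   53039     01:01            0

          HORCM_INST
          #dev_group      ip_address    service
          ORA_DG          mgmthost01    horcm101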

  
 The simplest input for horcm101.conf uses the same template:

          ********************** horcm101.conf **********************
          HORCM_MON
          #ip_address                service                        poll(10ms)  timeout(10ms)
          hostname or ip-address *   name-of-service-registered **  1000        3000

          HORCM_CMD
          #dev_name
          \\.\PhysicalDrive2 or \\.\Volume{GUID}

          HORCM_DEV
          #dev_group                 dev_name       Serial#   CU:LDEV(LDEV#)   MU#

          HORCM_INST
          #dev_group                 ip_address     service
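
And a matching hypothetical horcm101.conf for the S-VOL side (same placeholder names and serial number; the MU# column is left blank on the S-VOL side, and HORCM_INST points back at the horcm00 service):

          HORCM_MON
          #ip_address     service     poll(10ms)  timeout(10ms)
          mgmthost01      horcm101    1000        3000

          HORCM_CMD
          #dev_name
          \\.\PhysicalDrive2

          HORCM_DEV
          #dev_group      dev_name    Serial#   CU:LDEV(LDEV#)
          ORA_DG          ora_data1   53039     02:00
          ORA_DG          ora_data2   53039     02:01

          HORCM_INST
          #dev_group      ip_address    service
          ORA_DG          mgmthost01    horcm00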

              
•  After this initial configuration you can run CCI with some basic scripting.

cat horcm00.conf – view the configuration file (example above).
Testing:
# horcmstart.sh 00 – starts the instance.
•  The second step is to create the configuration files, run the instance and check whether it comes up. If it does not, check the HORCM log to see exactly where the error occurs.
cd /HORCM – the HORCM log files are kept here.
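
As a quick sanity check that the instance came up cleanly – assuming instance numbers 00/101 and the hypothetical device group ORA_DG from the examples above; the log path may differ per installation:

# horcmstart.sh 00 101          # start both instances
# export HORCMINST=00
# raidqry -l                    # confirm CCI can reach the array through the command device
# pairdisplay -g ORA_DG -fcx    # show the P-VOL/S-VOL pair status
# ls /HORCM/log*                # instance logs, useful when startup fails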

•  Once horcmstart is successful, add the remaining P-VOLs and S-VOLs to the configuration files.
 After that we go to the Oracle home location and add the new host details to tnsnames.ora (a sample entry is sketched below):

            cd /u01 (or u02, u03, u04)/app/oracle/product/network/admin
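
For illustration only, a minimal tnsnames.ora entry for a hypothetical host dbhost01 and SID ORCL (the host name, port and service name are assumptions, not values from the original setup):

            ORCL =
              (DESCRIPTION =
                (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1521))
                (CONNECT_DATA =
                  (SERVER = DEDICATED)
                  (SERVICE_NAME = ORCL)
                )
              )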

•  After that, add the given Oracle SID, password and control-file backup location path:
                     cd /u01/app/oracle/script/.passwd

•  Finally, create or edit the sync script with the new HORCM instances, host groups and Oracle SID; a minimal sketch of such a script follows.
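
A minimal sketch of what the sync script could look like, assuming the hypothetical instance numbers (00/101), device group ORA_DG and SID ORCL used above; the hot-backup handling, timeouts and paths are illustrative assumptions, not the original script:

#!/bin/sh
# BCV sync script (sketch): resync the ShadowImage pair, then split it again.
GRP=ORA_DG
export ORACLE_SID=ORCL

horcmstart.sh 00 101                  # make sure both HORCM instances are running
export HORCMINST=00

pairresync -g $GRP                    # copy changed tracks from P-VOL to S-VOL
pairevtwait -g $GRP -s pair -t 3600   # wait until the pair is fully synchronised

# Put the database in hot backup mode before splitting (illustrative)
sqlplus -s "/ as sysdba" <<EOF
alter database begin backup;
EOF

pairsplit -g $GRP                     # split: the S-VOL now holds a point-in-time copy
pairevtwait -g $GRP -s psus -t 600    # wait for the split (PSUS) state

sqlplus -s "/ as sysdba" <<EOF
alter database end backup;
EOF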

=========================****===============================


Saturday 2 July 2016

SYMMETRIX CLONE OPERATION



Steps for the SYM CLONE operation (a consolidated script sketch follows the step list):


·              Create a device group
symdg create DGNAME  -type regular

·              Add production devices to the device group
symld -g DGNAME -RANGE 00C3:00DB addall dev
symld -g DGNAME -RANGE 022B:03F3 addall dev

·              Check information about the device group
symdg show DGNAME

·              Add clone (target) devices to the device group
symld -g DGNAME -RANGE 1A03:1A1B addall dev -tgt
symld -g DGNAME -RANGE 1AD3:1C9B addall dev -tgt

·              Check information about the device group
symdg show DGNAME

·              Create a clone copy session
symclone -g DGNAME create -exact -differential -precopy -tgt

(For an additional device pairing:)

symclone -g DGNAME create -differential -precopy -tgt DEV063 SYM ld TRG063

precopy      : starts copying tracks to the target in the background before activation
differential : enables subsequent incremental cloning to the same target

·              Verify the clone session
symclone -g DGNAME verify

·              Query the clone copy session
symclone -g DGNAME query

·              Put the Oracle database in hot back-up mode

·              Activate the clone session
symclone -g DGNAME activate -tgt

·              Verify the clone session
symclone -g DGNAME verify

·              Query the clone copy session
symclone -g DGNAME query

·              Bring the Oracle database out of hot back-up mode

·              Verify the clone session
symclone -g DGNAME verify -copied

Check the status – all of the devices in group 'DGNAME' are in the 'Copied' state

·              Query the clone copy session
symclone -g DGNAME query
Check for the copied state of the device pair

·              Confirm the clone devices are visible on the back-up host, do any required configuration on the back-up host, and initiate the back-up.

·              Confirm back-up completion and perform any required cleanup on the back-up host.

·              Recreate the clone session
symclone -g DGNAME recreate -precopy -tgt

Tracks changed since the last activate will be copied over to the target devices, i.e. only incremental changes.

·              Verify the clone session
symclone -g DGNAME verify

·              Query the clone copy session
symclone -g DGNAME query
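
For convenience, a minimal sketch that strings these steps together as a shell script; the device group DGNAME and the Oracle hot-backup calls are placeholders carried over from the steps above, not a production-ready script:

#!/bin/sh
# Repeatable clone-backup cycle (sketch).
DG=DGNAME

# Refresh the clone targets. On the very first run, create the session instead:
#   symclone -g $DG create -exact -differential -precopy -tgt
symclone -g $DG recreate -precopy -tgt
symclone -g $DG query                     # check the session state

# Freeze the database, activate the point-in-time copy, then thaw
sqlplus -s "/ as sysdba" <<EOF
alter database begin backup;
EOF
symclone -g $DG activate -tgt
sqlplus -s "/ as sysdba" <<EOF
alter database end backup;
EOF

symclone -g $DG verify -copied            # all pairs should end up in the Copied state
symclone -g $DG query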



Monday 27 June 2016

Hitachi Virtual Storage Platform (VSP) Architecture




Major Improvements:

* 3D Scaling: Scale Up
              Scale Out
              Scale Deep

* New controller architecture design.

* High-performance SAS (Serial Attached SCSI) back end.

* Better storage capacity and better I/O performance.

* Improved heat management.

Enhanced 3D Scaling:

Scale Up: describes how the VSP architecture supports seamless capacity upgrades in a single controller chassis system. Additional cache, processing, storage and connectivity can be added non-disruptively.

Scale Out: describes how the VSP seamlessly grows to a two-controller-chassis configuration that delivers the greatest storage capacity, I/O performance and power utilization.

Scale Deep: describes Hitachi's virtualization in the controller, i.e. the capability to connect and manage externally virtualized storage.

Supports up to 247 PB of externally virtualized storage.


The VSP has one or two engines; each engine is one controller chassis (DKC).

One Controller Chassis:

The VSP controller chassis is fully modular.

One controller chassis supports one or two VSD (microprocessor blade) pairs.

One controller chassis supports two or four cache memory features, from 16 GB to 256 GB.

The internal controller architecture is called the HiStar-E network, referring to the internal components of the VSP and their interconnections.



Internal Controller Architecture







The DKC (Disk Controller) is made up of five main components:
FEDs, BEDs, cache, grid switches and microprocessors (VSDs).


VSD Board (Microprocessor Blades):

Two to four VSD boards per chassis.
   Each board includes a 2.33 GHz Intel Xeon CPU. The VSD boards are independent of the CHAs and DKAs and are shared across the BEDs and FEDs.

The VSD boards, called Virtual Storage Directors, also hold the shared/control memory function of the VSP.


·         GSW – Grid Switches:

        Full-duplex switches.
        Minimum of 2 switches, up to a maximum of 4.
        Provide the interconnection between the FEDs, BEDs and the cache modules (CMs).
        They also carry the control signals between the virtual storage directors (MPs) and the CM boards.


SVPs – 2; both SVPs are mounted in Controller Chassis 0.

Drive Chassis: contains HDD or SSD drives and 8 SAS switches.
 Two types of drive chassis:
• 80-disk chassis for 3.5-inch HDDs; maximum of 1,280 3.5-inch HDDs per system.
• 128-disk chassis for 2.5-inch HDDs or SSDs; maximum of 2,048 2.5-inch drives per system.

Maximum Configuration:

• 6-rack twin version of the minimum configuration, containing 2 controller chassis and up to 16 drive chassis.
•        The total capacity of the highest configuration is 2.5 PB.
•        Maximum number of volumes supported – 64K.
•        Maximum size of a creatable volume – 4 TB (60 TB as a meta volume).





Friday 24 June 2016



                                           
                     Hitachi Hardware Architecture

                         The Hitachi USP/USP-V/USP-VM architecture is based on the Hi-Star (Hierarchical Star / Universal Star) architecture.

USP/USP-V/USP-VM Architecture Components:

   USP-V Hardware Architecture

 CHA – Channel Adapter or FED (Front-End Director):

 o   The CHA or FED controls the flow of data transfer between the hosts and the cache memory.
 o   The CHA is a PCB board that contains the FEDs; the FED is also called the CHA.
 o   FED ports can be FC or FICON. Two ports are controlled by one processor. There can be up to 192 FC ports in the USP/USP-V/USP-VM.
 o   FC data transfer speeds of up to 4 Gbps (400 MB/s).
 o   FC ports come as 16 or 32 ports per CHA pair.
 o   FC ports support both long and short wavelengths to connect to hosts, arrays or switches.
 o   There can be up to 96 FICON ports in the USP/USP-V/USP-VM.
 o   FICON data transfer speeds of up to 2 Gbps (200 MB/s).
 o   FICON comes as 8 or 16 ports per FICON CHA pair; FICON ports can be short or long wavelength.
 o   USP100 – max of 2 FEDs; USP600 – max of 4 or 6 FEDs; USP1100 – max of 4 or 6 FEDs.

 DKA – Disk Adapter or BED (Back-End Director):

 The DKA or BED is the component of the DKC that controls the flow of data transfer between the drives and the cache memory.
• The disk drives are connected to the DKA pairs by fibre cables using FC-AL (arbitrated loop) technology.
• Each DKA has 8 independent fibre back-end paths controlled by 8 back-end microprocessors.
• Max of 8 DKAs, hence 64 back-end paths.
• Bandwidth of each FC path = 2 Gbps (200 MB/s).
• In the USP the number of ports per DKA pair is 8, and each port is controlled by a microprocessor (MP).
• The USP-V can be configured with up to 8 BED pairs, providing up to 64 concurrent data transfers to and from the data drives.
• The USP-VM is configured with 1 BED pair, which provides 8 concurrent data transfers to and from the data drives.
• USP100 – max of 4 BEDs.
• USP600 – max of 4 BEDs.

Shared Memory:

• This is the memory that stores the configuration information, plus the control information and status for the cache, disk drives and logical devices; the path group arrays also reside in the SM.

• Size of the shared memory is determined by the

A. Total Cache Size
B. Number of LDEVs
C. Replication Software in use.

• The non-volatile shared memory contains the cache directory and configuration information of the USP/USP-V/USP-VM.
• SM is duplexed, and each side of the duplex resides on the first two shared memory cards, which are in cluster 1 and cluster 2.
• In the event of a power failure the SM data is protected by up to 36 hours of battery backup in the USP-V and USP-VM.
• In the event of a power failure the SM data on the USP is protected for up to 7 days.
• The USP can be configured with up to 3 GB of SM from 2 cards or 6 GB from 4 cards.
• The USP-V can be configured with up to 32 GB of shared memory.
• The USP-VM can be configured with up to 16 GB of shared memory.

CM – Cache Memory:

• This is the memory that stores user data in order to perform I/O operations asynchronously with the reads from and writes to the disk drives.
• The USP can be configured with up to 128 GB of cache memory, in increments of 4 GB for the USP100 and USP600 or 8 GB for the USP1100, with 48 hours of battery backup.
• The USP-V can be configured with up to 512 GB of cache memory.
• The USP-VM can be configured with up to 128 GB of cache memory.
• The USP-V and USP-VM both have cache battery backup for 36 hours.
• The cache is divided into 2 equal areas, called cache A and cache B, on separate cards.
• Cache A is on cluster 1 and cache B is on cluster 2.
• All USP models place both read and write data in the cache.
• Write data is written to both cache A and cache B, so the data is duplexed across both the logic and power boundaries.

        CSW – Cache Switch: provides multiple data paths between the CHAs/DKAs and the cache memory.
        SVP – Service Processor: a dedicated PC for performing all hardware and software maintenance functions.

·                   Power Supplies & Batteries

USP Features:

1. 100% data availability guarantee with no single point of failure.
2. Highly resilient, multi-path Fibre Channel architecture.
3. Fully redundant, hot-swappable components.
4. Non-disruptive microcode updates and non-disruptive expansion.
5. Global dynamic hot sparing.
6. Duplexed cache with battery backup.
7. Multiple point-to-point data and control paths.
8. Supports all open systems and mainframes.
9. FC, FICON and ESCON connectivity.
10. Fibre Channel switched, arbitrated loop and point-to-point configurations.

 USP Components:

The DKC consists of:
•  CHAs/FEDs
•  DKAs/BEDs
•  Cache memory
•  Shared memory
•  CSWs
•  HDU boxes containing the disk drives
•  Power supplies
•  Battery box
The DKC unit is connected to a Service Processor (SVP), which is used to service the storage subsystem, monitor its running condition and analyze faults.

The DKU consists of:

•  HDUs – each HDU box contains 64 disks
•  Cooling fans
•  AC power supply