Friday, 24 June 2016



                                           
                     Hitachi Hardware Architecture

                         The Hitachi USP/USP-V/USP-VM architecture is based on the Hi-Star (Hierarchical Star / Universal Star) architecture.

USP/USP-V/USP-VM Architecture Components:

   USP V Hardware Architecture

 CHA – Channel Adapter or FED (Front End Director):

 o   The CHA (FED) controls the flow of data transfer between the hosts and the cache memory.
 o   The CHA is a PCB that contains the FED processors; the terms FED and CHA are used interchangeably.
 o   FED ports can be FC or FICON, with 2 ports controlled by 1 processor. There can be up to 192 FC ports in the USP/USP-V/USP-VM.
 o   FC data transfer speeds of up to 4 Gbps (400 MB/s).
 o   Each CHA pair provides 16 or 32 FC ports.
 o   FC ports support both long and short wavelengths to connect to hosts, arrays or switches.
 o   There can be up to 96 FICON ports in the USP/USP-V/USP-VM.
 o   FICON data transfer speeds of up to 2 Gbps (200 MB/s).
 o   Each FICON CHA pair provides 8 or 16 ports. FICON ports can be short or long wavelength.
 o   USP100: maximum of 2 FEDs. USP600: maximum of 4 or 6 FEDs. USP1100: maximum of 4 or 6 FEDs.

 DKA – Disk Adapter or BED (Back End Director):

 The DKA (BED) is a component of the DKC that controls the flow of data transfer between the disk drives and cache memory.
• The disk drives are connected to the DKA pairs by fibre cables using Fibre Channel Arbitrated Loop (FC-AL) technology.
• Each DKA has 8 independent fibre back-end paths, each controlled by its own back-end microprocessor.
• Maximum of 8 DKAs, hence 64 back-end paths (see the arithmetic sketch after this list).
• Bandwidth of each FC path = 2 Gbps (200 MB/s).
• In the USP, the number of ports per DKA pair is 8, and each port is controlled by a microprocessor (MP).
• The USP V can be configured with up to 8 BED pairs, providing up to 64 concurrent data transfers to and from the data drives.
• The USP VM is configured with 1 BED pair, which provides 8 concurrent data transfers to and from the data drives.
• USP100: maximum of 4 BEDs.
• USP600: maximum of 4 BEDs.
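
As a quick cross-check of the path counts above, here is a minimal Python sketch; the 8-paths-per-adapter figure is the one quoted in the bullets, and this is illustration only, not output from any Hitachi tool.

# Back-end path arithmetic, using the figures quoted in the bullets above:
# each DKA/BED pair provides 8 independent FC-AL back-end paths.
PATHS_PER_BED_PAIR = 8

def concurrent_backend_transfers(bed_pairs):
    # Concurrent drive-side transfers = BED pairs x paths per pair.
    return bed_pairs * PATHS_PER_BED_PAIR

print(concurrent_backend_transfers(8))  # USP V maximum (8 BED pairs) -> 64
print(concurrent_backend_transfers(1))  # USP VM (1 BED pair) -> 8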

Shared Memory:

• This is the memory that stores the configuration information and the control and status information for the cache, disk drives and logical devices; the path group arrays also reside in the SM.

• The size of the shared memory is determined by:

A. Total Cache Size
B. Number of LDEVs
C. Replication Software in use.

• The non-volatile shared memory contains the cache directory and the configuration information of the USP/USP-V/USP-VM.
• The SM is duplexed, and each side of the duplex resides on the first two shared memory cards, which are in clusters 1 and 2.
• In the event of a power failure, the SM data on the USP-V and USP-VM is protected by up to 36 hours of battery back-up.
• In the event of a power failure, the SM data on the USP is protected for up to 7 days.
• The USP can be configured with up to 3 GB of shared memory on 2 cards, or 6 GB on 4 cards.
• The USP-V can be configured with up to 32 GB of shared memory.
• The USP-VM can be configured with up to 16 GB of shared memory.

CM – Cache Memory:

• This is the memory that stores user data so that I/O operations can be performed asynchronously with the reads and writes to the disk drives.
• The USP can be configured with up to 128 GB of cache memory, in increments of 4 GB for the USP100 and USP600 or 8 GB for the USP1100, and has 48 hours of cache battery back-up.
• The USP-V can be configured with up to 512 GB of cache memory.
• The USP-VM can be configured with up to 128 GB of cache memory.
• The USP V and USP VM both have 36 hours of cache battery back-up.
• The cache is divided into 2 equal areas, called cache A and cache B, on separate cards.
• Cache A is in cluster 1 and cache B is in cluster 2.
• All USP models place both read and write data in the cache.
• Write data is written to both cache A and cache B, so the data is duplexed across both the logic and power boundaries (see the toy sketch below).
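
As a toy sketch only (no relation to the real microcode), the duplexed-write behaviour described above can be pictured like this: every cached write lands in both cache A and cache B, so losing either side does not lose un-destaged write data.

# Toy model of duplexed cache writes, per the bullets above.
cache_a = {}   # cache A, cluster 1
cache_b = {}   # cache B, cluster 2

def cached_write(block, data):
    cache_a[block] = data      # first copy, one logic/power boundary
    cache_b[block] = data      # second copy, the other logic/power boundary

def cached_read(block):
    # Reads can be served from either copy; read data need not be duplexed.
    return cache_a.get(block) or cache_b.get(block)

cached_write(100, b"payload")
print(cached_read(100))        # b'payload'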

        CSW – Cache Switch: This switch provides multiple data paths between the CHAs/DKAs and cache memory.
        SVP – Service Processor: A dedicated PC used to perform all hardware and software maintenance functions.

·                   Power Supplies & Batteries

USP Features:

1. 100% data availability guarantee with no single point of failure.
2. Highly resilient, multi-path Fibre Channel architecture.
3. Fully redundant, hot-swappable components.
4. Non-disruptive microcode updates and non-disruptive expansion.
5. Global dynamic hot sparing.
6. Duplexed cache with battery back-up.
7. Multiple point-to-point data and control paths.
8. Supports all open systems and mainframes.
9. FC, FICON and ESCON connectivity.
10. Fibre Channel switched, arbitrated loop and point-to-point configurations.

 USP Components:

 The DKC contains:
v  CHA/FEDs
v  DKA/BEDs
v  Cache memory
v  Shared memory
v  CSWs
v  HDU boxes containing the disk drives
v  Power supplies
v  Battery box

The DKC unit is connected to a Service Processor (SVP), which is used to service the storage subsystem, monitor its running condition and analyze faults.

The DKU consists of:

v  HDU - each HDU box contains 64 disk drives
v  Cooling fans
v  AC power supplies



Wednesday, 22 June 2016

                               

                               Hitachi Shadow Image


                         Shadow Image uses local mirroring technology to create and maintain a full copy of any volume in the Storage Array.

Why Secondary Copy?

                          SI copies are used for backups, secondary host applications, data mining, testing and other purposes.

Working Procedure of Shadow Image:

First, select the volume that you want to duplicate. This becomes the Primary Volume (P-VOL).
Identify another volume to contain the copy. This becomes the Secondary Volume (S-VOL).
Associate the P-VOL and S-VOL, then perform the initial copy.
(The source volume is called the P-VOL and the destination volume is called the S-VOL.)
While the initial copy is in progress, the P-VOL remains available for read/write.
Once the copy is complete, the volumes are in the paired state. After that, subsequent write operations to the P-VOL are continuously copied to the S-VOL.
The P-VOL and S-VOL remain paired until they are split.
After the split, the S-VOL data is consistent and usable, and it is available for read/write access by secondary host applications.
The volumes can be re-paired by resynchronizing the updated data from the P-VOL to the S-VOL (forward) or from the S-VOL to the P-VOL (reverse).

           Each P-VOL can be paired with up to three S-VOLs, which means you can create three pairs from one source volume.
Each of those S-VOLs can in turn be paired with second-level S-VOLs.
Each S-VOL can be paired with up to two second-level S-VOLs, so a total of nine S-VOLs can be associated with one P-VOL, as tallied in the sketch below. These second-level pairs are called cascaded pairs.
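
The fan-out above can be tallied with a small sketch; the limits of 3 first-level S-VOLs per P-VOL and 2 second-level S-VOLs per S-VOL are the ones stated in this section.

# ShadowImage cascade fan-out, using the limits stated above.
FIRST_LEVEL_PER_PVOL = 3     # first-level S-VOLs per P-VOL
SECOND_LEVEL_PER_SVOL = 2    # second-level S-VOLs per first-level S-VOL

first_level = FIRST_LEVEL_PER_PVOL
second_level = FIRST_LEVEL_PER_PVOL * SECOND_LEVEL_PER_SVOL
print(first_level, second_level, first_level + second_level)   # 3 6 9 S-VOLs in total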



 Supported RAID levels --- RAID 1, RAID 5 and RAID 6.



Pair Topology Type: select the boxes that match your configuration.


Pair Topology




Pair Creation
 In Split Type, you have the option of splitting the pair once it is created. Select one of the following:

1.    Non Split: Does not split the new pair.

2.     Quick Split: The new pair is split prior to the data copy so that the S-VOL is immediately available for read and write I/O. Any remaining differential data is copied to the S-VOL in the background.

3.     Steady Split: Splits the new pair after all differential data is copied to the S-VOL.
 In Copy Pace, select the pace at which data is to be copied: Slower, Medium, or Faster.
Copy speed and system performance are affected by the pace you select; Slower gives a slower copy with less impact on host performance, while Faster gives a faster copy with more impact on performance.

Pair Operations:
Create Pairs
Split Pairs
Resync Pairs (forward/reverse)
Suspend Pairs
Delete Pairs
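
Purely as an illustration of the order in which these operations are typically used, here is a minimal sketch; the helper functions are hypothetical stand-ins, not Hitachi CCI or Storage Navigator calls.

# Hypothetical stub functions -- not a real Hitachi API -- used only to show
# the usual sequence of ShadowImage pair operations described in this post.
def create_pair(pvol, svol):  print("create:", pvol, "->", svol, "(initial copy starts)")
def split_pair(pvol, svol):   print("split: ", pvol, "/", svol, "(S-VOL becomes usable)")
def resync_pair(src, dst):    print("resync:", src, "->", dst, "(copy differential data)")
def delete_pair(pvol, svol):  print("delete:", pvol, "/", svol, "(volumes return to SMPL)")

def shadow_image_cycle(pvol, svol):
    create_pair(pvol, svol)    # P-VOL stays read/write during the initial copy
    split_pair(pvol, svol)     # after PAIR status, split for backup/testing use
    resync_pair(pvol, svol)    # forward resync; resync_pair(svol, pvol) would be a reverse resync
    delete_pair(pvol, svol)

shadow_image_cycle("P-VOL 00:10", "S-VOL 00:20")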

Pair Status Descriptions:

SMPL      :   The volume is not assigned to a pair. The storage system accepts read/write for “SMPL” volumes that are not reserved.

SMPL (PD)    :  The pair is being deleted. Pair operations are not allowed in this status. Upon deletion, the status changes to “SMPL”.

P-VOL access: Read/write disabled
S-VOL access: Read/write disabled

COPY (PD)/ COPY: The pair create operation (initial copy) is in progress. The storage system accepts read/write to the P-VOL but rejects write operations to the S-VOL.

P-VOL access:  Read/write enabled
S-VOL access: Read only

PAIR   :  The initial copy operation is complete and the volumes are paired. The storage system performs update copy operations from P-VOL to S-VOL. The P-VOL and S-VOL in “PAIR” status may not be identical.

P-VOL access:  Read/write enabled
 S-VOL access: Read only

COPY (SP)/ COPY    : The pair is in the process of Steady Split. Any remaining differential data is copied to the S-VOL. When this is completed, the pair is split and the data in the S-VOL is identical to data in the P-VOL at the time of the split.

P-VOL access:  Read/write enabled
 S-VOL access: Read only

PSUS (SP)/ PSUS: The pair is in the process of Quick Split. P-VOL differential data is copied to the S-VOL in the background. Pairs cannot be deleted.

P-VOL access:  Read/write enabled
S-VOL access:  Read/write enabled

PSUS: The pair is split. The storage system stops performing update copy operations. Write I/Os are accepted for the S-VOL. The storage system keeps track of updates to the split P-VOL and S-VOL, so that the pair can be resynchronized quickly.

P-VOL access:  Read/write enabled
S-VOL access:  Read/write enabled

COPY (RS)/ COPY: The pair resync operation is in progress. The storage system does not accept write I/Os for the S-VOL. When a split pair is resynchronized, the storage system copies only the P-VOL differential data to the S-VOL. When a suspended pair is resynchronized, the storage system copies the entire P-VOL to the S-VOL.

P-VOL access:  Read/write enabled
 S-VOL access: Read only
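
For quick reference, the access rules listed above can be collected into a small lookup table; this is just a restatement of the status descriptions in this section.

# ShadowImage pair status -> (P-VOL access, S-VOL access), per the descriptions above.
PAIR_STATUS_ACCESS = {
    "SMPL(PD)":      ("read/write disabled", "read/write disabled"),
    "COPY(PD)/COPY": ("read/write",          "read only"),
    "PAIR":          ("read/write",          "read only"),
    "COPY(SP)/COPY": ("read/write",          "read only"),
    "PSUS(SP)/PSUS": ("read/write",          "read/write"),
    "PSUS":          ("read/write",          "read/write"),
    "COPY(RS)/COPY": ("read/write",          "read only"),
}

pvol, svol = PAIR_STATUS_ACCESS["PSUS"]
print(pvol, "/", svol)   # read/write / read/write -- both sides usable after a split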



Tuesday, 21 June 2016

Creating a Configuration Report for VSP


                        
                      Creating  a Configuration Report for VSP

  •        Use the Reports menu.
  •        From General Tasks, click Create Configuration Report Wizard.
  •        In Reports (in the resource tree), look for the button.

Ø  These reports are available in two formats:
§  HTML
§  CSV
§  HTML reports can be viewed from Storage Navigator 2.
§  CSV reports are saved as a compressed file, which you extract when you want to access the CSV files.

·         Generate a CSV report; you can then select and download the reports.


   Note:   A maximum of 20 reports will be listed in the Reports tab.

Monday, 20 June 2016

RAID



RAID [Redundant Array of Independent (Inexpensive) Disk]

            The idea was to combine multiple small, inexpensive physical disks into
an array that would function as a single logical drive, but provide better performance and higher data availability than a single large expensive disk drive.
• A set of physical disk drives that can function as one or more logical drives (improved I/O)
• Data distribution across multiple physical disks (striping)
• Data recovery, or reconstruction of data in the event of a physical disk failure (redundancy).
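
As a rough cross-check of the capacity examples given for each RAID level below, here is a small sketch; the parity layouts used (for example 5D+1P for RAID 5) are generic textbook ones, not any particular vendor's implementation, so treat the numbers as approximations.

# Rough usable-capacity calculator for the RAID levels discussed below.
def usable_tb(level, drives, drive_tb=1.0, data_per_group=0):
    if level == "RAID0":                      # striping only, no protection
        return drives * drive_tb
    if level in ("RAID1", "RAID1+0"):         # mirroring: half the raw capacity
        return drives * drive_tb / 2
    if level in ("RAID3", "RAID5", "RAIDS"):  # one parity drive per group
        groups = drives // (data_per_group + 1)
        return groups * data_per_group * drive_tb
    if level == "RAID6":                      # two parity drives per group
        groups = drives // (data_per_group + 2)
        return groups * data_per_group * drive_tb
    raise ValueError(level)

print(usable_tb("RAID0",   5))                     # 5.0 TB from 5 x 1 TB
print(usable_tb("RAID1",  10))                     # 5.0 TB from 10 x 1 TB
print(usable_tb("RAID5",   6, data_per_group=5))   # 5.0 TB from 6 x 1 TB (5D+1P)
print(usable_tb("RAID6",   7, data_per_group=5))   # 5.0 TB from 7 x 1 TB (5D+2P)
print(usable_tb("RAIDS",   8, data_per_group=3))   # 6.0 TB from 8 x 1 TB (two 3D+1P groups)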
RAID 0 : 

Technology: Striping Data with No Data Protection.
Performance: Highest
Overhead: None
Minimum Number of Drives: 2 (striping requires at least two drives)
Data Loss: Upon one drive failure
Example: 5TB of usable space can be achieved through 5 x 1TB of disk.
Advantages: High performance.
Disadvantages: Guaranteed data loss upon any drive failure.

Hot Spare: Upon a drive failure, a hot spare can be invoked, but there will be no data to copy over. A hot spare is therefore not a good option for this RAID type.

In RAID 0, the data is striped across all of the disks. This is great for performance, but if one disk fails the data is lost, because there is no protection of that data.
-------------------------------------------------

RAID 1 :

Technology: Mirroring and Duplexing
Performance: Highest
Overhead: 50%
Minimum Number of Drives: 2
Data Loss: One drive failure causes no data loss; if both drives of a mirrored pair fail, all the data is lost.
Example: 5TB of usable space can be achieved through 10 x 1TB of disk.
Advantages: Highest Performance, One of the safest.
Disadvantages: High overhead and additional load on the storage subsystem. Upon a drive failure, the surviving copy runs unprotected until the failed drive is replaced and re-mirrored.

Hot Spare: A Hot Spare can be invoked and data can be copied over from the surviving paired drive using Disk copy.

The exact same data is written to two disks at the same time. Upon a single drive failure there is no data loss and no degradation, performance or data integrity issues. It is one of the safest forms of RAID, but with high overhead. In the old days, Symmetrix arrays supported RAID 1 and RAID S. Highly recommended for high-end, business-critical applications.
The controller must be able to perform two concurrent separate Reads per mirrored pair or two duplicate Writes per mirrored pair. One Write or two Reads are possible per mirrored pair. Upon a drive failure only the failed disk needs to be replaced.


RAID 1+0 :


Technology: Mirroring and Striping Data
Performance: High
Overhead: 50%
Minimum Number of Drives: 4
Data Loss: A single drive failure (an M1 device) causes no issues. Multiple drive failures on the same side of the mirror (M1 devices only) cause no issues. If both the M1 and M2 of a mirrored pair fail, data loss is certain.
Example: 5TB of usable space can be achieved through 10 x 1TB of disk.
Advantages: Fault tolerance similar to RAID 5; because of striping, high I/O rates are achievable.
Disadvantages: Upon a drive failure, it becomes RAID 0.

Hot Spare: Hot Spare is a good option with this RAID type, since with a failure the data can be copied over from the surviving paired device.

RAID 1+0 is implemented as a striped array whose segments are mirrored (RAID 1) pairs.


RAID 3 : 

Technology: Striping Data with dedicated Parity Drive.
Performance: High
Overhead: 33% overhead with parity (in the example above); more drives in a RAID 3 configuration will bring the overhead down.
Minimum Number of Drives: 3
Data Loss: Upon 1 drive failure, Parity will be used to rebuild data. Two drive failures in the same Raid group will cause data loss.
Example: 5TB of usable space would be achieved through 9 x 1TB disks.
Advantages: Very high read data transfer rate. Very high write data transfer rate. Disk failure has an insignificant impact on throughput. Low ratio of ECC (parity) disks to data disks, which translates to high efficiency.
Disadvantages: Transaction rate is at best equal to that of a single spindle.

Hot Spare: A hot spare can be configured and invoked upon a drive failure, and rebuilt using the parity and data devices. Upon drive replacement, the hot spare can be used to rebuild the replaced drive.


RAID 5 :

Technology: Striping Data with Distributed Parity, Block Interleaved Distributed Parity
Performance: Medium
Overhead: 20% in our example; with additional drives in the RAID group you can substantially bring down the overhead.
Minimum Number of Drives: 3
Data Loss: With one drive failure there is no data loss; with multiple drive failures in the same RAID group, data loss will occur.
Example: For 5TB of usable space, we might need 6 x 1 TB drives
Advantages: The highest read data transaction rate, with a medium write data transaction rate. A low ratio of ECC (parity) disks to data disks, which translates to high efficiency, along with a good aggregate transfer rate.
Disadvantages: Disk failure has a medium impact on throughput. It also has the most complex controller design. It is often difficult to rebuild in the event of a disk failure (compared to RAID level 1), and the individual block data transfer rate is the same as for a single disk. Ask the PSEs about RAID 5 issues and data loss.

Hot Spare: Similar to RAID 3: a hot spare can be configured and invoked upon a drive failure, and rebuilt using the parity and data devices. Upon drive replacement, the hot spare can be used to rebuild the replaced drive.

RAID Level 5 also relies on parity information to provide redundancy and fault tolerance using independent data disks with distributed parity blocks. Each entire data block is written onto a data disk; parity for blocks in the same rank is generated on Writes, recorded in a distributed location and checked on Reads.
This is arguably the most widely used RAID technology today.
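
To make the parity mechanism concrete, here is a minimal XOR sketch of generic RAID 5 style parity (not any array's actual on-disk format): the parity block is the XOR of the data blocks in a stripe, and a lost block is rebuilt by XOR-ing the surviving blocks with the parity.

from functools import reduce

def xor_blocks(blocks):
    # XOR a list of equal-length byte blocks together.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# One stripe: three data blocks plus one parity block (a 3D+1P layout).
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing the second data block and rebuilding it from survivors + parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print(rebuilt)   # b'BBBB'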



RAID 6 : 


Technology: Striping Data with Double Parity, Independent Data Disk with Double Parity
Performance: Medium
Overhead: 28% in our example; with additional drives you can bring down the overhead.
Minimum Number of Drives: 4
Data Loss: No data loss with one drive failure, or even with two drive failures in the same RAID group. Very reliable.
Example: For 5 TB of usable space, we might need 7 x 1TB drives
Advantages: RAID 6 is essentially an extension of RAID level 5 which allows for additional fault tolerance by using a second independent distributed parity scheme (two-dimensional parity). Data is striped on a block level across a set of drives, just like in RAID 5, and a second set of parity is calculated and written across all the drives; RAID 6 provides for an extremely high data fault tolerance and can sustain multiple simultaneous drive failures which typically makes it a perfect solution for mission critical applications.
Disadvantages: Very poor Write performance in addition to requiring N+2 drives to implement because of two-dimensional parity scheme.

Hot Spare: A hot spare can be invoked upon a drive failure, rebuilt from the parity and data drives, and then used to rebuild the replaced drive once the failed drive has been swapped out.
 The simplest explanation of RAID 6 is double parity. This allows a RAID 6 RAID group to sustain two drive failures in the RAID group while maintaining access to the data.

RAID S (3+1) :

Technology: RAID Symmetrix
Performance: High
Overhead: 25%
Minimum Number of Drives: 4
Data Loss: Upon two drive failures in the same Raid Group
Example: For 5 TB of usable space, 8 x 1 TB drives
Advantages: High Performance on Symmetrix Environment
Disadvantages: Proprietary to EMC. RAID S can be implemented on the Symmetrix 8000, 5000 and 3000 series. Known to have back-end issues with director replacements, SCSI chip replacements and back-end DA replacements, causing DU (data unavailability) or offline procedures.

Hot Spare: Hot Spare can be invoked against a failed drive, data can be built from the parity or the data drives and upon a successful drive replacement, the hot spare can be used to rebuild the replaced drive.
The data protection feature is based on a Parity RAID (3+1) volume configuration (three data volumes to one parity volume).

RAID (7+1) :

Technology: RAID Symmetrix
Performance: High
Overhead: 12.5%
Minimum Number of Drives: 8
Data Loss: Upon two drive failures in the same Raid Group
Example: For 5 TB of usable space, 8 x 1 TB drives (which will actually give you 7 TB usable).
Advantages: High Performance on Symmetrix Environment
Disadvantages: Proprietary to EMC. Available only on the Symmetrix DMX series. Known to have many back-end issues with director replacements and back-end DA replacements, since the spindle locations have to be verified; a cause of concern for DU (data unavailability).
Hot Spare: Hot Spare can be invoked against a failed drive, data can be built from the parity or the data drives and upon a successful drive replacement, the hot spare can be used to rebuild the replaced drive.
The data protection feature is based on a Parity RAID (7+1) volume configuration (seven data volumes to one parity volume).