High Performance Computing / Storage Area Network / Data Center Interconnect cabling: SFP+, QSFP+, CX4, Mini SAS, Mini SAS HD cables.
http://goo.gl/v3zN7Z

Yoram Novick, CEO and founder of Sunnyvale, Calif.-based hyper-converged infrastructure software developer Maxta, said he has heard of Project Mystic, but declined to comment on anything VMware is doing in that area.

Novick did say, however, that traditional data center infrastructures that separate servers and storage make it increasingly difficult to tie storage into virtualized environments.

"There are issues with how storage works with virtual servers, how to expand capacity, how to manage change," he said. "Converged infrastructure addresses these issues."

Maxta focuses exclusively on software and stays away from the hardware side, which Novick said makes it easy to match a solution to customer requirements. And unlike VMware's VSAN, Maxta's software addresses a wide range of data services, including native support of sub-second snapshots and sub-second cloning, he said.

"In a lot of ways we're happy VMware has announced VSAN," he said. "We have similar visions in the sense that both of us believe there needs to be a software-defined storage practice."

http://www.maxta.com/project-mystics-potential-competitors-vmware-bring/

The real skinny on snapshots and clones

Snapshots (read-only point-in-time copies) and clones (writable point-in-time copies) have been table-stakes features of enterprise storage for many years. Many of the storage solutions on the market implement snapshots and clones, but not all snapshot and clone technologies are created equal. In virtualized environments especially, snapshots and clones must be scalable and manageable for customers to leverage their full potential. Snapshots and clones support multiple use cases in the virtual data center. In this blog, we will discuss the following two use cases:

Short term data protection
Test and development

Short term data protection

The key requirements to support this use case are:

Streamlined recovery from snapshots
Support a large number of snapshots (in the 1000s or more)
Efficiently and rapidly (in seconds) create snapshots
Streamlined recovery from snapshots: Data protection is all about streamlining data recoverability. Rapid recovery of VMs from snapshots is vital for an organization. Recovery that takes hours leaves the organization without access to its data for hours, causing loss of revenue. Ideally, the recovery time from snapshots should be measured in seconds. It is also important to be able to recover at the granularity of a VM, which keeps recovery simple and fast.

Support a large number of snapshots: In a virtual environment with many VMs, the number of snapshots required for a short-term data protection policy that minimizes data loss (a good Recovery Point Objective, or RPO) adds up very quickly. In a scenario where a snapshot is taken every 4 hours (6 per day) and each snapshot is retained for 7 days across 200 VMs, the storage has to support 8,400 snapshots (200 × 6 × 7 = 8,400).
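The retention math above generalizes to any periodic-snapshot policy. A small helper makes the calculation explicit (the function name and parameters are illustrative, not part of any product):

```python
def snapshots_required(num_vms, interval_hours, retention_days):
    """Snapshots the storage must hold for a periodic-snapshot policy."""
    per_day = 24 // interval_hours          # snapshots taken per VM per day
    return num_vms * per_day * retention_days

# The example from the text: 200 VMs, a snapshot every 4 hours, kept 7 days.
print(snapshots_required(200, 4, 7))  # → 8400
```

Tightening the RPO from 4 hours to 1 hour quadruples the total, which is why scalability to thousands of snapshots matters.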

Efficiently and rapidly create snapshots: Given the large number of snapshots, it is very important to minimize the resources consumed in creating each snapshot and to provide the ability to create snapshots rapidly. Ideally, snapshot creation should be a sub-second operation. It is also important to ensure that snapshot creation doesn't impose performance degradation on the base VM or on any other snapshot or clone.
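One common way to make snapshot creation a sub-second operation is to record only metadata at snapshot time and defer any data copying until a block is overwritten (copy-on-write). The toy model below sketches that idea; it is not Maxta's implementation, and all names are illustrative:

```python
class Volume:
    """Toy block device whose snapshots share unmodified blocks."""

    def __init__(self, blocks):
        self.blocks = dict(enumerate(blocks))  # block number -> data
        self.snapshots = []

    def snapshot(self):
        # Metadata-only copy: no data blocks are duplicated, which is
        # what keeps snapshot creation near-instant regardless of size.
        snap = dict(self.blocks)
        self.snapshots.append(snap)
        return snap

    def write(self, block_no, data):
        # The live volume gets the new data; earlier snapshots still
        # reference the old block contents (copy-on-write semantics).
        self.blocks[block_no] = data

vol = Volume(["a", "b", "c"])
snap = vol.snapshot()
vol.write(1, "B")
print(vol.blocks[1], snap[1])  # → B b  (snapshot keeps the old block)
```

Because only the block map is copied, creating the 8,400 snapshots from the earlier example costs metadata, not 8,400 full copies of the data.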

Test and Development

Clones provide an effective way to deliver point-in-time copies of the latest production data and accelerate time-to-market of applications. The ability for the test and development teams to manage these point-in-time copies themselves is icing on the cake.

The key requirements to support this use case are:

Rapidly create many clones without performance degradation
Ability to create and manage the clones by the test and development teams
Similar to snapshots, the number of clones quickly adds up. For example, if there are 20 team members and each of them needs at least 10 different environments, the system must be able to create 200 clones. It is extremely important to maintain the performance of these clones, as well as of the base VMs, so that team members can use them effectively.
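A clone behaves like a snapshot that accepts writes: it shares the parent's unmodified blocks and diverges only where it is written. The sketch below illustrates that divergence under the same toy copy-on-write model (names are illustrative, not any vendor's API):

```python
class CloneableVolume:
    """Toy volume whose clones share unmodified blocks with the parent."""

    def __init__(self, blocks):
        self.blocks = dict(blocks)  # block number -> data

    def clone(self):
        # Metadata-only copy, so creating even hundreds of clones
        # (e.g. 20 team members x 10 environments = 200) stays cheap.
        return CloneableVolume(self.blocks)

    def write(self, block_no, data):
        self.blocks[block_no] = data  # only the written block diverges

base = CloneableVolume({0: "prod-data"})
dev = base.clone()
dev.write(0, "test-data")          # clone diverges; base is untouched
print(base.blocks[0], dev.blocks[0])  # → prod-data test-data
```

Since each clone stores only its own modified blocks, 200 test environments need not consume 200 times the capacity of the base VM.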

Clone creation shouldn’t require knowledge of storage constructs such as volumes, LUNs, or file systems, or of how various VMs are mapped to those constructs. The ability to clone at the granularity of a VM is very important for simplifying clone creation and enabling test and development teams to leverage clones independently of the IT or storage team.

Storage Vendor Lock-in – Is the End Near?

Since the inception of IT, vendor lock-in has been a major issue for many customers. In the mainframe days, vertical integration was prevalent. Once you chose a vendor for your Information System, you had to buy servers, networking, and storage from the same vendor. Since standards were not commonplace and each vendor designed products in a unique way, switching from one vendor to another meant re-designing your Information System and associated applications. Thus, moving to a different vendor was an expensive, long, and risky proposition.

As computing evolved from mainframe to client/server, standards emerged and enabled horizontal integration. You could easily integrate servers from one vendor with storage from another vendor. On the server side, standards at multiple levels actually eliminated vendor lock-in. However, storage vendor lock-in persisted. The next major evolution to virtualized environments or software-defined data centers didn’t eliminate storage vendor lock-in either.

Why can’t you just install a new storage solution from a different vendor, migrate data from the existing storage to the new storage, decommission the existing storage, and move on? Why is switching to another storage vendor different from upgrading to a new storage solution from your current vendor?

The reasons for storage vendor lock-in are twofold.

The first is storage management lock-in. Each storage vendor designed its “box” with its own interfaces and processes to set up, provision, and manage storage, RAID groups, volumes, file systems, etc. This is true whether you choose to work with a SAN or a NAS architecture. The integration of these interfaces and processes into IT and the associated vendor-specific training are the sources of storage management vendor lock-in.

The second reason is data management lock-in. Each storage vendor developed a proprietary implementation of data services such as snapshots, clones, and replication. Vendors also developed their own proprietary server-side software for orchestrating their data services and integrating them with leading middleware and data protection software. The integration of these proprietary implementations into IT, and the IT processes built around their unique capabilities and interfaces, are the sources of data management vendor lock-in.

So, are we stuck with storage vendor lock-in for the foreseeable future? Two emerging technologies, and more importantly the synergy between them, may provide the relief from storage vendor lock-in in virtualized server environments. The first technology is compute/storage convergence.

The basic idea is to leverage server-side flash and disk drives, with intelligent distributed software running on the same servers that host applications, to replace storage arrays. With this approach, the storage array and all of its proprietary storage management interfaces can be eliminated, thereby eliminating storage management lock-in. The flip side is that many vendors that deliver convergence offer it as a closed system. Their closed system may leverage commodity parts, but the customer can only purchase the closed system as a whole. Thus, you may be swapping storage vendor lock-in for convergence vendor lock-in.

To avoid this issue, the convergence solution must be an open solution supporting all leading commodity servers and storage components. Unfortunately, convergence alone doesn’t eliminate data management lock-in. In today’s virtualized environments, old storage constructs such as volumes and file systems are utilized to implement storage for virtual machines. While this concept works, it is inefficient and complex.

The second emerging technology, VM-aware storage, matches the constructs of storage to those of virtualized servers and, more importantly, introduces VM-level data services that can be managed entirely by the virtualization management framework. This approach improves storage efficiency and simplifies IT. Moreover, by elevating data services to the VM level and leveraging the virtualization management framework, data management lock-in is eliminated.

Unfortunately, most implementations of VM-aware storage rely on a storage array and therefore create storage management lock-in. By blending open convergence and VM-aware storage, you get the best of both worlds: you eliminate both storage management and data management lock-in in virtualized environments. Since server virtualization is already dominant and getting stronger in the data center, this approach can help eliminate vendor lock-in in most cases.

So what about virtualization vendor lock-in? This is certainly an issue, although it is outside the scope of this blog. However, virtualization vendor lock-in is independent of storage vendor lock-in, and the proposed approach doesn’t increase virtualization vendor lock-in in any way. Moreover, if the storage solution is hypervisor-agnostic, then if or when you decide to switch to a different virtualization vendor, you will not face storage vendor lock-in in the new virtualization environment either.