Bill Gram-Reefer
Expert solutions for business owners who want the Internet to work for them

192 followers
About
Bill's posts

Post has attachment
A $3B project with 20K new residents will impact development of Central Contra Costa County for the next 20 years.

Post has attachment
Probably shouldn't have had a picture of the Councilwoman who fought to get her purse back next to a lede about crime.

Post has attachment
Learning every day on how to generate great leads for my clients.

Post has attachment
Free fix for #Minecraft on #Yosemite. The SuperSync installer adds all the required files so #gamers can load and play Minecraft on Macs running OS X 10.x in minutes.

Post has shared content
Virtualization is not plug and play
Virtualization Nightmare
I'd like to share our experience with some brand-new hardware we got for testing VM server solutions, with the goal of migrating multiple physical servers onto virtual machines. We got very disappointing results; here is the story.

We used the following new hardware:
- HP DL380p Gen8 server with 128 GB RAM (2 x 8-core Xeon CPUs)
- Onboard SAS controller with eight 6 Gb-SAS channels (2 x 4 - SFF8088 ports)
- HP H221 second PCI SAS host bus adapter with eight 6 Gb-SAS channels (2 x 4 - SFF8088)
- An HP P2000 G3 RAID array with 24 x 2.5” 15k RPM SAS disks, connected to the server via up to four mini-SAS ports (each port carrying 4 x 6 Gb SAS channels)

Side note: all SAS controllers use LSI chips, and the internal and external PCI controllers offer about the same performance. However, the external controller can only boot from the first disk, whereas the internal controller allows choosing the boot disk.

RAID storage layout
We use a SAS RAID to keep the connections between the server and the storage simple. SAS also has the benefit of low-latency storage, and we can connect up to four mini-SAS SFF8088 cables, each supporting 4 x 6 Gb SAS channels.

One major problem we experienced with the storage is that under heavy large-file I/O, e.g. reading and writing 100 GB files, the system is so busy writing data that a single additional “ls” command takes several seconds. It seems the queue of pending writes for a single file keeps the controller too busy to serve additional requests. To avoid this, we have to work with multiple controllers and multiple RAID groups, connected to separate SAS ports on the RAID array and separate SAS ports on the server.
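
To illustrate the effect, here is a minimal sketch of the kind of probe that reproduces it; the mount point, file size, and chunk size are illustrative assumptions, not our exact test setup:

    import os
    import threading
    import time

    MOUNT = "/mnt/raid"                 # assumed mount point of one RAID group
    BIG_FILE = os.path.join(MOUNT, "big.bin")
    CHUNK = 64 * 1024 * 1024            # write in 64 MB chunks
    TOTAL = 100 * 1024 ** 3             # ~100 GB, as in the test above

    def hammer_writes():
        # Saturate the array with one large sequential write.
        buf = os.urandom(CHUNK)
        with open(BIG_FILE, "wb") as f:
            written = 0
            while written < TOTAL:
                f.write(buf)
                written += CHUNK

    writer = threading.Thread(target=hammer_writes, daemon=True)
    writer.start()

    # While the write runs, time the equivalent of a bare "ls" every second.
    while writer.is_alive():
        t0 = time.monotonic()
        os.listdir(MOUNT)
        print(f"listdir latency: {time.monotonic() - t0:.2f} s")
        time.sleep(1)

On the single-controller setup described above, the listdir latency climbs to several seconds as soon as the writer saturates the array.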

We have not made a final decision on the disk configuration, but it is likely that we will use four separate RAID-5 groups connected to separate SAS ports.

Native Linux or Windows
The good news is that the server and storage work great, with excellent performance, running the latest Linux or Windows Server operating systems. Everything works, and we get more than 800 MB/sec over a single SFF8088 cable. Adding another cable to an additional RAID group gives us more than 1000 MB/sec.
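
For reference, numbers like these come from simple sequential streaming tests. A minimal sketch of such a test follows; the path and sizes are assumptions, and the fsync keeps the page cache from inflating the result:

    import os
    import time

    PATH = "/mnt/raid/throughput.bin"   # assumed test file on the RAID group
    CHUNK = 64 * 1024 * 1024            # 64 MB per write
    TOTAL = 20 * 1024 ** 3              # 20 GB test file

    buf = b"\0" * CHUNK
    t0 = time.monotonic()
    with open(PATH, "wb") as f:
        written = 0
        while written < TOTAL:
            f.write(buf)
            written += CHUNK
        f.flush()
        os.fsync(f.fileno())            # force data to disk before timing stops
    elapsed = time.monotonic() - t0
    print(f"{TOTAL / elapsed / 1e6:.0f} MB/sec sequential write")
    os.remove(PATH)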

VMware ESX Server 5.5
Extremely poor disk performance
The hardware works fully under VMware 5.5, but the disk performance we get within a Linux or Windows guest VM (< 400 MB/sec) is less than half of the native performance, which is completely disappointing. As a control, we tried passing one controller through to a Linux VM, which delivered the expected 800 MB/sec. Unfortunately, PCI passthrough does not work with HP servers in ESX 5.5, so we had to downgrade to VMware ESX 5.1 for this test. While passthrough mode demonstrates the capability of the hardware, it cannot be used in practice, because key VMware features then become unavailable and because it is limited to a single VM.

In summary, VMware ESX 5.5 and 5.1 managed storage gives us less than 50% of the potential disk performance. We are very disappointed with the performance.
 
Windows Server 2012 R2 with Hyper-V
Extremely poor network performance
Windows Server installs and works fine on this machine, and running a Linux VM under Hyper-V gives us the expected 800 MB/sec disk performance within the VM, more than twice the disk performance of VMware. However, the network adapter is extremely slow within the Linux or Windows VM. We still need to resolve this problem: within the host OS, Windows Server 2012 R2, network performance is great; only the VMs have the problem. So again the result is disappointing, this time because of network performance.
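
A quick way to compare host and VM network throughput is a plain TCP stream between two endpoints; a dedicated tool such as iperf is the usual choice, but here is a minimal sketch, with the port and transfer size as illustrative assumptions:

    import socket
    import sys
    import time

    PORT = 5001                  # arbitrary test port (assumption)
    CHUNK = 1024 * 1024          # 1 MB send/receive buffer
    TOTAL = 2 * 1024 ** 3        # stream 2 GB per run

    def server():
        # Receive bytes until the sender closes, then report throughput.
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                received, t0 = 0, time.monotonic()
                while data := conn.recv(CHUNK):
                    received += len(data)
                print(f"{received / (time.monotonic() - t0) / 1e6:.0f} MB/sec")

    def client(host):
        # Stream TOTAL bytes to the server as fast as possible.
        buf = b"\0" * CHUNK
        with socket.create_connection((host, PORT)) as s:
            sent = 0
            while sent < TOTAL:
                s.sendall(buf)
                sent += CHUNK

    if __name__ == "__main__":
        # Usage: run "server" on one end, "client <host>" on the other.
        if sys.argv[1:2] == ["server"]:
            server()
        else:
            client(sys.argv[2])

Running this once between two native hosts and once between host and VM makes the gap visible immediately.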

Summary
That's the current status. We will continue to perform additional tests and update this article. At this point it is clear that VM deployment should not be considered a "plug-and-play" technology, at least if you want to be certain you are achieving the full potential of the underlying hardware. We suspect that a high percentage of VM deployments never compare native performance to VM performance, and hence are unaware of the need for further tuning, or, if tuning fails, of the potentially significant performance tradeoff.

Any comments?

Post has attachment
Redpark ships the industry's first Lightning Serial+Power cable for iOS. It allows applications to run 24x7, since users can now charge an iOS device AND connect via Lightning at the same time, in POS, retail, process control, and any number of uses in medical, education, labs, and commerce.

Post has attachment
Pleasant Hill and Concord improve along with the Walnut Creek 680 Corridor overall. http://halfwaytoconcord.com/walnut-creek-680-corridor-colliers/