Let's Check Out an Old Blade Server System with 32 CPUs!



Many, many servers these days run as virtual machines — but there was a time when virtualization was still just catching on, and companies needed physical servers to be as dense as possible. So let’s look at a blade server system from around 2010 that packs 32 CPU sockets and weighs 500 pounds!

Image of C7000 chassis with full-height blades: https://commons.wikimedia.org/wiki/File:HP_BladeSystem_c7000_Enclosure.jpg#/media/File:HP_BladeSystem_c7000_Enclosure.jpg

————————————————————————————

Please consider supporting my work on Patreon: https://www.patreon.com/thisdoesnotcompute

Follow me on Twitter and Instagram! @thisdoesnotcomp

————————————————————————————

Music by
Epidemic Sound (http://www.epidemicsound.com).

20 thoughts on “Let's Check Out an Old Blade Server System with 32 CPUs!”

  1. Very informative video, great to actually see such a thing and not only work on it remotely. A few questions popped into my mind as you repeatedly said this is pre-virtualization-era hardware. As a matter of fact, we work on a later-gen C7000 with Xeon Gold blades in a fully virtualized environment BECAUSE it provides only CPU and memory in a dense and fast package, connected to a SAN via Fibre Channel. Two stacked C7000s, to be precise. We use VMware vSphere with several clusters and DRS enabled (see the sketch after these comments), and besides more modern solutions like HPE Synergy, this is as close as we can get to an "ideal" virtualized environment. At least we thought so…
    So if I may ask: why do you think this approach is "deprecated", and what is the alternative you use for on-premises virtualization?

  2. I use these at home. I bought them on eBay just to play around with. About 5 years later I decided to start a wireless ISP, and I knew they would come in handy. I have 4 of the 480c G1s; now that I want to virtualize, I'm looking at getting a few of the full-height servers.

  3. Blades fell out of fashion as hosting centers became sensitive about electricity use. As the cost of colocation went up, companies bought space-saving blade systems, so server densities went up dramatically. That led colo and hosting centers to separate out their electricity charges, which became a considerable expense for the customer. The solution was the VM: most of these physical (blade) servers did very little yet still consumed 75-100 W each, while a VM could sit idle and cost a fraction of that (see the rough cost figures after these comments). This not only reduced electricity costs, but colocation footprints too. By the mid-2000s, colocation in places like London Telehouse was already pushing towards five figures a year for a single rack, before the electricity cost was added.

  4. When it comes to the drives, HP might have their own firmware on them to kind of "optimize" the drive for their own hardware. That might be the reason for their own branding on the drive.

  5. Yeah, I ran a C7000 with 1x G1 and 2x G5 blades for a home lab. I had to hook it up in my detached garage because it was a little loud, even in its "running" mode. I ran it with 3 power supplies and all the fans on 220 V. My power bill only went up by $100 a month.

  6. This takes me back… I was invited to Lyon for training on HP CCI back in 2004. Those were 20 PCs in a single chassis to which users would connect via thin clients over RDP. Interesting stuff back in the day 😀
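
Regarding the vSphere setup described in comment 1: below is a minimal sketch (not from the video or the commenter) of how one could list each cluster's DRS status with pyVmomi, VMware's Python SDK. The vCenter hostname and credentials are placeholders, and the unverified-SSL context is a lab-only shortcut.

```python
# Sketch: list DRS status for every cluster on a vCenter (assumed setup).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; skips cert checks
si = SmartConnect(host="vcenter.example.local",          # placeholder
                  user="administrator@vsphere.local",    # placeholder
                  pwd="password",                        # placeholder
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Walk the inventory for cluster objects
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        drs = cluster.configurationEx.drsConfig
        print(f"{cluster.name}: DRS enabled={drs.enabled}, "
              f"behavior={drs.defaultVmBehavior}")
    view.Destroy()
finally:
    Disconnect(si)
```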
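And to put rough numbers on the electricity argument in comment 3: a quick back-of-envelope calculation, assuming a 24/7 duty cycle and an illustrative $0.15/kWh rate (the 75-100 W idle figure comes from the comment itself; cooling overhead is ignored here).

```python
# Back-of-envelope: monthly electricity cost of a mostly idle blade.
HOURS_PER_MONTH = 24 * 30
RATE_PER_KWH = 0.15  # assumed electricity price, USD

def monthly_cost(watts: float) -> float:
    """Cost of running a constant load for a month at the assumed rate."""
    kwh = watts / 1000 * HOURS_PER_MONTH
    return kwh * RATE_PER_KWH

for watts in (75, 100):
    print(f"{watts} W idle blade: ~${monthly_cost(watts):.2f}/month")
# 75 W  -> ~$8.10/month; 100 W -> ~$10.80/month per server,
# which adds up quickly across a chassis full of underused blades.
```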
