Let's Check Out an Old Blade Server System with 32 CPUs!
Many, many servers these days run as virtual machines — but there was a time when virtualization was still just catching on, and companies needed physical servers to be as dense as possible. So let’s look at a blade server system from around 2010 that packs 32 CPU sockets and weighs 500 pounds!
Image of C7000 chassis with full-height blades: https://commons.wikimedia.org/wiki/File:HP_BladeSystem_c7000_Enclosure.jpg
——————————————————————————————
Please consider supporting my work on Patreon: https://www.patreon.com/thisdoesnotcompute
Follow me on Twitter and Instagram! @thisdoesnotcomp
——————————————————————————————
Music by
Epidemic Sound (http://www.epidemicsound.com).
I feel like blade systems could make a comeback for Kubernetes clusters at absolutely massive companies
Chrome: give me all your RAM or I will boil your teeth
Me: gives RAM to Chrome
Very informative video; great to actually see such a thing and not only work on it remotely. A few questions popped into my mind as you repeatedly said this is pre-virtualization-era hardware. As a matter of fact, we work on a later-gen C7000 with Xeon Gold blades in a fully virtualized environment BECAUSE it provides only CPU and memory in a dense and fast package, connected to a SAN via Fibre Channel. Two stacked C7000s, to be precise. We use VMware vSphere with several clusters and DRS enabled, and aside from more modern solutions like HPE Synergy, this is as close as we can get to an "ideal" virtualized environment. At least we thought so…
So if I may ask: why do you think this approach is "deprecated", and what alternative do you use for on-premises virtualization?
I use these at home; I bought them on eBay just to play around with. About 5 years later I decided to start a wireless ISP, and I knew they would come in handy. I have 4 of the 480c G1's, and now that I want to virtualize, I'm looking at getting a few of the full-height servers.
I just installed 2 new blades in a C7000 yesterday with 512GB of RAM each… We use these for virtualization.
Thanks for the nice video and all the information. Any tips about the maximum power this system may draw would be appreciated.
If you hunted and ate elk, you'd be able to lift bigger, heavier servers. Joe Rogan says so…
Oooh, this shit gives me wood.
You're right, 2 people could not be expected to lift 223kg… without months in hospital afterwards.
Blades fell out of fashion as hosting centers became sensitive about electricity use. As the cost of colocation went up, companies bought space-saving blade systems, so server densities went up dramatically. That led colo and hosting centers to separate out their charges for electricity, which became a considerable cost for the customer. The solution to that was the VM: most of these physical blade servers did very little but still consumed 75-100 W each, while a VM could sit idle and cost a fraction of that. This not only reduced electricity costs but also shrank colocation footprints. By the mid-2000s, colocation in places like London Telehouse was already pushing towards five figures a year for a single rack, before the electricity cost was added.
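To put that comment's numbers in perspective, here is a rough back-of-the-envelope sketch in Python. The ~100 W idle-blade figure comes from the comment above; the $0.15/kWh rate, the 300 W host, and the 20-VM consolidation ratio are illustrative assumptions, not figures from the video or the comments.

# Rough annual electricity cost: idle blade vs. idle VM on a shared host.
# The rate and consolidation numbers below are assumptions for illustration.

HOURS_PER_YEAR = 24 * 365        # 8,760 hours
RATE_USD_PER_KWH = 0.15          # assumed colo electricity rate

def annual_cost(watts):
    """Yearly electricity cost for a device drawing `watts` continuously."""
    return watts / 1000 * HOURS_PER_YEAR * RATE_USD_PER_KWH

blade = annual_cost(100)         # one mostly-idle blade (~100 W, per the comment)
vm = annual_cost(300) / 20       # assumed 300 W host shared by 20 idle VMs

print(f"Idle blade:      ${blade:,.0f}/year")   # ~$131/year
print(f"Idle shared VM:  ${vm:,.0f}/year")      # ~$20/year

Multiply the per-blade figure by a full chassis of 16 half-height blades and the incentive to consolidate onto VMs becomes obvious.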
When it comes to the drives, HP might have their own firmware on them to kind of "optimize" the drive for their own hardware. So that might be the reason for their own branding on the drive.
Yeah, I ran a C7000 with 1x G1 and 2x G5 blades for a home lab. I had to hook it up in my detached garage because it was a little loud, even in its "running" mode. I ran it with 3 power supplies and all the fans on 220V. My power bill only went up by $100 a month.
This takes me back… I was invited to Lyon to get trained on HP CCI back in 2004. Those were 20 PCs in a single chassis to which the user would connect via thin clients over RDP. Interesting stuff back in the day 😀
I still see people in March 2021 who want one… heck, I would like to have one to play around with, and I'm a medium business…
when they say you can only buy one server:
HPC still loves blades
It's absolutely beautiful engineering, even if virtualization won out over it. It looks great but needs a lot of operational effort.
Yo, there are still government sectors that utilize this tech. I had no idea this stuff was 10 years old until I watched this video, haha.
Interesting video! A friend has some blade hardware; maybe I can borrow it to play with. Only problem is the power consumption…
I used to work with those; they were really good for saving datacenter space. I had big virtualization farms running on them back then…