CIOReview | August 2017

The Human Capital of High-Performance Computing

By Mike Fisk, CIO, Los Alamos National Laboratory

The often-overlooked aspect of high-performance computing is not the flashy "big iron" but the people. The hardware can be spectacular, but it won't benefit your business if you don't have talent who know how to use it, or if those employees lack the supporting infrastructure they need.

For much of its existence, high-performance computing resided in specialized supercomputing centers supporting a relatively small cadre of "users." Those "users" are in fact sophisticated software developers who use specialized libraries or programming languages to harness the horsepower of supercomputers. These developers are typically domain scientists in fields such as physics, materials, or geology who build and run simulations of some complex environment or system. Making these people productive is the key to getting value out of your computing investments. Don't underestimate the human capital investment required to understand how high-performance computing platforms work and how to generate value from them.

HPC also requires significant investment in facilities: power measured in megawatts, and cooling capacity to match. These tangible problems will be surmounted before you take delivery of your HPC system. Other issues, such as network bandwidth between the users and the computing, and the intellectual capacity to use the computer effectively, can linger if not addressed.

The rise of the phrase "high-performance computing" over the older term "supercomputer" reveals that performance is no longer achieved by buying a singular supercomputer but through a much more complex ecosystem of clusters, power and cooling infrastructure, specialized interconnect networks, languages, and runtime environments.
The first supercomputers were simply the fastest computers of the day, but today performance comes from parallel computation across large numbers of often-commodity processors. The leading supercomputers today have millions of compute cores. Orchestrating that parallel computation requires that programmers adopt specialized programming models such as message passing or partitioned global address space (PGAS) languages. A program that can exploit multiple cores on a commodity system generally cannot exploit the scale of HPC without a complete rewrite.

To get value out of these complex computing environments, you have to enable teams of scientific developers. HPC programmers are at the leading edge of parallel computing.
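The message-passing model mentioned above can be sketched in miniature with Python's standard multiprocessing module. This is a toy illustration, not the MPI or PGAS environments a real HPC code would use; the names (worker, parallel_sum) and the four-process split are assumptions for the example.

```python
# Toy sketch of the message-passing model: each process computes on its
# own slice of the data and communicates results only by sending messages,
# never through shared memory. Real HPC codes would use MPI instead.
from multiprocessing import Process, Queue

def worker(rank, numbers, results):
    # Each "rank" sums its private slice and sends the partial result
    # back to the coordinator as a message on the queue.
    results.put((rank, sum(numbers)))

def parallel_sum(numbers, nprocs=4):
    results = Queue()
    chunk = (len(numbers) + nprocs - 1) // nprocs
    procs = [
        Process(target=worker,
                args=(r, numbers[r * chunk:(r + 1) * chunk], results))
        for r in range(nprocs)
    ]
    for p in procs:
        p.start()
    # Combine one partial result per worker process.
    total = sum(results.get()[1] for _ in procs)
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(parallel_sum(list(range(1000))))  # 499500
```

The point of the sketch is the rewrite cost the article describes: the serial `sum(numbers)` had to be restructured around explicit data partitioning and message exchange, and that restructuring only grows harder at the scale of millions of cores.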