As published in HPCWire
The HPC500 recently held its quarterly members-only call, and the topic was processor architectures. The group comprises industry leaders who
steer the direction of HPC and bring HPC technology to bear on challenging problems in science, engineering, and business. Members represent a
diverse, worldwide group of established HPC professionals from a cross-section of academic, government, and commercial organizations, spanning geographies,
budget sizes, and application areas.
Most participants on the call were experimenting with, or had experimented with, Nvidia GPUs and/or Intel Xeon Phi. In general, adoption by their
user bases has been slow. The consensus was that porting code takes a lot of work, even for evaluation, and that is slowing adoption rates across the
board. Most participants stated that porting to Phi is, or appears to be, easier than porting to Nvidia GPUs, with one participant sharing that he
expected the process to take man-weeks for Phi versus man-months for Nvidia.
Though Intel has been slow in getting its product out, participants felt the company has made rapid progress with its compilers for Phi, and this
was seen as a very good sign. A concern was voiced about whether other toolchains, such as PGI and GNU, would support Phi in the future.
Many members felt that Phi looks promising as an easier programming model than CUDA, since it is faster to port code with the Intel tools. Several
members described how successfully and easily they had ported various codes to Phi and how impressed they were. One suggested, however, that while
CUDA does take a lot of effort to get running, Phi needs just as much tweaking to reach good performance. He noted that CUDA is all or nothing,
whereas with Phi you can see gradual progress as you port, as sketched below.
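To illustrate that incremental path, here is a minimal sketch of offloading a single existing OpenMP loop to a Phi card with the Intel compiler's offload pragmas; the array names, sizes, and computation are purely illustrative assumptions rather than any participant's actual code.

/* Illustrative only: offload one existing OpenMP loop to the Phi (MIC)
 * coprocessor with the Intel compiler's offload pragma, leaving the rest
 * of the program untouched. Build with the Intel compiler, e.g.
 * icc -openmp offload_sketch.c */
#include <stdio.h>

#define N 4096

int main(void)
{
    float a[N], b[N], c[N];
    int i;

    for (i = 0; i < N; i++) {
        a[i] = (float)i;
        b[i] = 2.0f * (float)i;
    }

    /* Only this loop moves to the coprocessor; everything else still
     * runs on the host, which is what makes the porting "gradual". */
    #pragma offload target(mic) in(a, b) out(c)
    #pragma omp parallel for
    for (i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[42] = %f\n", c[42]);
    return 0;
}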
One participant, who has access to x86 clusters, GPUs, and Power 6 and 7 systems, and who had some prior experience with GPGPUs, was rolling out Intel MICs with great success.
MICs represent 5-10% of his overall workload right now, but he expects that share to increase, and he has seen phenomenal interest in MIC from both commercial
and academic users. Another member did not see his organization moving to a new processor architecture; because his decision depends on ISV support, he
felt his site would be using Power and x86 for the near term.
In the discussion on programming languages and models, most members considered OpenMP within a node and MPI between nodes to be their standard programming
environment.
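As a reference point for that hybrid model, below is a minimal sketch (not any member's actual code) in which OpenMP parallelizes work inside each node while MPI combines results between nodes; the problem size and the partial-sum computation are illustrative assumptions.

/* Illustrative hybrid MPI + OpenMP sketch: OpenMP threads within a node,
 * MPI ranks between nodes. Build with e.g. mpicc -fopenmp hybrid.c */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, size;
    long i, n = 10000000;             /* elements per rank (illustrative) */
    double local = 0.0, global = 0.0;

    /* Ask for an MPI library that tolerates OpenMP threads in each rank. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* OpenMP handles the parallelism within the node... */
    #pragma omp parallel for reduction(+:local)
    for (i = 0; i < n; i++)
        local += 1.0 / (double)(rank * n + i + 1);

    /* ...and MPI combines the per-rank results across nodes. */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("partial sum over %d ranks: %f\n", size, global);

    MPI_Finalize();
    return 0;
}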
Common barriers to adoption for both Nvidia and Phi include the substantial effort needed to port code, the lack of Phi support from compiler vendors
other than Intel, and insufficient ISV support and licensing models. One participant would like to do his product development on GPUs running in a Windows environment;
while he has been able to get access to Nvidia GPUs, he can't obtain hardware that will run under Windows.
Another processor architecture discussed was ARM. For embarrassingly parallel applications, one member whose codes were internally developed thought
that an ARM processor such as Snapdragon could be an interesting option due to its low power consumption and cost. He felt that once pricing drops, an
ARM-based box could be a great option to replace an older dual cluster rack. ARM performance, which in his tests was slower than that of a Xeon processor,
was still attractive when weighed against the lower power consumption and cost. For several participants, power consumption is a huge factor in their purchasing
decisions, since energy costs are such a large part of their operating budgets.
The last point discussed was the future of AMD. The consensus was that the chip manufacturer has taken itself out of contention in the HPC market.
Its roadmap sounds wonderful, but the company hasn't been able to execute, and members felt it had moved into other market segments and away from
HPC. AMD does have good products, but they are not as good as Nvidia's solutions, and members couldn't see an advantage. That said, if ARM starts to gain
traction in HPC, AMD may yet return to this marketplace.