
    News from SC11

    by Greg Monaco

    Keynote Address by Jen-Hsun Huang

    Co-founder and CEO of Nvidia

    Huang opened his keynote address, on reaching Exascale computing performance, with a promise to follow up on last year's excellent keynote on disruptive technology by Clayton Christensen. (You can read my description of last year's keynote at this site.) Essentially, a disruptive technology starts out tailored to a niche market. As the landscape changes, features of the niche technology come to be seen as useful and appealing to a much larger market.

    Huang's premise is that graphics processing units are the disruptive technology of today that will lead the way to Exascale computing. The argument goes like this: we have reached the limit of Dennard scaling, the proposition that explains how shrinking transistors delivered a 1000-fold explosion in computing capability with only a 40-fold increase in power consumption. According to Huang, we are now at the point where we can predict only a two-to-one increase in computing capacity relative to power requirements going forward. (We heard something similar in Horst Simon's 2010 keynote presentation at the OK Supercomputing Symposium.)
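    The scaling numbers above lend themselves to a quick back-of-envelope check of the performance-per-watt trend. A minimal sketch, assuming the multipliers are the ones cited in the talk (the helper function and the arithmetic are mine):

```python
# Back-of-envelope on the scaling figures from Huang's keynote
# (illustrative only; the multipliers are from the talk).

def perf_per_watt_gain(perf_multiplier, power_multiplier):
    """Performance-per-watt improvement implied by a performance
    increase achieved at a given power-consumption increase."""
    return perf_multiplier / power_multiplier

# Dennard-scaling era: ~1000x performance for ~40x power.
historical = perf_per_watt_gain(1000, 40)   # 25x better perf/watt

# Huang's projection going forward: only ~2x capacity per unit power.
projected = 2.0

print(f"historical perf/watt gain: {historical:.0f}x")
print(f"projected perf/watt gain:  {projected:.0f}x")
```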

    Huang went on to point out how GPUs are truly disruptive: they were designed and honed to improve the gaming experience for a unique market segment (gamers), but the chips are likely to be rapidly adopted by the broader market, including supercomputing, as the price comes down and their low-power, high-capability features gain appeal. GPUs have also become more programmable as they offer a wider variety of features and more control to game designers.

    I found a couple of Huang's other comments quite thought-provoking.

    • 98% of current processor time is dedicated to scheduling (scheduling CPU instructions); reducing that overhead could ultimately improve both power consumption and performance.
    • Highly efficient GPU processors are not single-threaded, and this hinders their uptake.
    • A scientist in Japan could not afford a supercomputer, so he built one out of locally available GPU chips; he became a celebrity in Japan.
    • Without the GPU, the supercomputer of the future would require the power of two Nimitz-class nuclear aircraft carriers.

    Heard at SC11

    1.  Compared to today, reaching Exascale computing levels will require:

    • 1000 times increase in overall performance
    • 135 times increase in performance per watt
    • 1000 times increase in performance per dollar
    • Less than a 2% increase in footprint  

    2.  The expense of moving data is drastically underestimated (helping to make the case for bringing the processing to the data rather than vice versa).

    3.  If upgrading a supercomputer requires a new building, then that diverts money from what's really needed such as improved algorithms, people, and so forth.

    4.  Here's what's required for technology to be easily adopted in the developing world:

    • Easy to use
    • Well documented
    • There have to be people in the mix

    Art Vandenburg of Georgia State aptly commented that this is necessary in the developed world as well.
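    The exascale multipliers in point 1 imply a steep power budget on their own. A rough sketch, using only the 1000x performance and 135x performance-per-watt figures quoted above (the arithmetic is mine, not the speakers'):

```python
# If overall performance must grow 1000x while performance per watt
# grows only 135x, the implied total power draw grows by their ratio.
# (The multipliers are the ones quoted at SC11; the arithmetic is mine.)

perf_gain = 1000        # required overall performance increase
efficiency_gain = 135   # required performance-per-watt increase

power_gain = perf_gain / efficiency_gain
print(f"implied power increase: {power_gain:.1f}x")  # ~7.4x
```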

    Archiving Data in the Clouds BoF & Panel

    Led by folks from Spectra, Argonne, and elsewhere

    Note:  This meeting is being held at 5:30PM and it's PACKED!

    Archiving: making data usable, available, manageable.  The archive may be the primary copy of the data.

    Big data is like a big cluster--it's relative.  Let's not be dismissive if someone considers a smaller dataset to be "big" because it's big to them.

    Storage is the fastest-growing area in HPC.  Data is growing exponentially year over year.  We can't just throw more hardware at the data.

    If metadata isn't created along with the data content, it becomes increasingly difficult to get value from the data.

    People also want their archive to be available immediately and online.  

    Managing a petabyte of data vs terabyte

    Managing a petabyte of data is now fairly common (true of 50% of the people in the room).  But it's different from managing a terabyte, where you could simply add storage as the data grew and back up to tape.

    The Case for Cloud for Archive

    If you can get all your data into one spot, you gain economies of scale in disk, power, and operations.  This applies whether it's a private university cloud or purchased cloud services: get all of a university's data together in one place and it's cheaper.

    Data-center archiving can now leverage both tape and disk.  The future lies in putting most of the data on low-cost media like tape while accessing it the same way as disk storage.

    Put your IOPS on flash/SSD drives and the archive on more economical media, and it can all work together seamlessly.

    Adoption of Public Cloud Storage

    1. 10.6% are storing reference data
    2. 9.8% are storing primary data
    3. 10.6% are storing backup data
    4. 78.9% are not using the cloud

    Comments on Storage in "The Cloud"
    Comment:  The Cloud is like The Internet, it's a bit amorphous.
    Response:  The Cloud is where you may have a single point for managing the data.
    Comment:  With Google it may now be possible to bring the computation to the data.  We'll see whether that works.
    The fastest way to move a petabyte of data is still FedEx: put it on a 747.  (And it's in the cloud!)
    It's impossible to have a cloud storage offering without tape.
    Disk needs to be rewritten more frequently than tape.  Also, tape is more economical in terms of power.
    SATA disks see a bit error every 11.3 terabytes, versus every 10.8 exabytes with IBM tape storage.
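    The FedEx quip holds up to a quick estimate. A hypothetical comparison, assuming a 10 Gb/s network link and roughly 24 hours for an overnight shipment (both figures are my assumptions, not numbers from the panel):

```python
# Why shipping wins: effective bandwidth of moving 1 PB overnight
# vs. streaming it over a 10 Gb/s link. (Illustrative assumptions:
# the link speed and the 24-hour shipping time are mine.)

petabyte_bits = 8 * 10**15   # 1 PB = 1e15 bytes = 8e15 bits
link_bps = 10 * 10**9        # assumed 10 Gb/s network link
overnight_s = 24 * 3600      # assumed ~24 hours door to door

network_days = petabyte_bits / link_bps / 86400
shipping_gbps = petabyte_bits / overnight_s / 10**9

print(f"10 Gb/s transfer time: {network_days:.1f} days")        # ~9.3 days
print(f"overnight shipment:    {shipping_gbps:.0f} Gb/s effective")  # ~93 Gb/s
```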

    How do you verify that the data still has integrity?  Argonne--The application specialists can determine what is going on with their data and notify us if there is a problem.

    What's the lifespan of the media?  Tape written today can be read for 7.5 years.  Disk refresh needs to occur at least every 3 years.

    Storage systems have only recently scaled to over 1 exabyte.

    Posted at Nov 16, 2011 by Greg Monaco
