
    News from SC11

    by Greg Monaco

    Keynote Address by Jen-Hsun Huang

    Co-founder and CEO of Nvidia

    Huang opened his keynote address on reaching Exascale computing performance with a promise to follow up on last year's excellent keynote on disruptive technology by Clayton Christensen.  (You can read my description of last year's keynote here.)  Essentially, a disruptive technology starts out tailored for a niche market.  As the landscape changes, features of the niche technology come to be seen as useful and appealing to a larger market.

    Huang's premise is that graphics processing units are the disruptive technology of today that will lead the way to Exascale computing.  The argument goes like this:  we have reached the limit of Dennard scaling, which explained the 1000-times explosion in computing capability (achieved largely by shrinking transistors) with only a 40-times increase in power consumption.  According to Huang, we are now at the point where we can expect only about a two-to-one improvement in computing capacity relative to power requirements going forward.  (We heard something similar in Horst Simon's 2010 keynote presentation at the OK Supercomputing Symposium.)
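
    To make those ratios concrete, here is a quick back-of-envelope calculation using only the figures quoted above (my own illustration, not a slide from the talk):

        # Rough arithmetic from the figures quoted in the keynote (illustrative only).
        perf_gain = 1000      # ~1000x increase in computing capability
        power_gain = 40       # ~40x increase in power consumption over the same period

        perf_per_watt_gain = perf_gain / power_gain
        print(f"Historical performance-per-watt improvement: ~{perf_per_watt_gain:.0f}x")  # ~25x

        # Going forward, Huang's estimate is only ~2x more compute per unit of power,
        # which is why power, rather than transistor count, becomes the limiting factor.
        future_perf_per_watt_gain = 2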

    Huang went on to point out how graphics processing units (GPUs) are truly disruptive:  they were designed and honed to improve the gaming experience for one particular segment of the market (gamers), but they are likely to be adopted rapidly by the broader market, including supercomputing, as prices come down and their combination of low power consumption and high capability becomes appealing.  GPUs have also become more programmable as they offer a wider variety of features and more control to game designers.

    I found a couple of Huang's other comments quite thought-provoking:

    • 98% of current processor time is dedicated to scheduling CPU instructions; reducing that scheduling overhead could ultimately improve both power consumption and performance.
    • Highly efficient GPU processors are not single-threaded, and this has slowed their uptake.
    • A scientist in Japan could not afford a supercomputer, so he built one out of locally available GPU chips--he became a celebrity in Japan.
    • Without the GPU, the supercomputer of the future would require the power of two Nimitz-class nuclear aircraft carriers.

    Heard at SC11

    1.  Compared to today, reaching Exascale computing levels will require (see the quick arithmetic after this list):

    • 1000 times increase in overall performance
    • 135 times increase in performance per watt
    • 1000 times increase in performance per dollar
    • Less than a 2% increase in footprint  
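
    Taken together, the first two targets imply how much the total power budget would still have to grow (a quick illustrative calculation from the numbers above, not a figure given at the conference):

        # Implication of the exascale targets above (illustrative arithmetic only).
        perf_increase = 1000            # required increase in overall performance
        perf_per_watt_increase = 135    # required increase in performance per watt

        power_increase = perf_increase / perf_per_watt_increase
        print(f"Total power still grows by ~{power_increase:.1f}x")  # ~7.4x
        # Even with a 135x efficiency gain, an exascale machine would draw several
        # times the power of today's systems.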

    2.  The expense of moving data is drastically underestimated (helping to make the case for bringing the processing to the data rather than the data to the processing).
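
    As a rough illustration of why moving data is so expensive, consider shipping a petabyte over a wide-area link (my own back-of-envelope numbers, assuming a sustained 10 Gb/s link; not a figure from the session):

        # Time to move one petabyte over an assumed sustained 10 Gb/s link.
        petabyte_bits = 1e15 * 8   # 1 PB expressed in bits
        link_bps = 10e9            # assumed sustained throughput: 10 Gb/s

        seconds = petabyte_bits / link_bps
        print(f"~{seconds / 86400:.1f} days of continuous transfer")  # ~9.3 days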

    3.  If upgrading a supercomputer requires a new building, then that diverts money from what's really needed such as improved algorithms, people, and so forth.

    4.  Here's what's required for technology to be easily adopted in the developing world:

    • Easy to use
    • Well documented
    • There have to be people in the mix

    Art Vandenburg of Ga State aptly commented that this is necessary in the developed world, as well.

    Archiving Data in the Clouds BoF & Panel

    Led by folks from Spectra, Argonne, and elsewhere

    Note:  This meeting is being held at 5:30PM and it's PACKED!

    Archiving: making data usable, available, and manageable.  The archive may be the primary copy of the data.

    Big data is like a big cluster--it's relative.  Let's not be dismissive if someone considers a smaller dataset to be "big" because it's big to them.

    Storage is the fastest-growing area in HPC.  Data is growing exponentially from year to year.  We can't just throw more hardware at the data.

    If metadata isn't created along with the data content, it becomes increasingly difficult to get value from the data.
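
    As a purely illustrative example of the kind of metadata worth capturing at creation time, here is a minimal sketch that writes a JSON "sidecar" file next to a dataset (the field names are my own, not something presented at the BoF):

        import hashlib, json, os, time

        def write_sidecar(data_path):
            """Record basic provenance and fixity metadata alongside a data file."""
            with open(data_path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            metadata = {
                "file": os.path.basename(data_path),
                "size_bytes": os.path.getsize(data_path),
                "sha256": digest,          # fixity value for later integrity checks
                "created": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
                "creator": "unknown",      # fill in: instrument, person, or pipeline
                "description": "",         # what the data actually is
            }
            with open(data_path + ".meta.json", "w") as f:
                json.dump(metadata, f, indent=2)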

    People also want their archive to be available immediately and online.  

    Managing a petabyte of data vs terabyte

    Managing a petabyte of data is currently pretty common (about 50% of the people in the room).  But it's different from managing a terabyte, where you could just add more storage as the data grew and back everything up to tape.

    The Case for Cloud for Archive

    If you can get all your data into one spot, you gain economies of scale in terms of disk, power, and operations.  This applies whether it's a private university cloud or purchased cloud services.  Get all of the university's data together in one place and it's cheaper.

    Now you can leverage both tape and disk in data-center archiving.  The future is being able to put most of the data on low-cost media like tape while accessing it in the same way as disk storage.

    You put your IOPS-heavy workloads on flash/SSD and the archive on more economical media, and it can all work together seamlessly.
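
    A toy sketch of what such a tiering policy might look like (my own illustration of the idea, with made-up thresholds; nothing like this was shown at the BoF):

        from datetime import datetime, timedelta

        def choose_tier(last_access, now=None):
            """Pick a storage tier based on how recently the data was accessed (made-up thresholds)."""
            now = now or datetime.utcnow()
            age = now - last_access
            if age < timedelta(days=7):
                return "flash"   # hot data: serve the IOPS from SSD
            if age < timedelta(days=180):
                return "disk"    # warm data: still online, on cheaper media
            return "tape"        # cold data: archival media behind the same namespace

        # Example: a file last touched a year ago lands on tape.
        print(choose_tier(datetime.utcnow() - timedelta(days=365)))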

    Adoption of Public Cloud storage

    1. 10.6% are storing reference data in the cloud
    2. 9.8% are storing primary data
    3. 10.6% are storing backup data
    4. 78.9% are not using the cloud

    Comments on Storage in "The Cloud"
    Comment:  The Cloud is like The Internet: it's a bit amorphous.
    Response:  The Cloud is where you may have a single point for managing the data.
    Comment:  With Google it may now be possible to bring the computation to the data.  We'll see whether that works.
    The fastest way to move a petabyte of data is still FedEx and putting it on a 747.  (And it's in the cloud!)
    It's impossible to have a cloud storage offering without tape.
    Disk needs to be rewritten more frequently than tape.  Also, tape is more economical in terms of power.
    SATA disk sees a bit error roughly every 11.3 terabytes, versus every 10.8 exabytes with IBM tape storage.
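
    Put another way, those error rates imply the following when handling a full petabyte (a quick illustrative calculation from the two figures above):

        # Expected bit errors when reading/writing 1 PB, using the rates quoted above.
        PB = 1e15                     # one petabyte, in bytes
        sata_error_every = 11.3e12    # SATA: roughly one bit error per 11.3 TB
        tape_error_every = 10.8e18    # IBM tape: roughly one bit error per 10.8 EB

        print(f"SATA: ~{PB / sata_error_every:.0f} expected bit errors per PB")     # ~88
        print(f"Tape: ~{PB / tape_error_every:.6f} expected bit errors per PB")     # ~0.0001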

    How do you verify that the data still has integrity?  Argonne--The application specialists can determine what is going on with their data and notify us if there is a problem.
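
    For what it's worth (this is my own note, not something the panel presented), automated integrity verification usually means periodic fixity checking: recompute a checksum and compare it with the value recorded when the data was archived.  A minimal sketch, assuming a SHA-256 value was stored at ingest (for example in a sidecar file like the one sketched earlier):

        import hashlib

        def verify_fixity(data_path, expected_sha256):
            """Recompute the file's SHA-256 and compare it to the value recorded at archive time."""
            h = hashlib.sha256()
            with open(data_path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
                    h.update(chunk)
            return h.hexdigest() == expected_sha256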

    What's the lifespan of the media?  Tape written today can be read for 7.5 years.  Disk refresh needs to occur at least every 3 years.

    Storage systems have only recently scaled to over 1 exabyte.

    Posted at Nov 16, 2011 by Greg Monaco