Note: We are looking for the media linked below. If you would like us to track down the original speaker, please let us know.
Byte Me: Megabytes, Gigabytes, and Terabytes at the University of Missouri Slides
The data environment at the University of Missouri is varied and complex. Campus stakeholders, including faculty, IT, university administration, and the Libraries, are engaged in an ongoing conversation about how best to meet current and future data needs. This presentation will bring you up to date on what is happening at the University of Missouri, with a focus on how the Libraries are engaging with data management issues on campus.
Jeannette Pierce joined the MU Libraries as Associate Director for Research and Information Services in August 2013, having served previously in the Loyola University (Chicago) Libraries for seven years, first as head of reference services and subsequently as Director of the Klarchek Information Commons. Pierce has also served as a research services librarian at Johns Hopkins University and a reference librarian at Saint Louis University. Pierce is active in national professional associations, including the Association of College and Research Libraries, the American Library Association, and the Library Leadership and Management Association, among others. She holds an MS in Library and Information Science from the University of Illinois at Urbana-Champaign, a BA in History from the same institution, and an MS in History from Saint Louis University.
HPC Clusters in the Cloud
Gavin Burris will talk about local clusters vs. the cloud, price calculations, spot pricing, and the StarCluster script and machine images. The information will interest researchers and systems programmers, as well as anyone interested in using resources “in the cloud.”
As a specialist in Linux and high-performance computing, Burris enjoys enabling faculty within The Wharton School of the University of Pennsylvania by providing effective research computing resources. Burris has been involved in research computing since 2001. Current projects find Burris working with HPC, big data, cloud computing and grid technologies. His favorite languages are Python and BASH. In his free time, he enjoys bad cinema, video editing, synthesizers and bicycling.
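The local-vs-cloud price comparison the talk covers can be sketched as a simple break-even calculation. Every number below is an illustrative assumption for demonstration, not a real quote or anything from the talk itself:

```python
# Illustrative sketch of a local-cluster vs. cloud spot-pricing comparison.
# All prices and sizes are assumptions chosen for demonstration only.

LOCAL_CAPEX = 120_000.0            # assumed purchase price of a 16-node cluster
LIFETIME_HOURS = 3 * 365 * 24      # amortize the hardware over 3 years
OPEX_PER_HOUR = 2.50               # assumed power/cooling/admin for the room
SPOT_PER_NODE_HOUR = 0.50          # assumed average spot price per node
NODES = 16

def local_total_cost():
    """Owned hardware costs the same whether or not jobs are running."""
    return LOCAL_CAPEX + OPEX_PER_HOUR * LIFETIME_HOURS

def spot_cost(utilization):
    """Spot instances are only paid for while they actually run."""
    return SPOT_PER_NODE_HOUR * NODES * LIFETIME_HOURS * utilization

def break_even_utilization():
    """Utilization above which owning beats renting at the spot price."""
    return local_total_cost() / spot_cost(1.0)

if __name__ == "__main__":
    print(f"break-even utilization: {break_even_utilization():.0%}")
```

With these assumed figures the cluster must stay busy roughly 88% of the time before owning it beats renting spot capacity; real spot prices fluctuate, which is part of what makes the calculation interesting.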
Software Carpentry: Lessons Learned
Over the last 15 years, Software Carpentry has evolved from a week-long training course at the US national laboratories into a worldwide volunteer effort to raise standards in scientific computing. This talk explains what they have learned along the way, the challenges they now face, and plans for the future.
Greg Wilson is the creator of Software Carpentry, a crash course in computing skills for scientists and engineers. He has worked for 25 years in high-performance computing, data visualization, computer security, and academia, and is the author or editor of several books on computing (including the 2008 Jolt Award winner “Beautiful Code”) and two for children. Greg received a Ph.D. in Computer Science from the University of Edinburgh in 1993, and presently works for the Mozilla Foundation. Greg has a short paper written about the subject, which is available for download.
A review of the Science DMZ design pattern and how it relates to campuses that have CC-NIE funding, as well as those that don’t but may be looking to upgrade.
Interested in hearing more about what’s going on in InCommon and federated access across higher education and K12? Join Ann West to hear what InCommon and organizations near you are doing in this space. Also presenting were Scott Isaacson and Mike Danahy on the Nebraska K12 Identity Federation.
Campus Bridging Slides
Richard Knepper has worked for Indiana University supporting research since 1999. In his role as manager of the Campus Bridging and Research Infrastructure team, he connects researchers with resources and helps provide technology environments that support research. He is responsible for Indiana University’s field IT support for the radar data program of the NASA Operation IceBridge missions, and is a member of the XSEDE Campus Bridging team.
Internet2 Innovation Platform and Research Support Center Slides
The Internet2 Innovation Platform is a combination of new technologies and services that provide a leading-edge, end-to-end architecture and a unique set of unified capabilities at the national, regional, and campus levels to create an environment for innovation in research and education. This presentation will provide an overview of Internet2’s Innovation Platform Initiative, including a discussion of the three technologies that make up the platform: 100 GE, Software Defined Networking, and the Science DMZ. In addition, an overview of Internet2’s Research Support Center will be provided.
Wendy Huntoon is Sr. Director for Research and Science Engagement in the CTO Office at Internet2, where she is responsible for developing programs for, and providing direct support to, the research community nationwide. Activities to date include support for Internet2’s Innovation Platform Initiative Pilot program as well as development of the R&E Network Research Liaison Program. In addition, she is the Director of Advanced Networking at the Pittsburgh Supercomputing Center, a research department within Carnegie Mellon University. Her management responsibilities include oversight of the Three Rivers Optical Exchange, the regional optical network serving western Pennsylvania and parts of West Virginia, as well as oversight of the group’s research on network performance and analysis, such as the NSF-funded Web10G.
Using networks for his research in biology, Philippe Hanset decided to explore the world behind those wires. As wireless networking became somewhat affordable, Hanset turned his attention to waves and implemented the largest wireless LAN in academia in 2000-2001 for the University of Tennessee. When the ancestor of eduroam was being investigated in Europe in 2003, Hanset brought the concept to the US. Since 2009, eduroam has been growing aggressively on this side of the pond, and Hanset’s company (ANYROAM LLC) operates eduroam for Internet2 as a NET+ service. Hanset holds a Master’s in Zoology from the University of Brussels and a Master’s in Computer Science from the University of Tennessee.
John Goodhue is the Executive Director of the Massachusetts Green High Performance Computing Center (MGHPCC). The MGHPCC is dedicated to supporting the growing scientific computing needs of faculty-driven research at MIT, the University of Massachusetts, Boston University, Northeastern University, and Harvard University. John is a business and technical leader with 30 years of experience in networking and high performance computing. He has held senior engineering management, general management, and technology leadership positions at Cisco Systems, where he led the development and marketing of Internet routers for service providers, and BBN Technologies, where he led projects to develop Internet routing and high performance computing technologies. He has also been on the early management teams of several Boston-area startup companies. John holds a B.S. in Computer Engineering from the Massachusetts Institute of Technology.
Developing Computational Science Programs by Steve Gordon, Sr. Director of Education and Client Support, Ohio Supercomputer Center.
Gluster by Pol Llovet, Research Computing, Montana State University
NSF International Networking Activities by James G. Williams, Indiana University
The NSF, via its International Research Network Connections program, provides advanced networking connectivity, capacity, and capabilities enabling international scientific research and education collaborations and activities. For details see: http://irnclinks.net/. This presentation focuses primarily on the infrastructure and support tools available to researchers and educators today from the “ProNet” projects (see the URL above). It outlines some short-term expansions and enhancements to currently available services, and also discusses some longer-term (100G-based) expansion possibilities.
Tandy Computing Center of Tulsa by Alex Barclay (TSC Director) and George Louthan (TSC Computer Scientist)
Using Globus Online to Simplify Research Data Management
The goal of the tutorial is to introduce researchers and systems administrators to the easy-to-use Globus Online services for moving and sharing large amounts of data. Increasingly computational and data-intensive science makes data movement and sharing across organizations inevitable. The cloud-hosted Globus Online service makes such research tasks as easy as, for example, Netflix makes streaming movies. In this webinar, attendees will learn: how to perform fire-and-forget file transfer and synchronization between their local machine, campus clusters, regional supercomputers, and national cyberinfrastructure using Globus Transfer via both web and command-line interfaces; how to store, version, and share data using the new Globus Storage cloud storage service; how to integrate their security and identity infrastructure so that users may access Globus services seamlessly with their organization’s standard account (e.g., campus login via InCommon) or existing accounts with OAuth and OpenID identity providers (e.g., XSEDE, Google); and how to connect their own systems to the Globus Online services.
Steve Tuecke, University of Chicago
Steve Tuecke is Deputy Director of the Computation Institute (CI) at The University of Chicago and Argonne National Laboratory, where he is responsible for leading and contributing to projects in computational science; cloud, high-performance, and distributed computing; and CI strategy and operations. His primary focus is on co-leading the Globus project with Dr. Ian Foster to develop cloud-based, software-as-a-service data management services that accelerate research. Prior to CI, Steve was co-founder, CEO, and CTO of Univa Corporation from 2004-2008, providing open source and proprietary software for the high-performance computing and cloud computing markets. Before that, he spent 14 years at Argonne as research staff.
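The “fire-and-forget” transfer described in the abstract above boils down to a submit-once, retry-until-verified pattern: the service keeps retrying failed pieces and checks integrity so the user doesn’t have to babysit the transfer. A minimal sketch of that pattern follows; the channel and functions here are illustrative stand-ins, not the real Globus API:

```python
import hashlib

def checksum(data: bytes) -> str:
    """Integrity check used to confirm a file arrived intact."""
    return hashlib.md5(data).hexdigest()

class FlakyChannel:
    """Simulated transfer channel that drops every other send attempt."""
    def __init__(self):
        self.calls = 0
        self.received = {}

    def send(self, name, data):
        self.calls += 1
        if self.calls % 2 == 1:          # simulate an intermittent failure
            raise IOError("connection reset")
        self.received[name] = data

def fire_and_forget(files, channel, max_retries=5):
    """Retry each file until its checksum verifies on the far side."""
    for name, data in files.items():
        for _attempt in range(max_retries):
            try:
                channel.send(name, data)
            except IOError:
                continue                  # transient error: retry automatically
            if checksum(channel.received[name]) == checksum(data):
                break                     # verified; move on to the next file
        else:
            raise RuntimeError(f"gave up on {name}")

files = {"results.dat": b"x" * 1024, "log.txt": b"run complete"}
chan = FlakyChannel()
fire_and_forget(files, chan)
print(sorted(chan.received))   # → ['log.txt', 'results.dat']
```

Both files arrive despite every other attempt failing; in the real service this supervision loop runs in the cloud, which is what lets the user submit a transfer and walk away.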
Network Performance: You can always get what you want Slides
Jason Zurawski is a Senior Research Engineer with Internet2, and focuses primarily on deployment of advanced networking technologies to assist scientific users. Jason has experience in the development of the perfSONAR monitoring framework, the OSCARS Layer 2 control protocol, and data movement applications such as Phoebus. Jason resides in the Washington DC region.
Workforce Development Slides
Professor Apon joined the Clemson University School of Computing as Chair of the Division of Computer Science in August 2011. Prior to that, Professor Apon was Professor of Computer Science and Computer Engineering at the University of Arkansas as well as the Director of the Arkansas High Performance Computing Center. Her areas of research interest include performance modeling and analysis of parallel and distributed systems, data-intensive computing, emerging parallel architectures, scheduling policies in parallel systems, parallel file systems, networks for high performance computing, the impact of high performance computing on research competitiveness, sustainable funding models for research computing, and data center design.
Software Defined Data Centers in the Enterprise and Provider Space
100G and Beyond (Networking) by Randy Eisenach, Fujitsu
GENI, Grown to 100-200 Campuses by Chip Elliott, BBN
Enabling Exascale Computing through the ParalleX Execution Model by Thomas Sterling, Indiana University Slides
Internet2 and other Research and Education Networks by Randy Brogle, Internet2 Slides
Next Generation IP Networking, IP and Optical Networking by David Altstaetter, ADVA Optical
TeraGrid to XSEDE Transition by John Towns, XSEDE
OpenFlow: What is it and Where is it going? by Rob Sherwood, Bigswitch, Inc
Getting Your Data Where You Need It: The Power of Cloud-Hosted Data Movement by Steve Tuecke, University of Chicago Slides
An Update on the Internet2 Cloud Services Initiative by Khalil Yazdi, Internet2
InCommon and Federated Identity Management by Tom Scavo, InCommon Slides
Solving the Large File Attachment Problem: Internet2’s Deployment of FileSender by Randy Frank, Internet2.
This presentation was not recorded.
Since the origins of the Internet/Arpanet, once a file attachment is larger than a typical SMTP server will accept (plus or minus 10MB these days), a user has been on their own to find a solution. An international consortium of NRENs, led by AARNET, UNINETT, HEAnet, and SURFNet, has been promoting a system called FileSender (www.filesender.org) to solve this problem, and Internet2 is proposing a US deployment. The confluence of international adoption of federated authentication in the R&E community (InCommon in the US, and soon REFED as an inter-federation standard) and software APIs such as the new FileAPI in HTML5 now allows for a “provisionless” service that works without any a priori provisioning of accounts or software: users can simply upload a file and then send recipients a URL for pickup, with the only requirement being that their campus be a member of Internet2/InCommon. It is important to differentiate this service from various “dropbox” types of services. This is not an archival service, and files will be automatically deleted after two weeks. Because it is designed solely to solve the file attachment problem, and therefore doesn’t need a constantly growing backing store or ancillary services like file backup and replication, we expect to be able to offer this service as a core network service (read: no additional charge for using this service), unlike various cloud storage offerings, which most likely will need to be cost recovered.