Latest Articles
GP-ENGINE and the Great Plains Network (GPN) contribute computing power to the U.S. National Science Foundation’s National Radio Astronomy Observatory (NRAO)
Grant Scott edited this article based on:
Astronomers & Engineers Use a Grid of Computers at a National Scale to Study the Universe 300 Times Faster, https://public.nrao.edu/news/astronomers-study-the-universe-300-times-faster/
Compute nodes sponsored by the National Science Foundation (NSF) Campus Cyberinfrastructure program at campuses across the Great Plains region have continually contributed to national research initiatives through the NSF's Partnership to Advance Throughput Computing (PATh).
“The NRAO manages some of the largest and most used radio telescopes in the world, including the NSF’s Karl G. Jansky Very Large Array (VLA). When these telescopes are observing the Universe, they collect vast amounts of data, for hours, months, even years at a time, depending on what they are studying.” – Article
Where does GPN fit in?
“Rather than sending one Mt. Petabytes to one supercomputing facility, the data was divided into pieces and distributed to smaller banks of computers with GPUs, distributed to university computing centers across the country both large and small.” In the GPN, these are servers on the campuses of University of Missouri, University of Nebraska, and Kansas State University (see map below).
The GP-ENGINE project currently supports 30-40 research projects each month, providing over 10,000 GPU hours per month for data-processing workloads such as the NRAO's.
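The divide-and-distribute workflow described in the quote above can be sketched, in highly simplified form, as ordinary chunked processing: split a large dataset into independent pieces, process each piece separately (in production, on a GPU node at a campus site), and combine the partial results. The function names and the reduction step below are illustrative only, not part of any NRAO or GP-ENGINE codebase:

```python
# Minimal sketch of scatter/gather data processing. In the real workflow,
# each chunk would be shipped to a GPU node at a different campus; here the
# chunks are just processed in a loop.

def split_into_chunks(data, n_chunks):
    """Divide a dataset into roughly equal, independent pieces."""
    size, rem = divmod(len(data), n_chunks)
    chunks, start = [], 0
    for i in range(n_chunks):
        end = start + size + (1 if i < rem else 0)
        chunks.append(data[start:end])
        start = end
    return chunks

def process_chunk(chunk):
    """Stand-in for per-node processing (e.g., calibration or imaging)."""
    return sum(chunk)

def distributed_reduce(data, n_nodes):
    chunks = split_into_chunks(data, n_nodes)
    partials = [process_chunk(c) for c in chunks]  # parallel in practice
    return sum(partials)

samples = list(range(1000))
assert distributed_reduce(samples, n_nodes=8) == sum(samples)
```

Because each chunk is processed independently, the work can land on many modest GPU servers rather than one large facility, which is exactly what makes the campus-hosted nodes useful.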
University of Arkansas for Medical Sciences to House Final Node in NSF's GP-ENGINE Project, Expanding Supercomputing Grid Across Midwest
In a significant stride toward advancing high-performance computing accessibility, the University of Arkansas for Medical Sciences (UAMS) in Little Rock has become the site of the final node installation in the National Science Foundation's (NSF) GP-ENGINE project. The initiative, jointly sponsored by the NSF, the University of Missouri, and the Great Plains Network (GPN), aims to bolster supercomputing grid infrastructure across the Midwest, with this final addition poised to serve central Arkansas.
The GP-ENGINE project, short for "Great Plains Extended Network of GPUs for Interactive Experimenters," is a collaborative effort to enhance research capabilities and accelerate scientific discoveries through state-of-the-art computing resources. By strategically situating nodes throughout the Midwest, the project aims to democratize access to high-performance computing (HPC) resources that enable artificial intelligence and machine learning (AI/ML) workloads, particularly in underserved areas.
According to Lawrence Tarbox, UAMS Director of High-Performance Computing, the decision to house the final node at UAMS signifies a pivotal moment for both the institution and the broader research community in Arkansas. "With this installation," said Dr. Tarbox, "UAMS will further enhance its role as a hub for cutting-edge computational research, facilitating interdisciplinary collaboration and innovation across various fields."
Elon Turner, the Executive Director of the Arkansas Research and Education Optical Network, was enthusiastic about the state’s pivotal role in the project. “We are honored to be selected as host sites for these research and education-enabling resources in the GP-ENGINE project, directly available to those connected to the ARE-ON network.”
Bobby Clark, Network Engineer for ARE-ON, completes the testing of the GP-ENGINE node.
In addition to UAMS, a node was placed at the main University of Arkansas campus in northwest Arkansas. The installation of the final node at UAMS holds immense promise for central Arkansas, as it will significantly enhance the region’s research capabilities and technological infrastructure. Researchers, scientists, and educators across disciplines will gain access to unparalleled computational power, enabling them to tackle complex challenges and push the boundaries of knowledge.
Moreover, the expansion of the NSF-sponsored supercomputing grid into central Arkansas underscores the critical role of HPC in driving economic growth and innovation. By providing researchers with the tools and resources needed to conduct cutting-edge research, the GP-ENGINE project fosters a conducive environment for scientific breakthroughs and technological advancements.
According to Grant Scott, the GP-ENGINE project director, the project advances the adoption of advanced computing and data resources across the Great Plains Network region. The AI/ML compute nodes housed at UAMS and eight other sites in six states hold the promise of accelerating research aimed at addressing pressing scientific questions and improving quality of life.
As the final node installation progresses at UAMS, stakeholders anticipate a transformative shift in the research landscape of central Arkansas. The convergence of advanced computing technology and interdisciplinary collaboration sets the stage for groundbreaking discoveries and innovations, positioning the region as a powerhouse of scientific inquiry and technological advancement.
The completion of the GP-ENGINE node deployments marks a significant milestone in the quest to democratize access to high-performance computing resources and foster a culture of innovation across the Midwest. With UAMS and ARE-ON at the forefront of this endeavor, both Arkansas and the Great Plains Network are poised to emerge as a dynamic hub for research, discovery, and technological innovation in the years to come.
From left to right: Mickey Slimp, GPN Executive Director; Bobby Clark, Network Engineer for ARE-ON; and Grant Scott, GP-ENGINE Principal Investigator and Associate Professor at the University of Missouri
For more information about GP-ENGINE contact:
Mickey Slimp, Great Plains Network, 903-571-0892 (cell)
University of Texas at Austin Student Opportunities
Hat tip to Henry Neeman for these student opportunities.
SUMMARY:
TACC @ U Texas Austin has several summer opportunities for students!
(1) NSF REU Site: Cyberinfrastructure Research 4 Social Change
(2) Frontera Computational Science Fellowship Program
(3) SCIPE Undergraduate Summer Scholarship
(4) SCIPE Graduate Year-Long Fellowship Program
DETAILS:
(1) NSF REU Site: Cyberinfrastructure Research 4 Social Change
NOW ACCEPTING APPLICATIONS
The University of Texas at Austin
June 3 – Aug 2 2024
APPLY NOW! Application Deadline: Mon April 1 2024
Contact: scipe@tacc.utexas.edu
https://www.tacc.utexas.edu/re
The National Science Foundation Research Experience for Undergraduates (REU) Site: Cyberinfrastructure (CI) Research for Social Change is now recruiting undergraduates for the paid 9-week summer research experience at The University of Texas at Austin.
Students gain skills in advanced programming and problem-solving to conduct research in engineering, science, and computational medicine.
Research projects emphasize advanced computing as a tool to power discoveries that will impact social change for future generations.
Prior research or programming experience is not required.
REU participants will receive:
- $6,300 stipend over nine weeks
- Travel
- Free housing at The University of Texas at Austin + meal card
For more information, contact Rosalia Gomez and Neddie Ann Underwood:
cir4sc@tacc.utexas.edu
(2) Frontera Computational Science Fellowship Program
June 1 2024 – May 31 2025
APPLY NOW! Application Deadline: Tue Feb 6 2024
Contact: fcsf@tacc.utexas.edu
https://frontera-portal.tacc.u
The Frontera fellowship program provides a year-long opportunity for talented graduate students to compute on the most powerful academic supercomputer in the world and collaborate with experts at the Texas Advanced Computing Center.
Fellows will receive:
- $34,000 stipend in two installments of $17,000
- Up to $12,000 in tuition allowance throughout the year
- Travel support to present research results at a Frontera user community event and/or professional conference
- 50,000 node-hours on Frontera
- Paid visits to Austin, TX
For more information, please contact Geoffrey Reid and Neddie Ann Underwood:
fcsf@tacc.utexas.edu
(3) SCIPE Undergraduate Summer Scholarship
The University of Texas at Austin
June 1 – 30 2024
APPLY NOW! Application Deadline: Tue Feb 6 2024
Contact: scipe@tacc.utexas.edu
https://www.tacc.utexas.edu/ed
The SCIPE AI in Civil Engineering Scholarship Program provides a summer-long opportunity for undergraduate students to develop advanced Artificial Intelligence and Cyberinfrastructure for applications in Civil Engineering and collaborate with experts at The University of Texas at Austin and Texas Advanced Computing Center.
Selected applicants will receive:
- Stipend of $7,000, including a maintenance allowance for the time at UT Austin/TACC
- Paid travel to Austin, TX
For more information, please contact Neddie Ann Underwood:
scipe@tacc.utexas.edu
(4) SCIPE Graduate Year-Long Fellowship Program
June 1 2024 – May 31 2025
APPLY NOW! Application Deadline: Tue Feb 6 2024
Contact: scipe@tacc.utexas.edu
https://www.tacc.utexas.edu/ed
The SCIPE AI in Civil Engineering fellowship program provides a year-long opportunity for graduate students to develop advanced Artificial Intelligence and Cyberinfrastructure for applications in Civil Engineering and collaborate with experts at The University of Texas at Austin and Texas Advanced Computing Center.
Fellows will receive:
- $37,000 stipend in two installments of $18,500
- Up to $12,000 in tuition allowance throughout the year
- Travel support to present research results at a SCIPE AI in Civil Engineering community event and/or professional conference
- Paid visits to Austin, TX
For more information, please contact Neddie Ann Underwood:
scipe@tacc.utexas.edu
Open Science Grid (OSG) Highlights GP-ARGO Project’s Role in Democratizing Science Research
This is a repost from the OSG website. Acknowledgments for the OSG appear at the bottom of the article.
By: Hannah Cheren
As a multidisciplinary and multi-institutional collaboration, the Great Plains Augmented Regional Gateway to the OSG (GP-ARGO) has made significant strides in democratizing computing. Continued support through a Campus Cyberinfrastructure (CC*) award from the National Science Foundation (NSF) is a testament to its dedication to advancing the field.
The task of effectively supporting computational and data-intensive research at under-resourced and understaffed universities in rural areas, without the benefit of in-person support, is a formidable challenge. Yet the Great Plains Augmented Regional Gateway to the OSG (GP-ARGO) undertook this daunting responsibility across eighteen universities with exceptional success, establishing a new standard of excellence in supplying cyberinfrastructure and support.
GP-ARGO is a product of a regionally distributed OSG Gateway led by the Great Plains Network (GPN), but it started as a gigabit Point of Presence (gigaPOP) of institutions across the Great Plains region. "It was just a whole bunch of institutions saying, let's buy a bunch of networks together because it's easier on us," Co-principal investigator (PI) and Cyber Infrastructure Program Committee lead Dan Andresen explained, "which is still what GPN is today, but we've moved into more facilitating research and connectivity at a social and scientific level as well."
The social networking part of this project came later, starting with GPN, but then developing into the CyberTeam. “As part of CyberTeam, we noticed that smaller institutions lacked intrinsic capabilities compared to larger ones,” Andresen noted. This gap in research computing sparked the idea of GP-ARGO.
The “O” in GP-ARGO stands for “OSG,” indicating the team’s intention to leverage OSG resources. “We knew we wanted to connect these 18 institutions, and OSG was the way to do it,” Andresen explained. Derek Weitzel, a Research Professor in distributed computing at the University of Nebraska-Lincoln, played a vital role in connecting OSG with GP-ARGO. Weitzel had worked with OSG before the project began, playing an integral part in interfacing between the OSG and GP-ARGO. After establishing OSG’s role in this new project, “it became just a simple matter of obtaining the 18 machines and then figuring out which institutions wanted to be a part of this first beta testing phase,” Andresen reminisced.
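In the OSG, work is typically expressed as high-throughput batch jobs managed by HTCondor, which is what lets eighteen independently administered machines appear as one pool of capacity. As a rough illustration only (the file names, resource figures, and job count below are invented, not taken from GP-ARGO's actual configuration), an HTCondor submit description that fans a task out across many independent jobs might look like:

```
# Illustrative HTCondor submit description file (values are hypothetical).
executable           = process_chunk.sh
arguments            = $(Process)
transfer_input_files = chunk_$(Process).dat

request_cpus         = 1
request_memory       = 2GB
request_disk         = 1GB

output               = job_$(Process).out
error                = job_$(Process).err
log                  = job.log

# Queue one independent job per chunk of data.
queue 18
```

Each queued job is self-contained, so the scheduler can place it on whichever participating site has free capacity, regardless of that site's local policies.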
Handling 18 machines across six states came with challenges, particularly in communicating and managing 18 administrative domains, security protocols, and rule differences. “None of these sites were the same,” Weitzel explained. “Some sites were very restrictive, others were very relaxed, and we had to make all of them work.” Kyle Hutson, one of the former mentors for the Cyber Infrastructure side of the CyberTeam, played a crucial role in resolving these technical nuances.
With GP-ARGO consistently ranking among the top five OSG entry points for a good part of the last year, the team has successfully linked the machines together and ensured smooth operation, even without dedicated system administrators on-site. A central dashboard compiles information from each institution on which projects are running on its nodes, letting IT leaders and CIOs monitor and visualize every node. The dashboard also visualizes usage by university, including the PIs on each project, adding a personal component to the monitoring.
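The kind of roll-up such a dashboard performs can be sketched as a simple aggregation over per-site usage records; the record format, field names, and figures below are hypothetical, not GP-ARGO's actual data model:

```python
# Minimal sketch of dashboard-style aggregation: usage records reported by
# each site are rolled up per institution and per (site, PI) pair.
from collections import defaultdict

# Hypothetical usage records, as a stand-in for per-site reporting.
records = [
    {"site": "Univ A", "pi": "Dr. X", "cpu_hours": 1200},
    {"site": "Univ A", "pi": "Dr. Y", "cpu_hours": 300},
    {"site": "Univ B", "pi": "Dr. Z", "cpu_hours": 800},
]

def usage_by_site(records):
    """Total CPU hours per institution."""
    totals = defaultdict(int)
    for r in records:
        totals[r["site"]] += r["cpu_hours"]
    return dict(totals)

def usage_by_pi(records):
    """Total CPU hours per (site, PI), for the per-project view."""
    totals = defaultdict(int)
    for r in records:
        totals[(r["site"], r["pi"])] += r["cpu_hours"]
    return dict(totals)
```

Keeping the per-PI breakdown alongside the per-site totals is what gives the monitoring its "personal component": leaders see not just how busy a node is, but whose science is running on it.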
The National Science Foundation (NSF) has recognized the great success of this regional network organization. First, the CyberTeam received a CC* award, and later the entire GP-ARGO network received one, something no network had done before. "Applying as a network rather than a single institution made sense," Andresen explained. "This emphasizes that this is a regional effort rather than an individual, institutional effort."
GP-ARGO has truly set the curve in taking on a project of this scale and magnitude and doing it successfully. Reflecting on what went well, Andresen gleamed, “I mean, we did it! We’ve got it working; we’re among the top five OSG entry points, we’ve contributed 13 million CPU hours of science, and we have people who are excited and involved, which has been incredibly fun and exciting.”
Furthermore, the team has ensured the sustainability of the operation. "Most of the institutions we're working with don't have the expertise or the full-time employees to spare," Andresen explained. Central administration by the OSG has been instrumental in this regard, especially recently, as administrative roles were restructured following the departure of Kyle Hutson. "If something happens to whoever is the administrator, like leaving for another institution," Hutson jokingly remarked, "we have four people across four different institutions that all have administrative rights. I was a primary person doing that, but I was not the only person who could do this, so somebody else can take over."
Part of GP-ARGO's appeal lies in its determination and dedication to helping other consortia and networks aiming to achieve similar goals. The team provides a Git repository with all of its code and emphasizes the importance of both social and technical networks. "Building trust and familiarity is crucial," Andresen advised. "Get involved with the OSG and get to know people; having Derek [Weitzel] available as the interface has been invaluable. Knowing the context and the people is much easier than starting from scratch."
Despite the immense undertaking, Andresen commented on how fun and exciting the project has been, with the OSG playing a pivotal role. “This program only builds stronger connections within the region between all these different professionals,” Weitzel reflected. “It’s allowed us to reach out to different types of people, creating new opportunities that build on each other.”
Echoing this sentiment, Hutson highlighted the project’s impact in involving previously less-engaged institutions within GPN with the network’s recent expansion from 18 to 19 campuses. “Cameron University heard about some of the things we’re doing through their state network, had a spare box, and asked if they could get involved!” Hutson explained.
Building these regional connections was one of the most important steps in creating this network. The Midwest doesn't have any major supercomputing centers or institutions with enough people to drive a network of this magnitude forward alone. However, Andresen noted that the key to their triumph in this large-scale and long-term endeavor lay in the region's heritage: "We knew we couldn't do this alone, but here in the Midwest, our spirit has always been that we look out for and help each other. That's who we are, and it's what has helped us reach remarkable feats."
This research was done using services provided by the OSG Consortium [1,2,3,4], which is supported by the National Science Foundation awards #2030508 and #1836650.
- Pordes, R., Petravick, D., Kramer, B., Olson, D., Livny, M., Roy, A., Avery, P., Blackburn, K., Wenaus, T., Würthwein, F., Foster, I., Gardner, R., Wilde, M., Blatecky, A., McGee, J., & Quick, R. (2007). The open science grid. J. Phys. Conf. Ser., 78, 012057. https://doi.org/10.1088/1742-6596/78/1/012057
- Sfiligoi, I., Bradley, D. C., Holzman, B., Mhashilkar, P., Padhi, S., & Wurthwein, F. (2009). The pilot way to grid resources using glideinWMS. 2009 WRI World Congress on Computer Science and Information Engineering, 2, 428–432. https://doi.org/10.1109/CSIE.2009.950
- OSG. (2006). OSPool. OSG. https://doi.org/10.21231/906P-4D78
- OSG. (2015). Open Science Data Federation. OSG. https://doi.org/10.21231/0KVZ-VE57