Manish Parashar, Sidney Fernbach Memorial Award Winner

December 6, 2023

Dr. Manish Parashar is a luminary in computational science and engineering. As Director of the Scientific Computing and Imaging Institute at the University of Utah, he has pioneered advancements in parallel and distributed computing that have created transformative waves within the field. His impactful contributions extend to his leadership roles at the US National Science Foundation (NSF), where he shaped national cyberinfrastructure strategy. As a Fellow of AAAS, ACM, and IEEE, Dr. Parashar’s legacy is marked by his innovative work and visionary leadership.

In honor of his many achievements, he has received the 2023 Sidney Fernbach Memorial Award for “contributions to distributed high-performance computing systems and applications, data-driven workflows, and translational impact.”

 

Could you share what IEEE Fellowship means to you and how it has influenced your career in computational science and high-performance computing?


My academic career has focused on impacting science and society through my research, and this focus has guided all my research, educational, and professional activities. Being recognized as a Fellow of IEEE, the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity, has been a tremendous honor. I have always benefited from the many opportunities that IEEE has provided, as well as from interactions across the broad IEEE community, and I have learned and grown through them. These interactions have particularly influenced my research in computational science and high-performance computing, which are inherently multidisciplinary fields.


 

Could you elaborate on how you’ve addressed technological challenges in the fields of translational computer science and computational science & engineering? How do you expect this to change in the future?


Translational research bridges foundational, use-inspired, and applied research with the delivery and deployment of its outcomes to the target community, and it supports an essential bi-directional interplay in which the delivery and deployment process informs and advances the research. My academic career in translational computational and data-enabled science and engineering has addressed key conceptual, technical, and socio-technical challenges in the broad area of high-performance parallel and distributed computing. Specifically, I have investigated conceptual models, programming abstractions, and implementation architectures that can enable new insights through very large-scale computations and big data in a range of domains critical to advancing our understanding of important natural, built, and human systems. My contributions have included innovations in data structures and algorithms, programming abstractions and systems, and systems for runtime management and optimization.

Two aspects have been key to achieving the translational impacts that I have always strived for. The first is working very closely with researchers and practitioners from many diverse disciplines, often embedding myself in their research groups to understand issues and challenges. My collaborations and research contributions have spanned a range of domains, including subsurface and seismic modeling, wildfire management, plasma physics and fusion, hydrology, compressible turbulence and computational fluid dynamics, bio-/medical informatics, oceanography, numerical relativity/astrophysics, and business intelligence. The second aspect is developing and deploying systems that encapsulate research innovations and can be used by scientists and engineers in academia and industry to advance their own research and development.

Moving forward, computation and data will play an increasingly important role in all areas of science and engineering, especially as science and engineering research increasingly uses data-enabled, AI-driven approaches. As a result, it is important that there is a tight bi-directional coupling among computer, computational, and domain science research advances, and translational approaches can ensure such coupling. As Daniel H. Burnham, architect and city planner extraordinaire, said in 1907, “Make no little plans. They have no magic to stir men’s blood!”

 

You’ve developed and deployed innovative software systems based on your research. Can you share what the process was like, starting from ideation to finalization of a specific project that you are proud of? What significance did it have in the field of computational science?


Translation has always been integral to my research program. This has taken my research beyond more traditional computer science research practices, where the end outputs are papers, toward also making code and data available, often through open-source channels, for the community to adopt. In my work, I have made sure that a community is engaged during problem definition, i.e., when the problem is mapped from the real world to an abstract formulation. The community is also engaged in the evaluation of solutions, to make sure that we work not only on test problems but also in the real world, with real problems, real users, and real funding, societal, and political constraints. Such community engagement and evaluation have added an additional loop to my research workflow, and the solutions developed have been refined to ensure that they work in the real world.

Note that such research is iterative – rarely will the first ideas and prototypes solve the problem. Many shortcomings will be identified, ranging from an inability to solve the problem at all to inadequate performance. Often new algorithms need to be developed, and different implementation techniques and platforms must be explored. Thus, the workflow includes an “evaluate and refine” loop, which iterates until the problem can be realistically solved.

One early example of this is the DAGH/GrACE project, which was part of the US National Science Foundation (NSF)-funded Binary Black Hole Grand Challenge project. Its goal was to model the 3D spiraling coalescence of two black holes, which required computationally solving Einstein’s equations of gravity. A key motivation was that colliding black holes are among the most promising sources of gravitational waves. The project aimed to provide examples of computational waveforms that could be used to predict signals observable by future detectors, such as the Laser Interferometer Gravitational-Wave Observatory (LIGO).

The black-hole equations are mixed in nature, consisting of nonlinear hyperbolic evolution equations for the true dynamic degrees of freedom, nonlinear elliptic constraint equations for initial data, and gauge conditions determining the coordinates, which can be of any type. Numerical treatments of Einstein’s field equations of general relativity are typically formulated using a 3 (space) + 1 (time) decomposition of spacetime, and the very large space and time scales lead to significant computational requirements. Dynamically adaptive methods for the solution of differential equations, which employ locally optimal approximations, have been shown to yield highly advantageous cost/accuracy ratios when compared to methods based upon static uniform approximations. While parallel versions of these methods offer the potential for accurate solution of physically realistic models of important physical systems, their efficient and scalable implementation is challenging due to dynamically changing computational loads and communication patterns, as well as the need to preserve complex localities.
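
To make the cost/accuracy argument concrete, the sketch below illustrates the kind of criterion an adaptive method uses to decide where to refine: only cells where the solution changes sharply are flagged, so fine resolution is concentrated around the interesting features rather than spread uniformly. This is a minimal Python illustration, not code from the project; the 1-D setting, the gradient-based criterion, and the threshold are assumptions chosen for brevity.

```python
import math

def refinement_flags(u, threshold=0.1):
    """Flag cells whose local gradient exceeds the threshold; an AMR driver
    (e.g., Berger-Oliger style) would then cluster flagged cells into finer
    patches instead of refining the whole domain."""
    flags = [False] * len(u)
    for i in range(1, len(u) - 1):
        grad = abs(u[i + 1] - u[i - 1]) / 2.0   # centered difference, index units
        flags[i] = grad > threshold
    return flags

# Toy field: smooth almost everywhere, with a sharp transition near the middle.
n = 64
u = [math.tanh(50.0 * (i / n - 0.5)) for i in range(n)]
flags = refinement_flags(u)
print(f"{sum(flags)} of {n} cells flagged for refinement")
```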

DAGH (subsequently called GrACE) was one of the first data-management infrastructures to support parallel adaptive computations using hierarchical adaptive mesh-refinement techniques. It was based on a formulation of locality-preserving, distributed, and dynamic data structures, along with programming abstractions that enabled parallel adaptive formulations to be directly expressed. Furthermore, it implemented a family of innovative partitioning algorithms that incorporate system/application characteristics, and mechanisms for actively managing adaptive grid hierarchies. A prototype implementation of DAGH/GrACE was openly shared and rapidly adopted by the Binary Black Hole Grand Challenge project. It was also adopted by a range of other applications internationally, ranging from modeling the dynamic response to detonation of energetic materials, to modeling forest fire propagation and blood flow in the human heart. Furthermore, the data structures provided by DAGH/GrACE were extended to support adaptive multigrid methods as well as adaptive multiblock-based formulations, and were used to support the modeling of fluid flows in the subsurface and oil reservoir simulations.
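
To give a sense of what “locality-preserving” means in practice, one common way to realize such structures is to order grid blocks along a space-filling curve and then cut that one-dimensional ordering into contiguous, load-balanced pieces. The Python sketch below shows the idea with a Morton (Z-order) curve; it is a simplified illustration in the spirit of the data structures described above, not the DAGH/GrACE implementation, and the block layout, weights, and partitioning policy are assumptions.

```python
def morton_index(ix: int, iy: int, bits: int = 16) -> int:
    """Interleave the bits of (ix, iy) to get a Z-order key that keeps
    spatially nearby blocks close together in the 1-D ordering."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b)
        key |= ((iy >> b) & 1) << (2 * b + 1)
    return key

def partition_blocks(blocks, num_ranks):
    """Sort blocks along the space-filling curve, then cut the 1-D list into
    contiguous, roughly load-balanced chunks -- one per rank. Because the curve
    preserves locality, each chunk maps to a compact spatial region, which helps
    keep ghost-cell communication low even as the hierarchy adapts."""
    ordered = sorted(blocks, key=lambda b: morton_index(b["ix"], b["iy"]))
    total_work = sum(b["work"] for b in ordered)
    target = total_work / num_ranks
    parts, current, acc = [[] for _ in range(num_ranks)], 0, 0.0
    for blk in ordered:
        if acc >= target and current < num_ranks - 1:
            current, acc = current + 1, 0.0
        parts[current].append(blk)
        acc += blk["work"]
    return parts

# Example: a small 4x4 grid of equally weighted blocks split across 4 ranks.
blocks = [{"ix": i, "iy": j, "work": 1.0} for i in range(4) for j in range(4)]
for rank, part in enumerate(partition_blocks(blocks, 4)):
    print(rank, [(b["ix"], b["iy"]) for b in part])
```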

However, despite its effectiveness, rapid uptake, and impact, the DAGH/GrACE software stack was not successfully translated and could not be sustained. The reasons for this are elaborated in the following section, but the overarching reason was the lack of adequate planning and resources for the translation, and in particular for the move from the original locale to the broader community. However, the conceptual framework underlying DAGH/GrACE has persisted and has been incorporated into other frameworks, many of which are in use today. In fact, these aspects of DAGH/GrACE influenced the codebases used in the research efforts behind the first detection of gravitational waves, which was recognized with the 2017 Nobel Prize in Physics.

I have since deployed a number of software systems, including DataSpaces (a 2013 R&D 100 award winner) for extreme-scale in situ coupled workflows; DART for high-throughput, low-latency data streaming; Fenix for online failure recovery; the R-Pulsar programming framework for data-driven edge-cloud integration and for enabling urgent applications; CometCloud for enabling dynamic software-defined infrastructure across federated resources; and AutoMate/Accord/Meteor to support autonomics. I continue to work on translational projects such as the recently launched National Data Platform project.
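
To give a flavor of the in situ coupling pattern that systems like DataSpaces support, the toy sketch below shows a producer (simulation) and a consumer (analysis) exchanging versioned data through a shared staging abstraction instead of the file system. The class, method names, and threading setup are hypothetical simplifications for illustration only; the actual DataSpaces interface differs from this sketch.

```python
import threading

class StagingArea:
    """Toy shared, versioned key-value space that decouples a producer
    (simulation) from a consumer (analysis) so they can run concurrently
    and exchange data without touching the parallel file system."""
    def __init__(self):
        self._store = {}
        self._cond = threading.Condition()

    def put(self, var: str, version: int, data):
        with self._cond:
            self._store[(var, version)] = data
            self._cond.notify_all()

    def get(self, var: str, version: int):
        with self._cond:
            while (var, version) not in self._store:
                self._cond.wait()          # block until the producer publishes it
            return self._store[(var, version)]

staging = StagingArea()

def simulation(steps=3):
    for t in range(steps):
        field = [t * 1.0] * 8              # stand-in for a distributed field
        staging.put("temperature", t, field)

def analysis(steps=3):
    for t in range(steps):
        field = staging.get("temperature", t)
        print(f"step {t}: mean temperature = {sum(field) / len(field):.1f}")

prod = threading.Thread(target=simulation)
cons = threading.Thread(target=analysis)
prod.start(); cons.start()
prod.join(); cons.join()
```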

 

Your experience in deploying and operating large-scale production systems, such as the cyberinfrastructure for the National Science Foundation (NSF) Ocean Observatories Initiative, is impressive. What were some of the challenges you encountered, how did you overcome them, and what was the biggest impact?


The National Science Foundation-funded Ocean Observatories Initiative (OOI) is an integrated infrastructure project composed of science-driven platforms and sensor systems that measure physical, chemical, geological, and biological properties and processes from the seafloor to the air-sea interface. The OOI network was designed to provide seamless, near-real-time access to globally distributed marine observations at multiple oceanographic scales, from ocean basin to continental shelf, and over time scales from short-term episodic events to decadal cycles. This allows scientists to address critical questions that will lead to a better understanding and management of our oceans, enhancing our capabilities to address issues such as climate change, ecosystem variability, ocean acidification, and carbon cycling. The OOI project launched its construction phase in 2007, with a mission of delivering data and data products over a 25-year-plus lifetime once operational.

In 2014, while at Rutgers, The State University of New Jersey, I joined the OOI project and assumed responsibility for designing, implementing, deploying, and operating the cyberinfrastructure (CI) that underpins the OOI network, including data acquisition, data processing, and near-real-time data delivery. I took this on after the initial CI effort was unsuccessful, which caused NSF to change the CI team. Coming into the project in the middle of an ongoing construction effort presented several challenges. The first was the compressed timeline: we were starting from zero and had to catch up with a fast-moving project. We essentially had to build the train while it was already moving at speed on the tracks, which meant we had to ramp up quickly in terms of personnel and constrain our designs to decisions that had already been made by other teams. The second was cultural: building a production system is very different from typical university projects. The OOI CI was meant to be operational 24/7, and researchers and practitioners would be depending on it for research data.

In terms of scale, the OOI network brings multiple scales of marine observations together into one observing system through a robust, secure, and scalable cyberinfrastructure. The system we were designing was composed of seven arrays (Cabled Array, Coastal Pioneer, Coastal Endurance, Global Argentine Basin, Global Irminger Sea, Global Southern Ocean, and Global Station Papa). The coastal assets of the OOI expanded existing observations off both U.S. coasts, creating focused, configurable observing regions. Cabled observing platforms ‘wired’ a single region in the Northeast Pacific Ocean with a high-speed optical and high-power grid. Finally, the global component (with deployments off the coast of Greenland and in the Southern Ocean and the Argentine Basin) addressed planetary-scale changes via moored open-ocean buoys linked to shore via satellite. Note that this original composition has since changed, and the current OOI is somewhat different.

We completed the design, construction, and deployment of these systems in late 2015. This infrastructure includes 12 surface moorings, 8 subsurface flanking moorings, 22 profiler moorings, 20 cabled seafloor packages, 32 gliders, and 2 Autonomous Underwater Vehicles (AUVs). Overall, this unprecedented observational network integrated data from 57 stable platforms and 31 mobile assets, carrying 1,227 instruments (~850 deployed) and providing over 25,000 science data products and over 100,000 scientific/engineering data products. Some of the cutting-edge instrumentation, including an in-situ mass spectrometer, a particulate DNA sampler, and other vent chemistry sensors, had never been fielded in an operational format before and was now in the water, actively collecting data.

The OOI CI started its early operational phase in January 2016, providing seamless access to data and data products to users through its portal. The OOI user community grew consistently, and by November 2016, OOI had received over 21,000 hits and delivered ~14 TB of data to hundreds of users spanning over 160 countries. I left the project when I joined NSF as Office Director for the Office of Advanced Cyberinfrastructure, in early 2018. The project has since evolved, was re-competed for its next phase, and continues to provide data and data products to a large global research community.

 

As Co-Chair of the National Artificial Intelligence Research Resource (NAIRR) Task Force, what have been the most significant policy and strategic contributions, in your opinion, made to advance these fields?


Serving as co-chair of the National Artificial Intelligence Research Resource (NAIRR) Task Force was a distinct honor and a unique experience. The Task Force was a federal advisory committee that ran from June 2021 through April 2023, with the overarching goal of strengthening and democratizing the US AI innovation ecosystem in a way that protects privacy, civil rights, and civil liberties.

The Final Report of the Task Force, released in January 2023, presented an urgent vision for US leadership in responsible artificial intelligence (AI) and a strategic implementation plan for strengthening and democratizing AI innovation. AI is rapidly becoming the transformative technology of the 21st century, driving innovation, enabling new discoveries, and spurring economic growth. It is impacting everything from routine daily tasks and services to societal-level grand challenges: it has the potential to revolutionize solutions to many scientifically and societally important problems and to improve the lives of every individual, especially those with special needs (such as the elderly) and those who have been traditionally underserved. At the same time, there are growing concerns that AI could have negative social, environmental, and even economic consequences. To realize the positive and transformative potential of AI, it is imperative to advance AI and its applications responsibly, i.e., in a way that achieves societal good while also protecting privacy, civil rights, and civil liberties, and that promotes principles of fairness, accountability, transparency, and equity.

The Task Force emphasized the importance of democratizing AI research and development (R&D), allowing researchers from all backgrounds to participate in foundational, use-inspired, and translational AI R&D. This inclusivity is considered essential to mitigating potential negative impacts. Furthermore, recognizing that, today, advances in AI R&D are very often tied to access to large amounts of computational power and data, the Task Force highlighted the importance of a widely accessible AI research cyberinfrastructure (CI) that brings together computational resources, data, testbeds, algorithms, software, services, networks, and expertise that can help democratize the AI R&D landscape and enable responsible AI R&D that benefits all. Realizing such a CI would create opportunities to train the future AI workforce, support and advance trustworthy and responsible AI, and catalyze the development of ideas that can be practically deployed for societal and economic benefits.

The Task Force also highlighted the importance of implementing system safeguards, setting the standard for responsible AI research through the design and implementation of its governance processes, and being proactive in addressing privacy, civil rights, and civil liberties issues by integrating appropriate technical controls, policies, and governance mechanisms from the outset, including criteria and mechanisms for evaluating proposed research and resources from a privacy, civil rights, and civil liberties perspective.

 

In your role as Assistant Director for Strategic Computing at the White House Office of Science and Technology Policy (OSTP), you led the strategic planning for the Nation’s Future Advanced Computing Ecosystem. Could you discuss the challenges and rewarding aspects of this role?


As Assistant Director for Strategic Computing at the White House Office of Science and Technology Policy (OSTP), my goal was to develop an all-of-government vision for a future computing ecosystem, as a national strategic asset, that combines heterogeneous computing systems with the networking, software, data, and expertise required to support U.S. scientific and economic leadership, national security, and defense. In alignment with this goal, I led the establishment of the NSTC Future Advanced Computing Ecosystem (FACE) Subcommittee and, through it, conducted all-of-government strategic planning for the Nation’s Future Advanced Computing Ecosystem. This effort resulted in the report, “Pioneering the Future Advanced Computing Ecosystem: A Strategic Plan.” Key objectives outlined in this strategic plan were to: (1) utilize the future advanced computing ecosystem as a strategic resource spanning government, academia, nonprofits, and industry; (2) establish an innovative, trusted, verified, usable, and sustainable software and data ecosystem; (3) support foundational, applied, and translational research and development to drive the future of advanced computing and its applications; (4) expand the diverse, capable, and flexible workforce that is critically needed to build and sustain the advanced computing ecosystem; and (5) establish partnerships across government, academia, nonprofits, and industry.

At OSTP, I was also part of the leadership team that created and managed operations of the COVID-19 HPC Consortium, a unique public-private partnership that brought together government, industry, and academic leaders, urgently convened to provide computing resources in support of COVID-19 research. Based on the impacts of, experiences from, and lessons learned through this consortium, I led the formulation and development of the National Strategic Computing Reserve (NSCR) concept and the resulting Request for Information and blueprint. Specifically, the NSCR was envisioned as a coalition of experts and resource providers that could be mobilized quickly to provide critical computational resources (including compute, software, data, and technical expertise) in times of urgent need.

The most rewarding aspect of my OSTP experience was being able to work with some of the most talented and dedicated experts from across government on strategic and policy elements of the national advanced computing ecosystem aimed at benefiting the nation and its citizens. Probably the most challenging aspect was working under the constraints of the COVID-19 pandemic, where interactions were largely virtual: I worked with a number of individuals whom I have never had the privilege of meeting in person. As the African proverb goes, “If you want to travel fast, travel alone; if you want to travel far, travel together.”

 

You are the founding chair of the IEEE Technical Community on High Performance Computing. What are some of its key initiatives and achievements? Furthermore, what is the greatest benefit of being a part of this community?


The primary goal in establishing the Technical Community on High Performance Computing (TCHPC), originally a Technical Consortium, was to advance and coordinate work in the crosscutting field of high-performance computing, encompassing networking, storage, and analysis concepts, technologies, and applications. This effort extends throughout the IEEE, aiming to enhance the roles of both the IEEE Computer Society and IEEE in this interdisciplinary and pervasive field.

TCHPC aims to provide value to the high-performance computing (HPC) community in several ways. It provides a forum for the exchange of ideas among interested practitioners, researchers, developers, maintainers, users, and students across IEEE working in the HPC field. It promotes and facilitates the sharing of ideas, techniques, standards, and experiences among TCHPC members for more effective contributions to, and use of, HPC technology, thereby advancing both the state of the art and the state of the practice of HPC. TCHPC also engages in workforce development and standards processes.

Over the years, TCHPC has focused on different activities aimed at fostering and nurturing the HPC community. For example, it has been awarding the IEEE Computer Society TCHPC Early Career Researchers Award for Excellence in High Performance Computing since 2016. This award recognizes up to 3 individuals who have made outstanding, influential, and potentially long-lasting contributions in the field of high-performance computing within 5 years of receiving their PhD degree and has become an effective and valued mechanism for motivating and recognizing early career researchers. TCHPC has also been championing two key initiatives:

    • Education and Outreach Initiative / Student Programs, which aims to coordinate activities, information, and best practices around HPC education and outreach across its member technical committees and the broader community. Activities within this initiative include coordinating student activities across conferences (e.g., PhD forums and student mentoring), developing a repository of related material and best practices to help organizers, creating a list of HPC resources available for educational use to support faculty who need these resources for teaching, and serving as a bridge between undergraduate programs and the industry and laboratory groups seeking to host HPC REU students and interns.

    • Reproducibility Initiative, which aims to lead a broad and deep conversation to advance the standards of simulation- and data-based science, working with the community to coordinate efforts in this important area as well as to share experiences and effective practices. Activities within this initiative include reproducibility badging for journals (e.g., IEEE TPDS) and conferences (e.g., SC and ICPP); leading the IEEE CS Ad Hoc Committee on Open Science and Reproducibility, which analyzed the models, practices, and experiences in supporting open science and reproducibility within the IEEE Computer Society (CS) and at peer societies and publishers in the context of the recommendations of the NASEM report on reproducibility; and participating in the NISO committee on standardizing reproducibility badging. We are in the process of documenting best practices and lessons learned from this initiative, which can serve as a resource for the community.

TCHPC also published the workshop proceedings for the SC conference series for several years, until 2022. In addition, since 2022, TCHPC has been leading the IEEE CS Assist effort at the SC conference, which provides on-site support to help anyone who needs to report an issue to the IEEE Ethics Reporting Line or other appropriate authority.

 

Through the many milestones and accomplishments you’ve reached, were there any lessons learned? Furthermore, what advice do you have for any individuals who are interested in pursuing a similar career path as yours?


My academic journey so far has been extremely satisfying and rewarding. I have been privileged to have the opportunity to explore technical, socio-technical, educational, and policy issues across a range of scientific areas; to work with many excellent and extremely talented individuals; and to make contributions to these areas through my work in translational computational and data-enabled science and engineering and high-performance parallel and distributed computing. I have enjoyed all my interactions with mentors, colleagues, collaborators, and students through the years and have learned and grown from each one of these interactions.

Throughout the journey, I’ve learned several lessons that may offer guidance to those aspiring to pursue a similar career path. The first is having faith in oneself and in one’s beliefs and goals: the path is never straight or easy, and there are always setbacks, but it is important to believe and persevere. The second is that it is never too late to learn; one must continue learning and growing. And possibly the third is to (try to) enjoy the ride and have fun along the way. In the words of Douglas Adams (The Long Dark Tea-Time of the Soul), “I may not have gone where I intended to go, but I think I have ended up where I needed to be.”

More About Manish Parashar


Manish Parashar is Director of the Scientific Computing and Imaging (SCI) Institute, Chair in Computational Science and Engineering, and Presidential Professor in the Kahlert School of Computing at the University of Utah.

Manish’s academic career has focused on translational computer science, with a specific emphasis on computational and data-enabled science and engineering, and has addressed key conceptual, technological, and educational challenges. His research is in the broad area of parallel and distributed computing, and he has investigated conceptual models, programming abstractions, and implementation architectures that can enable new insights through large-scale computations and data in a range of domains. His contributions include innovations in data structures and algorithms, programming abstractions and systems, and systems for runtime management and optimization, and he has developed and deployed software systems based on his research. He has also deployed and operated large-scale production systems, such as the cyberinfrastructure for the NSF Ocean Observatories Initiative.

Manish recently completed an IPA appointment at the US National Science Foundation (NSF), serving as Office Director of the NSF Office of Advanced Cyberinfrastructure. At NSF, he oversaw strategy and investments in national cyberinfrastructure and led the development of NSF’s strategic vision for a National Cyberinfrastructure Ecosystem and blueprints for key cyberinfrastructure investments. He also served as Co-Chair of the National Science and Technology Council’s Subcommittee on the Future Advanced Computing Ecosystem (FACE) and of the National Artificial Intelligence Research Resource (NAIRR) Task Force. Manish also served as Assistant Director for Strategic Computing at the White House Office of Science and Technology Policy, where he led strategic planning for the Nation’s Future Advanced Computing Ecosystem and the formulation of the National Strategic Computing Reserve (NSCR) concept.

Manish is the founding chair of the IEEE Technical Community on High Performance Computing (TCHPC) and is a Fellow of AAAS, ACM, and IEEE. For more information, visit his website.
