
Monday, November 29, 2010

Tears Are My Best Friends !!

I Say Thanks To Tears Because They Are There When There Is No One & Also When I Am In Pain... Tears Are My Best Friends. :-))


I Love To Hide My Tears By Increasing The Length Of My Smile And I Am Always Successful In Doing This... I Simply Hate It If Anyone Cries Because Of Me But On The Other Hand I Love Eyes Washed With Tears.

When My Body Just Can’t Hold The Intensity Of My Emotions Any More, My Emotions Turn Into ‘Tears’ And They Just Come Out Of My Eyes And Stay With Me Until I Feel Better !!

Tears Are My Best Buddies. I Love My Tears... They Are Precious To Me.

Please, Leave Me Alone...

Leave me alone. I would not see thee more.
The storm is hushed, the agony is o'er.
I would not feel again
The passion and the pain.
Do not again come knocking at my door.

Leave me alone. Put not into my hand
A broken cup, though bound with golden band,
Lest I with thirsty lip
Once more its passions sip.
Still let it lie, all shattered on the sand.

Leave me alone. I followed, long ago,
Joy to its tomb, with tolling marches slow.
Wake not my buried slain,
Only to die again.
Leave me to peace -- 'tis all I hope to know.

Leave me alone. I may not quite forget
The buried love, whose sweetness thrills me yet;
But let the willow wave;
Rake not a grass-grown grave;
Break not the turf, for fresh-wrung tears to wet.

Wednesday, November 24, 2010

HPCC Program Accomplishments and Plans

1. Networking 

The HPCC Program provides network connectivity among advanced computing resources, scientific instruments, and members of the research and education communities. The Program has successfully accommodated the phenomenal growth in the number of network users and their demands for significantly higher and ever increasing speeds while maintaining operational stability. R&D in advanced networking technologies is guiding the development of a commercial communications infrastructure for the Nation. The development and deployment of this new technology is jointly funded and conducted by the HPCC Program, state and local governments, the computer and telecommunications industries, and academia.

1.1. The Internet

One illustration of the global reach of HPCC technologies is that the Internet now extends across the country and around much of the world. Initially the domain of government scientists and U.S. academics, the Internet had grown dramatically by the beginning of FY 1994:
    • Almost two million computers were accessible over the Internet.
    • More than 15,000 regional, state, and local U.S. networks and 6,300 foreign networks in approximately 100 countries were part of the Internet.
    • Nearly 1,000 4-year colleges and universities, 100 community colleges, 1,000 U.S. high schools, and 300 academic libraries in the U.S. were connected.
The HPCC Program's Internet investment primarily supports the high speed "backbone" networks linking Federally-funded high performance computing centers.

1.2. The Interagency Internet

The Interagency Internet, that portion of the Internet funded by HPCC, is a system of value-added services carried on the Nation's existing telecommunications infrastructure for use in federally-funded research and education. Its three-level architecture consists of high speed backbone networks (such as NSFNET) that link mid-level or regional networks, which in turn connect networks at individual institutions. At the beginning of the HPCC Program in FY 1992, most of the backbones were running at T1 speeds (1.5 Mb/s -- megabits per second, or millions of bits per second), and international connections had been established. Peak monthly traffic on NSFNET had reached 10 billion packets (of widely varying size). In FY 1992, NSFNET speed was upgraded to T3 (45 Mb/s) and NSF made awards to industry for network registration, information, and database services and for a Clearinghouse for Networked Information Discovery and Retrieval. A short sketch converting these link rates into rough file-transfer times follows the list below. By the beginning of FY 1994:
    • Peak monthly NSFNET traffic reached 30 billion packets.
    • DOE established six ATM (asynchronous transfer mode) testbeds to evaluate different approaches for integrating this technology between wide area and local area networks.
    • NASA provided T3 services to two of its Grand Challenge Centers (Ames Research Center and Goddard Space Flight Center) through direct connection to NSFNET. In addition, service to several remote investigators was upgraded to T1 data rates.
    • NASA launched its Advanced Communications Technology Satellite (ACTS).

Advanced Communications Technology Satellite (ACTS) deployed by the Space Shuttle.

    • NIH and NSF funded 15 Medical Connections grants for academic medical centers and consortia to connect to the Internet.
    • Five "gigabit testbeds" established by NSF and ARPA (described below) became operational. In addition, a DOD-oriented testbed founded by ARPA focuses on terrain visualization applications.
    • ARPA established a gigabit testbed in the Washington, DC area in cooperation with more than six other agencies in the area.
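For readers unused to these units, here is a small rate-conversion sketch in present-day Python, added purely for illustration; the 100-megabyte file size is an assumption, not a figure from the report, and real throughput is lower than the raw line rate.
    # Rough transfer-time arithmetic for the link rates quoted above.
    # The 100 MB file size is an illustrative assumption.
    RATES_MBPS = {"T1": 1.5, "T3": 45.0, "128 Kb/s": 0.128}

    def transfer_seconds(file_megabytes, rate_mbps):
        bits = file_megabytes * 8 * 1e6      # megabytes -> bits
        return bits / (rate_mbps * 1e6)      # bits / (bits per second)

    for name, rate in RATES_MBPS.items():
        print(f"{name}: {transfer_seconds(100, rate):,.0f} s for a 100 MB file")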
Projected FY 1994 Accomplishments
    • Awards will be made to implement a new NSFNET network architecture (including network access points (NAPs), a routing arbiter, a very high speed backbone, and regional networks).
    • Additional very high speed backbones will link HPCRCs (High Performance Computing Research Centers, described below).
    • Connectivity for DOE's ESnet will grow to 27 sites.
    • More universal and faster Internet connections for the research and education community
    • Improved network information services and tools
Proposed FY 1995 Activities 
    • NSFNET -- Implement new architecture (awards were made in FY 1994); implement very high speed backbone to NSF Supercomputer Centers; establish some additional high speed links for demanding applications
    • DOE -- Upgrade ESnet services to T3 and selected sites to 155 Mb/s; upgrade connectivity to Germany and Italy to T1 and to Russia to 128 Kb/s (kilobits per second or thousands of bits per second)
    • NASA's AEROnet and NSI -- Establish internal T3 and higher speed network backbone to five NASA centers
    • Expand connectivity to schools (K-12 through university) -- connectivity funded by NSF and NIH will reach a total of 1,500 schools, 50 libraries, and 30 medical centers; NASA's Spacelink computer information system for educators will be made available via the Internet; toll-free dial-up access will be provided to teachers without Internet access.
    • NIH -- Acquire gigabit (billions of bits per second) local networks for use with multiple parallel computers and as a backbone to enable development of the Xconf image conferencing system
    • Integrate NOAA's more than 30 environmental data centers into the Internet through high speed connectivity and new data management tools
    • Expand EPA connectivity to reach a substantial percentage of Federal, state, and industrial environmental problem-solving groups and test distributed computing approaches to complex cross-media environmental modeling
    • Continue to support and improve information services such as the NSFNET Internet Network Information Center (InterNIC)

1.3. Gigabit Speed Networking R&D

New technologies are needed for the new breed of applications that require high performance computers and that are demanded by users across the U.S. These technologies must move more information faster in a shared environment of perhaps tens of thousands of networks with millions of users. Huge files of data, images, and videos must be broken into small pieces and moved to their destinations without error, on time, and in order. These technologies must also manage a user's interaction with applications. For example, a researcher needs to continuously display on a local workstation output from a simulation model running on a remote high performance system in order to use that information to modify simulation parameters.
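As a minimal sketch of that steering pattern (not any system described in the report), the Python loop below uses a mock remote_step function as a local stand-in for the remote simulation: output is observed each step and a parameter is adjusted before the next step.
    # Minimal steering-loop sketch: observe remote output, adjust a parameter.
    # remote_step is a local stand-in for a simulation on a remote system.
    def remote_step(parameter):
        # Pretend the simulation's output tracks the parameter with a bias.
        return parameter * 0.9 + 1.0

    parameter = 10.0
    target = 5.0
    for step in range(20):
        output = remote_step(parameter)       # would arrive over the network
        print(f"step {step}: output={output:.2f}, parameter={parameter:.2f}")
        parameter -= 0.2 * (output - target)  # researcher-style feedback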
As these gigabit speed networks are deployed, the current barriers to more widespread use of high performance computers will be surmounted. At the same time, high speed workstations and small and mid-size scalable parallel systems will gain wider use.

A teraflops (a trillion floating point operations per second) computing technology base needs gigabit speed networking technologies.


HPCC-supported gigabit testbeds funded jointly by NSF and ARPA test high speed networking technologies and their application in the real world.

The HPCC Program is developing a suite of complementary networking technologies to take fullest advantage of this increased computational power. R&D focuses on increasing the speed of the underlying network technologies as well as developing innovative ways of delivering bits to end users and systems. These include satellite, broadcast, optical, and affordable local area designs.
The HPCC Program's gigabit testbeds are putting these technologies to the test in resource-demanding applications in the real world. These testbeds provide working models of the emerging commercial communications infrastructure, and accelerate the development and deployment of gigabit speed networks.
In FY 1994-1995, HPCC-funded research is addressing the following:
    • ATM/SONET (Asynchronous Transfer Mode/Synchronous Optical Network) technology -- "fast packet switched" cell relay technology (in which small packets of fixed size can be rapidly routed over the network) that may scale to gigabit speeds; a toy cell-segmentation sketch follows this list
    • Interfacing ATM to HiPPI (High Performance Parallel Interface) and HiPPI switches and cross connects to make heterogeneous distributed high performance computing systems available at high network speeds
    • All-optical networking
    • High speed LANs (Local Area Networks)
    • Packetized video and voice and collaborative workspaces (such as virtual reality applications that use remote instruments)
    • Telecommuting
    • Intelligent user interfaces to access the network
    • Network management (for example, reserving network resources, routing information over the networks, and addressing information not only to fixed geographical locations but also to people wherever they may be)
    • Network performance measurement technology (to identify bottlenecks, for example)
    • Networking standards (such as for interoperability) and protocols (including networks that handle multiple protocols such as TCP/IP, GOSIP/OSI, and popular proprietary protocols)
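To make the "small packets of fixed size" idea in the ATM item above concrete, here is a toy segmentation sketch in Python; the 48-byte payload matches ATM's cell payload size, but the tagging and reassembly are simplifications, not the actual ATM adaptation layers.
    # Toy cell segmentation: split a message into fixed-size payloads,
    # tag each with a sequence number, then reassemble in order.
    CELL_PAYLOAD = 48  # bytes per ATM cell payload

    def segment(message: bytes):
        return [(i, message[off:off + CELL_PAYLOAD])
                for i, off in enumerate(range(0, len(message), CELL_PAYLOAD))]

    def reassemble(cells):
        return b"".join(payload for _, payload in sorted(cells))

    cells = segment(b"x" * 200)
    assert reassemble(cells) == b"x" * 200
    print(f"{len(cells)} cells of up to {CELL_PAYLOAD} bytes each")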
Additional FY 1995 plans include completing DOE's high speed LAN pilot projects and providing select levels of production-quality video/voice teleconferencing capability. NASA and ARPA plan experiments in mitigating transmission delay to the ACTS satellite. NASA plans to extend terrestrial ATM networks to remote locations via satellite and to demonstrate distributed airframe/propulsion simulation via satellite.

1.4. Network Security

Network data security is vital to HPCC agencies and to many other users such as the medical and financial communities. FY 1994-1995 research is directed at incorporating security in the management of current and future networks by protecting network trunks and individual systems. Examples include:
    • Joint ARPA/NSA projects in gigabit encryption systems for use with ATM
    • Use of the ARPA-developed KERBEROS authentication system by DOE for distributed environment authentication and secure information search and retrieval
    • Methods for certifying and accrediting information sent over the network (a minimal message-authentication sketch follows below)
NSA is addressing the compatibility of DOD private networks with commercial public networks.
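As a minimal illustration of certifying information sent over a network, here is a generic keyed-hash (HMAC) sketch in Python; it is not the Kerberos protocol or any agency system named above, and the shared key is an invented example value.
    # Generic message-authentication sketch using a shared secret key.
    import hmac, hashlib

    shared_key = b"example-shared-secret"   # illustrative value only

    def sign(message: bytes) -> bytes:
        return hmac.new(shared_key, message, hashlib.sha256).digest()

    def verify(message: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(sign(message), tag)

    msg = b"simulation results, run 42"
    tag = sign(msg)
    assert verify(msg, tag)
    assert not verify(msg + b" (tampered)", tag)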
The rapid growth of networks and of the number of computers connected to those networks has prompted the establishment of incident response teams that monitor and react to unauthorized activities, potential network intrusions, and potential system vulnerabilities. Each team serves a specific constituency such as an organization or a network. One of the first such teams was CERT, the Computer Emergency Response Team, based at the Software Engineering Institute in Pittsburgh, PA. CERT was established in 1989 by ARPA in response to increasing Internet security concerns, and serves as a response team for much of the Internet. FIRST, the Forum of Incident Response and Security Teams, was formed under DOD, DOE, NASA, NIST, and CERT leadership. FIRST is a growing global coalition of response teams that alert each other about actual or potential security problems, coordinate responses to such problems, and share information and develop tools in order to improve the overall level of network security.


2. High Performance Computing Systems

At the beginning of the HPCC Program in 1991, few computer hardware vendors were developing scalable parallel computing systems, even though they acknowledged that traditional vector computers were approaching their physical limits. By 1993, all major U.S. vendors had adopted scalable parallel technology. Today, a wide range of new computing technologies is being introduced into commercial systems that are now being deployed at the HPCRCs, in industry, and in academia. These include the whole range of scalable parallel and traditional systems such as fine- and coarse-grained parallel architectures, vector and vector/parallel systems, networked workstations with high speed interfaces and switches, and heterogeneous platforms connected by high speed networks. Some of these systems now scale to hundreds of gigaflops (billions of floating point operations per second). The HPCC Program is well on track toward meeting its FY 1996 goal of demonstrating the feasibility of affordable multipurpose systems scalable to teraflops (trillions of floating point operations per second) speeds.
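To put "hundreds of gigaflops" and the teraflops goal in perspective, the back-of-the-envelope sketch below estimates run times for a fixed workload; the 10^15-operation count is an illustrative assumption, not a Program benchmark.
    # Back-of-the-envelope run times for a fixed operation count.
    operations = 1e15   # illustrative workload size

    for label, flops in [("1 gigaflops", 1e9),
                         ("100 gigaflops", 1e11),
                         ("1 teraflops", 1e12)]:
        seconds = operations / flops
        print(f"{label}: {seconds:,.0f} s ({seconds / 3600:.1f} hours)")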
The architectures of scalable systems -- how the processors connect to each other and to memory, and how the memory is configured (shared or distributed) -- vary widely. How these architectures communicate with storage systems such as disks or mass storage and how they network with other systems also differ.


Simulation of the behavior of materials at the fundamental atomic scale (adsorption and diffusion of germanium on a reconstructed Si(100) surface). Simulated using the iPSC/860 hypercube and the Paragon XP/S-5 supercomputers.

In past years, the HPCC Program concentrated on the design and manufacture of high performance systems, including fundamental underlying components, packaging, design tools, simulations, advanced prototype systems, and scalability. ARPA is the primary HPCC agency involved in developing this underlying scalable systems technology, often cost-shared with vendors, for the high performance computing systems placed at HPCC centers across the country. Efforts are still devoted to developing the foundation for the next generation of high performance systems, including new system components that overcome speed and power limitations, scalable techniques to exploit mass storage systems, sophisticated design technology, and ways to evaluate system performance. Additional effort is now being devoted to developing systems software, compilers, and environments to enable a wide range of applications.


Scaling of memory chip technology is essential to increase both speed and capacity of computer-based systems. This figure shows details of 16 memory cells in a high density 1 megabit Static Random Access Memory (SRAM) that were created and visualized using advanced modeling tools for integrated circuit and technology design developed at Stanford University. Each color represents a physical layer of material (grey-silicon, yellow-silicon oxide, pink-polysilicon, teal-local interconnect, and blue-metal) which has been patterned using advanced lithography and etching techniques. The details of geometries and spacings between the various layers are critical in determining both the performance of the SRAM and its manufacturability. Solid geometry models and three-dimensional simulations of both materials interactions and electrical performance are invaluable in optimizing such high density chips.
(Figure Courtesy of Cypress Semiconductor)

The applications now running on the new systems handle substantially more data -- both input and output -- than on traditional systems. Graphical display is critical to analyzing these data quickly and effectively. For example, output from three-dimensional weather models must be displayed and overlaid with real-time data collected from networked instruments at remote observation stations. Hardware to handle this task well, such as workstations for scientific visualization, is part of a high performance computing environment.

Comparison of Josephson Junction technology with gallium arsenide and CMOS (complementary metal oxide semiconductor) technologies showing potential for dramatic improvements in performance with low power.
The HPCC Program develops and evaluates a variety of innovative technologies that have potential for future use beyond the next generation of systems. Included in these are superconductive devices. These devices have demonstrated blinding speed and exceptionally low power consumption at the chip level, but need to be scaled up to more complex components to be useful. If transferable to the system level, these devices would have major impact on computing and communications switching systems. NSA is developing a technology demonstration of a multigigabit per second 128x128 crossbar switch that is potentially expandable to 1000x1000 at very low power. If successful, the technology will be evaluated in a system level computing application.

3. High Performance Computing Research Centers (HPCRCs)

HPCRCs are a cornerstone of the HPCC Program. HPCRCs are the home of a variety of systems: early prototypes, small versions of mature systems, full-scale production systems, and advanced visualization systems. The current production systems, capable of hundreds of gigaflops of sustained throughput, will be succeeded by teraflops systems. These systems are being used on Grand Challenge class applications that cannot be scaled down without modifying the problem being addressed or requiring unacceptably long execution times. The largest of these applications are being run on multiple high performance computers located around the country and networked in the gigabit testbeds.
An interdisciplinary group of experts meets at these centers to address common problems. These include staff from the HPCRCs themselves, hardware and software vendors, Grand Challenge applications researchers, industrial affiliates that want to develop industry-specific software, and academic researchers interested in advancing the frontier of high performance computing. Funding is heavily leveraged, with HPCC agencies often contributing discretionary funds, hardware vendors providing equipment and personnel, and affiliate industries paying their fair share. Industrial affiliation offers a low risk environment for exploring and ultimately exploiting HPCC technology. Two of these industrial affiliations are:
    • The Oil Reservoir Modeling Grand Challenge in which more than 20 companies and several universities participate
    • The High Performance Storage System Project in which more than 12 companies and national laboratories participate
Production-quality operating systems and software tools are developed at these centers, thereby removing barriers to efficient hardware use. Applications software tailored to high performance systems is developed by early users, many of whom access these systems over the Internet, and increasingly over the gigabit testbeds, from their workstations. Production-quality applications software is often first run on HPCRC hardware. The wide range of hardware at HPCRCs makes them ideal sites for developing the conventions and standards that enable and test interoperability, and for benchmarking systems and applications software.

Production-quality applications software often is run first on computing systems at HPCRCs.

The major HPCRCs are:
NSF Supercomputer Centers --
    • Cornell Theory Center, Ithaca, NY
    • National Center for Supercomputing Applications, Champaign-Urbana, IL
    • Pittsburgh Supercomputing Center, Pittsburgh, PA
    • San Diego Supercomputer Center, San Diego, CA
Tens of thousands of users from more than 800 institutions in 49 states and 111 industrial partners have computed on systems at the NSF centers. Currently there are 8,000 users and 78 partners. The centers are developing a National Metacenter Environment in which a user will view multiple centers as one. The National Center for Atmospheric Research (NCAR) in Boulder, CO, also receives HPCC funds.
NSF Science and Technology Centers
    • Center in Computer Graphics and Scientific Visualization -- Brown University, Providence, RI; CalTech, Pasadena, CA; Cornell University, Ithaca, NY; University of North Carolina, Chapel Hill, NC; University of Utah, Salt Lake City, UT
    • Center for Research on Parallel Computation, Rice University, Houston, TX
NASA Centers --
    • Ames Research Center, Mountain View, CA
    • Goddard Space Flight Center, Greenbelt, MD
DOE Centers --
    • Los Alamos National Laboratory, Los Alamos, NM
    • National Energy Research Supercomputer Center, Lawrence Livermore National Laboratory, Livermore, CA
    • Oak Ridge National Laboratory, Oak Ridge, TN
The DOE centers accommodate more than 4,000 users from national laboratories, industry, and academia.
Major systems at HPCRCs include one or more of each of the following (the number of processors in the largest machine at an HPCRC is shown in parentheses):
    • Convex
- C3880 (8 vector processors)
    • Cray Research
- C90 (16 vector processors)
- T3D (512 processors)
- YMP (8 vector processors)
    • Digital Equipment Corp.
- Workstation Cluster
    • Hewlett-Packard
- H-P Workstation Cluster
    • IBM
- ES9000/900 (6 vector processors)
- PVS
- SP1 (512 processors)
- Workstation Cluster
    • Intel
- iPSC 860 (64 processors)
- Paragon (512 processors)
    • Kendall Square Research
- KSR 1 (160 processors)
    • MasPar
- MasPar 2 (16,000 processors)
- MasPar MP-1 (16,000 processors)
    • nCube
- nCUBE2
    • Thinking Machines
- CM2 (32,000 processors)
- CM5 (1,024 processors)
Smaller versions of some of these scalable high performance systems have been installed at more than a dozen universities. The HPCRCs also use a variety of scientific workstations, such as those from Silicon Graphics and Sun Microsystems, for numerous tasks.
FY 1995 Plans
    • NSF will install new scalable parallel hardware and hardware upgrades, enhance Metacenter resources, and establish several more regional alliances.
    • DOE will install two different 150 gigaflop machines at two sites.
    • NASA will establish a prototype high performance computing facility comparable in nature but not in performance to the ultimate teraflops facility. It will be configured with advanced high performance machines, early systems or advanced prototypes of important storage hierarchy subsystems, and sufficient advanced visualization facilities to enable system scaling experiments. NASA Grand Challenge researchers in Federal laboratories, industry, and academia will access these advanced systems using the Internet and gigabit speed networks. These researchers will provide a spectrum of experiments for scalability studies. Prototype systems and subsystem interfaces and protocol standards will be established and evaluated, accelerating the understanding of the character of future teraflops computing systems.
    • NOAA will acquire a high performance computing system for its Geophysical Fluid Dynamics Laboratory at Princeton, NJ to develop new scalable parallel models for improved weather forecasting and for improved accuracy and dependability of global change models.
    • EPA will acquire a scalable parallel system to support more complex multipollutant and cross-media (combined air and water) impact and control assessments.

4. Software

HPCC Program software development efforts were originally planned to address the Grand Challenges associated with agency missions. As the Program has matured, these efforts have been expanded to support the needs of industry and improve U.S. competitiveness.
The range of applications software being developed under the Program will assure that high performance computing systems can be broadly useful to the American economy. Now these systems must be made easier to use. Experienced software developers and applications researchers, many at or connected to HPCRCs, are working to meet these needs.
It took a decade to develop a collection of efficient and robust software for vector machines, and it is widely believed that it will take at least that long for parallel systems. Performance is dramatically improving on these systems, due to new algorithms, systems, and experienced people. Continued software work is needed to realize their full potential. The user community is growing so fast that demand for computer time on these systems exceeds supply.

4.1. Systems Software and Software Tools

The high performance computing environment model involves workstation interaction with high performance systems. This approach makes it possible for users to almost transparently access higher performance machines as problem size grows and the software on these machines matures.
This environment is fundamentally and profoundly different from, and more complicated than, traditional computing environments. Much of the systems software and many of the software tools for parallel computing have been redesigned and rewritten to take advantage of the theoretical benefits of parallelism and to enhance user productivity:
    • Operating systems manage dozens to thousands of processors, their memory, and networked heterogeneous systems.
    • New programming languages allow straightforward expression of parallel constructs.
    • Mechanisms express bit manipulation in a parallel environment.
    • Precompilers automatically optimize and parallelize.
    • Compilers generate instructions to distribute the computation across the processors, memory, and networks (a minimal work-distribution sketch follows this list).
    • Debuggers help developers find coding mistakes.
    • Performance monitors and displays assist developers in identifying where optimization efforts might best be spent, facilitate development of dynamic resource management strategies, and are used to evaluate different architectures (in part to recommend changes in vendor-provided utilities).
    • Software manages the parallel computer's input from and output to other computers, distributed hierarchical mass storage, and data collection hardware such as satellites.
    • New scientific visualization methods and software display the large amounts of data used and produced by high performance computers.
    • Software tools enable public dissemination of advanced software and documentation.
    • Production environment tools schedule jobs, multitask (run several jobs simultaneously), implement quotas, and provide "checkpoint and restart" and on-line documentation.
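As a minimal sketch of the kind of work distribution this software automates, the example below uses Python's standard multiprocessing module purely for illustration; it is not the compilers or runtimes discussed above, just the idea of splitting data across workers and combining partial results.
    # Minimal work-distribution sketch: split data, process chunks in
    # parallel worker processes, combine partial results.
    from multiprocessing import Pool

    def partial_sum(chunk):
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunks = [data[i::4] for i in range(4)]   # 4 interleaved chunks
        with Pool(processes=4) as pool:
            total = sum(pool.map(partial_sum, chunks))
        print("sum of squares:", total)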
Evolving conventions and standards enable developers to transport software to different architectures and make it look the same to users. High Performance Fortran (HPF) is an example. Coordinated by the Center for Research on Parallel Computation at Rice University, the HPF Forum is a coalition of government, the high performance computer industry, and academic groups that is developing standard extensions to Fortran for vector processors and for massively parallel SIMD (Single Instruction Multiple Data) and MIMD (Multiple Instruction Multiple Data) systems.
The software development process is evolutionary -- tool developers debug codes and make them more efficient while users and their applications place new demands on these tools, resulting in further improvements, refinements, speed-ups, and increased user-friendliness. Accelerating this cycle is one objective of the HPCC Program.

4.2. Scientific Computation Techniques

The development and analysis of fundamental algorithms for use in computational modeling represented in the Grand Challenges are as critical to realizing peak performance of scalable systems as are improvements in hardware. Such research includes studies of algorithms applicable to a wide range of parallel machines as well as those that take advantage of the strengths of specific architectures.
These algorithms address both numerical computation (where arithmetic calculations predominate) and non-numerical computation (where finding and moving data predominate). Widely used numerical computations include multidimensional fast Fourier transforms, fast elliptic and Riemann solvers for partial differential equations, and numerical linear algebra. The latter includes manipulating vectors and matrices, solving systems of linear equations, and computing eigenvalues and eigenvectors. Efficient algorithms attuned to specialized matrix structure (for example, dense or sparse) are especially sought. Numerical linear algebra is an area that was assumed to be well understood, having been subject to substantial research for vector processors. Somewhat surprisingly, algorithmic breakthroughs made as these codes were ported to parallel systems have also resulted in improved performance on vector processors.
These computations are common to so many applications that they are developed by experts to attain maximal efficiency, and their implementations are included in general-purpose reusable software libraries. When these libraries are updated with the more efficient software, users immediately observe faster execution times for their applications. Several HPCC agencies are building such libraries and making them widely available.
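A small example of that payoff, assuming the widely used NumPy library as a modern stand-in for the reusable libraries described above: the application simply calls library routines for a linear solve and an eigenvalue computation, and any later improvement inside the library is inherited without changing the application.
    # Calling tuned library routines for common linear-algebra kernels.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 500))
    b = rng.standard_normal(500)

    x = np.linalg.solve(A, b)          # solve the linear system A x = b
    eigenvalues = np.linalg.eigvals(A) # eigenvalues of A

    print("residual:", np.linalg.norm(A @ x - b))
    print("largest |eigenvalue|:", np.abs(eigenvalues).max())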

4.3. Grand Challenge Applications

The successful use of scalable parallel systems for Grand Challenge applications requires designing new hardware, developing new systems software and software tools, and integrating these with the idealized setting of mathematics and with the complex environment of real world applications and real world users. The maturing HPCC Program is placing increased emphasis on facilitating this integration. For example, the Program sponsored the first "Workshop and Conference on Grand Challenge Applications and Software Technology" in May 1993. Some 250 people representing 34 Grand Challenge teams evaluated progress and planned future activities. A second workshop is scheduled for 1995.
As this new software becomes more efficient, stable, and robust, applications researchers are porting their software to the new parallel systems more quickly and are achieving faster run times. They are also obtaining more realistic results by taking advantage of the faster speeds, larger memory, and the opportunity to add complexity, which was not possible before the new architectures became available. This realism comes through:
    • Higher resolution (for example, modeling a beat of the human heart at time steps of a tenth of a second instead of each second, and at every hundredth of a centimeter rather than every centimeter; a scaling sketch follows this list)
    • Faster execution times (for example, models that took days of execution time now take hours, enabling researchers to explore a wider range of parameters and time scales -- 100 year climate models can now be executed in the same time it used to take for 10 year models)
    • More realistic physics -- for example, including in weather models the physics that better model the effects of clouds
    • More realistic models -- for example, one "multidisciplinary" model combining separate atmosphere and ocean models, or one combining "single discipline" air and water pollution models
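The heart-model example in the first item above implies a steep growth in work. The sketch below just does that arithmetic for a three-dimensional, time-stepped grid; the 10 cm domain and one-unit-of-work-per-point cost are illustrative assumptions, not figures from the report.
    # How refining resolution multiplies the work for a 3-D, time-stepped model.
    def work_units(domain_cm, spacing_cm, duration_s, timestep_s):
        points_per_axis = domain_cm / spacing_cm
        grid_points = points_per_axis ** 3
        steps = duration_s / timestep_s
        return grid_points * steps

    coarse = work_units(10.0, 1.0, 1.0, 1.0)     # 1 cm grid, 1 s steps
    fine = work_units(10.0, 0.01, 1.0, 0.1)      # 0.01 cm grid, 0.1 s steps
    print(f"fine / coarse work ratio: {fine / coarse:,.0f}")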
Entirely new approaches are being developed for cases in which existing models or their algorithmic components are inappropriate for parallel architectures.
Researchers are now benchmarking Grand Challenge applications on a variety of high performance systems to determine not only where they run fastest but also the reasons why, since demand for processors, memory, storage, and communication all affect speed. These researchers work closely with systems software and software tool developers, and they communicate intensively with hardware vendors. This feedback loop results in a wide range of improvements in high performance software and hardware.
As Grand Challenge applications prove the mettle of scalable parallel architectures, commercial software vendors are becoming more active in moving their software to these new machines. The success of commercial applications software is crucial to the success of the high performance computing industry.
NSF, with assistance from ARPA, has funded 16 Grand Challenge Applications Groups for three years beginning in FY 1993 or FY 1994. DOE has funded nine multi-year Grand Challenge projects, some jointly with other DOE programs, HPCC agencies, and industry. NASA, NIH, NIST, NOAA, and EPA have similar Grand Challenge groups. These groups are addressing problems in the following areas.

Aircraft

Improved and more realistic models and computer simulations of aerospace vehicles and propulsion systems are being developed. These will allow for analysis and optimization of designs over a broad range of vehicle classes, speeds, and physical phenomena, using affordable, flexible, and fast computing resources. Development of parallel benchmarks is an area of intensive activity. These applications are computationally unrealistic with traditional computing technology. NASA's Computational Aeroscience Grand Challenges include the High Speed Civil Transport, Advanced Subsonic Civil Transport, the High-Performance Aircraft, and Rotorcraft. This clearly is an area of significant mutual benefit to both the HPCC Program and other major NASA programs.

Simulation of a tiltrotor aircraft (V-22 Osprey) during takeoff. Shown are streaklines, rendered as smoke and computed using UFAT (Unsteady Flow Analysis Toolkit), a new time-dependent particle tracing code.

In addition, NSF has funded a Grand Challenge Applications Group to address fundamental problems in coupled field problems and geophysical and astrophysical fluid dynamics turbulence.

Computer Science

NSF has funded Grand Challenge Applications Groups in:
    • High performance computing for learning
    • Parallel I/O (input/output) methods for I/O-intensive Grand Challenge applications

Energy

DOE's Grand Challenge projects are exploring:
    • Mathematical combustion modeling -- developing adaptive parallel algorithms for computational fluid dynamics and applying them to combustion models
    • Quantum chromodynamics calculations -- developing lattice gauge algorithms on massively parallel machines for high energy physics and particle physics applications
    • Oil reservoir modeling -- constructing efficient parallel algorithms to simulate fluid flow through permeable media
    • The numerical Tokamak project -- developing and integrating particle and fluid plasma models on massively parallel machines as part of the multidisciplinary study of Tokamak fusion reactors

Model showing initiation of water heater combustion.

Environmental Monitoring and Prediction

The environmental Grand Challenges include weather forecasting, predicting global climate change, and assessing the impacts of pollutants. High performance computers allow better modeling of the Earth and its atmosphere, resulting in improved guidance for weather forecasts and warnings, and improved global change models.
High resolution local and regional weather models are being incorporated into larger national and global weather forecasting counterparts. Several of these models are used widely by researchers to investigate and monitor the behavior of the atmosphere through numerical simulation. NOAA scientists are redesigning some of these models to take full advantage of new scalable systems. Some global models are "community models," used by researchers worldwide to compare results with observations and with other related models, and to evaluate performance. For example, a set of modular, portable benchmark codes is being developed and evaluated on several scalable systems and networked workstations at the Boulder Front Range Consortium. Funding for this work comes from ARPA's National Consortium for High Performance Computing, NOAA's Forecast Systems Laboratory, the NSF-funded National Center for Atmospheric Research, and the University of Colorado. Users of these improved models include the Federal Aviation Administration and the National Weather Service.

The track of Hurricane Emily predicted by NOAA's GFDL (yellow) and the observed track (orange).

Models of the atmosphere and the oceans are being rewritten in modular form and transported to several large parallel systems; funding sources include DOE's Los Alamos National Laboratory, NOAA's Geophysical Fluid Dynamics Laboratory (GFDL), and the Navy. A much greater level of detail is now possible -- local phenomena such as eddies in the Gulf of Mexico are being modeled, which allows for better warning of weather emergencies and improved design of equipment such as oil rigs.
Separate air and water quality models are being combined into a single model which is being transported to a variety of massively parallel systems where benchmarks are being used to evaluate performance. These models are needed to assess the impact of pollutant contributions from multimedia sources and to ensure adequate accuracy and responsiveness to complex environmental issues. Nutrient loading in the Chesapeake Bay and PCB transport in the Great Lakes are being targeted beginning in FY 1994.
In FY 1995, complexities such as aerosols, visibility, and particulates will be added; entire environmental models will be integrated into parallel computing environments with a focus on emissions modeling systems and integration with Geographical Information Systems.
EPA will acquire a large massively parallel system to be used initially for this research. A transition to an operational computing resource that supports improved environmental assessment tools is planned.

Model prediction of the amount of lead, a toxic pollutant, deposited by atmospheric processes during August 1989.

NSF has funded Grand Challenge Applications Groups to address fundamental problems in:
    • Large scale environmental modeling
    • Adaptive coordination of predictive models with experimental observations
    • Earthquake ground motion modeling in large basins
    • High performance computing for land cover dynamics
    • Massively parallel simulation of large scale, high resolution ecosystem models
DOE's Grand Challenge projects are exploring:
    • Computational chemistry -- parallelize codes and develop modeling systems for critical environmental problems and remediation methods
    • Global climate modeling -- numerical studies with large atmosphere and ocean general circulation models
    • Groundwater transport and remediation -- develop comprehensive multiphase, multicomponent groundwater flow and transport software
NASA has funded Grand Challenge research teams in:
    • Atmosphere/ocean dynamics and trace chemistry
    • Climate models
    • Four-dimensional data assimilation for massive Earth system data analysis
    • Discovering knowledge in geophysical databases

The environmental monitoring and prediction Grand Challenge enables better decision making by government and industry on issues that affect both the economy and the environment.

Molecular Biology and Biomedical Imaging

FY 1994 NIH accomplishments include:
    • Development and field testing of the Diagnostic X-ray Prototype Network (DXPnet), a nationwide radiology research image system
    • Deployment of Network Entrez, an Internet-based client-server system that enables integrated searching of DNA and protein sequences and the medical literature linked to those sequences
    • Usage of the Internet-based BLAST (Basic Local Alignment Search Tool) for advanced biological similarity searching reached 3 million queries per year (a toy similarity-search sketch follows this list).
    • Improved understanding of heart function through the three-dimensional simulation of a single heartbeat (this required 150 hours on the fastest Cray and received a Smithsonian Award)

A closeup view looking down on the aortic valve in a computational model of blood flow in the heart. The model was developed by Charles Peskin and David McQueen of New York University and run on the Pittsburgh Supercomputing Center's Cray C90.

    • First simulation of an entire biological membrane including all lipid and protein components, enabling better understanding of the mechanism of inflammation of tissues in diseases such as asthma and arthritis (collaborative work with Eli Lilly)
    • Order of magnitude speedups in several molecular analysis algorithms
    • New software for coupling vector processors with massively parallel systems, providing a new avenue for molecular dynamics calculations
    • New algorithms for registration and rendering of three-dimensional images from two-dimensional clinical images and micrographs
    • Assistance in the design of new drugs to inhibit HIV replication
    • Further elucidation of the structure of the herpes virus using high performance computing to obtain three-dimensional images from electron micrographs
    • Determined the three-dimensional structure of several proteins by using a genetic algorithm approach to automate the spectral assignment task in NMR spectroscopy
    • Developed parallel molecular dynamics simulation software to determine the effects of hydration on protein structure
    • Developed benchmark codes for evaluating high performance systems
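To give a flavor of the similarity searching mentioned in the BLAST item above, the toy sketch below simply counts shared k-letter substrings between a query and a few invented sequences; it is not BLAST's actual seed-and-extend algorithm or scoring scheme.
    # Toy sequence-similarity search: rank database sequences by the number
    # of k-letter substrings they share with the query. Not BLAST.
    def kmers(seq, k=3):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    database = {
        "seq_a": "ACGTGACCTGAA",
        "seq_b": "TTTTTTTTTTTT",
        "seq_c": "ACGTTACCAGTT",
    }
    query = "ACGTGACC"

    scores = {name: len(kmers(query) & kmers(seq))
              for name, seq in database.items()}
    for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(name, score)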
FY 1995 NIH plans include:
    • Expand research in algorithms for real-time acquisition and processing of multi-model medical images for use in telemedicine and virtual environments for surgery
    • Develop new algorithms and software for receptor-based drug design; study basic mechanisms of receptor structure and function; and develop software to model specific populations at risk for disease
    • R&D in medical image processing for basic research, clinical research, and health care delivery
    • Support development of telemammography, addressing high resolution, wide field-of-view displays and high performance, low cost networks for image transmission
NSF has funded Grand Challenge Applications Groups in:
    • Biomolecular design
    • Imaging in biological research
    • Advanced computational approaches to biomolecular modeling and structure determination
    • Understanding human joint mechanisms through advanced computational models
DOE's Grand Challenge projects are exploring computational structural biology -- understanding the components of genomes and developing a parallel programming environment for structural biology.

Product Design and Process Optimization

Beginning in FY 1995, NIST will develop new high performance computing tools for applying computational chemistry and physics to the design, simulation, and optimization of efficient and environmentally sound products and manufacturing processes. Initial focus will be on the chemical process industry, microelectronics industry, and biotechnology manufacturing. Existing software for molecular modeling and visualization will be adapted for distributed, scalable computer architectures. New computational methods for first-principles modeling and analysis of molecular and electronic structure, properties, interactions and dynamics will be developed. Macromolecular structure, molecular recognition, nanoscale materials formation, reacting flows, the automated analysis of complex chemical systems, and electronic and magnetic properties of thin films will receive particular emphasis.

Diamond-tool turning and grinding machines are the epitome of precision manufacturing tools, capable of machining high precision optical finishes without additional polishing (such as on this copper mirror for a laser system). NIST researchers and their industrial partners are developing methods for monitoring and controlling diamond turning machines to improve the precision and production of highly efficient optics such as mirrors for laser welders.

NSF has funded a Grand Challenge Applications Group to address fundamental problems in high capacity atomic-level simulations for the design of materials.
A DOE Grand Challenge project involves first-principles simulation of materials properties using a hierarchy of increasingly more accurate techniques that exploit the power of massively parallel computing systems.

Space Science

NASA's Earth and Space Science projects support research in:
    • Large scale structure and galaxy formation
    • Cosmology and accretion astrophysics
    • Convective turbulence and mixing in astrophysics
    • Solar activity and heliospheric dynamics
NSF has funded Grand Challenge Applications Groups to address fundamental problems in:
    • Black hole binaries: coalescence and gravitational radiation
    • The formation of galaxies and large-scale structure
    • Radio synthesis imaging

5. Technologies for the NII

The HPCC Program is helping to develop much of the technology underlying the NII in order to address National Challenge problems of significant social and economic impact. This technology includes advanced information services, software development environments, and user interfaces:

5.1. Information Infrastructure Services

These services provide the underlying building blocks upon which the National Challenges can be constructed. They provide layers of increasing intelligence and sophistication on top of the "communications bitways." These include:
    • Universal network services. These are extensions to existing Internet technology that enable more widespread use by a much larger user population. They include techniques for improved ease-of-use, "plug and play" network interoperation, remote maintenance, exploitation of new "last mile" technologies such as cable TV and wireless, management of hybrid/asymmetric network bandwidth, and guaranteed quality of service for continuous media streams such as video.
    • Integration and translation services. These support the migration of existing data files, databases, libraries, and software to new, better-integrated models of computing such as object-oriented systems. They provide mechanisms to support continued access to older "legacy" forms of data as the models evolve. Included are services for data format translation and interchange as well as tools to translate the access portions of existing software. Techniques include "wrappers" that surround existing elements with new interfaces, integration frameworks that define application-specific common interfaces and data formats, and "mediators" that extend generic translation capabilities with domain knowledge-based computations, permitting abstraction and fusion of data. (A small wrapper sketch follows this list.)
    • System software services. These include operating system services to support complex, distributed, time-sensitive, and bandwidth-sensitive applications such as the National Challenges. They support the distribution of processing across processing nodes within the network; the partitioning of the application logic among heterogeneous nodes based on their specialized capabilities or considerations of asymmetric or limited interconnection bandwidth; guaranteed real-time response to applications for continuous media streams; and storage, retrieval, and I/O capabilities suitable for delivering large volumes of data to very large numbers of users. Techniques include persistent storage, programming language support, and file systems.
    • Data and knowledge management services. These include extensions to existing database management technology for combining knowledge and expertise with data. These include methods for tracking the ways in which information has been transformed. Techniques include distributed databases, mechanisms for search, discovery, dissemination, and interchange, aggregating base data and programmed methods into "objects," and support for persistent object stores incorporating data, rules, multimedia, and computation.
    • Information security services. These help in protecting the security of information, enhancing privacy and confidentiality, protecting intellectual property rights, and authenticating information sources. Techniques include privacy-enhanced mail, methods of encryption and key-escrow, and digital signatures. Also included are techniques for protecting the infrastructure (such as authorization mechanisms and firewalls) against intrusion attacks (such as by worms, viruses, and trojan horses).
    • Reliable computing and communications services. These include services for non-stop, highly reliable computer and communications systems operating 24 hours a day, 7 days a week. The techniques include mechanisms for fast system restart such as process shadowing, reliable distributed transaction commit protocols, and event and data redo logging to keep data consistent and up-to-date in the face of system failures.
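The "wrapper" idea in the integration and translation services item above can be illustrated in a few lines of Python; the comma-separated legacy record layout here is invented for illustration, not a format from the report.
    # Wrapper sketch: expose a legacy positional record through a new interface.
    # The legacy layout (title, then year, comma separated) is a made-up example.
    class LegacyRecordWrapper:
        def __init__(self, raw_line: str):
            self._fields = raw_line.strip().split(",")

        @property
        def title(self) -> str:        # new-style accessor hides the layout
            return self._fields[0].strip()

        @property
        def year(self) -> int:
            return int(self._fields[1])

    record = LegacyRecordWrapper("Gigabit Testbed Report , 1993")
    print(record.title, record.year)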

5.2. Systems Development and Support Environments

These will provide the network-based software development tools and environments needed to build the advanced user interfaces and the information-intensive National Challenges themselves. These include:
    • Rapid system prototyping. These consist of software tools and methods that enable the incremental integration and cost effective evolution of software systems. Technologies include tools and languages that facilitate end-user specification, architecture design and analysis, and component reuse and prototyping; testing and on-line configuration management tools; and tools to support the integration and interoperation of heterogeneous software systems.
    • Distributed simulation and synthetic environments. These software development environments support the creation of synthetic worlds that can integrate real as well as virtual objects that have both visual and computational aspects. Methods include geometric models and data structures; tools for scene creation, description, and animation; integration of geometric and computational models of behavior into a combined system description; and distributed simulation algorithms.
    • Problem solving and system design environments. These provide automated tools for software and system design that are flexible and can be tailored to individual needs. Examples include efficient algorithms for searching huge planning spaces; more powerful and expressive representations of goals, plans, operators, and constraints; and efficient scheduling and resource allocation methods. The effects of uncertainty and the interactions of goals will be addressed.
    • Software libraries and composition support. Common architectures and interfaces will increase the likelihood of software reuse across different computational models, programming languages, and quality assurance. By developing underlying methodology, data structure, data distribution concepts, operating systems interfaces, synchronization features, and language extensions, scalable library frameworks can be constructed.
    • Collaboration and group software. These tools support group cooperative work environments that span time and space. They will make it possible to join conferences in progress and automatically be brought up to date by agents with memory. Methods include network-based video conferencing support, shared writing surfaces and "live boards," document exchange, electronic multimedia design notebooks, capturing design history and rationale, agents or intermediaries to multimedia repositories, and version and configuration management.

5.3. Intelligent Interfaces

Many of the National Challenge applications require complex interfacing between humans and intelligent control systems and sensors, and among multiple control systems and sensors. These applications must understand their environment and react to it. High level user interfaces are needed to satisfy the many different requirements and preferences of vast numbers of citizens who will interact with the NII.
    • Human-computer interface. A broad range of integrated technologies will allow humans and computers to interact effectively, efficiently, and naturally. Technologies will be developed for speech recognition and generation; graphical user interfaces will allow rapid browsing of large quantities of data; user-sensitive interfaces will customize and present information for particular levels of understanding; people will use touch, facial expressions, and gestures to interact with machines; and these technologies will adapt to different human senses and abilities. These new integrated, real-time communication modalities will be demonstrated in multimedia, multi-sensory environments.
    • Heterogeneous database interfaces. Methods to integrate and access heterogeneously structured databases composed of multi-formatted data will be developed. In a future NII's information dissemination environment, a user could issue a query that is broadcast to appropriate databases and would receive a timely response translated into the context of the query. Examples of multi-formatted data include ASCII text, data that are univariate (such as a one-dimensional time series) or multivariate (such as multi-dimensional measurement data), and time series of digital images (such as a video). (A fan-out query sketch follows this list.)
    • Image processing and computer vision. Images, graphics, and other visual information will become more useful means of human-computer communication. Research will address the theory, models, algorithms, architectures, and experimental systems for low level image processing through high level computer vision. Advances in pattern recognition will allow automated extraction of information from large databases such as digital image databases. Emphasis is placed on easily accessing and using visual information in real-world problems in an environment that is integrated and scalable.
    • User-centered design tools/systems. New models and methods that lead to interactive tools and software systems for user-centered activities such as design will be developed. Ubiquitous, easy-to-use, and highly effective interactive tools are emphasized. A new research area is user-friendly tools that combine data-driven and knowledge-based capabilities.
    • Virtual reality and telepresence. Tools and methods for creating synthetic (virtual) environments to allow real-time, interactive human participation in the computing/communication loop will be addressed. Participation can be through sensors, effectors, and other computational resources. In support of National Challenge application areas, efforts will focus on creating shared virtual environments that can be accessed and manipulated by many users at a distance.
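As a toy version of the broadcast-query scenario in the heterogeneous database item above, the sketch below fans a single query out to two differently structured sources and normalizes the answers; both "databases" are made-up in-memory examples, not real systems.
    # Fan-out query sketch: send one query to differently structured sources
    # and translate the answers into one common form.
    source_by_id = {"emily": {"category": "hurricane", "year": 1993}}
    source_rows = [("emily", "Atlantic"), ("andrew", "Atlantic")]

    def query_all(key):
        results = []
        if key in source_by_id:                       # dictionary-shaped source
            results.append({"source": "records", **source_by_id[key]})
        for name, basin in source_rows:               # table-shaped source
            if name == key:
                results.append({"source": "tracks", "basin": basin})
        return results

    print(query_all("emily"))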


6. National Challenges

These are large-scale, distributed applications of high social and economic impact that contain an extensive information-processing component and that can benefit greatly by building an underlying information infrastructure. National Challenges to be addressed by the HPCC Program in FY 1994 and FY 1995 include:

Digital Libraries

A digital library is the foundation of a knowledge center without walls, open 24 hours a day, and accessible over a network. The HPCC Program supports basic and strategic digital libraries research and the development and demonstration of associated technologies. These technologies are used in all of the other National Challenge applications.
Beginning in FY 1994, the Program will support the following R&D, much of it funded by a joint NSF/ARPA/NASA "Research in Digital Libraries" initiative:
    • Technologies for automatically capturing data of all forms (text, images, speech, sound, etc.), generating descriptive information about such data (including translation into other languages), and categorizing and organizing electronic information in a variety of formats.
    • Advanced algorithms and intelligent interactive Internet-based tools for creating and managing distributed multimedia databases and for browsing, navigating, searching, filtering, retrieving, combining, integrating, displaying, visualizing, and analyzing very large amounts of information that are inherently in different formats. These databases are frequently stored on different media that are distributed among heterogeneous systems across the Nation and around the world. Research will also address standards that enable interoperability. (A toy index-and-search sketch follows this list.)
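Digital-library searching of the kind described above rests on an index. The sketch below builds a minimal inverted index over a few invented text documents and answers one-word queries; it is of course far simpler than the distributed, multimedia, multi-format systems the Program envisions.
    # Minimal inverted index: map each word to the documents containing it.
    from collections import defaultdict

    documents = {
        "doc1": "gigabit networks for remote sensing data",
        "doc2": "parallel computing for climate models",
        "doc3": "remote instruments on gigabit testbeds",
    }

    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index[word].add(doc_id)

    print(sorted(index["gigabit"]))   # -> ['doc1', 'doc3']
    print(sorted(index["climate"]))   # -> ['doc2']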
NOAA will provide Internet-based access to and distribution of remote sensing imagery and other satellite products from its geostationary and polar orbiting operational environmental remote sensing satellites. NASA will provide access to other remote sensing images and data over the Internet and the gigabit testbeds. This includes making observational data from satellites available to state and local governments, the agriculture and transportation industries, and to libraries and educational institutions.
NSA will develop a prototype environment of the future in which a user, an application developer, and a data administrator each sees an integrated information space in terms directly meaningful and accessible to them, rather than as a collection of relatively unintelligible, difficult-to-access databases.

Crisis and Emergency Management

Large-scale, time-critical, resource-limited problems such as managing natural and man-made disasters are another vital National Challenge. Effective management involves the use of command, control, communications, and intelligence information systems to support decision makers in anticipating threats, formulating plans, and executing these plans through coordinated response. Many other National Challenge projects provide information and information management tools for use in crisis and emergency management. HPCC efforts include ARPA's research projects on ubiquitous data communications infrastructure in the face of disasters, including the timely development and transmission of plans to operational units, exploitation of technical and human sources of information, and input to command. NOAA plans to make available environmental warnings and forecasts and other relevant information to support emergency management through the Internet. In FY 1994, NSF initiated a program to support research leading to development of information infrastructure technologies that can be integrated into the civil infrastructure, including transportation, water quality, safety of waste removal, and access to energy sources.

Education and Lifelong Learning

HPCC support for this National Challenge involves making HPCC technologies a resource for the Nation's education, training, and learning systems for people of all ages and abilities nationwide. The NII approaches this challenge from several directions:
    • Distance learning will bring specialized resources in a timely manner to geographically widespread students.
    • Teacher training and coordination enhance the resources available to teachers at all educational levels.
    • Students throughout the country will have access to information and resources previously only available at research and library centers.
    • Lifelong learning provides educational opportunities to populations regardless of age or location.
    • Digital libraries will make information available throughout the network -- for professionals as well as students at all levels.
The HPCC Program is providing network access and conducting pilot projects that demonstrate HPCC technologies for improving learning and training and that can be scaled to nationwide coverage. A program in networking infrastructure for education was begun by NSF in FY 1994. On-going K-12 programs in science, engineering, and biomedical and health applications are conducted by almost all HPCC agencies.

Student Essay: "One Giant Leap . . . Networks: Where Have You Been All My Life?"

Midway through my junior year at New Hanover High School in Wilmington, North Carolina, an experience began for me that has re-routed the path of my entire education and learning adventures. Three other students and I won a national scientific computing contest called SuperQuest, sponsored by Cornell University and the National Science Foundation. SuperQuest was no ordinary science fair -- rather, it was a "take a giant leap outside of your mind" contest. It was a fortuitous opportunity for us high schoolers. Our sudden introduction to the world of high performance computers included IBM RISC clusters, an ES/9000 vector processor, and a KSR parallel processor.
One of the greatest benefits of participating in the SuperQuest program has been my exposure to computer networks and telecommunications. I delved into the online world for the first time before we had won, when the team and I were constructing a science project the SuperQuest judges might deem a winner. We spent just a semester reaching consensus on the topic! Each of us had ideas -- from investigating mag-lev trains, to orb spiderwebs, to the pitching of a baseball. Which of these were within our capability? And which should we perhaps leave to the Princeton researchers? We turned to the Internet to gather the advice and experience of more knowledgeable folks. Our first action was to post questions on as many science bulletin boards and online services as possible. The two we frequented were the SuperQuest homebase at the Cornell Theory Center, and the High Performance Computing network (HPC Wire) in Colorado. Both were ideal sources to check the feasibility of modeling an orb spider web or the flight of a baseball on a supercomputer. Sometimes we received answers in a day -- and sometimes within an hour. We were wowed not only by the speed of the knowledge transfer, but also by the altruism of the network community. Through our communications, we were able to quickly focus on the orb spider web as our project. It was within our scope.
In the archaic tradition of our 12 years of schooling, we trekked to the local library to research spider webs. This proved to be time-consuming. One of our teachers introduced us to WAIS, an indexed database search tool on the Internet. Ecstasy! We sat with him as he logged onto the net from his desktop PC, called up WAIS, and ordered it to do both a worldwide search of indexes with the words "spider" or "web" in them, and a cross-reference follow-up. Within 20 minutes, the net had spewed 20 pages of information sources. We were filled with the sudden comprehension of the scientific process -- the gathering of data and the elimination of possibilities.
As part of the SuperQuest prize, we attended a three-week summer seminar at the Cornell Theory Center. There we were introduced to LAN networks and file-sharing, mainframes and X-terminals, and the minute details of the Internet. Thus, we could suddenly access the libraries of each of the seven schools at Cornell, as well as data from specialized projects such as the synchrotron and recent biomedical research. Again, the process of gathering data was accomplished primarily through using computer communications.
Upon our return home, we were able to use our Internet connection (which was another SuperQuest reward) to continue our research and also access the supercomputer facilities back at Cornell. After two months of a maniacal pace, and with our interim report mailed, we decided to take a week-long break. However, I found myself drawn to the computer -- there were so many interesting things to explore -- and I was so curious. I logged into "Sunsite" at the University of North Carolina at Chapel Hill and began browsing archives, just for the fun of it. I pulled up a picture of an ancient Vatican manuscript, with its crinkled brown pages and mottled writing. Disappointed that I could not make out any of the words, I was about to close the window when I had an idea! Using the XV software (also obtained from the net), I zoomed into a word and smoothed it into something legible. I had stumbled into a combination of resources that allowed me to examine the document.
And I didn't stop there. My school system's current agenda involves the reforming of the traditional daily schedule; we may initiate "block" schedules in the coming year. As student body president, I represent the school on the Superintendent's Advisory Board -- our mission: to investigate the pros and cons of block scheduling. We decided that, most important, we needed to hear from students in other schools that had implemented the program. The nearest one was five hours away. A group of teachers had been bused in the previous month to visit for an hour or two of questions -- the only time available after travel time. No student on the advisory board wanted to repeat this exercise; in fact, we were almost reduced to drawing straws for a victim when I had a sudden flash. A bit of background: The state of North Carolina has, within the past five years, set up a fiberoptics network that enables a teacher at one school to simultaneously teach her home class as well as classes located across the county and even the state. My mother was teaching an oceanography class over the system, and I knew that at least one of those schools was block-scheduled. At the next advisory board meeting, I moved that we postpone our road trip and use the fiberoptics network for an afterschool teleconference. Within four weeks, we had teleconferenced with two block-scheduled schools. We asked questions that mattered to students: what happens when you miss school, how athletics are affected, and how advanced placement classes are organized. Moreover, the two block-scheduled schools were able to discuss their own variations of block scheduling.
Computer technology has not simply affected my education; it has changed my personality. I have travelled from having a daydream about why spider webs are so strong, to performing concrete scientific research, to making an obscure document understandable (one I would not have known existed but for the Internet), to initiating a solution to a real-life organizational problem. I'm feeling pretty good.
Frank "Gib" Gibson, 1994 national high school winner of the NSF/NASA/ED telecommunications essay with teacher, Abigail Saxon.

Electronic Commerce (EC) 

This National Challenge integrates communications, data management, and security services to allow different organizations to automatically exchange business information. Communications services transfer the information from the originator to the recipient. Data management services define the interchange format of the information. Security services authenticate the source; verify the integrity of the information received; prevent disclosure by unauthorized users; and verify that the information was received by the intended recipient. Electronic commerce applies and integrates these services to support business and commercial applications such as electronic bidding, ordering and payments, and exchange of digital product specifications and design data.
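As a minimal sketch of two of these security services, authenticating the source and verifying the integrity of a received message, the example below attaches a keyed hash (HMAC) to a business message and checks it on receipt. It is illustrative only: the shared key and purchase-order text are invented, and a deployed electronic commerce system would also need encryption, key management, and receipt verification.

    # Minimal sketch of source authentication and integrity checking with an HMAC.
    # The shared key and purchase-order text are hypothetical.
    import hmac
    import hashlib

    SHARED_KEY = b"demo-key-known-to-both-parties"

    def sign(message: bytes) -> bytes:
        """Compute a keyed hash that the recipient can verify."""
        return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

    def verify(message: bytes, tag: bytes) -> bool:
        """Recompute the tag and compare in constant time."""
        return hmac.compare_digest(sign(message), tag)

    order = b"PO-1234: 500 units, net 30 days"
    tag = sign(order)

    print(verify(order, tag))                  # True: message is authentic and intact
    print(verify(order + b" (amended)", tag))  # False: any alteration is detected

Because only the two trading partners hold the key, a valid tag shows both that the message came from the claimed source and that it was not altered in transit.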
ARPA will develop a common underlying infrastructure for authentication, authorization, accounting and banking services, usage metering, and fee-for-access within networks and distributed systems. ARPA is also developing mechanisms for active commerce that will seek out qualified bidders on behalf of customers, based on extensive knowledge of the bidders' capabilities and the customers' needs.
Beginning in FY 1995, NIST will collaborate with industry to develop and apply technologies that enable electronic commerce in general, with initial emphasis on the manufacturing of electronic and mechanical components and subsystems. The agency will conduct R&D in security services and establish facilities to support interoperability testing.

Energy Management 

Improved management of energy demand and supply reduces oil consumption, capital investment in power plants, and foreign trade deficits. Beginning in FY 1994, DOE and the power utilities will document and assess the tools and technologies needed to implement the National Challenge of energy demand and supply management. They will also document the expected economic benefits and identify policy or regulatory changes needed so that the utilities can participate in the deployment of the NII.

Environmental Monitoring and Waste Minimization

Improved methods and information will dramatically increase the competitiveness of U.S. companies in the world's $100 billion per year environmental monitoring and waste management industries. Beginning in FY 1995, digital libraries of the large volume and wide range of environmental and waste information will be assembled and tools will be developed to make these libraries useful. These include:
    • DOE site survey and regulatory information and tools to use these libraries (the DOE's existing weapons complex will serve as a natural testbed)
    • A coordinated effort by NASA, NOAA, and EPA will jointly provide public access to a wide variety of Earth science databases, including satellite images, Earth science measurements, and in situ and satellite data from NOAA's environmental data centers.
    • A joint NASA/NOAA/EPA effort will develop training and education to satisfy the public's environmental information needs.
    • A National Environmental Information Index, as directed by the National Performance Review
    • A NOAA Earth Watch pilot information system will provide integrated access to environmental information together with relevant economic and statistical data for policy makers and others.

Health Care

Advanced HPCC communication and information technologies promise to improve the quality, effectiveness, and efficiency of today's health care system. This National Challenge will complement the biomedical Grand Challenges. It will include testbed networks and collaborative applications to link remote and urban patients and providers to the information they need; database technologies to collect and share patient health records in secure, privacy-assured environments; advanced biomedical devices and sensors; and system architectures to build and maintain the complex health information infrastructure.
Using a Broad Agency Announcement, NIH began funding the following activities in FY 1993, and is expanding efforts beginning in FY 1994:
    • Testbed networks to link hospitals, clinics, doctors' offices, medical schools, medical libraries, and universities to enable health care providers and researchers to share medical data and imagery
    • Software and visualization technology to display the human anatomy and to analyze images from X-rays, CAT scans, PET scans, and other diagnostic tools
    • Virtual reality technology to simulate operations and other medical procedures
    • Collaborative technology to allow several health care providers in remote locations to provide real-time treatment to patients
    • Database technology to provide health care providers with access to relevant medical information and literature
    • Database technology to store, access, and transmit patients' medical records while protecting the accuracy and privacy of those records
A three-year contract was awarded in FY 1993 to a consortium of nine West Virginia institutions to use advanced networking technologies to deliver health services in both rural and urban areas. Other proposals received in response to the announcement will be funded in FY 1994.
Beginning in FY 1995, NIH will provide cancer prevention and treatment information to the public via several multimedia systems including Mosaic (described in Section 10).
An ARPA Biomedical Program will develop advanced biomedical devices and tools to build next-generation health care information systems.
NSF will expand a program in health care delivery systems begun in FY 1994. Included are activities in cost-effective telemedicine systems for distance medicine applications. 

Manufacturing Processes and Products

Advancing manufacturing through the use of HPCC technologies in design, processing, and production of manufactured products is another National Challenge. A key element is the development of the infrastructure technology and standards necessary to make the processes and product information accessible to both enterprises and customers. This Challenge relies on network security and on the Digital Libraries and Electronic Commerce National Challenges, and is closely related to the Energy Management Challenge. On-going multi-year projects include:
    • ARPA's MADE (Manufacturing Automation and Design Engineering) in America Program to develop engineering tools and information integration capabilities to support future engineering and manufacturing processes. These include the Center for Advanced Technology, a joint industry-government facility that offers world-class manufacturing technology training; centers and networks for agile manufacturing; and a virtual library for engineering.
    • Beginning in FY 1995, expanded NASA/industry/academia efforts in the multidisciplinary design of aeronautical airframes and aircraft engines will develop an integrated product/process development capability. This is intended to shorten the product development cycle, maximize capability, lower the life cycle cost, and obtain new insight into and understanding of the advanced manufacturing process, all critical to more competitive airframe and propulsion industries.
    • NIST's System Integration for Manufacturing Applications Program emphasizes technologies that support flexible and rapid access to information for manufacturing applications. It includes a standards-based data exchange effort for computer integrated manufacturing that focuses on improving data exchange among design, planning, and production activities. Results will be made available to U.S. industry through workshops, training materials, electronic data repositories, and pre-commercial prototype systems.
    • At NIST, an Advanced Manufacturing System and Network Testbed supports R&D in high performance manufacturing systems and testing in a manufacturing environment. The testbed will be extended to include manufacturing applications in mechanical, electronics, construction, and chemical industries; electronic commerce applications for mechanical and electronic products; and an integrated Standard Reference Data system. The testbed will serve as a demonstration site for use by industrial technology suppliers and users, and to assist industry in the development and implementation of voluntary standards.
    • NSF will support research leading to the development of information infrastructure technologies that support manufacturing design. Initial focus will be on virtual and rapid prototyping.

Public Access to Government Information

This National Challenge will vastly improve public access to information generated by Federal, state, and local governments through the application of HPCC technology. On-going efforts include connecting agency depository libraries and other sources of government information to the Internet to enable public access; and demonstrating, testing, and evaluating technologies to increase such access and effective use of the information.
For example, the White House, the U.S. Congress (House of Representatives and Senate), and the HPCC Program make information available on the Internet. Other examples are ARPA's research in delivering computer science reports and literature to researchers and the public; NSF's Science and Technology Information Service and its support for the Securities and Exchange Commission's EDGAR system; NSF's pilot project to demonstrate the use of the Internet and Mosaic in disseminating NSF information about program activities and accomplishments; DOE efforts to make energy statistics available to the public; NASA/NOAA/EPA efforts to make environmental data available to researchers and the general public; and Public Health Service (PHS) sponsorship of a variety of electronic information services, including NIH and NLM Internet servers that also provide connectivity to other health information services such as the bulletin board at the Food and Drug Administration and the PHS AIDS bulletin board.

The concept of the NII's "Information Superhighway" has captured the imagination of the Nation.


7. Basic Research

Basic research projects focus on developing new methods to address fundamental limitations in HPCC technology as the Program proceeds and on ensuring that the foundations for the next generation of HPCC technology are in place. Much of the advanced basic research is carried out in the academic community in cooperation with industry.
    • ARPA funds basic research coupled to its other efforts in high performance computing. Basic research areas include design science, human/computer interaction, human language technology, persistent object bases, and software foundations. In addition, ARPA funds a high performance computing graduate fellowship program to focus attention on the critical need for people trained in this field.
    • NSF funds long-term investigator-initiated research and continues to encourage interdisciplinary research, collaboration between computer scientists and applications scientists in solving Grand and National Challenges, and cross-sector partnerships. It funded 350 investigator-initiated projects and awarded 77 postdoctoral research and training grants (one grantee was a 1993 Supercomputing Forefront Award Winner). NSF-supported researchers contribute to fundamental networking, memory, interconnectivity, storage, and compiler technology, and the agency plans to support research in virtual reality. NSF, ARPA, and NASA jointly funded High Performance Fortran development.
    • In FY 1995 NSF plans to support 100 new investigator-initiated projects in new areas, support 30 new postdoctoral fellows, and initiate three new programs -- graduate fellowship (20 awards initially), Industry/High Performance Computing Centers visitor program (16), and Software Infrastructure Capitalization (2).
    • NSF also supports the procurement of scalable parallel systems for basic research and in FY 1995 will support infrastructure for National Challenges.
    • DOE funds basic research at agency laboratories and at 30 universities including over 40 postdoctoral associates and over 60 graduate students. Subjects include numerical analysis and scientific computing, modeling and analysis of physical systems, dynamical systems theory and chaos, geometric and symbolic computation, and optimization theory and mathematical programming.
    • NASA sustains research efforts in architectures, algorithms, networked distributed computing, numerical analysis, and applications-specific algorithms. It has research institutes and centers of excellence at the Illinois Computer Laboratory for Aerospace Systems and Software, and at its Ames, Langley, and Goddard centers. It is expanding support for postdoctoral research, new professors, and its Graduate Student Researchers Program at NASA centers.
    • NIH has formal degree-granting fellowships in medical informatics and cross-disciplinary training of established investigators. The agency sponsored hands-on training of biomedical researchers in using computational biology tools at NSF Supercomputer Centers.
    • EPA supports cross training of computational and environmental scientists.


8. Training and Education

A natural consequence of basic research in HPCC technology is education and career development. Each generation of HPCC researchers trains and educates the next generation. The workforce becomes increasingly technologically sophisticated, providing myriad economic and social benefits to the Nation.
    • NSF Supercomputer Centers conduct some 200 training events for 3,000 trainees each year. The agency provided about 20 computer systems to colleges and universities, including minority institutions, and funded five SuperQuest teams each with four students and two teachers investigating their own scientific projects; over the years 64 such projects have been funded. In FY 1995 the agency will provide 13 more entry-level systems to universities and expand SuperQuest.
    • NSF provides on-going support for training and education programs for teachers and students at all levels. In a joint program supporting VLSI (very large scale integration) fabrication through MOSIS (Metal Oxide Semiconductor Implementation Service), NSF and ARPA fund 1,200 student projects per year; over six years, 30,000 students from 170 universities in 48 states and the District of Columbia have been funded. The agency supports pilot projects demonstrating the application of advanced technologies in education. Included are the "Common Knowledge" project involving the Pittsburgh school district, the "National School Network" involving teachers and students at all pre-college levels, and the "Learning through Collaborative Visualization" testbed.

Father and daughter using the KidPix graphics program on a Macintosh during "SDSC Kids' Day" at the San Diego Supercomputer Center.

    • DOE educational programs include Adventures in Supercomputing for in-service teacher training (50 teachers and 25 school districts in five states in FY 1995), Superkids for high school student summer enrichment, and undergraduate and graduate electronic textbook projects.
    • NASA supports undergraduate and graduate level training. New NASA mechanisms will support students and new faculty interested in applying HPCC technology, especially by directly funding students with advisers interested in NASA applications.
    • NASA will continue its K-12 Educational Outreach Program at seven NASA field centers nationwide to develop curriculum products and teaching aids in computational science and networking for education. Teachers located near the field centers will participate in each project. Finished products will be available to all teachers and students via the Internet.
    • ARPA has a unique program supporting historically black colleges and universities; strongly encourages affiliation with local communities; and is accelerating technology transition through community-wide efforts such as the National Consortium for High Performance Computing.
    • NIH will support development of interactive learning tools for improved science education. The agency has a pilot project to educate high school science teachers and students about computational techniques for biomedical science.
    • NOAA provides its agency staff with education and training in the use of scalable systems.
    • EPA is developing a prototype training program that includes a series of pilot projects with Federal, state, and industrial environmental groups. The agency funds fellowships and graduate student support in environmental modeling, supports undergraduate students, and supports environmental learning experiences for junior and senior high school students. It will develop environmental education tools for grades 9 through 12.
Through these efforts, more students will choose careers in HPCC technology and its application, resulting in more widespread and advanced knowledge of these fields. The long term benefits of these education programs will include increased awareness and education of the general public about high performance computing and communications and the application of HPCC technology to improve the quality of life in the U.S.