Information Systems Activities
The Information Systems Division provides support and services to the Director of the U.S. Geological Survey, to major programs in each division of the USGS, to the Department of the Interior, and to other government agencies on information technology and automated data processing (ADP). The Division operates the Survey's mainframe computer located in Reston, Va., and Technology Information Centers and minicomputers in four ADP Service Centers nationwide. The Division assists users in acquiring ADP and telecommunications hardware, software, and services; coordinates and improves information systems through system analysis and design; provides user education and assistance; and conducts research into better ways to use computer technology to solve mission-related problems. The Division coordinates, manages, and operates voice, data, and radio communications for the USGS, including GEONET, the data communications network of the Department of the Interior, from which gateways provide access to other national networks and supercomputer systems.
(Facing page) One of the functions of the Information Systems Division is to assess computer technology for the other USGS divisions. These perspective images were produced for geographic information system training and were later used to assess vendor software before purchase. The images are two views of elevation data overlain by contoured gravity data for the Nabesna, Alaska, area. They were produced on a personal computer and plotted on an in-house inkjet plotter.

(Above) This Thematic Mapper Landsat image is the result of an assessment project performed by the Geologic and Information Systems Divisions. It shows the arid region around Delta, Utah, where mineralized alteration zones may be detected through spectral signature analysis.
Supercomputing Activities in the USGS
By Rick MacDonald
A number of significant events during fiscal year 1988 marked the U.S. Geological Survey's first steps toward the use of supercomputers for processing earth-science data. For some time, this special class of computational resource had been used primarily by the physical science and engineering disciplines. The benefits offered by high-speed computing to these fields are equally applicable to the solid earth sciences. Several conferences and workshops recently concluded that much could be gained by applying supercomputers to common problems in the earth sciences, such as surface- and ground-water modeling problems, numerical simulation of geologic processes (especially those involving solid three-dimensional visualizations of tectonic movement, volcanic phenomena, and seismological processes), and the integration of raster and vector data sets. As a participant in this ongoing dialog in the earth-science community, the USGS initiated two activities, a supercomputer technology study and a survey of supercomputer centers, to determine how the USGS could apply this technology to its best advantage. Some background about supercomputers will provide a basis for understanding the importance of those activities.
Supercomputers elude an exact definition because their capabilities change so rapidly with the continuous infusion of technological innovation. One characteristic seems definite: supercomputers can solve in minutes numerically intensive computing problems, or answer other kinds of questions, that would take nonsupercomputers days or weeks to solve. By some measures, the larger and more expensive supercomputers can solve some problems hundreds of times faster than popular general-purpose computers. Table 1 is an extraction from a series of tables published by the Argonne National Laboratory that attempts to measure the performance of computers processing a standard suite of numerically intensive problems. Some of the computers used in the USGS are included in this table. Caution should be used when interpreting such results, as they merely portray a crude reference to relative performance. Benchmarks of actual applications are much more desirable for refined comparisons.

Table 1. Comparison of selected computers' computational speeds in solving a system of linear equations using the FORTRAN programming language

How do supercomputers achieve their performance gains over general-purpose computers? Three basic techniques are usually employed. First, the use of high-performance components can accelerate the switching times, or processing speeds, associated with numeric computations. As an example, the CRAY-2 supercomputer has a central processing unit clock period of 4.2 nanoseconds, or 4.2 billionths of a second. High-performance components are extremely difficult to manufacture and are very expensive. This technique has obvious limitations, because signals cannot travel faster than the speed of light. Technology advances provided by the exploitation of superconductivity may allow further performance gains with switching circuits.
Second, special circuits called vector processors enable supercomputers to process with one instruction an entire sequence of data that would need several hundreds or thousands of instructions in a general-purpose computer. Data of this type are frequently used in mathematical models of earth-science processes. One possible application in the USGS would be the three-dimensional electromagnetic modeling of arbitrary earth structures.

Finally, the use of multiple processors operating concurrently can speed up the solution of problems that can be partitioned into relatively independent steps. This process is called parallel processing, as opposed to sequential processing.

Supercomputers will often use all three techniques to achieve dramatic performance gains. Some special-purpose computers rely exclusively on the use of many processors that can be linked in parallel and hence are called massively parallel processors.

Supercomputer Technology Study

The USGS formed a technology study team to examine several issues and to provide an information base from which the bureau could make future decisions with respect to acquisition of supercomputing resources. Among the most important issues were the operating systems used by supercomputers, the ability of supercomputers to interface with communications networks and other equipment, the visualization or graphical capabilities offered by supercomputers, the software available for supercomputers and its ease of use, and the cost of acquiring supercomputing resources.

The study team discovered that much progress has been made in improving the usability of supercomputers, especially in interfacing with the operating systems and in automating the process of writing programs that take advantage of the vector processors for increased speed. A distinct trend toward use of Unix, a popular operating system used by many scientists, was noted. The results of the study group were reported to the Survey's Information Systems Council and provided to a second group that was surveying supercomputer centers.

Survey of Supercomputer Centers

This group assessed the resources available at the 12 supercomputing facilities throughout the United States. The issues considered by this group were similar to those considered by the technology study team but were oriented to the capabilities of facilities that might be used by USGS scientists, rather than to the inherent characteristics of supercomputers being offered in the marketplace.
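The parallel-processing technique described earlier, partitioning a problem into relatively independent steps and then combining the partial results, can be sketched with a small present-day Python example. This is a hypothetical illustration only (the function names are ours, and nothing here is USGS code): a numerical integration is split into subintervals that a parallel machine could assign to separate processors.

```python
# Sketch of parallel decomposition: a numerical integration is split into
# independent subintervals whose partial results are combined at the end.
# On a parallel machine each chunk could run on its own processor; here the
# chunks are evaluated one after another to keep the sketch simple.

def trapezoid(f, a, b, n):
    """Trapezoidal-rule estimate of the integral of f over [a, b] in n steps."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

def chunked_integral(f, a, b, n_chunks, steps_per_chunk):
    """Partition [a, b] into independent chunks: the parallelizable step."""
    width = (b - a) / n_chunks
    partials = [
        trapezoid(f, a + k * width, a + (k + 1) * width, steps_per_chunk)
        for k in range(n_chunks)  # each iteration is independent of the others
    ]
    return sum(partials)  # combining step: cheap relative to the chunks

# The chunked result matches a single whole-interval computation on the
# same grid, to within floating-point rounding.
whole = trapezoid(lambda x: x * x, 0.0, 1.0, 4000)
chunked = chunked_integral(lambda x: x * x, 0.0, 1.0, 4, 1000)
print(abs(whole - chunked))  # agrees to within rounding error
```

With Python's standard multiprocessing module, the list of chunk computations could be handed to a process pool so that the chunks actually run concurrently; the combining step, a single sum, remains sequential and inexpensive, which is what makes this kind of problem a good fit for parallel processing.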
Using the information gained from the technology study and the survey of supercomputer centers, the USGS developed agreements with the Department of Energy's Los Alamos National Laboratory (LANL), the San Diego Supercomputer Center (SDSC), where CRAY supercomputers are used, and the Florida State University Computer Center, where Control Data Cyber 205 and ETA-10 supercomputers are used. These centers will provide supercomputer support to USGS scientists and will also collaborate with USGS scientists on joint projects in computational geoscience. A special fund to encourage use of the LANL supercomputers was established by the Director and is being administered by the Information Systems Division. The Information Systems Division also arranged high-speed communications links to all three locations via GEONET and the National Science Foundation network and arranged for initial training on supercomputers at SDSC, where 23 earth and computer scientists received introductory training.
The Information Systems Division has established a data base of information about supercomputing resources that includes technical characteristics of current hardware and software products, supercomputing centers, and networking resources. Training and education in the techniques of supercomputer use are under way so that USGS scientists will be able to employ supercomputers effectively as a tool in their research. The team studying supercomputing technology is continuing its efforts to keep the USGS up to date in understanding current and future supercomputer capabilities.
Wide-Area Data Communications Networks Used Within the USGS
By Jim Hott and Pete Cadenas
During fiscal year 1988, the USGS used several data communications networks. These networks included GEONET, managed and operated by the USGS to provide nationwide data communications for the Department of the Interior (DOI); SPAN, the National Aeronautics and Space Administration's (NASA) Space Physics Analysis Network, which allows information exchange with the NASA scientific community; NSFNET, the National Science Foundation Network, which enables USGS scientists to access supercomputers; BITNET, which connects hundreds of universities and research centers; and the Federal Telecommunications System (FTS), which provides dial-access communications.
GEONET connects users to 150 computer systems and enables USGS scientists to share data easily and quickly. The network provides a path for terminal users and remote job-entry stations to reach USGS computers, including the newly formed Administrative Service Center in Reston, Va.

During 1988, GEONET was reconfigured from a relatively small shared-backbone, large dedicated-star network (December 1987 map) to a more extensive shared-backbone, smaller star network (December 1988 map). This change in GEONET was motivated by the increasing need to share facilities and to lower overall USGS data communications expenditures. In December 1987, GEONET cost components represented about 40 percent shared facilities and 60 percent dedicated facilities; by December 1988, GEONET cost components were expected to change to 65 percent shared and 35 percent dedicated. Shared facilities (equipment, software, and circuits) are those used by more than one USGS division or DOI bureau. With the reconfiguration, the overall cost of end-to-end GEONET will drop, while network performance will remain at prior levels. Although the number of network node locations increased 50 percent in 1988, the overall end-to-end circuit mileage actually decreased. Additionally, during 1988, Alaska service was added to GEONET as a shared service for DOI bureaus.
(Maps) GEONET configurations as completed in December 1987 and December 1988. Legend items include spurs, GEONET non-switching user dedicated nodes, user host sites, and 4.8/9.6-kilobit-per-second user terminal circuits.