Welcome to the Caltrans LSIT/LS exam preparation course, one aid in your preparation for California licensure examinations. A word of caution: don't use this course as your only preparation. Devise and follow a regular schedule of study which begins months before the test. Work many problems in each area, not just those in this course's workbook, but problems from other sources as well. This course is funded by Caltrans, but you and I owe a profound thanks to others: the course's instructors from the academic community, the private sector, other public agencies, and from Caltrans as well. We wish you well in your study toward becoming a member of California's professional land surveying community. Hello, I'm Don D'Onofrio with the National Geodetic Survey. I am presently serving as the National Geodetic Survey coordinator to the California Department of Transportation. This is one of a series of presentations aimed at providing information to assist you in preparing for the professional land surveyor's examination. For the next hour, I will be speaking about issues related to the Global Positioning System, or GPS. Because of the complexity of the technology and the rapid evolutionary changes, I will touch briefly on a variety of issues associated with the technology. My intent will be to raise your level of awareness about the use of GPS. I urge you to accept the technology for the capabilities it brings to the surveying profession. I would also like to make two points at the beginning. First, GPS is not yet the answer to all surveying needs. And second, it is possible to misapply the technology. With this in mind, I would like to begin by giving you a brief outline of what I plan to discuss in this presentation. I will begin with a brief history of satellite geodesy, or satellite surveying. Then I will discuss how we use GPS to extend control and survey coordinates into survey project areas.
I will discuss some of the methodologies of satellite positioning, such as static positioning, kinematic positioning, and pseudo-kinematic positioning. I will then discuss issues associated with project planning. The next element will be the use of GPS for establishing vertical control. I will then discuss the recently completed statewide high precision GPS network established cooperatively by Caltrans and NGS. I will conclude with a discussion of the latest methodology developments in GPS positioning. There is one issue I want to stress here, and will probably do so throughout this presentation: GPS is still in its evolutionary phase. New methods are continually being investigated. The primary driving force behind the evolution is how the technology can be used to provide greater accuracy, or how it can be used to provide a given level of accuracy in a shorter period of time. I think you will find that all of the advancements in the use of GPS meet one or both of these criteria. Before moving into the presentation, I would like to discuss the term geodesy and its relationship to the types of surveying many of you may be associated with in your careers as surveyors. Many survey projects are relatively limited in scope. In performing these surveys, it is often practical to consider the area as a plane surface. As projects become larger in scope and larger in area, one can no longer use this assumption. Since the earth is not flat, one must make allowances for curvature. Geodetic surveying takes into account the size and shape of the earth. As you get involved with GPS surveys, you will need to understand this concept. Now, moving into a brief history of satellite geodesy, it may be difficult to understand how a seemingly new technology can burst onto the scene and achieve the accuracies attributed to the Global Positioning System.
In fact, GPS is the next step in a series of satellite positioning technologies started about 30 years ago. I will spend a very few minutes on this brief history. It can be broken down into three basic phases: the early years, the 1960s; the Doppler years, the 1970s; and the GPS years, the 1980s and beyond. In the early years, accuracies were probably no better than about five meters. There were four systems in use at that time. Two of the systems were electronic: the early Doppler system run by the Navy, and the SECOR system run by the Army, SECOR standing for SEquential COllation of Range. And there were two systems which were photogrammetric: the BC-4 system run by the Coast and Geodetic Survey, the precursor to the National Geodetic Survey, and the PC-1000 system used by the US Air Force. In the 1970s, the Transit navigation system and an improved and miniaturized Doppler receiver, the Geoceiver, provided accuracies approaching one half meter. Until the advent of the Global Positioning System, these accuracies were not capable of meeting survey requirements for all but a few applications. The Global Positioning System was a natural evolution in satellite positioning technology. It is, in many respects, similar to the Doppler system. It is the differences among these similarities which enable us to achieve the remarkable accuracies of GPS. Doppler satellites were in a very low polar orbit; GPS satellites are in a very high orbit, about 20,000 kilometers, and deployed in a series of orbital planes, six orbital planes, to be exact. Doppler satellites were equipped with less accurate frequency standards; GPS satellites are equipped with highly accurate cesium and rubidium atomic clocks. Doppler satellites broadcast at relatively low frequencies, 150 and 400 megahertz, while GPS satellites broadcast at significantly higher frequencies, 1,200 and 1,500 megahertz. And there were only six Doppler satellites, compared to the proposed 24 GPS satellites.
Of equal significance was the ability of the GPS receivers to track up to 12 satellites on both frequencies simultaneously. The Doppler Geoceiver was only able to track one satellite at a time. The foregoing is simply to let you know that some of the aspects of GPS technology did not just burst onto the scene, but that they were part of an ongoing evolution of satellite positioning technology. A similar analogy could be made in the evolution of electronic distance measuring equipment, as we moved from microwave to light wave to the two-color laser Geodimeter, for example. It is difficult to envision another evolutionary advance beyond the GPS system. The advances which are being made seem to be more in the development of software, which continues to improve, and which seems to be taking the GPS technology far beyond what was originally intended. Some of these advances will be discussed in this presentation. Some of the advances are in current everyday use, and some of them have been demonstrated but are not yet suitably documented and tested for use by the surveying community in a production mode. Before going further into the discussion, I would like to give you an oversimplified view of how the basic GPS technology works. Each satellite broadcasts a signal which is received by a suitable GPS receiver. If we know the time a signal is broadcast from the satellite very accurately, and we know the time the same signal was received at the receiver, we can determine the distance or range from the satellite. How do we know this range? Well, we know that these signals travel at the speed of light, 186,000 miles per second. Thus, we multiply the speed of light by the fraction of a second it takes the signal to travel from the satellite to the receiver. If we could measure the time to an accuracy of a microsecond, we would still have an inaccuracy of almost 300 meters along this line.
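The range arithmetic just described can be sketched in a few lines of Python. The travel time shown is illustrative, derived only from the nominal 20,000 kilometer orbit height mentioned earlier:

```python
# Range = speed of light x travel time (illustrative sketch, not receiver code).
C = 299_792_458.0  # speed of light in meters per second (about 186,000 miles per second)

def range_from_travel_time(seconds):
    """Distance from satellite to receiver, given the signal travel time."""
    return C * seconds

# A satellite roughly 20,000 km away implies a travel time of about 0.067 seconds.
travel_time = 20_000_000.0 / C
print(round(range_from_travel_time(travel_time)))      # 20000000 (meters)

# A timing error of just one microsecond corresponds to almost 300 meters of range.
print(round(range_from_travel_time(1e-6), 1))          # 299.8 (meters)
```

This is the source of the "almost 300 meters" figure above: one microsecond of clock error, multiplied by the speed of light, is roughly 300 meters of range error.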
But if we have the ability to measure signals simultaneously from more than one satellite, we know that we are at the intersection of the ranges from the respective satellites to the receiver. If it were only this simple, we would all be out doing GPS surveys. But it is, in fact, almost as simple when we employ suitably designed GPS receivers, fast computers, and reliable software. There are a number of technical terms used in conjunction with GPS tracking and operations. I will not define all of these terms in this presentation. My purpose here in this limited time is to provide you with some idea of how the technology is used in survey applications. Different methodologies take advantage of some of these more specific issues. These terms are explained in the materials accompanying this video. You cannot have a complete understanding of the technology and how best to apply it to your needs without an understanding of these terms. For purposes of this presentation, we will focus on those issues related to GPS as a surveying technology, rather than GPS as a navigation technology. Navigation generally implies point positioning with a single receiver. Generally, this category of receivers is not used to provide survey accuracies. However, you should understand that some of these GPS receivers have been designed for and are being used by geographic information system, GIS, proponents for a variety of positioning applications which may not require survey accuracies. This is not to imply that GIS should not ultimately be based on accurate survey coordinates. Surveying generally employs GPS in the differential or relative mode. That is, two or more receivers are used. What is the difference between the two positioning methods, point positioning and differential or relative positioning? Point positioning involves determining the position of a single point by tracking a minimum number of satellites, usually with one GPS receiver.
While this method takes full advantage of the GPS technological capabilities, it still has some drawbacks, and these drawbacks are significant. With one receiver, you are usually unable to resolve errors in the satellite clock, the satellite ephemeris or its orbital parameters, and the atmosphere, primarily the ionosphere. We can remove or greatly reduce the effect of these errors through a technique called differential or relative positioning. This is accomplished by deploying one, and preferably two, receivers to stations with known geodetic or state plane coordinates. In California, it would seem advisable to use the stations established as part of the high precision geodetic network for this purpose. However, any pair of stations whose coordinates have been determined to a suitable level of accuracy, depending upon project requirements, could be used. The remaining receivers would be placed over survey points for which coordinates are to be determined. If the survey area is not overly large, say, for example, 60 kilometers on a side, we can eliminate most of the errors associated with point positioning. We eliminate the effects of distortion of the signal as it travels through the ionosphere, since the signals travel through the same ionosphere en route to all deployed receivers. Similarly, the errors in the satellite's ephemeris, that is, the position of the satellite in space, and in the satellite's clock can likewise be eliminated, since these errors are the same for all receivers. In effect, since we know the absolute position of our known static station, we can determine the errors and eliminate them from the receiver deployed at the unknown station or stations. In a relative sense, these errors could be eliminated even if we occupied only stations for which there were no known coordinates, but such a survey would not be related to any network or datum.
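The error cancellation described above can be illustrated with a toy calculation. All of the numbers here are hypothetical; the point is only that an error common to both receivers drops out when their measured ranges are differenced:

```python
# Toy illustration of differential positioning: an error common to both
# receivers (satellite clock, ephemeris, ionosphere) cancels when the two
# measured ranges are differenced. All values are hypothetical.
true_range_base = 20_000_000.0    # meters, base receiver to satellite
true_range_rover = 20_000_150.0   # meters, rover receiver to satellite

common_error = 45.0               # meters of common-mode error, same at both receivers

measured_base = true_range_base + common_error
measured_rover = true_range_rover + common_error

# Differencing between receivers removes the common-mode error entirely.
single_difference = measured_rover - measured_base
print(single_difference)          # 150.0, the true range difference
```

Because the base receiver sits on a point of known coordinates, this clean relative measurement can be anchored to the datum, which is exactly what the traverse analogy that follows describes.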
But by initiating a survey over two known points, such as those of the HPGN, which has California coordinate system coordinates, we obtain equally accurate coordinates for the new points. An analogy could be made to a ground-based traverse. In the first case, we could perform a traverse based on arbitrary project coordinates. It would be a precise traverse, but you could not relate it to, say, the California coordinate system. In the second case, if we began the traverse on a point of known CCS coordinates and ended on another point of known CCS coordinates, we would know the relationship of all of the traverse points to some common datum and could provide accurate CCS coordinates for all of the stations in the traverse. In effect, differential positioning is the key to virtually all survey applications of the GPS technology. Notice the emphasis on survey. We have not touched on one other error associated with GPS tracking operations. This is the receiver clock error, which is different for each receiver. Each receiver is equipped with an internal clock and the ability to compute positions based on the data it receives from the satellites. In simple terms, the receiver attempts to compute a position based on a series of range measurements from the satellites. This is done by multiplying the time the signal takes to travel from the satellite to the receiver by the speed of light. Receiver clock errors can cause the individual range measurements to diverge from a point position. The receiver computer applies a correction to allow for a best-fit solution. This correction is the individual receiver clock error. There is one other current source of error. It is potentially the worst source and is derived from a process called selective availability, or SA. SA usually refers to the process of deliberately falsifying the time the signal is broadcast by the satellites. Another term for this is clock dither.
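As a rough illustration of the receiver clock error just described, the sketch below adds the same clock bias, scaled by the speed of light, to every range and then recovers it as the common offset in the residuals. In practice the bias is solved as a fourth unknown alongside the three position coordinates; all values here are hypothetical:

```python
# Hypothetical illustration of the receiver clock error: the same bias,
# scaled by the speed of light, contaminates every measured range.
C = 299_792_458.0  # speed of light, meters per second

geometric_ranges = [20_100_000.0, 21_350_000.0, 22_700_000.0, 20_800_000.0]
clock_bias_s = 2e-6  # two microseconds of receiver clock error (hypothetical)

# Each pseudorange = true geometric range + c * (receiver clock bias).
pseudoranges = [r + C * clock_bias_s for r in geometric_ranges]

# If the position (and hence the geometric ranges) were known, the best-fit
# clock correction would simply be the common offset of the residuals.
offset_m = sum(p - r for p, r in zip(pseudoranges, geometric_ranges)) / len(geometric_ranges)
print(round(offset_m / C * 1e6, 3))   # 2.0 (recovered bias, in microseconds)
```

This is why a minimum of four satellites is required for point positioning: three unknowns of position plus the one receiver clock unknown.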
With SA employed, the accuracy of a position may be no better than 100 meters, versus 30 meters without SA, in the real-time positioning mode. This problem is more severe in the navigation mode, but it can be effectively eliminated in the survey differential operations mode, since SA is intended to interfere with real-time operations. Most GPS surveys do not require position determination in real time. And the use of the differential positioning technique also helps to eliminate the errors introduced by SA. These errors are the same for all receivers, in the same manner that satellite ephemeris and clock errors and ionospheric errors are the same for all receivers, and can thus be removed. There is another source of concern with GPS positioning. It is anti-spoofing. Anti-spoofing will further degrade the satellite signals so as to make them potentially unusable. Some receiver manufacturers believe they will have a way around this problem in the near future. But the GPS satellite constellation is controlled by the U.S. Department of Defense, and they are the ultimate authority on whether anti-spoofing will be employed. There is ongoing discussion among the GPS user community in an attempt to deter the Department of Defense from implementing anti-spoofing. In any event, it is not scheduled to be implemented until the full 24-satellite constellation is in place, about early 1994. At the present time, there are three basic tracking methodologies. These are static, kinematic, and pseudo-kinematic or pseudo-static. These are in more or less general use; the software has been well tested and accuracy results verified. There are two more methodologies currently under development, as of the spring of 1992. These are referred to as rapid static, or fast ambiguity resolution, and GPS-controlled photogrammetry. These latter two will also be discussed, if only briefly, toward the end of the presentation.
They have been demonstrated, but there are probably more tests to be run before they are placed in a routine production mode. There are several receiver manufacturers, and some of these are investigating these newer uses of the technology. Some will get there sooner than others. It is for this reason that I feel we are a year or so away from being able to have all receiver software packages provide the necessary reliability for these newer methodologies. Before going further, I would like to touch on the accuracy potential of GPS. When tests were run with GPS, it became apparent that the technology was capable of achieving accuracies greater than those of existing terrestrial survey methodologies, that is, triangulation, trilateration, and traverse. I assume that you are familiar with the terminology used to describe survey accuracies, that is, first, second, and third order. But GPS has demonstrated its capability to significantly exceed these accuracies. For this reason, three new categories of accuracies were developed. In increasing accuracy, these are order B, the next level above first order, order A, and order AA. As you can see, the order B accuracy level is nominally one part per million of the line length between adjacent stations. The letter designation was used to ensure that the continued usage of the terminology of first, second, and third order was preserved. Statewide high precision networks have been designed and surveyed to order B accuracy. Order A level accuracies, at one part per 10 million, are used to control these high order statewide networks. There may ultimately be about 200 such stations in the United States. This will be the primary reference frame for the country. The order AA stations are used almost exclusively for geodynamic concerns, those dealing with issues related to crustal motions or plate tectonics on an international scale. One thing you should notice is that the GPS satellite system has some base error associated with it.
At the order AA level, it is only three millimeters. At the order B level, this increases to eight millimeters. These accuracies are defined and explained in the Federal Geodetic Control Committee publication, Geometric Geodetic Accuracy Standards and Specifications for Using GPS Relative Positioning Techniques, version 5.0, May 11, 1988, reprinted with corrections August 1, 1989. The publication is intended only as a guideline because of the rapidly changing nature of GPS surveying. I will be discussing the California high precision geodetic network later in the presentation. This survey was performed using relative positioning techniques and was done to FGCC order B accuracy standards and specifications. It was further related to the widely spaced national order A network. Now that we have some feel for the different levels of accuracy, we will move to GPS survey procedures. Remember that all of our discussion about GPS surveying refers to the differential positioning technique. Irrespective of the methodology being employed, for example, static or kinematic, there are issues which are common to the methodologies. These include satellite elevation cutoff, satellite configuration, the minimum number of satellites, the tracking epoch, and station site selection, to name the more important ones. Current procedures call for tracking of all satellites until they fall below the limit of 15 degrees above the horizon. As satellites fall below this limit, the signals they transmit have to travel through more of the ionosphere and the troposphere, the near earth atmosphere. There is the potential for introducing more error here, or unduly increasing the amount of time on data reduction, if we track below this elevation. Thus, most survey applications place the 15 degree limit on tracking. Another issue related to GPS positioning is the tracking epoch, or how frequently we record satellite information. Static applications usually track data on a 15 second interval.
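The letter-order accuracy levels quoted above combine a fixed base error with a part that grows in proportion to line length. A minimal sketch, using the order B figures quoted in the text (nominally 1 part per million with an 8 millimeter base error); the line lengths are illustrative:

```python
# Sketch of a letter-order accuracy budget: a fixed base error plus a
# proportional (parts-per-million) part. The order B figures (1 ppm, 8 mm
# base error) follow the text; the line lengths are illustrative.
def allowable_error_mm(base_mm, ppm, line_km):
    # 1 ppm of a 1 km line is exactly 1 mm, so the proportional part is ppm * line_km.
    return base_mm + ppm * line_km

print(allowable_error_mm(8.0, 1.0, 10.0))   # 18.0 mm over a 10 km line
print(allowable_error_mm(8.0, 1.0, 60.0))   # 68.0 mm over a 60 km line
```

Note how, for short lines, the fixed base error dominates the budget, while over long lines the proportional part takes over.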
Kinematic applications might require a one second interval. Most receivers allow you to set this time interval, or epoch. There are several other related issues. First, and most obvious, is the need to select a prospective survey station, either existing or new, which has no obstructions above 15 degrees. In some areas, this may be relatively simple, but in other areas, some judicious site selection may be necessary. The second issue is the number of satellites to be tracked. For static positioning, a minimum of four satellites is required. For most of the other methods, five and maybe six satellites will be required. Another issue is related to the function called geometric dilution of precision, or GDOP. There are several dilution of precision factors, but we will only deal with GDOP. This is basically a measure of the level of accuracy we can expect from tracking a series of GPS satellites over a given period of time. The better the satellites are distributed in the sky, the higher the achievable accuracy. You might consider this issue as similar to a survey intersection problem. If you were attempting to position a new station from a series of four known stations spaced 90 degrees around the horizon, you would expect to have a more accurate position for the new station. If, on the other hand, you attempted to position the unknown station from four stations clustered in a band covering about 45 degrees in azimuth, you would expect a somewhat weaker solution. If you were to assign numeric values to the level of accuracy for these two examples, the lower the number, the greater the expected accuracy, you would have an analogous situation with the GDOP determination. Our first example here shows a case of poor GDOP. The satellites are clustered in a very narrow area. This will significantly bias the resulting position.
In this example, we see that three of the satellites are distributed evenly around the horizon, with the fourth satellite almost directly overhead. This provides for a very accurate position solution. But we must remember that the satellites are not stationary in space. As the satellites change position with respect to the GPS receiver antenna position, the GDOP value changes constantly. During the course of observations, it is possible that the GDOP value may change significantly. This usually occurs only for very brief periods of time during the selected observing session. GDOP is a measure of the geometric strength of the satellite configuration, reduced to a numeric value. For all precise surveys, a GDOP figure of 6.0 or less is usual. And you can probably see how having more than four satellites in view increases the potential for a better GDOP determination. Most, if not all, GPS receiver manufacturers provide software which allows you to determine the GDOP automatically. The GDOP factor will probably play a less important role when the full constellation of satellites is in orbit. This is because the design of the full 24-satellite constellation is such that there will always be four satellites in view, and in such a configuration that the GDOP will normally be suitable for virtually all survey applications. The full constellation will probably not be in place until early 1994. What do we do in the interim? We simply pay much closer attention to the predicted satellite orbital information. This information is also available through equipment manufacturer software. There are two basic steps. The first is to review a 24 hour table of predictions for the existing constellation. Here we see a simplified view of the satellite availability for a 24 hour period on some given day. From about 0800 to 1400 California daylight time, there are four satellites available.
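For the curious, a GDOP figure can be computed from the satellite geometry itself. The sketch below assumes the standard formulation, GDOP as the square root of the trace of (AᵀA)⁻¹, where each row of A is a receiver-to-satellite unit vector plus a clock term. The azimuth and elevation values are hypothetical, and in practice the receiver manufacturer's software does this calculation for you:

```python
import math

def gdop(unit_vectors):
    """GDOP = sqrt(trace((A'A)^-1)), where each row of A is the unit vector
    from receiver to satellite plus a 1 for the receiver clock term."""
    A = [[ex, ey, ez, 1.0] for ex, ey, ez in unit_vectors]
    n = 4
    # Normal matrix N = A'A.
    N = [[sum(row[i] * row[j] for row in A) for j in range(n)] for i in range(n)]
    # Invert N by Gauss-Jordan elimination with partial pivoting.
    M = [N[i][:] + [1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    inverse = [row[n:] for row in M]
    return math.sqrt(sum(inverse[i][i] for i in range(n)))

def unit(az_deg, el_deg):
    """Unit line-of-sight vector (east, north, up) from azimuth and elevation."""
    az, el = math.radians(az_deg), math.radians(el_deg)
    return (math.cos(el) * math.sin(az), math.cos(el) * math.cos(az), math.sin(el))

# Well distributed: three satellites 120 degrees apart near the cutoff, one overhead.
spread = [unit(0, 20), unit(120, 20), unit(240, 20), unit(0, 90)]
# Clustered: all four within about a 45 degree band of azimuth.
clustered = [unit(0, 30), unit(15, 40), unit(30, 35), unit(45, 45)]

print(round(gdop(spread), 2))      # a small value, comfortably under 6.0
print(round(gdop(clustered), 2))   # much larger: weaker geometry
```

The well-distributed configuration yields a small GDOP, while the clustered one yields a much larger value, mirroring the intersection-problem analogy above: the lower the number, the stronger the geometry.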
Note that satellite six drops out of the field of view, or, more precisely, it drops below the 15 degree elevation limit we have imposed. But at about the same time that satellite six drops below this limit, satellite 21 rises above this limit. Thus, we could track continuously until about 1400, when both satellites 11 and 16 fall below this limit and we are reduced to fewer than four satellites. The second step is to relate the GDOP values against this table of predictions. We would then determine the time periods when it would be suitable to perform tracking operations, that is, those times when a minimum of four satellites, or five for kinematic, are available and when the GDOP values are below the desired maximum, say 6.0. For purposes of this presentation, we will assume that the four satellite period met this GDOP criterion. Like many survey operations, there are often ways around a given problem. The same case can be made for GPS surveys. But sometimes the solution to one problem gives rise to another. Let us suppose we need to use GPS to survey an area which is not free of obstructions above our normal 15 degree cutoff. A careful review of the satellite orbital information might reveal that the obstructions do not block those parts of the satellite orbits in which we are interested. Or maybe we can delay the beginning of GPS tracking until the satellite rises above the obstruction. These are suitable alternatives and may enable you to resolve an apparently unworkable situation. But you should ensure that these obstructions do not introduce something called multi-path. Multi-path occurs when the receiver obtains a signal directly from the satellite and the same signal bounces off some nearby obstruction. The receiver is thus receiving the same signal at two different times, or via two different paths. Multi-path can be caused by nearby mountains, fences, signs, buildings, and other similar objects above the level of the antenna.
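The screening just described, the 15 degree cutoff plus local obstructions, can be sketched as a simple visibility test. The obstruction geometry and the azimuth/elevation values below are hypothetical:

```python
# Simple visibility screening: a satellite is usable only if it is above the
# elevation cutoff and not behind a local obstruction. Values are hypothetical.
CUTOFF_DEG = 15.0

# Local obstructions as (azimuth_start, azimuth_end, top_elevation) in degrees.
obstructions = [(40.0, 90.0, 35.0)]   # e.g., a building to the northeast

def usable(az_deg, el_deg):
    """True if a satellite at this azimuth and elevation is worth tracking."""
    if el_deg < CUTOFF_DEG:
        return False                  # below the 15 degree elevation limit
    for az_lo, az_hi, top in obstructions:
        if az_lo <= az_deg <= az_hi and el_deg <= top:
            return False              # blocked (or at least a multi-path risk)
    return True

print(usable(200.0, 25.0))  # True: clear sky above the cutoff
print(usable(60.0, 25.0))   # False: behind the hypothetical building
print(usable(60.0, 50.0))   # True: the satellite has risen above the obstruction
```

The third case is the delayed-start strategy mentioned above: the same satellite becomes usable once its orbit carries it above the obstruction.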
Careful site selection can help to eliminate this problem. Some multi-path problems can be resolved in the data reduction, but it is usually a time-consuming process. With this basic understanding of what we need to consider when performing GPS tracking measurements, we can now move on to some discussion of the different methodologies. Static positioning was the first method to be used for satellite positioning. One could make the case that the other methods are variations on this theme. As the name implies, static positioning requires that a receiver be set up on a control point or a new survey point for some set period of time, say, in excess of one hour. As a general rule of thumb, the higher the level of accuracy required, the longer you remain on station. In the example you will see on the screen in just a moment, I have chosen a seven-station network which we will be observing with three receivers. And I am planning to obtain first-order accuracies as defined in the previously mentioned FGCC guidelines. In static positioning, four satellites should be available for the duration of the observing window. Our limited time for this presentation will not allow me to fully develop this next point. I would refer you to the FGCC guidelines. Given this seven-station project and three receivers, several levels of accuracy are possible. The results depend on the number of observation sessions and the number of redundant station occupations. These additional sessions increase the number of times individual stations are observed. This is one of the critical issues when designing a project to meet some prescribed level of accuracy. Referring to the layout of the seven stations, I have indicated the potential accuracies with the number of sessions. For example, in the first scenario of only three sessions, only two stations are occupied more than once. There is only limited redundancy. According to the FGCC guidelines, this survey would not meet any accuracy level. 
You may recall something I mentioned in the introduction. It is possible to misapply the GPS technology. Here is just such a case. Reviewing the next scenario, we see that as the number of sessions and station occupations increase, the accuracy increases also. In scenario two, we see that the number of sessions has been increased by one. This scenario will provide for second order class two level of accuracy. This is still short of our stated accuracy requirement of first order. By adding one more session in the third scenario, we have met the prescribed accuracy level. In this scenario, only one station was occupied less than two times. Six stations were occupied at least twice, and two of these stations were occupied three times. This is a compilation of the foregoing discussion. I have included yet another session to indicate how we could further upgrade the network to an even higher order of accuracy. We will come back to this in our discussion of project planning. I hope you can begin to see that there is a certain level of planning which goes into ensuring that the FGCC guidelines are met. Let us now assume that we are performing a survey where higher orders of accuracy are not required, say order B, and where the surveying is being accomplished over a limited area. This provides us with other methodologies we may wish to consider. These are kinematic and pseudo kinematic GPS surveys. As the name kinematic implies, one or more of our receivers will be placed on a moving platform, say an automobile or a train. The methodology is even being used for GPS control of photogrammetry. Pseudokinematic is a variation on both the static and kinematic methods. Kinematic surveys involve the use of a base receiver at a known location and at least one rover receiver. Since kinematic surveys involve moving receivers, we need to track or sample the data more frequently than in static surveys. 
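The session bookkeeping behind these scenarios can be sketched as a simple tally of station occupations. With three receivers, each session occupies three stations, and redundancy is judged by how many times each station is occupied across all sessions. The station names and session plan below are hypothetical, not the seven-station network from the figure; the point is only how the redundancy count works:

```python
# Hypothetical session plan: three receivers, so three stations per session.
# Redundancy is the count of occupations per station across all sessions.
from collections import Counter

sessions = [
    ("A", "B", "C"),
    ("C", "D", "E"),
    ("E", "F", "G"),
    ("A", "D", "G"),   # fourth session added for redundancy
    ("B", "E", "F"),   # fifth session
]

occupations = Counter(st for session in sessions for st in session)
occupied_twice_or_more = sorted(st for st, n in occupations.items() if n >= 2)

print(dict(sorted(occupations.items())))  # every station occupied at least twice
print(occupied_twice_or_more)
```

In this hypothetical plan, every station ends up occupied at least twice, the kind of redundancy the FGCC-style scenarios above are driving at; dropping the last two sessions would leave most stations with only a single occupation.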
Many static surveys are performed in 15 second increments or epochs, and most kinematic surveys are performed in one second increments. A kinematic survey can be started in any of three ways. The rover antenna can be placed over a second existing control point for which coordinates are known to the same level of accuracy as the base station. The rover antenna can be placed over an unknown point and tracking can be done in the static mode and the receiver then switches to the kinematic mode, or the rover antenna can be swapped with the base station antenna. In the case where there are two known stations, the rover receiver would sit at the second station for a matter of a few minutes and begin the kinematic survey. In the case where there is not a second control point available, there are two basic alternatives. The first is to determine coordinates for the second point by tracking in the static mode for a period of an hour or longer. After this period, we would switch the receiver to the kinematic mode. A second alternative is to perform an antenna swap. In this process, two receivers and antennas are set up in close proximity to each other. After a brief observing time, the two antennas are moved to the other receiver station or swapped. In this method, both receivers are effectively tracking over the known station, but at different times. After another brief tracking time, the antennas are returned or swapped to their original station and the kinematic survey is begun. It is imperative that during the antenna swap, the receivers do not lose lock on any of the satellites. Any of these alternatives resolves the phase ambiguity between the two receivers. After any of the foregoing have been accomplished, the roving receiver is transported along some predefined route. It stops at selected stations en route for a minute or so before returning to the original known station. During the entire survey, we must maintain lock to all satellites. 
As we will see with pseudo kinematic, a second independent or redundant set of measurements at the same stations will be required on a second day, or later the same day so long as sufficient time has elapsed between the two sets of observations. A second roving receiver could also be employed. It would be used in exactly the same manner as the first, but it might start its survey about an hour later. This is another planning issue. If we have only two receivers, the choice is obvious. If we have three receivers, we might apply all three to the project and finish it in a shorter timeframe. But with four receivers, we have an option to perform two separate surveys simultaneously. The purpose of the antenna swap, the static determination of the second point, or using a known second point is to resolve any phase ambiguities between the two receivers. In simple terms, this means that we want to be certain that both receivers begin the kinematic portion of the survey with the correct number of integer cycles for their respective receivers. Without going through this initialization process, we could be collecting potentially erroneous data, making the survey worthless. While the kinematic methodology offers significant improvement in productivity in many respects, since it requires continuous lock on a minimum of four, and more appropriately five, satellites, it has some drawbacks. It cannot be used in some urban environments where the moving platform, the vehicle, goes between buildings or through underpasses. Even an instantaneous loss of satellite signal, referred to as a loss of lock, could render the survey useless. And kinematic surveys are generally used over relatively small areas. Let us suppose that we are not concerned with the actual track of the vehicle, but we wish only to position selected points in a given project area. We might then consider using a slightly different methodology, pseudo kinematic surveying.
The methodology of pseudo kinematic surveying is similar in some respects to both static and kinematic. It is similar to static in that we spend more time, say 10 minutes, at the individual stations which make up our network, up from the one minute or so recommended for kinematic. And we process the data similarly to the static method. Unlike kinematic, though, we do not need to maintain lock on the minimum number of satellites throughout the survey. We are not interested in what route we traveled to get to our stations. Therefore, this data is not required. We do acquire lock on the satellites after we have arrived at each specific station in our network. Pseudo kinematic surveys also require a base or monitor station receiver, similar to kinematic. In pseudo kinematic surveys, like kinematic surveys, we would visit each station a second time, at least an hour after the first visit. This allows sufficient time for the satellite configuration to change enough to give an independent solution. Or the scenario could be run a second time on the next day. Either of these provides the redundancy we are so particular about in GPS surveys. The Federal Geodetic Control Committee has not yet published guidelines for kinematic and pseudo kinematic surveys. However, these methodologies are in more or less general use and have been accepted. Exercise caution when considering these methods. They are not suitable for large scale projects, at least at this time. This is true in part because the larger the area, the more likely you are to encounter obstructions. In addition, both of these methods recommend the use of the same satellites at the base and mobile receivers. As you move farther away from the base station, the greater is the likelihood that there will be fewer than the minimum number of common satellites. Even though there are obvious advantages to be gained with certain methodologies, there are equally good reasons why they might not work in particular projects. 
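The revisit rule above, at least an hour between the two occupations of each station, lends itself to a simple planning check. A minimal sketch, with hypothetical station names and times:

```python
from datetime import datetime, timedelta

def check_revisits(visits, min_gap_minutes=60):
    """Verify each station's two occupations are separated by at least
    min_gap_minutes, so the satellite configuration has changed enough
    to give an independent solution.

    visits: dict mapping station name -> list of occupation start times.
    Returns the stations whose redundancy requirement is not met.
    """
    flagged = []
    for station, times in visits.items():
        times = sorted(times)
        if len(times) < 2:
            flagged.append(station)  # no redundant occupation at all
            continue
        if times[1] - times[0] < timedelta(minutes=min_gap_minutes):
            flagged.append(station)  # revisited too soon
    return flagged

# Hypothetical schedule: station B was revisited only 40 minutes
# after its first occupation.
schedule = {
    "A": [datetime(1992, 5, 1, 8, 0), datetime(1992, 5, 1, 10, 30)],
    "B": [datetime(1992, 5, 1, 8, 20), datetime(1992, 5, 1, 9, 0)],
}
print(check_revisits(schedule))  # -> ['B']
```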
At the end of the presentation, I will discuss a newer method which offers some relief from the issue of maintaining continuous lock in kinematic surveys. This leads logically into the issue of planning a GPS survey. The first step is to attempt to define the absolute accuracy requirements for a project. This is not always easy, but assuming that a surveying entity is able to determine the level of accuracy, then it is wise to lay the project out on a suitable map. Then determine the station spacing requirements. With the project area and station spacing, you will then have an idea of the total number of stations required for the project. You will also have a feel for what methodology or combination of methodologies you should employ on the project. You should include at least two existing stations in the project whose coordinates are known to at least the level of accuracy required for the project. Again, I would strongly suggest you consider using stations established as part of the Caltrans-sponsored high precision geodetic network as the basis of control for GPS projects. In my earlier example, I discussed how increased accuracy requirements can be met by adding the proper additional observing sessions. But I think you can see that one alternative we did not cover was adding additional receivers. Adding a fourth or fifth receiver would necessitate fewer sessions. But the additional receivers require additional observing personnel, vehicles, and other related equipment. So you might need to look at a few different ways to observe each project. There is probably no one right way to plan a project. Each will be accomplished according to the resources which can be allocated to it. It is important to remember that whatever the number of receivers, there are still basic requirements which must be met in order to achieve the stated level of accuracy. The key here is redundancy. 
Consider a project requiring first order accuracy, that is, 10 millimeters plus one part in 100,000. 10% of the total number of stations must be occupied three times. 25% of existing control stations and 30% of all new stations must be occupied twice. California HPGN stations are the logical choice to be used for the basic control for this project. The HPGN coordinates are an order of magnitude higher, order B, than the first order required for this project. If there are only two existing control stations in the project, it is advisable to occupy both of them on two separate occasions, since your redundancy is otherwise limited. The FGCC guidelines provide a rather quick method of determining the total number of observing sessions for a given project with any specific number of receivers. It takes into account redundancy requirements, how many sessions can be accomplished in a particular day, and safety and production factors. These last two factors are your estimate of the reliability of the equipment and logistic concerns. Up to this point, we have not discussed the length of the observing session. First order GPS observations usually require a minimum of 90 minutes of tracking time. A review of the satellite availability which we used earlier indicates an observing window of six hours, from 0800 to 1400. If we were confident in the ability of our receiver operators, we might want to consider three sessions per day. We might, for example, track three 90 minute sessions, from 0800 to 0930, from 1015 to 1145, and from 1230 to 1400. This would allow 45 minutes for the receiver operator to complete observations at one station, drive to the next station, and begin observations at the next station. We need to determine if this is reasonable. For if sessions are kept to their absolute minimum, how does a 15 minute delay in starting by one receiver operator affect the overall session integrity? 
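The tolerance and redundancy figures just quoted can be turned into a small worked example. This is an illustrative sketch, not an FGCC formula; the function names and the rounding-up convention are the editor's assumptions:

```python
import math

def first_order_tolerance_mm(baseline_m):
    """Allowable error for first order work as quoted above: a 10 mm
    constant part plus one part in 100,000 of the line length."""
    return 10.0 + (baseline_m * 1000.0) / 100_000.0

def redundancy_counts(n_new, n_existing):
    """Occupation counts from the percentages quoted above: 10% of all
    stations occupied three times, 25% of existing control and 30% of
    new stations occupied twice. Fractions are rounded up."""
    total = n_new + n_existing
    return {
        "triple": math.ceil(0.10 * total),
        "double_existing": math.ceil(0.25 * n_existing),
        "double_new": math.ceil(0.30 * n_new),
    }

# A 5 km line: 10 mm constant part plus 50 mm proportional part.
print(first_order_tolerance_mm(5_000))  # -> 60.0
# A hypothetical project: 20 new stations, 2 existing control stations.
print(redundancy_counts(n_new=20, n_existing=2))
```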
If minimizing the tracking times could adversely affect our progress, then maybe we should only schedule two sessions per day. We might want to consider extending each observing session to 120 minutes. We would have sufficient redundancy with the longer tracking time. Some minimal loss of tracking time at a station might not be a problem, and we can also allow for more travel time between stations. And perhaps more importantly, we might preclude some problems which might arise due to hasty operations. On the other hand, if we are considering kinematic or pseudo kinematic operations for this particular project, we immediately note that we do not have the required minimum of five satellites available. With the full constellation available in 1994, we will probably have more options to consider. When you factor in what projects you have to accomplish and how many resources you can allocate to each of them, you have a means to determine how best to accomplish each project. You might find, for example, that leasing an additional receiver is more productive than operating one extra day without that receiver. So even after you have done your basic planning of the number, spacing, and accuracy of the stations in a project, there is still some flexibility in how the project is ultimately scheduled. I want to emphasize again that planning is the key to conducting good GPS surveys. To this point, we have extolled the virtues of the GPS technology because of the revolutionary changes it brings to surveying. The ability to provide highly accurate horizontal coordinates in a much shorter period of time and with a significantly reduced expenditure of resources is remarkable. However, the same level of accuracy cannot yet be achieved in the vertical component. This is a rather complex issue, although it arises from a seemingly simple process. 
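The trade-off just discussed, three minimal sessions versus two longer ones in the same satellite window, can be tried out with a small sketch. The function and its turnaround allowance are illustrative assumptions, not a published planning rule:

```python
def sessions_in_window(window_minutes, session_minutes, turnaround_minutes):
    """How many tracking sessions fit in the daily satellite window?
    Each session needs session_minutes of tracking; between sessions
    the operator needs turnaround_minutes to tear down, drive to the
    next station, and set up again."""
    n = 0
    used = 0
    while used + session_minutes <= window_minutes:
        n += 1
        used += session_minutes + turnaround_minutes
    return n

# The six hour (360 minute) window discussed above:
print(sessions_in_window(360, 90, 45))   # -> 3 (0800-0930, 1015-1145, 1230-1400)
print(sessions_in_window(360, 120, 45))  # -> 2 (the more conservative plan)
```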
Conventional leveling, either spirit leveling or trigonometric leveling, provides elevations referred to the geoid and thus to the physical surface of the earth. GPS measurements are related to a reference ellipsoid, not to the geoid or the physical surface of the earth. So we must find a way to determine GPS-derived orthometric heights. One method of accomplishing this is to have a suitable number of vertical control monuments, or benchmarks, in the vicinity of the project. It might then be possible to determine the geoid height if we have both GPS and leveling data at these benchmarks. We might then assume that we can interpolate the geoid at other GPS points where leveling data is not available. We might only be able to make this assumption in relatively flat terrain. There is currently available what is now the most accurate geoid model for the United States. It is called GEOID90. It was determined by combining all available gravity data and relating it to leveling data. Applying this method and this model to our project should improve our determination of the geoid height. However, it is still not accurate enough to equate horizontal accuracies of, say, one to two centimeters between adjacent stations to that same level of accuracy in the vertical component. This is due to the random error in the vertical component of the GPS data, probably caused by tropospheric and ionospheric refraction. At the present time, elevations can probably be determined no better than three to five centimeters in the best case. We can probably expect to improve this somewhat when the full constellation of satellites is deployed and better three-dimensional coverage is available. We might also expect to see some improvement when tracking the stronger P-code signals, perhaps down to the 10 degree level and maybe even to five degrees above the horizon. 
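The relationship underlying all of this is H = h - N, where h is the GPS-derived ellipsoid height, N is the geoid height (geoid relative to the ellipsoid), and H is the orthometric height. A minimal sketch of the interpolation approach described above, with invented numbers:

```python
def orthometric_height(h_ellipsoid, n_geoid):
    """Orthometric height H above the geoid from the GPS-derived
    ellipsoid height h and the geoid height N: H = h - N."""
    return h_ellipsoid - n_geoid

def interpolated_geoid_height(x, bench1, bench2):
    """Linearly interpolate the geoid height N between two benchmarks
    where both GPS and leveling data exist (N = h - H there). Each
    benchmark is (position_in_metres, N_in_metres). Only defensible
    in relatively flat terrain, as noted above."""
    (x1, n1), (x2, n2) = bench1, bench2
    return n1 + (n2 - n1) * (x - x1) / (x2 - x1)

# Hypothetical numbers; geoid heights are negative over the
# conterminous United States.
n = interpolated_geoid_height(500, (0, -32.10), (1000, -32.20))
print(round(orthometric_height(-12.40, n), 2))  # -> 19.75 metres above the geoid
```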
To complicate this issue, GPS is being used to determine ground subsidence and uplift. GPS provides a precise measure of vertical movement. The key word here is precise. We can determine a height with a sufficient amount of GPS data. We can then return to the same stations a year or more later and note whether these same stations have remained stable or if they have undergone some vertical movement. We can determine this vertical movement in a relative sense very accurately, but we cannot produce absolute elevations to the same accuracy. At present, you should just remember that vertical control elevations cannot be obtained as accurately as horizontal control coordinates can. For example, second-order class 2 leveling over a 16 kilometer line has a maximum allowable error of 6 millimeters times the square root of the distance in kilometers, or 24 millimeters. Thus, GPS over the same distance cannot equal this accuracy even under the best of circumstances. I have made several references to the California High Precision Geodetic Network, the HPGN. I would like to explain just what the HPGN is and why I am recommending its use. The HPGN is a statewide GPS project sponsored and funded by Caltrans in conjunction with the National Geodetic Survey. The project consists of some 244 stations spaced more or less equally throughout California. Station spacing is about 40 miles, or 60 kilometers. About 200 of the stations are on major highway routes and thus readily accessible to the surveying community. The stations were established in 1991, and all GPS observations were completed in August of that year. The project was accomplished to meet GPS order B accuracy. Coordinates for the network are available as of about May 1992. The project was accomplished to provide a single, unified, and highly accurate network of geodetic control throughout California. In addition, the network is part of the National Geodetic Reference System as established and maintained by the National Geodetic Survey. 
The project is not unique to California, but is national in scope and accurately related to high order networks established in other states. Similar networks have been established in the neighboring states of Oregon and Arizona, as well as in about 20 other states. With a completed and adjusted HPGN in place, the surveying community in California will have a network to which all future surveys can be related. It is not necessary that future stations be established to this order B level of accuracy, but if the network becomes the framework to which future projects are related, then we can assume an accurate interrelationship among any of a variety of GPS surveys performed in the state of California. The station spacing for the network was determined so that follow-on or densification projects could be completed to at least first order using no more than single frequency GPS receivers. The network was designed to accommodate GPS surveys. However, azimuth marks were not set, nor were stations set so as to be intervisible, a rather difficult criterion to meet anyway because of the station spacing. Caltrans plans to monitor the network on an annual basis. As stations are in need of minor maintenance, this will be performed by the respective Caltrans district. If a station has been destroyed or is in danger of being destroyed, Caltrans will establish a new station in the same general vicinity as the lost station. Additional observations will be made to the same criteria as when the station was originally established. This will maintain the ongoing accuracy and integrity of the HPGN. The coordinates of the network will be published relative to the North American Datum of 1983, or NAD83. The values will be listed as NAD83 1991.35. This means that there will be no datum change, but that the stations will be referred to the 1991.35 epoch. This corresponds to the midpoint of the five month field campaign to obtain the HPGN GPS coordinates. 
You may wonder how this relates to the NAD83 coordinates which have been in use for the past six or so years. The NAD83 readjustment was published in 1986. Thus, those coordinates are more appropriately NAD83 (1986). As I mentioned earlier, the HPGN is related to about 20 crustal motion or other order A stations. These stations are not exactly stationary with respect to one another because of crustal motions. It was therefore necessary to determine their interrelationship at the epoch of the HPGN project. This is especially necessary in California because California is on two distinct tectonic plates, the North American plate and the Pacific plate. The line of separation is basically the San Andreas fault. Crustal motions of as much as about four centimeters per year have been documented along the fault. If we were to hold NAD83 1986 epoch stations fixed, we would have had motions in excess of 20 centimeters in the intervening six years. This level of error is intolerable in GPS surveys, which are capable of so much more accuracy. I should note that, where necessary, these several-centimeter annual movements may need to be considered in future surveys. To this end, the National Geodetic Survey will be publishing velocity vectors for the coordinates of stations on the Pacific plate and in other areas where appreciable crustal motion is known. When comparing vectors between adjacent HPGN stations at some future time, these velocity vectors should be applied to the NAD83 1991.35 epoch vectors. For many survey applications, perhaps something as small as a few centimeters of movement per year may not be significant. But large scale construction projects and crustal motion surveys for earthquake hazard reduction will need to take these motions into account. At this time, there are two significant developments in GPS positioning methodologies. The first goes by a set of different names, apparently depending on the equipment manufacturer. 
Current names for this methodology are rapid static, fast static, or fast ambiguity resolution. All GPS measurements are founded on the ability to determine the number of full wavelengths between the satellite and the receiver. As the term implies, this rapid or fast method is similar to the static method, except it is accomplished in a much shorter period of time. This is because we are able to rapidly determine the uncertainty, or ambiguity, in the number of full wavelengths received from the satellites, thus the more technical term, fast ambiguity resolution. To put this in perspective, a normal static occupation to provide control at the first order level, nominally one part in 100,000, might require a minimum of 90 minutes of tracking at individual sites. In general, dual frequency receivers of either the C/A code or P code variety can be expected to reduce this to a maximum of about five minutes using fast ambiguity resolution. The significance of this methodology is that it does not require you to maintain lock on any satellites as you move from one station to another. If it is determined that we have to visit stations twice, as in the pseudo kinematic mode, we can still realize some significant savings employing this method. Present tests seem to indicate that this method is suitable for small scale projects with line lengths between stations of maybe no more than two kilometers, although some proponents say this could be extended to about 10 kilometers. If we apply this method to our earlier seven station project, we could be done with the project in perhaps less than half a day. From a productivity standpoint, this should be a remarkable use of the technology. But as I indicated in the opening of the presentation, I believe we are still a year or so away from using this method in a truly operational mode. The other significant use of the technology is using GPS to control photogrammetry. Several tests and production projects of this methodology have been completed. 
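The productivity claim for the seven station project can be checked with crude back-of-the-envelope arithmetic. The travel allowance is a hypothetical figure, and a real fast static project still needs a base or reference receiver, so treat this only as a rough sketch:

```python
import math

def project_hours(n_stations, minutes_per_station, n_receivers, travel_minutes=30):
    """Rough elapsed field time: stations are split among the receivers,
    and each occupation costs its tracking time plus a nominal travel
    allowance (travel_minutes is a hypothetical planning figure)."""
    occupations_each = math.ceil(n_stations / n_receivers)
    return occupations_each * (minutes_per_station + travel_minutes) / 60.0

# Seven stations with two receivers:
print(project_hours(7, 90, 2))           # static, 90 minute occupations -> 8.0 hours
print(round(project_hours(7, 5, 2), 1))  # fast static, about 5 minutes -> 2.3 hours
```

Even with generous allowances, the fast static figure lands well under half a day, which is consistent with the claim above.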
In effect, a GPS receiver and antenna are mounted in an aircraft. The method is similar to that employed in kinematic GPS surveys. In this case, the rover is in the aircraft, which is equipped with a photogrammetric camera. A precise determination is made between the GPS antenna and the focal plane of the camera. This is in fact a mini control survey. The underlying principle is that we are now able to determine exactly where the aircraft is during its photographic mission. We can in turn provide coordinates for the aerial photographs without the need for ground control. This will result in significant savings as the need for ground control and aerial panels is reduced or eliminated. Like many GPS methodologies, this too has its limitations. Continuous lock on the satellites is required at both the base receiver and the aircraft receiver. A seemingly simple prospect, until you consider that the aircraft has to make very shallow bank turns so as to preclude the wings from coming between the satellites and the antenna. And the aircraft can be no farther from the base receiver than perhaps 50 miles. There are other potential uses of the technology in support of surveying operations which are being investigated. I urge you to keep abreast of these developments. Several trade magazines, including GPS World, POB, and Professional Surveyor, provide continuing coverage of developing trends in GPS. Two primary manufacturers of GPS equipment are located here in California. They have been a very valuable and accessible source of information about GPS technology. During the past hour or so, I have attempted to cover a broad range of issues related to the global positioning system technology. Probably the only sure thing which can be said about the use of the technology is that it is continually evolving. You should not lose sight of the fact that the surveying tools, the satellites themselves and the signals they broadcast, are basically not changing. 
Rather, it is the new and revolutionary ways in which these signals are being used that is changing. As I discussed in the beginning, satellite positioning has been around for about 30 years. It was the advent of GPS which made satellite positioning technology, when properly used, accessible to virtually every surveyor. It is incumbent upon you to follow the advances being made and to determine how best the technology can be employed in your particular survey applications. But after all, it is just another survey technology, to be employed where it is most advantageous. Thank you and good luck.