When I started working at DigitalGlobe in 1999, our ability to capture such clear and highly accurate satellite imagery from a camera flying through space blew me away. I was curious about how it all worked, so I began hanging around the spacecraft engineers. They taught me about the systems that work together to keep track of where the satellite is in space and to point it precisely enough to image specific areas of the Earth. I may not be a rocket scientist myself, but I like to drink a beer with rocket scientists, and I have learned a lot over the years. Here’s what you need to know:
There are a few basic components of a satellite that allow us to position it perfectly in order to take images of specific areas on the Earth. The Star Tracker camera sits on the back of the satellite and keeps track of its location in space using a map of the stars. The Global Positioning System unit sits on the satellite’s belly and keeps track of the satellite’s position over the Earth using the GPS network. As imaging operations begin, the Star Tracker orientation and GPS position are used as the starting point to kick off satellite rotations, which are executed by a set of gyroscopes and are controlled by an Inertial Reference Unit. These four systems work in concert to know where the satellite is in space and point and pan the satellite’s camera to collect imagery of a specific place on Earth. The systems also provide metadata that accompany the image so it can accurately be placed on a map of the surface of the Earth using ground system software.
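To make that division of labor concrete, here is a minimal sketch (in Python, with hypothetical names, not DigitalGlobe's actual telemetry format) of the state those four systems contribute and the metadata that travels with each image:

```python
from dataclasses import dataclass

@dataclass
class PointingState:
    """Hypothetical snapshot of the spacecraft's pointing knowledge."""
    star_tracker_quat: tuple   # attitude (orientation) fix from the Star Tracker
    gps_position_ecef: tuple   # position over the Earth from the GPS unit
    gyro_rates_rad_s: tuple    # rotation rates measured by the gyroscopes;
                               # the Inertial Reference Unit integrates these
                               # to carry attitude between Star Tracker fixes

def image_metadata(state: PointingState) -> dict:
    """Package the pointing state that accompanies each image so ground
    system software can place it on a map of the Earth's surface."""
    return {
        "attitude": state.star_tracker_quat,
        "position": state.gps_position_ecef,
        "rates": state.gyro_rates_rad_s,
    }
```

The point of the sketch is simply that geolocating a pixel on the ground requires knowing both where the camera was and which way it was looking at the moment of exposure.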
It seems simple enough until you factor in that the spacecraft is over 400 miles above the Earth, moving over 17,000 miles per hour from north to south, while the location it’s trying to image down below is moving west to east at up to about 1,040 miles per hour. (That rate is highest at the equator and falls toward the poles, because the circle the surface traces around the Earth’s axis shrinks with latitude.) The first time I thought through all of these elements it gave me a headache, but now that I have been in the industry for 16 years it has become second nature. If it hurts your head too and you want to know more check out this excellent resource.
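A quick back-of-the-envelope calculation shows where that latitude dependence comes from. This is a generic sketch, not flight software: the surface speed is just the circumference of the circle traced at a given latitude divided by one sidereal day.

```python
import math

EQUATORIAL_RADIUS_MI = 3963.2   # Earth's equatorial radius, miles
SIDEREAL_DAY_S = 86164.1        # one full rotation of the Earth, seconds

def surface_speed_mph(latitude_deg: float) -> float:
    """Eastward speed of the Earth's surface at a given latitude, in mph.

    The surface at latitude L moves on a circle of radius R*cos(L),
    so the speed falls off toward the poles as cos(latitude).
    """
    circumference_mi = (2 * math.pi * EQUATORIAL_RADIUS_MI
                        * math.cos(math.radians(latitude_deg)))
    return circumference_mi / (SIDEREAL_DAY_S / 3600.0)

# At the equator this works out to roughly 1,040 mph; at Denver's
# latitude (~39.7 degrees N) it drops to roughly 800 mph.
```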
A couple of weeks ago this high angle off nadir image of Colorado began circulating around the office:
High off nadir image of Colorado
Knowing what we now know about how satellites function, the systems had to have been pushed to their limits to make this picture possible. This image is definitely something special. I went back to the spacecraft engineers and asked how it was done. I learned it was a challenging process, and difficult enough to hurt my head all over again.
Why was this so difficult to do? Information and decisions that are normally automated had to be calculated manually to get the satellite and Earth geometry just right:
- Where does the satellite need to be in orbit?
- At what angle does it need to be pointing?
- At what precise time should it start and stop imaging?
- How does it need to be adjusted during imaging to account for the motion of the satellite and the target?
- Is the atmosphere clear?
- How will the sun illuminate the target?
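To give a feel for the geometry these questions involve, here is a rough sketch of one such calculation: the off nadir pointing angle to a target some distance away along the surface, assuming a simple spherical Earth. The numbers below are illustrative only, not the actual parameters of this collect.

```python
import math

R_MI = 3959.0  # mean Earth radius, miles

def off_nadir_angle_deg(altitude_mi: float, ground_range_mi: float) -> float:
    """Angle between the satellite's nadir (straight-down) direction and
    the line of sight to a target 'ground_range_mi' away along the
    surface, using spherical-Earth geometry.
    """
    lam = ground_range_mi / R_MI                    # Earth central angle, radians
    opposite = R_MI * math.sin(lam)                 # target's sideways offset
    adjacent = (R_MI + altitude_mi) - R_MI * math.cos(lam)
    return math.degrees(math.atan2(opposite, adjacent))

# From ~400 miles up, a target ~1,000 miles away along the surface sits
# at roughly 62 degrees off nadir, not far from the horizon limit of
# about 65 degrees (asin(R / (R + h))) where the line of sight becomes
# tangent to the Earth.
```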
The answers to these questions require custom calculations for a high angle off nadir collect. In addition, the ground system that processes the imagery expects the satellite’s camera to be “looking down” when an image is collected so it can accurately place the image on a map of the Earth’s surface; it is not designed to “look sideways over the edge of the Earth.” The ground system software uses spacecraft metadata and models of the Earth’s surface to accurately orient an image on a map. It is not designed to orient an image with the extremely stretched surfaces that come with high angles, and it certainly was not designed to handle sky, for which there is no Earth surface to place the image on at all!
Exactly how it was done was quite clever. Rather than command the satellite camera to collect a “normal” image of the Earth, a series of commands was sent to collect a “calibration” image of the stars. By knowing where in orbit to start the calibration and by specifying the region of space to image, the team was able to orient the satellite so that it was over the ocean off the coast of California, looking back at Colorado from a very high off nadir angle. Then, by specifying how to perform the star calibrations, the spacecraft was rotated so as to compensate for the satellite and Earth motions. So, while we were calibrating against a star field that was by then below the Earth’s horizon, we were actually capturing panoramic images covering hundreds of miles of Colorado.
In essence we told the satellite to image the stars and the Earth got in the way.
The end result was an image covering an area from well east of Denver International Airport almost to the Utah border.
Once the satellite snaps an image and transfers it to us through one of our ground stations, how do we reconstruct the raw image into one that can be accurately placed on the Earth? When we image the Earth normally, the process to correctly place that geo-referenced image on the Earth’s surface takes only seconds and is accurate to within a few meters. But when we take a high angle image like this, where the Earth is “just in the way” of what we told the satellite to image, more advanced processing is required. In this case our calibration team (who are also imagery experts) manually reconstructed the images from the raw data. They had to deal with challenges unique to the high angle and the extreme image path length through the atmosphere: band registration, exaggerated pixel distortions, pixel value changes, ortho-rectification, pan-sharpening, and color correction, all of which were outside the scope of normal image processing.
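To illustrate one of those steps, here is a minimal sketch of pan-sharpening using the classic Brovey transform (a textbook method shown for illustration, not necessarily what DigitalGlobe's calibration team used): each lower-resolution multispectral band is scaled by the ratio of the high-resolution panchromatic band to the bands' mean intensity.

```python
import numpy as np

def brovey_pansharpen(rgb: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """Brovey-transform pan-sharpening.

    rgb: float array of shape (H, W, 3), multispectral bands already
         resampled onto the panchromatic pixel grid
    pan: float array of shape (H, W), high-resolution panchromatic band

    Each band is multiplied by pan / mean(bands), injecting the pan
    band's spatial detail while roughly preserving band ratios (color).
    """
    intensity = rgb.mean(axis=2)                   # per-pixel intensity
    ratio = pan / np.maximum(intensity, 1e-6)      # avoid divide-by-zero
    return rgb * ratio[..., np.newaxis]            # broadcast over bands
```

In a real pipeline this would run after band registration and ortho-rectification, on radiometrically corrected pixel values.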
In the end their processing was worth it, because the result was an incredible image of Colorado: a view of the Earth that no one has seen before.
High off nadir image of Mt. Fuji
Click here to read more about how this image of Mt. Fuji was made.
High off nadir image of Dubai
I’m proud to work for DigitalGlobe, a company that encourages its teams to continuously push the boundaries of innovation. I had to laugh when my colleague in satellite operations finished his explanation by telling me, “The hardest part of this process is explaining it to people like you.” I’m glad he took the time to explain it to me, and hopefully my explanation makes sense to you. If you want to learn more about pointing agility, check out this blog post from Dr. Walter Scott, DigitalGlobe’s Founder and Chief Technical Officer.