Point a telescope without moving a thing

Radio telescopes allow us to listen in on distant sources and learn about fascinating objects such as pulsars, quasars, and even (maybe?) extraterrestrial civilizations. Directional antennas for these telescopes have greater sensitivity than omnidirectional ones, but they must be pointed in the appropriate direction, and steering a large telescope can be prohibitively difficult. Arecibo, whose 305-meter dish is the largest in the world, doesn't even try; it sits in a depression in the ground and lets the Earth's rotation sweep it around on a daily basis. As a consequence, there are areas of the sky that it cannot study, and everything else can only be imaged for a short time each day. Other facilities, such as Green Bank and the Deep Space Network, use massive motors and gears to rotate their telescopes to reach other regions of the sky or to focus on a specific target for longer periods of time.

But why move if you don't have to? Engineers have developed a clever way to simulate a directional antenna with a collection of smaller, stationary, omnidirectional ones. Given multiple antennas in a line, if you shift the phase of each one by a progressively larger amount and then combine the signals, the result is the same as if you had rotated a single larger antenna to point off to the side. If you can digitally change the phase shift for each antenna, then you can "point" your array anywhere you like without moving anything physically. This is called "digital beamforming." (Technically, beamforming permits the manipulation of both the phase and the amplitude of each component antenna's signal.) The Allen Telescope Array in northern California is an example of an array that uses beamforming (e.g., to listen to the New Horizons spacecraft).
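You can see the "progressively larger phase shift" trick in a few lines of code. The sketch below (a toy model, not how any real array's software works) simulates an eight-element line of antennas at half-wavelength spacing: each element's weight is a phase shift chosen for a desired pointing angle, and the combined response is strongest from exactly that direction.

```python
import cmath
import math

def array_response(weights, spacing_wl, theta):
    """Combined response of a uniform linear array to a plane wave
    arriving from angle theta (radians from broadside).
    spacing_wl is the element spacing in wavelengths."""
    total = 0j
    for n, w in enumerate(weights):
        # Geometric phase picked up by element n for this arrival angle.
        phase = 2 * math.pi * spacing_wl * n * math.sin(theta)
        total += w * cmath.exp(1j * phase)
    return abs(total)

def steering_weights(num_elements, spacing_wl, theta0):
    """Progressive phase shifts that 'point' the array at theta0,
    canceling the geometric phase for waves from that direction."""
    return [cmath.exp(-1j * 2 * math.pi * spacing_wl * n * math.sin(theta0))
            for n in range(num_elements)]

N = 8
d = 0.5  # half-wavelength spacing
steer = math.radians(30)
w = steering_weights(N, d, steer)

# All eight signals add in phase from the steered direction (response = N)...
print(round(array_response(w, d, steer), 3))               # 8.0
# ...but cancel almost completely from 30 degrees the other way.
print(round(array_response(w, d, math.radians(-30)), 3))   # 0.0
```

Changing `steer` and recomputing the weights "re-points" the array instantly, with nothing physical moving, which is exactly the appeal of digital beamforming.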

Even more exciting is the recent advance in adaptive digital beamforming. Here, each of the phase (and amplitude) weights is modified on the fly to maximize the quality of the resulting signal. Apparently, some radio transmitters even send "training sequences" to help an adaptive receiver quickly identify the best weights to use.

Thanks to Toby Haynes for his excellent “Primer on Digital Beamforming,” which is both exceedingly accessible (even for those of us without a formal signal processing background) and satisfyingly detailed (with field strength diagrams for different antenna types and component diagrams for beamforming).


  1. Terran said,

    April 9, 2009 at 7:59 am

    (Learned something new!)

    Nifty! I was familiar with synthetic aperture radar and, by extension, beamforming for projection of a variety of sources — radio, microwave, potentially laser, etc. But I hadn’t considered receptive beamforming. It makes perfect sense, though — I’m sure that it’s essentially the same math, run in reverse (so you’re building target-specific input filters, rather than output phase shifting). Nice idea.

    I wonder if you could do this, say, with the seismometers on Erebus to get a “steerable” picture of what’s going on inside the volcano? You’d have a nasty problem with the severely nonlinear structure inside the cone. But I guess it should be, in principle, possible.

    What are the limits of the beamformer? Can you, say, oversample your input data and store an overcomplete representation and then, later, go back and “point” a beam through the accumulated data store? Hmmm… Might give you something like a post-hoc digital astronomy version of holography.

  2. wkiri said,

    April 9, 2009 at 8:27 am

    What are the limits of the beamformer? Can you, say, oversample your input data and store an overcomplete representation and then, later, go back and “point” a beam through the accumulated data store?

    Yes!! In fact, for one of the Square Kilometer Array “pathfinder” arrays, they are buffering several seconds of data at each antenna… so that if they get notification of, say, a pulsar, they can “go back” and retroactively form a beam into the past to observe it! How cool is that? (This isn’t even oversampling, just storing the raw input. Problem is that it’s a huge amount of data, so they can’t realistically buffer more than a few seconds … currently.)
