Pretty cheap, good star tracking
Basic idea: While at the Minor Planet Workshop in Alfred, New York, there was some discussion of using cheap video cameras as finder scopes/cloud detectors. You hang one on the telescope, sit inside your warm room, and see the area around the telescope field on a TV. Some of these cameras run about $70.
That spawned the following thought. Imagine you've got such a camera with the image fed to a computer. (This could be either a cheap "Webcam", or maybe a video camera with a capture board such as a Snappy.) Now imagine that the computer looks for stars in the image and automatically pattern-matches the result to "known" star fields from a catalog. (This is basically what Charon does now for narrow-field views. With a wider field of view, it's quite possible for a computer to figure out, "Oh... this image is centered on thus-and-such RA/dec.")
The computer now knows roughly where the telescope is pointed. The idea is very similar to the "star tracker" systems that used to be installed on various (mostly military) aircraft and rockets for navigation purposes; these gadgets also imaged star fields and pattern-matched them in order to figure out where they were pointed. (A human navigator with a sextant could do the same thing, but getting one to ride an ICBM could take some persuasion.)
Point the telescope at a single bright star, and the computer can find that star in the image and be in alignment. Thus, you need not be especially careful to make absolutely sure the scope and camera are pointing in exactly the same direction; after you've aligned on one star, the computer will know what pixel in the video image corresponds to the telescope's location.
The user might also be asked to hold a hand over the camera lens so we can get some dark frames, but that would be about all he would have to do to get set up.
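The one-star alignment step can be sketched in a few lines of code. The following is a minimal illustration in Python, not a real implementation: it assumes we already have star centroids from the camera image and some plate-solve function mapping pixels to RA/dec (both `radec_of_pixel` and `find_boresight` are hypothetical names).

```python
import math

def angular_sep(ra1, dec1, ra2, dec2):
    """Angular separation in degrees between two RA/dec points (degrees)."""
    r = math.radians
    c = (math.sin(r(dec1)) * math.sin(r(dec2))
         + math.cos(r(dec1)) * math.cos(r(dec2)) * math.cos(r(ra1 - ra2)))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def find_boresight(detections, star_radec, radec_of_pixel, tol_deg=0.2):
    """One-star alignment: among the detected star centroids (pixel
    coordinates), find the one whose plate-solved RA/dec matches the
    catalog position of the bright star the telescope is centered on.
    That pixel is where the telescope's optical axis falls in the image."""
    best, best_err = None, tol_deg
    for px in detections:
        ra, dec = radec_of_pixel(px)
        err = angular_sep(ra, dec, star_radec[0], star_radec[1])
        if err < best_err:
            best, best_err = px, err
    return best
```

Once that boresight pixel is stored, any later plate solution immediately tells you where the telescope (not just the camera) is pointed.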
At this point, you've got several features, such as...
Digital setting circle replacement: You can, of course, just use the video data in place of digital setting circles. Instead of reading encoders to determine the telescope position, the computer would "look" using the camera. DSCs require you to align on one or two known stars, and you have to know which ones they are and (usually) your latitude/longitude. With this system, you point at one bright star, the software identifies it for you, and your location on the earth is irrelevant. And as will be discussed below, the accuracy will probably be better.
Cloud sensor: The software could probably get a decent pattern-match with about seven stars (maybe fewer, but I'd hate to promise it.) If it got fewer than this, the computer would know that it must be a cloudy night, and might then close up the dome.
Possible closed-loop motorization: Theoretically, the computer could figure out the camera position, then feed suitable signals to scope motors to slew the telescope to a desired RA/dec. Since it's a closed loop, you could push the telescope by hand without destroying your alignment, and I think you could probably get away with using pretty cheap motors and control systems (though this is a little outside my realm of expertise.)
Autoguiding: This one is pretty ambitious. Greg Roberts mentioned to me that he's got a camera with about a 1.5 degree FOV, currently used for satellite tracking. This is admittedly narrow for these purposes, but I think it would be possible to figure out, given a 2.25 square degree image, its position in the sky. If so, the position reported should be precise enough to autoguide. (Greg's mention of this also caused me to wonder about autoguiding on the satellite... maybe someday!)
I'm currently considering the Connectix QuickCam (grayscale version), which uses a TC255 CCD chip. The chip measures 3.2 x 2.4 mm. Hook it up to a 50mm camera lens, and you've got about a 3.7 x 2.8 degree FOV. I may try this.
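Those FOV figures follow from simple lens geometry, and are easy to check (chip dimension and focal length in millimeters):

```python
import math

def fov_deg(chip_mm, focal_mm):
    """Field of view, in degrees, of one sensor dimension behind a lens
    of the given focal length."""
    return math.degrees(2.0 * math.atan(chip_mm / (2.0 * focal_mm)))

# TC255 chip (3.2 x 2.4 mm) behind a 50mm lens:
#   fov_deg(3.2, 50) -> about 3.7 degrees
#   fov_deg(2.4, 50) -> about 2.75 degrees
```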
Teaching constellations: And now for something completely different... imagine someone with a laptop and Webcam taking it outdoors, pointing it up, and being told, "You're looking at Orion... turn a little this way, and you'll be looking at Aldebaran. Here's a map superimposed on the image of your camera to tell you what you're looking at." Not bad for an introductory descriptive astronomy course, eh? This would probably be the real "mass market" application. Unfortunately, it pretty much requires that cheap cameras be used, and that prospect looks grim (discussion below).
Accuracy: For evaluating accuracy, I'd go with my experience with automatic pattern-matching in Charon. Accuracy for individual stars is usually about a tenth of a pixel or better. The accuracy of the overall solution, which involves averaging several stars, is even better, but let's go with that tenth of a pixel criterion for the nonce.
The Logitech Webcam that I recently got has about a 45 degree field of view, 320 pixels across. In theory, that would mean each pixel was (45*60/320) = 8.4 arcminutes across, and I'd get a pointing accuracy of about an arcminute. Compare this to the resolution of high-resolution, 8192-step encoders: (360 * 60 / 8192) = 2.6 arcminutes.
This is not the whole story, of course. High-res encoders are limited less by encoder accuracy and more by mechanical errors in the mount. You can compensate for these somewhat using TPoint or MEC, but it's a somewhat cumbersome and not completely successful process. And the one-arcminute Webcam accuracy is optimistic, too; I doubt the Webcam images are apt to be "clean" enough to allow such wonderful accuracy.
The star tracker does have an advantage when it comes to mechanical errors. If the scope is sloppily aimed, the camera will have similar sloppy aim. As long as the camera is securely attached to the scope, it would seem that you could have an absolutely crummy mount and still get good pointing. TPoint or MEC would not be required.
Incidentally, a few paragraphs ago, I
mentioned a setup with a Connectix grayscale QuickCam and a 50mm
camera lens. In such a setup, the pixels would be about 40 arcseconds
on a side, and our pointing would be good to a few arcseconds.
(At that point, keeping the Webcam rigidly pointed relative
to the telescope might be a limiting factor.)
Testing: I first experimented by purchasing a
Logitech QuickCam Express.
This is a bottom-of-the-line camera, and on several tests, I never
got it to image even a single star. I've since learned from Dave
Allmon that the Express has a CMOS-based sensor, unlike the CCD-based
sensors in most other cameras. Fortunately, I can return the Express.

Other reports were also not encouraging. Jan van Gijsen got a cheap
Webcam to "see" Mars and, just barely, Spica. Since the system would
need at least seven stars, I think, to reliably recognize star
patterns, this is woefully inadequate. The Hamburg Observatory tried to
use a Webcam as a cloud sensor, and eventually gave up and went with a
different camera instead.
However, Dave Allmon and the
QCUIAG people have
gotten quite good images from QuickCams. It helps to have software that
can get longer exposures (currently available only for the QC Grayscale,
which is unfortunate; I don't think you can get one of these anymore)
and to remove the
infrared-blocking filter (doing this destroys the color balance,
which we don't care much about, while giving you about twice the
sensitivity). The QuickCam Hack Web page describes connecting a
QuickCam to a 1.25" adaptor, and Charles Wray's page shows a mounting
for a modified QC.

I've heard of another setup that would almost certainly work. This
is one described by Greg Roberts, using a video camera, narrow (1.5
degree) FOV, and a capture board. I suspect you could get the cost for
that down to about US $200, which is really not bad at all. (Getting
software to recognize a randomly-selected, 1.5 degree square image
from the sky with enough speed to be useful might be more difficult...
but I think it's quite possible.) I'm still hoping to use the
cheaper, right-off-the-shelf Webcams.

Example of star tracking: A few years back, I got an
inquiry from Bob Leitner at Stamford Observatory in Connecticut. He
had five plates, taken decades ago on the 22" Gregory-Maksutov at that
institution. The original data had been lost, and about all that
could be said was that the images had to have been taken above the
southern declination limit. I made a few modifications to Charon to
load the data from these images, and Bob set up a '486 to process
them. As I recall, it took about a day to check a single image
against the entire sky north of about dec -35 (recalling that these
were narrow-field images). At the end, we got RA/decs for the centers
of four of the five plates. I don't think anyone knows where the
fifth one was taken.

Charon was not (and still is not) really intended for such a use;
its main purpose is in handling asteroid astrometry. For a star tracking
program, I've got a somewhat different algorithm in mind that should
handle the pattern matching very briskly.

If the field of view is about 45 degrees (usual QuickCam field if
you make no modifications), then a view going down to maybe third
magnitude would suffice to get recognizable patterns. This would mean
that the software would search for a pattern match among maybe a few
hundred stars, and would get an answer in a heartbeat.

With narrower fields of view, you need to go to fainter stars
to get a recognizable pattern, and getting the match starts to take
a bit longer. On the flip side, narrower field = greater positional
precision. And just how fast need the match be, anyway? Suppose you
slew the scope to a totally new part of the sky and the software takes
a second or two to figure out where you are. That's not a big problem.
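One standard way to get that kind of brisk, orientation-blind matching (offered here purely as an illustration, not necessarily the algorithm alluded to above) is to hash triangles of stars by their side ratios, which are invariant to rotation, translation, and scale. A minimal Python sketch:

```python
from itertools import combinations
import math

def triangle_key(p1, p2, p3):
    """Scale- and rotation-invariant signature of a star triangle:
    the two smaller side lengths divided by the largest, sorted."""
    d = sorted(math.dist(a, b) for a, b in combinations((p1, p2, p3), 2))
    return (d[0] / d[2], d[1] / d[2])

def build_index(catalog_xy, bins=100):
    """Hash every triangle of catalog stars by its quantized signature.
    Done once, ahead of time, for the few hundred brightest stars."""
    index = {}
    for tri in combinations(range(len(catalog_xy)), 3):
        r1, r2 = triangle_key(*(catalog_xy[i] for i in tri))
        index.setdefault((int(r1 * bins), int(r2 * bins)), []).append(tri)
    return index

def match_triangles(image_xy, index, bins=100):
    """For each triangle of detected image stars, look up catalog
    triangles with the same signature; the true pointing shows up as
    a cluster of mutually consistent candidates."""
    hits = []
    for tri in combinations(range(len(image_xy)), 3):
        r1, r2 = triangle_key(*(image_xy[i] for i in tri))
        hits.extend(index.get((int(r1 * bins), int(r2 * bins)), []))
    return hits
```

Because the lookup is a hash-table probe rather than a search, matching against a few hundred bright stars is effectively instantaneous; the cost only starts to climb as narrower fields force you to fainter, more numerous stars.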
I'll be trying out the default wide-field view first (the one you
get when you don't try to replace the lens.) This may be good enough
for most uses. I'll then try out the QC Grayscale
plus 50mm lens combo. I have no doubts that matching the resulting
3.7 by 2.8 degree field can be done quickly... get much smaller than
that, and I do start to worry a little.