Hi Brian,

I'm writing a detailed reply because you sound like you've thought about the
problem of access to Linux; hopefully you'll be enthused enough by emacspeak
to make it even better.

I had a very good reason for writing emacspeak on top of emacs.

1) I started this off as something that I would get working in a couple of
weeks, which I did.
(this was last October)

2) Emacs, as you allude, is extremely powerful and not to be compared with
micro-emacs. (This is full GNU Emacs in all its garbage-collecting glory.)

3) From emacs I can do everything, including running a subshell, etc.

4) With the forthcoming eterm (a terminal emulator under emacs), which
emacspeak already works with, you basically get everything you would if you
wrote general-purpose speech output at the Linux tty driver level, and more.

1) How you get the same:

With the eterm terminal emulator under emacs, I can run vi, rn, and a host of
other shell programs, including telnet and kermit sessions when logging into
other machines. And everything talks.

2) How you get more:

I'll preach a little here, apologies in advance.

When you take a tty driver and make it speak (this is essentially what all PC
screenreaders under DOS do), all you get to hear is the contents of the
display; you're responsible for figuring out why it's there.

So for instance, when a calendar application lays out the calendar to produce
a well-formatted tabular display, it looks nice; but the blind user hears
"1 2 3 4 5 6 7 2 3 4 5 6"... or some such garbage. Believe me, I've used
such an interface for the last five years.
So now you've got to figure out that, for instance, April 27 is a Thursday by
checking which screen column the figure "27" appears in.

Emacspeak has a completely different approach to speech enabling Emacs apps
(which as you know are numerous). Emacspeak looks at the program environment
and data of the applications, and speaks the information the way it should be
spoken. So in the case of the calendar, you hear "Thursday, April 27, 1995".

So in summary:

1) Emacspeak does much better at providing speech output for applications
written for Emacs, e.g. the emacs calendar, gnus, W3 ...
2) In the case of applications running at the shell level, i.e. non-emacs
apps, emacspeak provides the same level of speech output as DOS-based
screenreaders or a tty-based screenreader would.

All this said, there is one small shortcoming: you don't get speech until
emacs has started.
Here is how I have things set up on my laptop; if you have suggestions, I'd
welcome them.

At present, I have set up LILO to beep once when it prompts for DOS or Linux;
if I don't touch the keyboard, it boots to Linux and gives me a double beep.
(This tells me I have a login prompt.)

When I login, my .profile speaks a welcome message by sending a string to the
speech device.
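For concreteness, that welcome line amounts to nothing more than writing plain text to the device. A minimal sketch, with the caveat that `SPEECH_DEV` and `/dev/null` here are placeholders; the actual device node depends on your synthesizer and how it's connected:

```shell
#!/bin/sh
# In ~/.profile: speak a welcome message by writing plain text to the
# speech device.  The synthesizer says aloud whatever text it receives.
# /dev/null is only a safe placeholder; on a real setup SPEECH_DEV
# would point at the synthesizer's device node.
SPEECH_DEV="${SPEECH_DEV:-/dev/null}"

echo "Welcome to Linux" > "$SPEECH_DEV"
```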

I also set up my bash PROMPT_COMMAND so it speaks something after each command
is successfully executed.  For times when things go wrong (as they did when I
was building and testing emacspeak), I also have a shell script "speak" that
sends its arguments to the speech device.

So for instance
speak `pwd` tells me the working directory, etc.
So even if emacs does not start up successfully, I get some feedback and have
some hope of figuring out what the machine is up to.
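The "speak" script itself is equally small. A sketch, again using a placeholder for the device path:

```shell
#!/bin/sh
# speak -- send all command-line arguments as one line of text to the
# speech device, which reads plain text and says it aloud.
# /dev/null is a safe placeholder; substitute your synthesizer's node.
SPEECH_DEV="${SPEECH_DEV:-/dev/null}"

echo "$*" > "$SPEECH_DEV"
```

With this on the PATH, a line such as PROMPT_COMMAND='speak ok' in .bashrc gives the per-command confirmation mentioned above.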

(From the above you'll realize that I am completely dependent on the speech
output.)
Finally once emacs is up, I have full control of the machine.

Before I sign off, could you tell me where your interest in speech interfaces
comes from?

Best Regards,

      Adobe Systems                 Tel: 1 (408) 536 3945   (W14-129)
      Advanced Technology Group     Fax: 1 (408) 537 4042 
      (W14 129) 345 Park Avenue     Email: raman@adobe.com 
      San Jose, CA 95110-2704       Email: raman@cs.cornell.edu
      http://labrador.corp.adobe.com/~raman/raman.html (Adobe  Internal)
      http://www.cs.cornell.edu/Info/People/raman/raman.html  (Cornell)
    Disclaimer: The opinions expressed are my own and in no way should be taken
as representative of my employer, Adobe Systems Inc.