Re: Using emacspeak with speech-dispatcher
From: Tim Cross <firstname.lastname@example.org>
Subject: Re: Using emacspeak with speech-dispatcher
Date: Sun, 8 Jan 2006 14:59:40 +1100
> I would possibly suggest a different route if we wanted emacspeak to
> be solely based on speech-dispatcher - instead of taking emacspeak and
> trying to integrate the speech dispatcher client, I would be more
> inclined to start with speechd.el and start to add 'add-ons' to
> increase the power and interface of speechd.el so that we increased
> its power. I actually started looking at this a few months ago and
> created modules which provided enhanced support for speech feedback
> from speechd.el when running VM.
> To me, this has the advantage of creating something from a clean code
> base and giving us the ability to learn from the experiences of Raman
> and the development of emacspeak. The one thing we would need to be
> careful about is not to affect the clean base of speechd.el - any
> additions should be made as libraries which could be loaded and used
> if and when the user wanted them.
Sure! But as I explained in my previous mail,
it could be done progressively: first reimplement
dtk-speak and dtk-speak-using-voice, then try to use
the high-level modules of emacspeak, knowing that they
will certainly need to be modified to some degree.
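A minimal sketch of what that first step might look like, assuming speechd.el is loaded and exposes a `speechd-say-text` entry point (check speechd.el itself for the real API before relying on this):

```emacs-lisp
;; Hypothetical sketch: route emacspeak's core speech primitive
;; through speechd.el instead of the dtk speech server.
;; `speechd-say-text' is assumed here; the actual entry point in
;; speechd.el may differ.
(require 'speechd)

(defun dtk-speak (text)
  "Speak TEXT via speech-dispatcher instead of the dtk server."
  (when (and text (not (string= text "")))
    (speechd-say-text text)))

(defun dtk-speak-using-voice (voice text)
  "Speak TEXT, mapping VOICE onto a speech-dispatcher voice."
  ;; The voice mapping is the hard part; this placeholder simply
  ;; ignores VOICE and speaks the text with the current settings.
  (dtk-speak text))
```

The higher-level emacspeak modules would then keep calling dtk-speak unchanged, which is exactly the progressive route described above.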
> > As i said above, simply not use dtk-speak.
> but then we lose existing support for the synthesizers already
> available within emacspeak!
Not necessarily: Emacs Lisp is flexible enough to
allow switching from one method to another,
provided you do not inline it! That was my
major problem when I tried to interface emacspeak
directly with festival, since all the functions in
dtk-interp were inlined...
Moreover, I think the people who want to use
speech-dispatcher are not interested in
switching to another speech server.
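The inlining point can be made concrete with a small sketch. All the names below are hypothetical illustrations, not real emacspeak symbols: if the low-level speech call goes through a variable, the backend can be swapped at runtime, whereas a `defsubst` (inlined) definition is compiled into every caller and cannot be redirected this way.

```emacs-lisp
;; Sketch of the indirection that inlining destroys.
(defvar my-tts-speak-function #'ignore
  "Function called with one string argument to speak it aloud.")

(defun my-tts-speak (text)
  "Dispatch TEXT to whichever speech backend is currently selected."
  (funcall my-tts-speak-function text))

;; Switching backends is then a single setq; had `my-tts-speak'
;; been a `defsubst', every caller would carry its own inlined
;; copy and this redirection would not take effect:
;; (setq my-tts-speak-function #'speechd-say-text)   ; speech-dispatcher
;; (setq my-tts-speak-function #'my-festival-speak)  ; hypothetical festival client
```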
> Again, I see this as us coming from two different directions. Your
> approach seems to involve a fundamental change to the architecture
> while mine is about adding additional options in supported speech
> servers. These are two very different things, and I believe a change in
> fundamental architecture requires significant planning, analysis and
> consultation. At the very least, you would either need to get Raman's
> support as the maintainer of emacspeak or break-off and create your
> own branch which is independent of the main emacspeak development. I
> don't believe a split in the emacspeak community would benefit anyone
> in the long run, but that's just my opinion.
No need to be so radical, Tim! Sure, I plan a deep
change in the low-level part of emacspeak, as I have
already done for my festival client. But I think that
this part is very stable and that there are no major
changes between two versions, so it is not so difficult
to follow emacspeak's evolution. Well, that's my opinion.
It's not a split; but you may not know
that the French emacspeak community is very small.
You gave the reason: emacspeak is not
multilingual, and that's a big problem for French
users! One of my goals is to improve emacspeak's
multilingual support.
> > 2. I don't see any interest in adding a layer.
> In the long term, I would agree. That's why I said in my original post
> that eventually we could do the whole speech-dispatcher interface in
> elisp and 'borrow' from speechd.el. However, sticking with the TCL
> solution means very little work would need to be done - really, all
> you would need is a very simple tcl script which uses the tts-lib.tcl
> library and the generic-voices.el or ssml-voices.el file and simply
> sends speech to the speech-dispatcher socket. As the generic-voices.el
> and ssml-voices.el files are already done, there would be no need to
> modify any elisp and a basic tcl command loop which passed data to the
> speech dispatcher socket (with whatever speech-dispatcher specific
> commands are necessary) would be fairly trivial to implement.
For me, reimplementing a dtk-speak method is just as
trivial, probably because I have already written so many
lines of Lisp... Moreover, with the TCL script solution
we would probably lose many speech-dispatcher features,
and that is not what we want.
> No, I don't believe it is that straightforward. Don't forget that the
> basic architecture for emacspeak was developed over 15 years ago -
> back then, there were very few speech synthesizers capable of
> supporting multiple languages and the whole issue of multilingual
> support in software was still at a very early stage.
Yes, but once more this concerns the low-level methods,
and if you simply discard them, as I did in my
festival client for emacspeak, you obtain something
multilingual that is very satisfying!
You will ask me why I did not continue this festival
client project. I found speech-dispatcher and
discovered that many of the communication protocol
problems I had were solved there! Hence I can focus
on the higher-level work.
> The more I think about it, given your objectives, I feel the better
> approach would be to enhance speechd.el to increase its power by using
> emacspeak as a guide on how to add features etc. To me, this has the
> advantage of
> - Starting with a cleaner code base
> - Avoiding potentially difficult-to-resolve inconsistencies between
> the two models that would be encountered when trying to
> integrate the two
> - Less development time before we see something useful since we
> would be enhancing an existing system
> The main and obvious disadvantage is that we would be splitting the
> user community, which is a real concern. However, there is nothing
> preventing a re-integration later if that proved to be warranted.
I said above how I plan to proceed: indeed, just as I
already did with festival, which gave, from my
point of view, a very satisfying result:
reimplementing the emacspeak basis, using the
high-level modules, and modifying them as little as
possible.
To unsubscribe from the emacspeak list or change your address on the
emacspeak list send mail to "email@example.com" with a
subject of "unsubscribe" or "help"