eSpeak - Text to Speech
eSpeak is a compact, open source software speech synthesizer for English and other languages. It uses a formant synthesis method, which allows many languages to be provided in a small size. On Windows it supports the SAPI5 interface, so it can be used with screen readers and other programs that support SAPI5. It can also translate text into phoneme codes, so it could be adapted as a front end for another speech synthesis engine.
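As a rough sketch of the phoneme-translation feature described above, eSpeak can be driven from the command line; the snippet below assumes the `espeak` binary is on the PATH and uses its `-x` option (write phoneme mnemonics) together with `-q` (quiet, no audio):

```python
import shutil
import subprocess

def phonemes_for(text):
    """Return eSpeak's phoneme transcription of `text`, or None if
    the `espeak` binary is not installed.  -x writes phoneme
    mnemonics to stdout; -q suppresses audio output."""
    if shutil.which("espeak") is None:
        return None
    result = subprocess.run(
        ["espeak", "-q", "-x", text],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```

Because the function degrades to `None` when eSpeak is absent, it can be used as an optional front end for another synthesis engine without a hard dependency.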
AxTk is a toolkit for building highly accessible applications with speech output. It is built on top of wxWidgets and so is cross-platform. The developer can either speech-enable an existing wxWidgets UI or use a new menu-based interface that is easier for a vision-impaired user. AxTk also contains a text-to-speech wrapper class, wxTextToSpeech, with handlers for a variety of speech engines including SAPI, the Mac Speech Synthesis Manager, eSpeak, and Cepstral. wxTextToSpeech can be used independently.
SPEECH ENABLING DIALOG: BUILDING A SIMPLE SPEECH 'GUI' Voice-dialog is a speech version of the widely used ncurses widget library, dialog. In this project, dialog has been voice enabled to work with popular speech synthesis engines such as Festival, Flite and eSpeak. The goal of the project is to instrument intelligent speech rendering into each of the native curses-based widgets of dialog. All the components of dialog have been speech enabled except for tailbox. Michael Gorse's Emacspeak eflite
Dragonfly is a speech recognition framework. It is a Python package offering a high-level object model that lets users easily write scripts, macros, and programs which use speech recognition. It currently supports the following speech recognition engines: Dragon NaturallySpeaking (DNS), a product of Nuance, and Windows Speech Recognition (WSR), as included in Microsoft Windows Vista.
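The object-model idea can be illustrated with a toy dispatch table in plain Python. Note this is not Dragonfly's actual API (Dragonfly's own building blocks include grammars and rule objects); it only sketches the shape of mapping recognized phrases to actions:

```python
# Illustrative only: a hypothetical phrase-to-action mapping, not
# Dragonfly's real object model.
commands = {
    "open file": lambda: "open-file action",
    "save file": lambda: "save-file action",
}

def on_recognition(phrase):
    """Dispatch a recognized phrase to its action, if one is registered."""
    action = commands.get(phrase)
    return action() if action else None
```

In a real Dragonfly script, the recognition engine invokes the matching rule's action automatically; the dictionary lookup above stands in for that matching step.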
Wiki-to-Speech is the former name of the SlideSpeech project. Roughly speaking, the original project was the Open Allure Dialog System, which focused on a desktop application; this project, Wiki-to-Speech, focused on a mobile application; and finally SlideSpeech focused on a web application. Introduction: Companion Android/Python wiki-to-speech project. Drive text-to-speech interaction using wiki-based scripts. Scripts can include statements, question/answer/response (multiple choice), links to webs
VoxForge - Free GPL Speech Corpus and Acoustic Model Repository for Open Source Speech Recognition
VoxForge was set up to collect speech audio files to create a GPL speech corpus for use with free and open source speech recognition engines (on Linux and Windows). The transcribed speech will be 'compiled' into acoustic models for use with open source speech recognition engines such as Julius, ISIP, Sphinx, and HTK (note that HTK has distribution restrictions). Why Do We Need Free GPL Speech Audio? Most acoustic models used by 'open source' speech recognition engines are closed source. They
SpeakRight is a Java framework for writing speech recognition applications in VoiceXML. Dynamic generation of VoiceXML is done using the popular StringTemplate templating framework. Although VoiceXML uses a web architecture similar to HTML's, the needs of a speech app are very different. SpeakRight lives in the application code layer, typically in a servlet. The SpeakRight runtime dynamically generates VoiceXML pages, one per HTTP request.
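The one-page-per-request idea can be sketched as follows (in Python rather than SpeakRight's Java, and with a hypothetical template standing in for StringTemplate): each HTTP request is answered with a single rendered VoiceXML document.

```python
from string import Template

# Hypothetical template; SpeakRight itself uses StringTemplate in Java.
PAGE = Template(
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<vxml version="2.1">\n'
    '  <form>\n'
    '    <block><prompt>$prompt</prompt></block>\n'
    '  </form>\n'
    '</vxml>\n'
)

def render_page(prompt_text):
    """Render one VoiceXML page, as a servlet would per HTTP request."""
    return PAGE.substitute(prompt=prompt_text)
```

The voice browser fetches such a page, plays the prompt, and posts the caller's input back, so the application state lives in server-side code rather than in the markup.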
Project Name: VOICE ENABLED INTERACTIVE LEARNING. Made by: Ajay Kumar. Project description: This project includes three parts: 1. Speech Synthesis: takes text as input and produces voice as output. You can open any text file or doc file and it will read it for you. 2. Speech Recognition: takes speech as input and produces text as output. Whatever you speak is printed on the screen. 3. Speech Analysis: has two parts: a. waveform creation
FreeTTS is a speech synthesis system written entirely in Java. It is based upon Flite, a small run-time speech synthesis engine developed at Carnegie Mellon University. Flite is derived from the Festival Speech Synthesis System from the University of Edinburgh and the FestVox project from Carnegie Mellon University. FreeTTS supports a subset of the JSAPI 1.0 Java speech synthesis specification.
This project aims to provide functional text-to-speech software for Linux. It will allow text such as text in a PDF, on the web, in word processors, and any text that can be copied to the clipboard to be read automatically, without user intervention, using one or more of the OSS (Open Source Software, not Open Sound System) speech synthesizers made for Linux. Features to be included: the ability to test and choose from many voices, and the ability to set the speed and the gap between words
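A rough sketch of the clipboard-to-speech loop described above, assuming `xclip` for clipboard access and `espeak` as the synthesizer (both stand-in choices here). eSpeak's real `-s` option sets speed in words per minute and `-g` sets the gap between words:

```python
import shutil
import subprocess
import time

def read_clipboard():
    """Return clipboard text via xclip, or None if xclip is missing."""
    if shutil.which("xclip") is None:
        return None
    out = subprocess.run(["xclip", "-selection", "clipboard", "-o"],
                         capture_output=True, text=True)
    return out.stdout

def speak_command(text, speed=160, gap=1):
    """Build an espeak invocation with speed (wpm) and word gap set."""
    return ["espeak", "-s", str(speed), "-g", str(gap), text]

def watch(poll_seconds=1.0, iterations=None):
    """Poll the clipboard and speak any newly copied text."""
    last, n = None, 0
    while iterations is None or n < iterations:
        text = read_clipboard()
        if text and text != last:
            subprocess.run(speak_command(text))
            last = text
        time.sleep(poll_seconds)
        n += 1
```

Polling is the simplest portable approach; a production version would hook the desktop environment's clipboard-change notifications instead.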