doc_id (string, 7–11 chars) | appl_id (string, 8 chars) | flag_patent (int64, 0–1) | claim_one (string, 13–18.3k chars) |
---|---|---|---|
20030154070 | 10072974 | 0 | 1. In a computer-assisted language learning system, a method for analyzing grammar and building a learners' model using a student's level of proficiency, comprising the steps of: maintaining a syntactic error table for the student, said error table including a column listing syntactic subtrees with associated frequency fields; determining a proficiency level of a student's writing ability using a sentence input by the student; matching the input sentence to a correct sentence in a database template; obtaining a grammar tree of the matched correct sentence; matching the input sentence to leaves of the grammar tree; for each leaf of the grammar tree that is matched with words marked as errors, finding a minimum syntactic subtree of the leaf and associating the leaf with said subtree; for all the subtrees found, combining leaves associated with a common subtree; for each common subtree, searching the syntactic error table for the common subtree; and updating the table to reflect the common subtree. |
20120173574 | 13395080 | 0 | 1. An information retrieving apparatus comprising: a name database that registers at least one record as a unit, each of the record contains at least one attribute to be retrieved and a value associated with the attribute, wherein the value is content about each of the attribute; an operation input unit that receives an operation input of a user; a voice input unit that receives a voice input of the user; a voice recognizing unit that recognizes voice obtained from the voice input unit as a character string; a sound model storing unit that stores sound data referred by the voice recognizing unit; a language model storing unit that stores, as a language model, a vocabulary recognized by the voice recognizing unit and a connection rule of the corresponding vocabulary; a retrieving unit that retrieves the value of the attribute included in the name database using an input character string input from the operation input unit or the voice recognizing unit, and creates, as a candidate, the record in which the input character string is included in the value; an output unit that outputs, as the search result, the candidate of the record created by the retrieving unit; a selecting unit that selects the output candidate of the record; and a recognized vocabulary producing unit that receives a selection result of the record by the selecting unit, and produces a new additionally recognized vocabulary that is a voice recognized vocabulary to be added to the language model, wherein the recognized vocabulary producing unit records, in the name database or the language model, a corresponding relationship between the additionally recognized vocabulary corresponding to the input character string and the selected record. |
20050070337 | 10671140 | 0 | 1. A headset for communication with a device, the headset configured for processing audio signals captured by the headset to detect user speech and transmitting, to the device, sampled representations of the captured audio signals generally only when user speech is detected. |
8935283 | 13634473 | 1 | 1. A method performed by a computing device for searching for analog data, comprising: receiving a digital search term; using the digital search term to search an analog writing mapping database, a facial expressions mapping database and a voice mapping database, the mapping databases respectively comprising mappings between digital text and analog writing, mappings between digital text and photos of facial expressions, and mappings between digital text and voice data; when the digital search term matches a digital text entry in one of the mapping databases, obtaining an analog search term mapped to the digital text entry, the analog search term comprising an analog writing, a photograph of a facial expression or a voice data; consistent with which one of the mapping databases that was used to obtain the analog term, searching one of an analog writing database, a facial expression database and a voice database using the analog search term to return the analog data and a data link to a location of the analog data within a file as a search result. |
8401855 | 12367131 | 1 | 1. A computer-implemented method in a dialog system, comprising: defining a set of one or more grammar rules defined for various objects within an application executed by the dialog system; labeling each grammar rule of the set of grammar rules with semantic or syntactic characteristics by annotating each grammar rule with a specific item of information regarding each element of the respective grammar rule to produce labeled grammar rules; generating labeled sentences from the set of labeled grammar rules to preclude the need to label sentences after they are generated; and using the labeled sentences to train one or more statistical models to be used by a spoken language unit of the dialog system. |
10096329 | 15311821 | 1 | 1. A method for enhancing intelligibility of speech content in an audio signal, the speech content contained in a speech component of the audio signal, the method comprising: obtaining reference loudness of the audio signal; enhancing the intelligibility of the speech content by adjusting partial loudness of the audio signal based on the reference loudness and a degree of the intelligibility; and outputting, from a loudspeaker, the audio signal having the intelligibility of the speech content enhanced, wherein enhancing the intelligibility of the speech content by adjusting the partial loudness of the audio signal comprises: adjusting the partial loudness of the audio signal to the reference loudness; determining whether an intelligibility criterion is met by the intelligibility of the speech content in the adjusted audio signal; determining target loudness in response to the intelligibility criterion being not met; and adjusting the partial loudness of the audio signal to the target loudness, wherein determining the target loudness comprises: calculating a first metric indicating a ratio of the speech component to the non-speech component; calculating a second metric indicating a ratio of the speech component to the non-speech component and an environmental noise signal; determining additional loudness based on the first and second metrics; and determining the target loudness based on the reference loudness and the additional loudness. |
8731905 | 14131838 | 1 | 1. A method of parsing a sentence into a sequence of phrasal segments, the method being performed by a computer processor, the method comprising the steps of: (a) providing a target segment length expressed as a number of words or syllables; (b) parsing, by the computer processor, the sentence to identify a plurality of candidate break points based on punctuation and parts of speech and including candidate break points at the beginning and end of the sentence; and (c) eliminating, by the computer processor, some of the candidate break points, leaving a plurality of final break points, so that every word in the sentence is situated between two final break points, and the sum of the differences between (i) the target segment length and (ii) the number of words, if the target segment length is expressed as a number of words, or the number of syllables, if the target segment length is expressed as a number of syllables, between each consecutive pair of final break points is minimal; wherein each sequence of words situated between each consecutive pair of final break points is identified as a segment. |
20100102998 | 12651503 | 0 | 1. A method for input of text symbols into an electronic device having a reduced keyboard, the reduced keyboard having keys representing a plurality of characters, comprising: receiving character inputs from the reduced keyboard; identifying symbol variants based on the received character inputs; displaying a list of symbol variants; selecting an input symbol from the list of symbol variants, wherein the input symbol is a Korean Hangul syllable; designating at least one Chinese Hanzi syllable to correspond to at least one Korean Hangul syllable; and replacing the Korean Hangul syllable with a Chinese Hanzi syllable. |
7944448 | 11151305 | 1 | 1. A computer system comprising a processor configured to execute computer-readable instructions stored on a non-transitory computer-readable medium, the computer-readable instructions being configured to implement an agent comprising: an interpreter that receives an input event and outputs a social event based on the interpretation of the input event; a social response generator that receives the social event, an output from an emotional state register and an output from a predefined personality trait register, and updates at least one of a current state of the emotional state register and a social response message stored in an event buffer; an emotion generator that outputs an emotion response message based on at least one of the social response message stored in the event buffer, one or more outputs of the predefined personality trait register, or one or more outputs of the emotional state register; a manifester that receives the emotion response message output from the emotion generator and converts the emotion response message into a behavior message; and a role database comprising social characteristics used by the interpreter to create social events and data used by the manifester to convert the emotion response message into the behavior message. |
20080086300 | 11690102 | 0 | 1. A method of representing the meaning of a source sentence from a source language, comprising: obtaining a language-independent semantic structure to represent the meaning of the source sentence; synthesizing a syntactic structure of the output sentence from the language-independent semantic structure using information which includes lexical descriptions, semantic descriptions, syntactic descriptions, and morphological descriptions of the output language; and constructing an output sentence to represent the meaning of the source sentence in an output language. |
7840579 | 11609697 | 1 | 1. A method of presenting information to a user, the method comprising: receiving a first input from a user; structuring the first input as a first stem; receiving a separator designating subsequently-received input as a second input; receiving the second input; structuring the second input as a second stem; relating the first stem and the second stem to a library of candidates, further including searching a database of objects, each object being associated with one or more strings, by identifying matches between the first stem and a first string and the second stem and a second string, the first and second stems indicative of at least one of an object type or an application from the library of candidates; rendering one or more results in response to relating the first stem and the second stem to the library of candidates the results including an indication of applications available for launch in conjunction with the results, including rendering an action as an object in the results related to a command accessible through a menu system; transferring the user into the menu system so that the action may be commenced in response to receiving a confirmation instruction from the user; and enabling the user to select from among the one or more results, identifying matches between the first stem and the first string and the second stem and the second string including identifying matches between the first stem and the first string that appear in a first attribute and between the second stem and the second string that appear in a second attribute that is different than the first attribute. |
9619465 | 14285693 | 1 | 1. A system comprising: a translation server operable to perform machine translation obtaining translation model data from a translation model for translation between a source language and a target language and language model data from a language model for the target language, the translation server further operable to translate text in the source language into the target language using the obtained translation model data and language model data, the translation server comprising: a request queue operable to store requests for language model data to be obtained for translating a segment in the source language, and a segment translation server cache operable to store language model data obtained by the requests by the translation server, wherein the translation server is further operable to: process the translation of the segment using language model data from a second language model for the target language to produce an initial translation of the segment before the requests for the language data in the language model in the request queue are sent out, update the requests for the language model data of the language model in the request queue based on the initial translation, send out the updated requests in the request queue to obtain language model data from the language model for processing the initial translation, and after the updated requests are served and the data for the updated requests are stored in the segment translation server cache, process the initial translation with the data for the updated requests to produce a final translation. |
20140285427 | 14206340 | 0 | 1. A signal processing device comprising: a memory; and a processor coupled to the memory and configured to: detect a second feature value relating to a first feature value recognized to satisfy a recognition condition, from a second time series prior to a first time series of the first feature value in a time series of a feature value corresponding to an input signal, and change the recognition condition so that the second feature value is recognized as a class for recognizing the first feature value. |
6029156 | 09218906 | 1 | 1. A method for creating a business simulation utilizing a rule-based expert system with a spreadsheet object component that includes data, calculations required for the simulation and communication information to provide a dynamic, goal based educational learning experience, comprising the steps of: (a) accessing the information in the spreadsheet object component of the rule-based expert system to retrieve information indicative of a goal; (b) querying a user for information based on one or more learning objectives of the presentation; (c) analyzing user responses to ascertain user characteristics; (d) utilizing the information in the spreadsheet object component of the rule-based expert system to integrate goal-based learning information in a structured, dynamic business simulation designed by a profiling component to motivate accomplishment of the goal for use in the business simulation based on the user characteristics; and (e) monitoring answers to questions posed to evaluate progress toward the goal utilizing the spreadsheet object component of the rule-based expert system and providing dynamic, goal-based, remediation learning information feedback from a remediation object component including a knowledge system and a software tutor comprising an artificial intelligence engine which generates individualized coaching messages based on the user characteristics that further motivates accomplishment of the goal. |
20060074634 | 10959523 | 0 | 1. A method in a data processing system for fast semi-automatic semantic annotation, the method comprising: dividing a data set of sentences into a plurality of corpuses, wherein each of the plurality of corpuses includes an equal number of sentences; learning a structure of each sentence of a first corpus using a plurality of trainers; forming a model based on the structure; and using the model in a set of engines to annotate new sentences. |
10157609 | 14799533 | 1 | 1. A computing device for providing speech recognition with local and remote feedback loops, the computing device comprising: a communications device configured to communicate with a remote system developer over a communications network; one or more processors; and a memory storing instructions that when executed by the one or more processors, cause the computing device to perform a method comprising: collecting user data associated with a user, wherein the user data includes audio by the user and textual data from user generated documents; filtering the collected data at distinct levels for local and generic models to protect private data; updating one or more local models with the user data filtered at the local model level, each local model comprising at least one of a local acoustic model that models how phonemes sound or a local language model that models how words fit together to form sentences; providing, over the communications network, user data filtered at the generic model level to the remote system developer to enable the remote system developer to update one or more generic models comprising at least one of a remote acoustic model that models how phonemes sound or a remote language model that models how words fit together to form sentences; receiving speech inputs; and recognizing, by a speech recognition system, the speech inputs based at least in part on the updated one or more local models and the updated one or more generic models. |
20170221475 | 15014213 | 0 | 1. A method comprising: receiving audio data corresponding to an utterance that includes a voice command trigger term and an entity name that is a proper noun; generating, by an automated speech recognizer, an initial transcription that (i) corresponds to a portion of the audio data that is associated with the entity name that is a proper noun, and (ii) includes a transcription of a mispronounced term that is associated with a pronunciation of a term that is not a proper noun; in response to the generation of the initial transcription that includes a transcription of a mispronounced term that is associated with a pronunciation of a term that is not a proper noun, prompting a user for feedback, wherein prompting the user for feedback comprises: providing, for output, a representation of the initial transcription that (i) corresponds to the portion of the audio data that is associated with the entity name that is a proper noun, and (ii) includes the transcription of the mispronounced term that is associated with a pronunciation of a term that is not a proper noun; receiving a corrected transcription in which a manually selected term that is a proper noun is substituted for the transcription of the mispronounced term that is associated with a pronunciation of a term that is not a proper noun; in response to receiving the corrected transcription in which a manually selected term that is a proper noun is substituted for the transcription of the mispronounced term that is associated with a pronunciation of a term that is not a proper noun, obtaining a phonetic representation that is associated with the portion of the received audio data that is associated with the entity name that is a proper noun; updating a pronunciation dictionary to associate (i) the obtained phonetic representation that is associated with the portion of the received audio data that is associated with the entity name that is a proper noun with (ii) the entity name from the utterance that is a proper noun; receiving a subsequent utterance that includes the entity name; and transcribing the subsequent utterance based at least in part on the updated pronunciation dictionary. |
20050152602 | 10756930 | 0 | 1. A method for scaling handwritten character input for performing handwriting recognition, the method comprising the computer implemented steps of: deriving a stroke parameter from a first handwritten character stroke; calculating an input area in which the first handwritten character stroke was supplied; and scaling the stroke parameter according to the input area. |
20050129202 | 10736257 | 0 | 1. A method of providing caller information comprising: receiving a voice signal; detecting portions of the voice signal that are inaudible using a perceptual audio processor; replacing the inaudible portions of the voice signal with digital caller information; and transmitting the resulting voice signal specifying the digital caller information. |
9666184 | 14727462 | 1 | 1. A method of training a language model, the method comprising: converting, using a processor, training data into error-containing training data; and training a neural network language model using the error-containing training data wherein the converting comprises selecting a word to be replaced with an erroneous word from words in the training data, and generating the error-containing training data by replacing the selected word with the erroneous word, wherein the neural network language model is used to estimate a connection relationship between words, wherein the selecting comprises randomly selecting the word from the words in the training data, wherein the processor is configured to use the trained language model to convert a speech into output data. |
20150379102 | 14755105 | 0 | 1. (canceled) |
20150370888 | 14742560 | 0 | 1. A system for transforming media elements into a narrative comprising: a processor; a memory in communication with the processor; a clustering module in communication with the processor and the memory, the clustering module configured to: receive a dataset comprising a plurality of media elements each comprising metadata; and organize the plurality of media elements into a plurality of clusters based on the metadata, the plurality of clusters being organized into a clustering tree; and a narrative module in communication with the processor and the memory, the narrative module configured to create a narrative comprising a plurality of the media elements arranged into a narrative sequence, the narrative sequence being structured according to the clustering tree and for a predetermined duration. |
20090104940 | 12343846 | 0 | 1. A wireless headset operable to support voice communications over at least one servicing network, the modular wireless headset comprising: transceiver circuitry operable to receive and transmit RF signals; signal conversion circuitry coupled to the transceiver circuitry operable to convert incoming RF signals to incoming audio signals and to convert outgoing audio signals to outgoing RF signals; a speaker coupled to the signal conversion circuitry that is operable to render incoming audio signals audible; a microphone coupled to the signal conversion circuitry that is operable to produce the outgoing audio signals; and voice recognition circuitry coupled to at least the signal conversion circuitry and operable to produce command signals when the wireless headset operates in a voice command mode; and processing circuitry coupled to at least the voice recognition circuitry and operable to receive and process the command signals. |
8583148 | 12795047 | 1 | 1. An apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the processor, cause the apparatus to: determine a contextual characteristic associated with at least a caller or a call recipient of an unanswered call; generate a message based on the contextual characteristic, the message comprising a plurality of predetermined message segments; and cause transmission of the message; wherein the message further comprises at least one message portion to be completed by the caller or recipient, and wherein the at least one memory and the computer program code are further configured to at least cause the apparatus to: receive the at least one message portion; and incorporate the at least one message portion into the message. |
9730073 | 14743457 | 1 | 1. A method for provisioning network credentials onto a device, comprising: detecting pressing of a configuration button on the device, wherein the device lacks a visual display; determining, in response to detecting the pressing of the configuration button, a plurality of wireless networks within range of the device; outputting first audio indicating an identifier of a first wireless network included in the plurality of wireless networks; outputting a prompt to select one of the plurality of wireless networks; receiving second audio corresponding to a selection of the first wireless network; processing, by the device, the second audio to determine security protocols associated with the first wireless network; outputting third audio requesting spelling of a password corresponding to the first wireless network; receiving fourth audio corresponding to an alphanumeric character included in the password; processing, by the device, the fourth audio using a plurality of keyword models to obtain text corresponding to the alphanumeric character; connecting, by the device, to the first wireless network using the password; and limiting processing of audio data by the device when the device is connected to the first wireless network and in communication with a remote device configured to perform speech recognition. |
7805704 | 11074890 | 1 | 1. A computer-implemented voice user interface system, comprising: an application program interface that supports configuration of a voice user interface that mixes different types of audible dialog prompts, and wherein the application program interface comprises: a first dialog container that includes information for activating, in a predetermined sequence, audible dialog prompts that correspond to a collection of audible dialog elements assigned to the first dialog container, the predetermined sequence being at least partially determined by semantics-driven audible dialog functionality applied by the first dialog container to said collection of audible dialog elements assigned to the first dialog container; a second dialog container that includes information for activating, in a predetermined order, audible dialog prompts that correspond to a collection of audible dialog elements assigned to the second dialog container, the predetermined order being at least partially determined by state-driven audible dialog functionality applied by the second dialog container to said collection of audible dialog elements assigned to the second dialog container; a particular audible dialog element that corresponds to a particular audible dialog prompt, wherein the particular audible dialog element having different property setting requirements depending upon whether included in the collection of audible dialog elements assigned to the first dialog container or the set of audible dialog elements assigned to the second dialog container; and a computer processor that is a component of a computing device, wherein the computer processor processes an implementation of the application program interface and provides a corresponding implementation of the voice user interface by outputting, to a user of the voice user interface system, said audible dialog prompts that correspond to the collection of audible dialog elements assigned to the first dialog container and said audible dialog prompts that correspond to the collection of audible dialog elements assigned to the second dialog container. |
20130174017 | 13543445 | 0 | 1. A method comprising: ingesting a document page in an unstructured document format; extracting one or more images and metadata associated with the images from the document page; extracting text and fonts associated with the texts from the document page; coalescing text into paragraphs; and creating a structured document page in a markup language format using the extracted images, text and fonts rendered with layout fidelity to the original ingested document page. |
8185377 | 12147750 | 1 | 1. A system for evaluating translation quality of a machine translator, comprising: a bilingual data generator configured to intermittently access a wide area network and generate a bilingual corpus from data received from the wide area network; a linguistic analysis component configured to add otherwise latent linguistic information to the bilingual corpus to obtain an augmented bilingual corpus; an example extraction component configured to receive an ontology input indicative of a plurality of ontological categories of evaluation and to extract evaluation examples from the augmented bilingual corpus based on the ontology input; and a processor that is a component of a computing device that utilizes an evaluation component to evaluate translation results from translation by a machine translator of the evaluation examples and to score the translation results according to the ontological categories. |
9671867 | 12294022 | 1 | 1. A method for operating an interactive control device, the control device including a display device configured to represent information including control elements, comprising: ascertaining a control probability based on an ascertained control intention for each of a plurality of control elements represented on the display device, from a movement of a user's body part in relation to the plurality of control elements that does not activate the control elements, wherein the control intention is ascertained based on the movement of the user's body part in relation to a respective control element; ascertaining a control spotlight region corresponding to a portion of the display device encompassing an expected touch position of the user's body part on the display device, wherein the location and size of the control spotlight region is ascertained based on the ascertained control probability for each control element, the size of the control spotlight region being greater than the user's body part and smaller than the display device; and selectively adapting display of the control elements such that (i) each control element which is at least partially within the control spotlight region is proportionally optimized for activation according to the respective ascertained control probability, and (ii) control elements outside the control spotlight region are not proportionally optimized for activation; wherein the control probability for each control element is ascertained prior to activating a control action; wherein each control element which is at least partially within the control spotlight region is scaled as a function of at least one of (i) distance from the center of the control spotlight region and (ii) amount of planar overlap with the control spotlight region. |
20150039854 | 13956696 | 0 | 1. A computer-implemented method comprising: loading a first set of data elements into a lookup table having a base address; loading a vector register with a translation constant corresponding to an arrangement of the first set of data elements; loading a second set of data elements into an index register; invoking a vector addition instruction to combine the translation constant and the index register; and invoking a vector permute instruction on the first set of data elements at the base address and the combined translation constant and index register to decode the second set of data elements into a set of segments in a destination memory. |
20130300900 | 13466593 | 0 | 1. A method for automated video analysis in a video analysis system, the method comprising: storing a database comprising training data for a micro-expression classifier, wherein the training data comprises spatio-temporal local texture reference descriptors for a plurality of spontaneous facial micro-expressions, wherein spatio-temporal local texture reference descriptors associated with a micro-expression describe temporal dynamic features in consecutive reference video frames containing the micro-expression; acquiring an image sequence comprising a plurality of video frames; detecting a face in the plurality of video frames; extracting dynamic facial features from the plurality of video frames by computing spatio-temporal local texture descriptors for the plurality of video frames, wherein the spatio-temporal local texture descriptors describe temporal dynamic features acquired through comparison of a plurality of consecutive video frames of the image sequence; and comparing the computed spatio-temporal local texture descriptors with the spatio-temporal local texture reference descriptors and determining on the basis of their similarity whether or not the image sequence comprises a micro-expression. |
8131552 | 11623955 | 1 | 1. A method for processing a multimedia event, comprising: separating text components from a multimedia data stream associated with the multimedia event to yield separated text components; generating a plurality of semantically coherent text blocks from the separated text components using an automated multimedia content indexing and retrieval system, wherein at least one semantically coherent text block is generated by merging disconnected text blocks; identifying a target speaker based on audio features in the multimedia data stream to yield an identified target speaker; deriving a topic for each text block of the plurality of semantically coherent text blocks based on a set of topic category models to yield derived topics; and generating a multimedia description of the multimedia event based at least on the identified target speaker, the plurality of semantically coherent text blocks, and the derived topics, wherein the multimedia description comprises at least a timeline representation having a plurality of layers showing multiple categorizations of the multimedia data stream for each instance of time. |
20170046428 | 14827051 | 0 | 1. A computer-implemented method for generating a synonym list from an existing thesaurus, the method comprising: preparing a first feature vector from a natural language query and preparing a second feature vector from a result of the natural language query; determining, using a processor, whether a combination of a first feature from the first feature vector and a second feature from the second feature vector is included as a synonym pair in the existing thesaurus; and generating the synonym list by adding the combination to the synonym list when the determination is positive. |
20160088341 | 14888444 | 0 | 1. A video reception device configured to transmit and receive data through a communication network, the video reception device comprising: an input unit configured to receive an input of a video signal, and content related information including feature information indicating a feature of the video signal; a video extraction unit configured to extract a partial video for video recognition processing, from the video signal; a video recognition region setting unit configured to set a video recognition region to the partial video based on the feature information; a control unit configured to perform control of transmitting content recognition information to a video recognition device connected to the communication network so as to request the video recognition device to perform the video recognition processing, obtaining a result of the video recognition processing from the video recognition device, and obtaining additional information based on the result of the video recognition processing from an additional information distribution device connected to the communication network; and an additional information display control unit configured to generate the content recognition information in the video recognition region of the partial video. |
10115392 | 12793113 | 1 | 1. A method for adjusting a voice recognition system to different surrounding acoustic noise levels and voice characteristics, wherein the voice recognition system comprises a speaker and a microphone, the method comprising the steps of: a) memorizing an audio frequency signal by the voice recognition system; wherein the audio frequency signal comprises any prompt output by the speaker or any audible signal output by the speaker and/or a prompt signal prompting for a user input signal and/or a welcome signal; wherein the audio frequency signal is recognized by the voice recognition system; b) playing back the audio frequency signal by means of the speaker, wherein after the playing back of the audio frequency signal, a user is prompted to provide the user input signal; c) detecting an acoustic effect of the playing back of the audio frequency signal by the microphone; d) generating a detection signal by the microphone based on the detected acoustic effect of the playing back of the audio frequency signal; e) automatically adjusting parameters of the voice recognition system upon detecting the generated detection signal and prior to the user operatively using the voice recognition system; and f) receiving the user input signal by the voice recognition system; wherein the user input signal comprises a first voice command; wherein the steps a) through f) are performed in sequential order; wherein the adjusting of parameters of the voice recognition system includes an adjustment of a gain of an amplifier associated to the microphone, speaker or both; wherein the voice recognition system is in a vehicle; and wherein the step of playing back the audio frequency signal is conducted during an initial use of the voice recognition system once the user enters the vehicle. |
7991129 | 10954387 | 1 | 1. A method for providing a service for delivering information over a communications network, comprising: maintaining a record identifiable by data representing a user, the record containing user defined voice preferences for delivering information in one of a plurality of automated voices, each of said user defined voice preferences being associated with one or more different services, aspects of a service or combination of services and aspects of a service respectively; receiving a call from the user, along with the data representing said user, through the communications network, the call including a request for the service for delivering selected information; determining the selected information in response to the request; retrieving the record based at least on the received data representing said user; determining from the record an automated voice preference associated with the requested service; and generating an automated voice in accordance with the voice preference assigned by said user to deliver the selected information from said service. |
8744091 | 12945698 | 1 | 1. A method for modifying intelligibility of speech in a downlink voice signal during a call, comprising: computing a current noise level estimate based on a) sampling ambient acoustic noise during the call, and b) a previously estimated noise level, by 1) calculating a delta noise based on the sampled ambient acoustic noise and based on the previously estimated noise level, 2) determining a slew rate, 3) calculating a slew delta by multiplying the slew rate and a noise sampling period, and 4) selecting the sampled ambient acoustic noise to be the current noise level estimate when the delta noise does not exceed the slew delta; determining an overall output gain based on the current noise level estimate and based on a user-selected volume setting; determining a frequency response based on the current noise level estimate and based on the user-selected volume setting; and modifying the downlink voice signal during the call in accordance with the overall output gain and the frequency response. |
7816579 | 12034248 | 1 | 1. A transgenic plant transformed with a recombinant polynucleotide comprising an isolated plant arsenate reductase coding sequence operatively linked to a plant-expressible transcription regulatory sequence; wherein the plant arsenate reductase coding sequence is greater than or equal to about 95% homologous with a sequence selected from the group consisting of SEQ ID NO:1, SEQ ID NO:2, SEQ ID NO:3, SEQ ID NO:4, and SEQ ID NO:21, wherein the plant arsenate reductase coding sequence encodes a polypeptide having arsenate reductase activity; and wherein the transgenic plant is resistant to a metal or metal ion. |
4661915 | 06289604 | 1 | 1. A speech recognition system comprising: means for analyzing digital speech data representative of an analog speech signal to generate perceived phonemes representative of component parts of said digital speech data; memory means having encoded digital speech data stored therein, said encoded digital speech data including phoneme codes representative of a plurality of respective reference phonemes, said memory means further having digital speech data stored therein representative of allophones analogous to said phoneme codes; means operably coupled to said analyzing means and to said memory means for selecting encoded digital speech data representative of a particular reference phoneme from said memory means as the closest match for each of said perceived phonemes of said digital speech data to provide a phoneme code at least approximating each of said perceived phonemes; and means operably coupled to said selecting means and said memory means for forming a phoneme code sequence of a plurality of said phoneme codes, said phoneme code sequence-forming means being responsive to said phoneme codes as determined by said selecting means to access digital speech data from said memory means representative of analogous allophones corresponding to said phoneme codes. |
9711141 | 14569517 | 1 | 1. A method for operating an intelligent automated assistant, the method comprising: at an electronic device with a processor and memory storing one or more programs for execution by the processor: receiving, from a user, a speech input containing a heteronym and one or more additional words; processing the speech input using an automatic speech recognition system to determine at least one of: a phonemic string corresponding to the heteronym as pronounced by the user in the speech input; and a frequency of occurrence of an n-gram with respect to a corpus, wherein the n-gram includes the heteronym and the one or more additional words; determining a correct pronunciation of the heteronym based on at least one of the phonemic string and the frequency of occurrence of the n-gram; generating a dialogue response to the speech input, wherein the dialogue response includes the heteronym; and outputting the dialogue response as a speech output, wherein the heteronym in the dialogue response is pronounced in the speech output according to the determined correct pronunciation. |
9351063 | 14025639 | 1 | 1. An ear plug system comprising: an ear plug comprised of a material suitable for hearing protection, and providing hearing protection when wedgingly inserted in an ear canal of a user, wherein a portion of the ear plug proximate to an eardrum of the user is shaped to hold the ear plug in place in the ear canal; a battery enclosed within the ear plug; a first wireless receiver and transmitter enclosed within the ear plug, distal to the ear canal, the first wireless receiver and transmitter electrically coupled to the battery, and receiving and transmitting at least one wireless signal; a speaker enclosed within the ear plug and electrically coupled to the battery and to the first wireless receiver and transmitter, and receiving at least one audio signal from the first wireless receiver and transmitter; a sound canal bore embedded within the ear plug, the sound canal bore extending from the speaker to an edge of the ear plug proximate to the eardrum, whereby the sound canal bore is open to the ear canal at the edge of the ear plug proximate to the eardrum; an on/off switch of the ear plug located within the sound canal bore, electrically coupled to the first wireless receiver and transmitter and configured for turning the ear plug on and off; a controller comprising a second wireless receiver and transmitter, the second wireless transmitter and receiver transmitting at least another wireless signal to the first wireless transmitter and receiver, and receiving at least one wireless signal from the first wireless transmitter and receiver; a program module communicatively coupled to the second wireless receiver and transmitter and including a user interface configured for controlling the at least one wireless signal from the second wireless receiver and transmitter to the first wireless receiver and transmitter. |
7860870 | 11755972 | 1 | 1. A computer-implemented method for detecting abnormal user behavior for a query session of an electronic search engine through determining clickstream data by tracking user click activities associated with a search results page generated in response to a user search request including a search term, the method comprising: electronically determining a conformance score for the clickstream data based on existing clickstream data for one or more similar query sessions, the conformance score representing normal user behavior; normalizing the conformance score that includes a plurality of tracked user transitions to generate a normalized conformance value based on an event count for the query session; mapping the clickstream characteristics onto a probability score using univariate or multivariate models; and comparing the probability score with the probability scores for the one or more similar query sessions to determine if the query session is abnormal. |
6128594 | 08913849 | 1 | 1. A process of voice recognition, comprising the steps of: performing a coarse recognition of acquired samples, said coarse recognition being based on searching a syntax base, supplying the N best phrases recognized after comparing results of said coarse recognition with stored acoustic references, determining phonetic components of said acquired samples by performing an acoustico-phonetic decoding on said acquired samples, choosing a most appropriate phrase from said N best phrases by comparing said N best phrases with models of probable dialogues and with said phonetic components, and updating said syntax base as a function of a history of most appropriate phrases chosen in said choosing step. |
9443511 | 13285971 | 1 | 1. A method for recognizing an environmental sound at a client device, the method comprising: accessing a client database including a plurality of sound models representing environmental sounds and a plurality of labels, wherein each of the plurality of labels identifies at least one of the plurality of sound models; receiving an input environmental sound and generating an input sound model based on the input environmental sound; determining similarity values between the input sound model and the plurality of sound models to identify one or more sound models of the plurality of sound models that are similar to the input sound model; selecting a first label from one or more labels, of the plurality of labels, associated with the one or more sound models; associating the first label with the input environmental sound based on a confidence level of the first label; and if the confidence level is less than a confidence threshold: transmitting the input sound model to a server; and receiving a second label identifying the input environmental sound from the server. |
4767335 | 07077173 | 1 | 1. An academic quizzer system comprising: a first plurality of response stations and a second plurality of response stations, each of said response stations being identical and interchangeable and including a switch and an indicator light activated by pressing said switch, and first and second female portions of modular jacks for connecting to telephone wire, said first plurality of response stations being connectable by means of telephone wire and said modular jacks into a first series string of response stations and said second plurality of response stations being connectable by means of telephone wire and said modular jacks into a second series string of response stations; a control console including a microprocessor, a timer, a plurality of indicators, one for each of said response stations, a start switch and a mode switch, said master console including a first and second female portions of modular jacks for connecting to telephone wire, said first and second strings of response stations being connectable respectively to said first and second female portions of said modular jacks of said master console by means of telephone wire, said microprocessor being responsive to the switches of said first and second pluralities of response stations and to said start switch and said mode switch of said control console and being operative to control said system in a plurality of modes of operation including a learn mode which, upon power up, recognizes the configuration of the system including the locations of each of the individual response stations, a directed question mode in which contestants from one or the other of two teams are asked a question, the start switch of said master console is pressed and a first predetermined period of time is counted, and a toss-up question mode in which any contestant from either team may press their switch in their response station and a second predetermined period of time less than said first predetermined period of time is counted, said microprocessor further being operative to lockout all other contestants but the first to press their switch or to lockout all contestants of one team when a question is answered incorrectly by a contestant of that team. |
8909025 | 13427610 | 1 | 1. A computer program product embodied in a non-transitory computer-readable medium, the computer program product comprising an algorithm adapted to effectuate a method for analyzing visual events comprising: selecting a plurality of visual events in a visual recording, wherein a visual event is a local visual feature occurring over a plurality of video frames; for one or more occurrences of the plurality of visual events, representing an occurrence of a visual event as a point process, to create a plurality of point processes; constructing a non-parametric representation of the plurality of point processes; and identifying, between pairs of point processes, causal sets providing evidence of causal relationships between the pairs of point processes; wherein the non-parametric representation of the plurality of point processes is an estimate of a cross-spectral density function. |
20020031269 | 09947696 | 0 | 1. A named entity discriminating system for detecting named entities composed of location names, personal names and organization names in text, comprising: a reading means for reading text from a hypertext database; a single text analyzing means for analyzing the text read by the reading means to detect candidates for the named entity in the text; and a complex text analyzing means for estimating the likelihood of the candidate named entity detected by the single text analyzing means by an analysis with reference to referring link text and/or linked text of the text in which the candidate named entity appears. |
20130183944 | 13369129 | 0 | 1. A home automation control system comprising: a first control device configured to receive a voice command from a user and to transmit the received voice command as a control signal over a network communication medium; and a second control device configured to receive the control signal via the network communication medium and to perform an action in response to the received control signal. |
20060155544 | 11033075 | 0 | 1. A method of developing a unit inventory for use by a text to speech system, comprising: identifying a list of phones for a target language; receiving a lexicon containing phonetic transcriptions of a plurality of words having a plurality of syllables; identifying a set of common multi-phone atom units for the lexicon; and adding the set of common multi-phone atom units to the unit inventory for the target language. |
9318027 | 14491344 | 1 | 1. A method, in a data processing system comprising a processor and a memory, for answering an input question, the method comprising: receiving, in the data processing system, an input question to be answered from a source; processing, by the data processing system, the input question to extract one or more features of the input question; comparing, by the data processing system, the extracted one or more features to cached features stored in one or more entries of a question and answer (QA) cache of the data processing system; determining, by the data processing system, whether there is a matching entry in the one or more entries of the QA cache based on results of the comparing, wherein determining whether there is a matching entry in the one or more entries of the QA cache comprises: generating, for each entry in the QA cache, a match value indicative of a degree of matching between the one or more extracted features of the input question to cached features of the entry in the QA cache; and comparing the match value to one or more threshold values indicating one or more requisite degrees of similarity between the input question and an entry in the QA cache, wherein: in response to the match value equaling or exceeding a first threshold value, a corresponding entry is determined to match the input question, and in response to the match value being less than the first threshold value but the match value being equal to or greater than a second threshold value, determining that the corresponding entry is sufficiently similar for updating the corresponding entry with the one or more extracted features of the input question; retrieving, by the data processing system, in response to a matching entry being present in the one or more entries of the QA cache, candidate answer information from the matching entry; and returning, by the data processing system, the retrieved candidate answer information to the source of the input question as candidate answer information for answering the input question. |
20080021721 | 11780241 | 0 | 1. A method, comprising: determining whether a searcher is available for training; and training a selected searcher with a training query. |
9542491 | 13715173 | 1 | 1. One or more computer-readable storage hardware device storing computer-useable instructions that, when used by one or more computing devices, cause the one or more computing devices to utilize keystroke logging to determine items for presentation, the instructions configured to: receive a search query including submitted content and keystroke logging information, the keystroke logging information being captured between engagement with a search query input region and execution of a search query; and determine at least one item for presentation in response to the search query based, at least in part, on the keystroke logging information, the at least one item comprising a search result, the determining comprising: ranking a plurality of potential search results in response to the search query based, at least in part, on the keystroke logging information; and determining the at least one item for presentation based on the ranking. |
20150025909 | 14214753 | 0 | 1. A method for the display of radiology, clinical, pathology, and laboratory reports in a graphical or tabular format, the method comprising the steps of: optimizing a keyword search within a keyword searchable database; restructuring and parsing text data in the database; creating and applying a natural language processing engine to the database; and applying a comprehensive automated analyzer to the natural language processed database, wherein a result from the automated analyzer can then be viewed in the graphical or tabular format on one of a computer or hand-held device. |
9646604 | 13621086 | 1 | 1. A method to adjust an automatic speech recognition (ASR) engine, comprising: receiving, by a social media gateway of a contact center, social network information from a social network; modifying, by the social media gateway of the contact center, the social network information, wherein the modifying comprises filtering the social network information and redacting the filtered social network information based on a relevancy of the social network information to the ASR engine; data mining, by a dialog engine of the contact center, the modified social network information to extract one or more characteristics; inferring, by the dialog engine of the contact center, a trend from the extracted one or more characteristics; adding, by the dialog engine of the contact center, one or more words or phrases related to the trend to a recognition grammar of the ASR engine; calculating, by the dialog engine of the contact center, a magnitude of adjustment to weights of the added one or more words or phrases in the recognition grammar of the ASR engine based upon a shaped sliding window, and adjusting, by the dialog engine of the contact center, the ASR engine by adjusting a speech recognition weighting of the ASR engine based upon the calculated magnitude of adjustment, wherein the adjustment to the speech recognition weighting of the ASR engine has a limited duration. |
20100161596 | 12344093 | 0 | 1. A computer-implemented method of subspace learning for ranking, the method comprising: learning from a plurality of labeled queries; applying the learning from the labeled queries for ranking unranked documents; obtaining, via the applying, a learned latent semantic space (LSS) for ranking unranked documents; providing ranking for unranked documents in the learned LSS; reporting the learned LSS including the unranked documents based at least in part on the ranking. |
20040123248 | 10467212 | 0 | 1. A text input support system for supporting input of text containing plural words separated from each other with spaces, the system comprising: a keyboard for entering a character string; a display device for displaying the entered character string and text; a storage device for storing a program and data; a processor for processing the character string entered by the keyboard in accordance with the program; and a dictionary registering all words that can be used for the text input and being stored in the storage device as the data; wherein the processor compares the entered character string with words registered in the dictionary one by one character from a leading character, memorizes a word that is identical to the entered character string, makes the word that was identical to the entered character string last time a fixed character string when it is decided there is no possibility that the entered character string will be identical to a word registered in the dictionary, and adds a space after the fixed character string so as to separate the character string from a subsequent input character string. |
20070118379 | 11653288 | 0 | 1. A speech decoding method according to code-excited linear prediction (CELP) wherein the speech decoding method receives a speech code and synthesizes a speech using an excitation codebook and an adaptive codebook, the speech decoding method comprising: decoding the speech code and obtaining power information which is used for weighting a time series vector outputted from the adaptive codebook; obtaining a time series vector with a number of samples with zero amplitude-value from the excitation codebook; determining whether modification of the time series vector is necessary according to the power information; if modification is determined to be necessary, modifying the time series vector such that the number of samples with zero amplitude-value is changed; outputting the time series vector; and synthesizing a speech using the outputted time series vector. |
10154812 | 15219255 | 1 | 1. A method for affecting living tissue comprising: receiving at least one signal from at least one read modality, the signal representing at least one parameter or measurement of living tissue; computing at least one signal to effect alterations to the living tissue based on the received input signal; and delivering the at least one computed signal to the living tissue through at least one write modality to effect alterations to the living tissue. |
20150095011 | 14493533 | 0 | 1. A speech translation system comprising: a first terminal device comprising a first speech input for inputting a first speech of a first language spoken by a first user, and converting the first speech to a first speech signal; a second terminal device comprising a second speech input for inputting a second speech of a second language spoken by a second user, and converting the second speech to a second speech signal; a speech recognition device that receives the first speech signal and the second speech signal, recognizes the first speech signal to a first recognized text, and recognizes the second speech signal to a second recognized text; a machine translation device that receives the first recognized text and the second recognized text, translates the first recognized text to a first translated text of the second language, and translates the second recognized text to a second translated text of the first language; and a control device; wherein the first terminal device receives (a) a first text set of the first language being the first recognized text and the second translated text, and (b) a second text set of the second language being the second recognized text and the first translated text, and comprises a first display unit that displays the first text set and the second text set; and the second terminal device receives at least one text of the second text set, and comprises a second display unit that displays the at least one text of the second text set. |
8275618 | 11926938 | 1 | 1. A method of speech recognition on a mobile device comprising: presenting with the mobile device for a speech input containing a plurality of spoken words, a recognized word display representing one or more most likely recognition hypotheses corresponding to a current one of the spoken words; performing a recognition verification process wherein recognition of each spoken word is verified by a user input action, the verification process including either: receiving from the user a key input verifying one of the displayed recognition hypotheses as correct, or receiving from the user: i. a first key input representative of one or more associated letters which limits the recognition hypotheses presented in the recognized word display to a limited set of one or more recognition hypotheses starting with the one or more letters associated with the key input, and ii. accepting a second key input verifying one of the recognition hypotheses presented in the recognized word display as a corrected speech recognition result; and repeating the recognition verification process for a next word in the speech input until all the spoken words have been processed. |
20150030149 | 14339244 | 0 | 1. A method for conducting a conference, comprising: buffering audio of each of a plurality of endpoints in the conference with an audio delay; leveling the audio of each of the endpoints in the conference with a fader; detecting speech in the audio of any one of the endpoints in the conference; controlling the audio delay and the fader for each of the endpoints based on the detection of the speech; and outputting a mix of the audio of the endpoints in the conference based on the control. |
20050053142 | 10933956 | 0 | 1. A method comprising: checking a hybrid motion vector prediction condition based at least in part on a predictor polarity signal applicable to a motion vector predictor; and determining the motion vector predictor. |
7478047 | 10415851 | 1 | 1. A method for controlling the voice of a synthetic character, which is autonomous and interacts with others in a shared environment, comprising: providing speech data corresponding to at least a part of an intended communication generated by the character; creating modified speech data by modifying, by an automatically determined amount, at least one of the pitch or duration of at least a portion of the speech data; generating speech sounds associated with the character using the modified speech data; and associating a representation of an emotional state with the character; and wherein creating modified speech data comprises modifying the speech data based on the emotional state representation and wherein the emotional state representation is determined at least in part by interaction of the character with others in a shared environment. |
9894266 | 14788226 | 1 | 1. An apparatus for cognitive recording and sharing of live events comprising: a processing unit; a recording device to record a live event; one or more sensors, each configured for obtaining a biometric signal data from an individual; a transmitting device for communicating the one or more biometric signals for receipt at the processing unit, the processing unit configured to: obtain a biometric signature of the individual based on a received biometric signal data; receive, from devices of one or more other individuals in proximity to the individual, a signal representing one or more of: a recognized emotional state of, a biometric signature of, and a determined precognition input of the one or more other individuals in proximity to the individual; determine the individual's current emotional state based on the signature in combination with the signals received from the devices of said one or more other individuals in proximity to the individual; and record the live event by said recording device in response to said determined emotional state. |
20130125094 | 13678168 | 0 | 1. A method for assisting with software programming, comprising: receiving, via a user interface device, user input in an imprecise syntax, the user input indicating an instruction in a precise syntax, wherein an application, when executed by one or more computing devices, is configured to evaluate instructions in the precise syntax; after receiving the user input in the imprecise syntax, displaying, on a display device, the user input in the imprecise syntax in a workspace, wherein the workspace is for entering instructions to be evaluated by the application; determining, with one or more computing devices, the instruction in the precise syntax based on the user input in the imprecise syntax; after determining the instruction in the precise syntax, including, with one or more computing devices, the instruction in the precise syntax in the workspace such that the application executed by one or more computing devices can evaluate the instruction in the precise syntax, wherein including the instruction in the precise syntax in the workspace comprises simultaneously displaying, on the display device, the user input in the imprecise syntax and the instruction in the precise syntax on the workspace; and after including the instruction in the precise syntax in the workspace, evaluating, with the application executed by one or more computing devices, the instruction in the precise syntax. |
20130010947 | 13618819 | 0 | 1. A method comprising: retrieving first data associated with a call from an action-object table; retrieving second data from a resolution table based on the first data, wherein the second data indicates a treatment type to be provided to the call; and in response to determining that the treatment type indicates a particular treatment type, servicing the call with the particular treatment type, wherein the particular treatment type includes routing the call to a destination associated with a call center. |
20040209640 | 10418775 | 0 | 1. A communications system, comprising: a first switch connected to a calling party communications device, the calling party communications device communicating with a caller identification messaging device, the caller identification messaging device transmitting a caller identification messaging signal comprising a caller identification message and at least one of (i) an identifier of a calling party, (ii) an identifier of a destination communications address, and (iii) an identifier of the calling party communications device; a second switch connected to a receiving party communications device; and a communications network connecting the first switch to the second switch, the communications network operable to process an incoming line identification (ICLID) signal and the caller identification messaging signal and operable to transmit only the caller identification message to the destination communications address, the communications network further operable to establish a voice connection to the destination communications address and to transmit at least one of the caller identification messaging signal and caller identification message, wherein the communications network comprises at least one of a public switched telephone network and a mobile switching telephone communications network. |
4875187 | 07078945 | 1 | 1. Apparatus for generating a flow chart consisting of boxes, joined by connecting links, the apparatus comprising: A. a data processing unit; B. an input device connected to said data processing unit; and C. a display device which is also connected to said data processing unit and which has vertical and horizontal axes; the data processing unit including means for causing display on the display device of a plurality of boxes and means for accepting from the input device data identifying a starting box and an end box, the processing unit further comprising means for causing display on the display device of a connecting link between the starting box and the end box, said data processing unit also including: (a) means for defining a link start point associated with the starting box, and a link end point associated with the end box; (b) means for calculating whether first and second lines joining the link start point and respective first and second intermediate target points are obstructed by a non-permitted obstacle, said lines being parallel respectively to the horizontal and vertical axes of the display device, and having a length equal respectively to horizontal and vertical displacement of the link end point from the link start point, such that said intermediate target points may be joined to the link end point by respectively third and fourth lines parallel respectively to the vertical and horizontal axes; (c) means for determining when no such obstacle is discovered, whether the said third and fourth lines joining said intermediate target points, and the link end point, are obstructed by non-permitted obstacles, and for generating when no obstacle is encountered to the third and fourth lines, a connecting link on the display device, composed of the first and third, or of the second and fourth said lines; (d) means for determining, when either said first and said second lines are obstructed, whether a channel exists which is not obstructed by non-permitted obstacles, said channel joining the respective first or second line, and a line parallel thereto, and displaced therefrom by respectively vertical or horizontal displacement of the link start point from the link end point; and (e) means for storing a value to indicate the first and second lines respectively to be allowable, in accordance with whether or not a corresponding channel is found to exist for said lines respectively, and means for defining a junction of the channel with the respective first or second line as a new start point, in construction of a desired connecting link. |
9734830 | 14981636 | 1 | 1. A handheld portable electronic device comprising: a plurality of microphones to pick up speech of a user, including a first microphone differently positioned than a second microphone; a first processor communicatively coupled with the plurality of microphones and having an activated state and a deactivated state; and a second processor communicatively coupled with the first processor and the plurality of microphones, the second processor configured to: receive a plurality of audio signals from the plurality of microphones; process each of the audio signals to recognize a command in any one of the audio signals; and signal the first processor to transition from the deactivated state to the activated state in response to recognizing the command in one or more of the audio signals. |
7711547 | 10281997 | 1 | 1. A method for associating words and word strings in a language comprising: providing a collection of documents, wherein said collection includes at least one document; receiving from a user a word or word string query to be analyzed; searching, by a processor, said collection of documents for the query to be analyzed and returning documents containing the query to be analyzed; determining a user-defined amount of words or word strings or both to the left of said query to be analyzed in said returned documents based on their frequency and creating a Left Signature List comprising each of said words and word strings to the left of said query to be analyzed in said returned documents; searching said collection of documents for the words and word strings on the Left Signature List and returning documents containing said words or word strings on the Left Signature List; determining a user-defined amount of words or word strings or both to the right of each of said words and word strings comprising said Left Signature List and creating a Left Anchor List comprising each of said words and word strings to the right of each of said words and word strings on the Left Signature List based on their frequency in a collection of documents; determining a user-defined number of words or word strings or both to the right of said query to be analyzed in said returned documents and creating a Right Signature List comprising each of said words and word strings to the right of said query to be analyzed in said returned documents based on their frequency; searching said collection of documents for each of said words and word strings on the Right Signature List and returning documents containing said words and word strings on the Right Signature List; determining a user-defined number of words or word strings or both to the left of each of said words and word strings comprising said Right Signature List and creating a Right Anchor List comprising each of said words and word strings to the left of each of said words and word strings on the Right Signature List based on their frequency; and ranking results based on the number of different Anchor Lists on which the result appears. |
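Record 7711547 above outlines a concrete procedure: build Left/Right Signature Lists from a query's frequent neighbors, derive Anchor Lists from those signatures, and rank candidates by how many Anchor Lists contain them. The sketch below is a rough, single-word-query approximation of that flow; the window width, list sizes, and frequency cut-offs are assumptions rather than the patent's parameters.

```python
# Rough sketch of the Left/Right Signature and Anchor list idea described above,
# restricted to single-word queries; window width, list sizes, and frequency
# cut-offs are assumptions rather than the patent's parameters.
from collections import Counter

def neighbors(docs, word, side="left", top_n=20):
    """Most frequent words appearing immediately to one side of `word`."""
    counts = Counter()
    for doc in docs:
        tokens = doc.split()
        for i, tok in enumerate(tokens):
            if tok != word:
                continue
            if side == "left" and i > 0:
                counts[tokens[i - 1]] += 1
            elif side == "right" and i + 1 < len(tokens):
                counts[tokens[i + 1]] += 1
    return [w for w, _ in counts.most_common(top_n)]

def rank_associations(docs, query):
    left_sig = neighbors(docs, query, "left")     # Left Signature List
    right_sig = neighbors(docs, query, "right")   # Right Signature List
    # Anchor lists: words to the right of left signatures, and to the left of right signatures.
    anchor_lists = [neighbors(docs, w, "right") for w in left_sig] + \
                   [neighbors(docs, w, "left") for w in right_sig]
    scores = Counter()
    for lst in anchor_lists:
        for w in set(lst):
            scores[w] += 1   # rank by the number of different Anchor Lists containing the word
    return scores.most_common()
```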
20100287048 | 12839065 | 0 | 1. A computer-implemented method for presentation of advertising in an application on a mobile communication facility, the method comprising the steps of: (a) receiving a request for sponsored content from a mobile communication facility operated by a user, wherein the request is made in association with the application running on an operating system of the mobile communication facility; (b) selecting a sponsored content based on one or more predefined hardware or software characteristics of the mobile communication facility, wherein the one or more predefined hardware or software characteristics are required to view the sponsored content on the mobile communication facility; and (c) transmitting the selected sponsored content to the mobile communication facility for display within the application during the running thereof. |
20100277416 | 12837338 | 0 | 1. A text entry input system, comprising: a direction selector to individually point in a direction of letters to collectively form an intended linguistic object, where each letter comprises a linguistic object subcomponent; a collection of linguistic objects; an output device with a text display area; a processor, comprising: a difference calculation module configured to output, for each act of pointing, various letters based upon factors including at least a vector difference between an actual direction indicated by the directional selector and pre-assigned directions of said letters; an object search engine configured to construct at least one predicted linguistic object based on the output letters; and a selection component to facilitate user selection of a desired linguistic object. |
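Record 20100277416 above centers on scoring letters by the vector (here, angular) difference between the pointed direction and each letter's pre-assigned direction, then searching a lexicon for matching words. The sketch below illustrates one possible reading; the letter-direction layout and the summed-error ranking are assumptions.

```python
# One possible reading of the direction-based letter selection described above;
# the letter-direction layout and the summed angular-error ranking are assumptions.
def letter_directions():
    """Hypothetical pre-assigned direction (degrees) for each lowercase letter."""
    return {chr(ord('a') + i): i * (360.0 / 26) for i in range(26)}

DIRECTIONS = letter_directions()

def angular_error(pointed_deg, letter):
    """Smallest angular difference between the pointed and assigned directions."""
    diff = abs(pointed_deg - DIRECTIONS[letter]) % 360.0
    return min(diff, 360.0 - diff)

def predict(pointings, lexicon, top_n=3):
    """Rank lexicon words (lowercase a-z) by total angular error over their letters."""
    scored = []
    for word in lexicon:
        if len(word) != len(pointings):
            continue
        scored.append((sum(angular_error(p, ch) for p, ch in zip(pointings, word)), word))
    return [w for _, w in sorted(scored)[:top_n]]
```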
8416926 | 11691290 | 1 | 1. A method, comprising: providing a list of a plurality of users of a network and respective presence information regarding each of the plurality of users; prior to establishing a communication connection between an endpoint and a particular user of the plurality of users, receiving a request from the endpoint to receive an audio representation of a name of the particular user of the plurality of users; providing the audio representation to the endpoint; and after providing the audio representation to the endpoint, receiving a request from the endpoint to establish the communication connection with the particular user; wherein the audio representation of the name at least generally approximates a pronunciation of the name as pronounced by the particular user; and wherein receiving a request from the endpoint to receive an audio representation of a name of the particular user comprises receiving notification that an icon associated with the particular user has been selected from a plurality of icons. |
9948788 | 15440908 | 1 | 1. A method comprising: a multi-tenant telecommunication platform system performing a machine learning process to automatically generate a telephony fraud rule set that includes a plurality of telephony fraud rules, wherein the machine learning process generates the telephony fraud rule set from stored telephony fraud scenario data for at least one telephony fraud scenario that has occurred, and wherein each generated telephony fraud rule includes a usage pattern that, when matching at least a portion of the telephony fraud scenario data, sets a telephony fraud score that indicates occurrence of a telephony fraud scenario that corresponds to the portion of the telephony fraud scenario data; the platform system storing the telephony fraud rule set; the platform system receiving a request to create a first parent account from an external first application developer system via one of an API of the platform system and a user interface of the platform system; the platform system creating the first parent account for the first application developer system; the platform system receiving a request to create a first sub-account of the first parent account from the first application developer system via the API; the platform system creating the first sub-account for the first parent account; the platform system receiving a request to create a second sub-account of the first parent account from the first application developer system via the API; the platform system creating the second sub-account for the first parent account; the platform system receiving a first usage request from the first application developer system via the API, wherein the first usage request is a request of the first sub-account; the platform system generating first usage data responsive to processing the first usage request, wherein the first usage data corresponds to illicit use of the platform system by the first sub-account; the platform system determining each telephony fraud rule of the telephony fraud rule set that matches at least the first usage data; for each matching telephony fraud rule, the platform system assigning the telephony fraud score associated with the telephony fraud rule to the first sub-account; the platform system determining a sum of all telephony fraud scores assigned to the first sub-account; the platform system determining whether the sum is above a first telephony fraud score threshold; and responsive to a determination that the sum is above the first telephony fraud score threshold, the platform system performing a first fraud action. |
7779029 | 11449587 | 1 | 1. A machine-readable medium comprising: code for defining a database query based on at least one markup language element representing said database query, said code including instructions for causing said database query to be performed and instructions for storing a result of said database query; code for displaying a user interface screen having a display element that is based at least in part upon said result of said database query, said user interface screen being based on at least one markup language element representing said user interface screen; code for refreshing said user interface screen based on a markup language element representing said refreshing of said user interface screen that is distinct from and references said markup language element representing said user interface screen; and code for refreshing said database query based on a markup language element representing said refreshing of said database query that is distinct from and references said markup language element representing said database query, wherein said code for refreshing said user interface screen and said code for refreshing said database query are each independently executable based upon a user specification of a respective one of said markup language element representing said refreshing of said user interface screen and said markup language element representing said refreshing of said database query. |
9984675 | 13956335 | 1 | 1. A computer-implemented method, comprising: receiving, by a beamformer of a device, and from one or more physical microphones, a plurality of audio signals; generating, by the beamformer, and based on the plurality of audio signals, first and second beamforms that respectively correspond to first and second virtual microphones, the first virtual microphone being configured to receive data corresponding to audio control commands, and the second virtual microphone being configured to receive data that is recorded or transmitted by the device; determining, by a voice command recognition module that is operatively coupled to the beamformer of the device, that audio data received through the first virtual microphone corresponds to a command directly specifying an adjustment to an attribute of a second virtual microphone; and responsive to determining that the audio data received through the first virtual microphone corresponds to the command directly specifying the adjustment to the attribute of the second virtual microphone, adjusting the attribute of the second virtual microphone corresponding to the second beamform generated by the beamformer, wherein the attribute of the second virtual microphone comprises at least one non-directional audio attribute of the second virtual microphone. |
20170154628 | 14954810 | 0 | 1. A method for processing a natural language query, the method comprising: receiving the natural language query for the application; a natural language processor using one or more natural language modules to interpret the natural language query for the application; and for each of the one or more natural language modules: calculating a charge for processing the natural language query, the charge determined in accordance with a pricing model defined for the natural language module. |
20110144989 | 12638583 | 0 | 1. A computer-implemented method of sending a spoken message as a text message, the method causing a computing device to perform steps comprising: initiating a connection with a first subscriber; receiving from the first subscriber a spoken message and spoken information associated with at least one recipient address; converting the spoken message to text via an audible text center subsystem (ATCS); and delivering the text to the recipient address. |
8041026 | 11350180 | 1 | 1. A method for canceling noise, comprising: monitoring an internal bus in a computing device for a signal associated with an event; detecting on the internal bus of the computing device a first signal associated with a first event, wherein the first signal is not an audio signal; correlating said first event to an associated noise, wherein the first signal associated with the first event is not a representation of the noise associated with the first event; providing an input identifying the noise correlated with said first event to a noise cancellation process, wherein the input does not include the detected first signal; in response to said input, selecting a noise cancellation procedure; and implementing said selected noise cancellation procedure by applying the selected noise cancellation procedure to an audio input signal that includes noise and voice information received by a microphone, wherein the audio input signal received by the microphone does not include an audible representation of the detected first signal. |
20150032448 | 13950299 | 0 | 1. A method for expansion of search queries on large vocabulary continuous speech recognition transcripts comprising: obtaining a textual transcript of audio interaction generated by the large vocabulary continuous speech recognition; generating a topic model from the textual transcripts; said topic model comprises a plurality of topics wherein each topic of the plurality of topics comprises a list of keywords; obtaining a search term; associating a topic from the topic model with the search term; and generating a list of candidate term expansion words by selecting keywords from the list of keywords of the associated topic; said candidate term expansion words are of high probability to be substitution errors of the search term that are generated by the large vocabulary continuous speech recognition. |
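Record 20150032448 above expands a search term over LVCSR transcripts by associating it with a topic and proposing that topic's other keywords as likely substitution errors. The sketch below uses a toy keyword table in place of a trained topic model; the topic-association rule and the expansion limit are assumptions.

```python
# Minimal sketch of topic-based query expansion over LVCSR transcripts as
# described above; the toy keyword table stands in for a trained topic model,
# and the association rule and expansion limit are assumptions.
def expand_query(term, topics, max_expansions=5):
    """Associate the term with a topic, then return that topic's other keywords
    as candidate substitution-error expansions."""
    for topic in topics:                      # topics: list of {"keywords": [...]}
        if term in topic["keywords"]:
            return [w for w in topic["keywords"] if w != term][:max_expansions]
    return []

# Example: a search for "billing" gets expanded with topically related (and
# potentially mis-recognized) keywords from the same topic.
topics = [{"keywords": ["billing", "bill", "filling", "invoice", "payment"]}]
print(expand_query("billing", topics))        # ['bill', 'filling', 'invoice', 'payment']
```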
8831952 | 13447578 | 1 | 1. A voice input device for receiving a voice input from a user and for executing an operation command based on the voice input, comprising: a mastery level identifying device identifying a mastery level of the user with respect to the voice input; and an input mode setting device switching and setting a voice input mode between a guided input mode and an unguided input mode, wherein, in the guided input mode, preliminary registered contents of the voice input are presented to the user, wherein, in the unguided input mode, the preliminary registered contents of the voice input are not presented, wherein the input mode setting device sets the voice input mode to the unguided input mode at a starting time when the voice input device starts to receive the voice input, wherein the input mode setting device switches the voice input mode from the unguided input mode to the guided input mode at a switching time, and wherein the input mode setting device sets a time interval between the starting time and the switching time in proportion to the mastery level; the voice input device further comprising: a switching timing storing device storing switching timing information, wherein the switching timing information represents a relationship between the mastery level and the time interval, and wherein the input mode setting device switches the input mode based on the switching timing information. |
5444823 | 08325541 | 1 | 1. A system for accessing a topic stored as part of an on-line body of documentation, comprising: a questionless case-based knowledge base, said questionless case-based knowledge base comprised of means for storing a series of questionless case structures in memory, each said questionless case structure comprised of first, second and third fields, said first field containing a topic name, said second field containing a description of said topic named in said first field, and said third field containing a pointer which provides a path to said second field; a user interface, said user interface comprising means for inputting, as a series of alpha-numeric characters, a natural language description of said topic; a search engine coupled to said user interface and said questionless case-based knowledge base, said search engine comprising: means for performing a search of said questionless case-based knowledge base each time one of said series of alpha-numeric characters is entered into said search engine, via said user interface, as an input; means for identifying, each time one of said series of alpha-numeric characters is entered into said search engine as an input, said questionless case structures which potentially contain said topic, said questionless case structures identified based upon all of said alpha-numeric characters previously entered into said search engine as inputs; and means for selecting said questionless case structure in which said topic is stored; wherein said means for identifying, each time one of said series of alpha-numeric characters is entered into said search engine as an input, said questionless case structures which potentially contain said topic further comprises: means for identifying said questionless case structures having at least one word, or a portion thereof, in said first or second fields, which matches said series of alpha-numeric characters previously entered into said search engine as inputs; means for identifying said questionless case structures having at least one word, or a portion thereof, in said first or second fields, which matches three consecutive alpha-numeric characters of said series of alpha-numeric characters previously entered into said search engine as inputs; and means for identifying said questionless case structures having, in said first or second fields, a numeric representation which falls within a pre-determined value of a numeric representation forming part of said series of alpha-numeric characters previously entered into said search engine as inputs. |
9805023 | 15016500 | 1 | 1. A device comprising: at least one processor; a display accessible to the at least one processor; and storage accessible to the at least one processor and bearing instructions executable by the at least one processor to: store a first phrase from a sent message for presentation of the first phrase again during a subsequent composition of a second message, wherein the first phrase from the sent message that is stored comprises a variable when stored that will be replaced in the second message with at least one character for a particular recipient during composition of the second message; and identify the first phrase for presentation during composition of the second message; and present, on the display and during composition of the second message, the first phrase. |
9048798 | 14015390 | 1 | 1. A method of controlling an output of a hearing aid worn by a wearer, comprising: receiving an input audio signal from a microphone; receiving a facial movement indication from a facial movement detector measured contemporaneously with the input audio signal; determining whether the facial movement indication matches a first movement pattern associated with the wearer speaking; applying a first gain profile to the input audio signal for generating an augmented audio segment in response to determining that the facial movement indication matches the first movement pattern associated with the wearer speaking; applying a second gain profile to the input audio signal for generating the augmented audio segment in response to determining the facial movement indication does not match the first movement pattern associated with the wearer speaking; and outputting the augmented audio segment. |
4749353 | 06877731 | 1 | 1. A talking electronic learning aid comprising: memory means for storing digital data therein including digital speech data from which synthesized speech in a human language may be derived concerning a plurality of requests in synthesized human speech for an operator to spell respective words in a human language, the appropriate operator responses comprising the correct spelling of the respective words, and comments reflecting upon the appropriateness of responses made by an operator as proposed spellings corresponding to the respective requests to spell respective words; control means operably associated with said memory means for selecting a word spelling problem derivable from digital speech data stored in said memory means; speech synthesizer means operably associated with said control means and said memory means for generating analog signals representative of human speech from digital speech data stored in said memory means and corresponding to the selected word spelling problem as selected by said control means; audio means coupled to said speech synthesizer means for converting said analog signals into audible human speech for audibly requesting the operator to spell the word selected by said control means; operator input means for receiving an input from the operator indicative of a proposed spelling of said selected word spelling problem as presented audibly; said control means including comparator means operably associated with said operator input means and said memory means for determining the appropriateness of the input received by said operator input means from the operator with respect to said word spelling problem selected by said control means and providing an output indicative thereof; said operator input means including at least speech input means for translating operator generated speech into speech synthesis control data and for receiving operator generated characters associated with the operator generated speech; and said memory means being operably coupled to said at least speech input means for storing said speech synthesis control data and said operator generated characters corresponding thereto as generated by an operator in the form of digital speech data from which respective words and the correct spelling thereof as input by said operator via said at least speech input means may be derived for subsequent testing of the spelling skills of the operator or another person. |
RE41080 | 11592316 | 1 | 1. An item locator system having both voice activation and voice responsive capabilities for location feedback to locate one or more specific goods in a retail store facility, said system comprising: a.) a support structure, for physically supporting said system at one or more locations, and functionally containing or connected to the following components: b.) a continuous speech recognition digital signal processor (DSP), wherein said continuous speech recognition DSP utilizes tokens of raw acoustic signals representing utterances or words and matches these against a set of models and then relies upon likelihood to select a most likely model to decode signals for interpretation; c.) a programmable microprocessor coupled to said speech recognition DSP, d.) sufficient programming and circuitry contained within said programmable microprocessor to provide for voice activation and voice recognition and audio and/or visual response to provide item location to a user wherein item and location data are defined by manager input to said system; e.) voice input means coupled to said speech recognition DSP; f.) at least one memory storage means coupled to said programmable microprocessor and/or said DSP, for storage of operational inputs, control inputs, and a voice recognition vocabulary for storage of command match and execute functions; and g.) at least one user feedback unit coupled to said programmable microprocessor, said at least one user feedback unit adapted to provide feedback selected from the group consisting of audio feedback, visual feedback and combinations thereof, to said user in response to an item location query. |
10141001 | 15636501 | 1 | 1. An apparatus comprising: an audio coder input configured to receive an audio signal; a first calculator configured to determine a long-term noise estimate of the audio signal; a second calculator configured to determine a formant-sharpening factor based on the determined long-term noise estimate; a filter configured to filter a codebook vector based on the determined formant-sharpening factor to generate a filtered codebook vector, wherein the codebook vector is based on information from the audio signal; and an audio coder configured to: generate a formant-sharpened low-band excitation signal based on the filtered codebook vector; and generate a synthesized audio signal based on the formant-sharpened low-band excitation signal. |
9666178 | 13915247 | 1 | 1. A device for aiding communication in the aeronautical domain, said device comprising: at least one microphone in an aircraft; at least one loudspeaker in the aircraft; a display screen in the aircraft, and a transceiver and data processor assembly in the aircraft and configured to process and transmit data relating to audio communications to and from the aircraft, which is connected to said microphone and to said loudspeaker and which is configured to emit outgoing audio communications and to receive incoming audio communications, wherein the transceiver and data processor assembly is configured to: record audio messages corresponding to the incoming and outgoing audio communications; transcribe, in real time, said audio messages into textual messages; display, on the display screen, the textual messages in a table, each textual message being displayed associated with an indicator indicating a source of the textual message and an emission time of its corresponding audio message, wherein the table includes a first array of cells each for a source of each of the textual messages, a second array of cells each for at least a portion of each of the textual messages, and a third array of cells each for an emission time of each of the audio messages; extract data associated with a predetermined flight parameter from at least one of the audio or textual messages, and display a value representative of or derived from the extracted data on the display screen in a region of the display screen separate from a region displaying the textual message. |
20170148436 | 14406015 | 0 | 1. A speech processing system, comprising: utterance input means for receiving an input of utterance information including a speech signal representing an utterance and prescribed environmental information representing an environment in which the utterance is made; speech recognition means for performing speech recognition on the speech signal in the utterance information received by said utterance input means and for outputting a recognition result as a text; data processing means for executing a prescribed data processing on the text output by said speech recognition means; utterance sequence model storage means for storing an utterance sequence model statistically trained such that upon reception of a text of an utterance and said prescribed environmental information, a probability of an utterance in a prescribed set of utterances to be uttered successively following the utterance represented by said text can be calculated; utterance storage means for storing utterances in said prescribed set of utterances and degree of confidence of data processing when each of said utterances in said set of utterances is processed by said data processing means; and utterance candidate recommendation means, for scoring, in said set of utterances, candidates of utterances to be recommended to a user who made the utterance recognized by said speech recognition means, based on an evaluation score obtained by combining, in a prescribed form, a probability calculated for each utterance in said prescribed set by said utterance sequence model stored in said utterance sequence model storage means, using the result of recognition by said speech recognition means of the utterance information received by said utterance input means and the environmental information included in the speech information, and the degree of confidence of said data processing on each utterance in said prescribed set of utterances, and for recommending an utterance candidate to the user based on the scores. |
7757225 | 09897540 | 1 | 1. A method comprising: examining, by a computing device, a program code that includes at least procedural programming language statements; detecting, by the computing device, a first non-procedural programming language statement in the program code, the first non-procedural programming language statement having multiple implementations one of which is selected based on a presence or absence of a second non-procedural language statement in the program code; in response to detecting the first non-procedural programming language statement, determining, by the computing device, if the program code includes the second non-procedural programming language statement defining a context specific implementation of the first non-procedural programming language statement; in response to determining that the program code includes the second non-procedural programming language statement, introducing into the program code, by the computing device, a declaration for an undefined variable to flag the presence of the second non-procedural programming language statement, the undefined variable being declared using an extern keyword; inserting, by the computing device, function calls compliant with an Open Database Connectivity (ODBC) standard; compiling, by the computing device, the program code; at link-time, selecting, by the computing device, one of a plurality of alternative object modules based on whether the compiled program code includes the undefined variable, a first of the alternative object modules providing a definition of the undefined variable and the context specific implementation of the first non-procedural programming language statement being selected if the compiled program code includes the undefined variable, and a second of the alternative object modules providing a second implementation of the first non-procedural programming language statement being selected if the compiled program code does not include the undefined variable; and building, by the computing device, an executable program corresponding to the program code by linking-in the selected alternative object module. |
8543384 | 13616311 | 1 | 1. A word recognition system for recognizing an input signal entered by a user via a shorthand-on-keyboard interface, the system comprising: a core lexicon comprising commonly used words, wherein the commonly used words were selected for the core lexicon based on an associated frequency of use value for each word being above a pre-determined threshold, and wherein the frequency of use values are not based on candidate selections by the user; an extended lexicon comprising words not contained in the core lexicon; a recognition module for recognizing words associated with the input signal; a selector module for outputting an output word associated with the input signal from the core lexicon; and a module for admitting, from the extended lexicon to the core lexicon, a candidate word associated with the input signal, upon a first selection of the candidate word by the user, to create an augmented core lexicon. |
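Record 8543384 above splits the vocabulary into a frequency-selected core lexicon and an extended lexicon, and promotes an extended-lexicon word to the core on its first user selection. A minimal sketch of that promotion step follows; the frequency threshold and the set-based storage are assumptions.

```python
# Minimal sketch of the core/extended lexicon promotion step described above;
# the frequency threshold and the set-based storage are assumptions.
class Lexicons:
    def __init__(self, word_freq, freq_threshold=1000):
        # Words whose frequency-of-use value is above the threshold seed the core lexicon.
        self.core = {w for w, f in word_freq.items() if f >= freq_threshold}
        self.extended = {w for w in word_freq if w not in self.core}

    def confirm_selection(self, word):
        """On the user's first selection of an extended-lexicon candidate,
        admit it to the core lexicon (creating the augmented core lexicon)."""
        if word in self.extended:
            self.extended.discard(word)
            self.core.add(word)
```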
8175655 | 12848113 | 1 | 1. A communication device comprising: an input device to operate said communication device; a microphone to retrieve audio data; a speaker to output audio data; a camera to retrieve visual data; a display to display a plurality of images; an antenna which sends and receives wireless signals; a voice communication implementer; a multiple & real-time speech-to-text mode implementer; and an audiovisual communication implementer; wherein said voice communication implementer transfers a 1st voice data which indicates a 1st voice retrieved via said microphone via said antenna and outputs a 2nd voice indicated by a 2nd voice data received via said antenna from said speaker; wherein said multiple & real-time speech-to-text mode implementer converts said 1st voice data to a 1st text data and said 2nd voice data to a 2nd text data, wherein the conversions are performed in a real-time manner, and said 1st text data and said 2nd text data are displayed on said display, wherein said 1st text data includes alphanumeric data indicated by said 1st voice and said 2nd text data includes alphanumeric data indicated by said 2nd voice; and wherein said audiovisual communication implementer implements audiovisual communication by utilizing said microphone, said speaker, said camera, and said display. |
5590241 | 08054494 | 1 | 1. A method for enhancing speech signals in a noisy environment, comprising the steps of: inputting a digital speech signal x(k) at an input of a plurality of successive delay elements whose outputs form a like plurality of taps of an adaptive finite impulse response (FIR) filter; inputting said digital speech signal and each of said plurality of taps to corresponding inputs of a plurality of variable multipliers; computing a first signal power estimate y(k) at a sample point k given by the formula y(k) = β₁·y(k−1) + (1 − β₁)·x²(k); computing a second signal power estimate z(k) at said sample point k given by the formula z(k) = β₂·z(k−1) + (1 − β₂)·x²(k); choosing a value of β₁ greater than a value of β₂; selecting an overall signal power estimate yz(k) at said sample point k as a maximum one of said first signal power estimate y(k) and said second signal power estimate z(k); recursively updating a plurality of FIR filter coefficients corresponding to said plurality of variable multipliers according to a normalized least-mean-squares (NLMS) prediction using said overall signal power estimate yz(k) and an estimation error signal to provide updated values of said plurality of FIR filter coefficients; providing said updated values of said plurality of FIR filter coefficients to coefficient inputs of said plurality of variable multipliers; and summing outputs of said plurality of variable multipliers to provide an enhanced speech signal as an output of said adaptive FIR filter. |
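Record 5590241 above (see the reconstructed formulas) tracks two signal power estimates with different smoothing constants, takes their maximum as the normalization term, and updates an adaptive FIR predictor with NLMS. The sketch below is an illustrative rendering, not the patented implementation; the filter length, step size, β values, and the ε-regularized normalization are assumptions.

```python
# Illustrative sketch (not the patented implementation) of the dual-time-constant
# power tracking and NLMS update described above. Filter length, step size mu,
# beta values, and the eps-regularized normalization are assumptions.
import numpy as np

def enhance(x, order=16, mu=0.5, beta1=0.99, beta2=0.9, eps=1e-8):
    x = np.asarray(x, dtype=float)
    w = np.zeros(order)                 # adaptive FIR coefficients
    buf = np.zeros(order)               # tap delay line holding past samples
    y = z = 0.0                         # slow / fast signal power estimates
    out = np.zeros_like(x)
    for k, xk in enumerate(x):
        y = beta1 * y + (1 - beta1) * xk ** 2        # y(k), with beta1 > beta2
        z = beta2 * z + (1 - beta2) * xk ** 2        # z(k)
        yz = max(y, z)                               # overall power estimate yz(k)
        pred = w @ buf                               # FIR prediction from past samples
        err = xk - pred                              # estimation error signal
        w += (mu / (order * yz + eps)) * err * buf   # NLMS coefficient update
        buf = np.roll(buf, 1)
        buf[0] = xk                                  # shift newest sample into the taps
        out[k] = pred                                # enhanced (predicted) speech sample
    return out
```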
10073843 | 15394343 | 1 | 1. A system providing cross-linguistic communication and providing feedback for machine learning, comprising: a client component capturing inputs, the client component providing a user interface configured to display a translation of an input term into a different language than an input language, a retranslation from the target language back into the source language, and interface elements enabled for selecting one of whether translations should be verified before transmission, and whether a previous translation should be revised; and a server component providing the translation and the retranslation to the client component based upon the input term, the server component including, an interaction manager stored within memory of the server component, the interaction manager configured to request the translation of the inputs and access a database containing data representing a sense of the input term, the sense including a synonym for each different sense of the input term, wherein each different sense includes a different meaning for the term. |
20180107648 | 15297763 | 0 | 1. A computer implemented method, in a data processing system comprising a processor and a memory comprising instructions which are executed by the processor to cause the processor to implement a mixed-language question answering supplement system, the method comprising: receiving a question in a target language; determining the question cannot be answered using a target-language only corpus; applying natural language processing to parse the question into at least one focus; for each focus, determining if one or more target language verbs share direct syntactic dependency with the focus; for each of the one or more verbs sharing direct syntactic dependency, determining if one or more target language entities share direct syntactic dependency with the verb; determining one or more Abstract Universal Verbal Types associated with each verb; for each of the one or more Abstract Universal Verbal Types, determining whether a dependency between a source language entity and a source language verb is of the same type as the dependency between the target language verb and the target language entity comprising: using a cognitive system to generate a plurality of reasoning algorithms to analyze the source language and the target language, wherein each reasoning algorithm generates a dependency score; training a statistical model employed by a question answer pipeline; applying the trained statistical model to determine a weight for each dependency score; applying the weight to each dependency score to generate weighted dependency scores; and processing the weighted dependency scores with the statistical model to generate one or more confidence scores measuring dependency; if the dependency is similar, returning the source language entity as a member of a set; populating the set of returned source language entities for each focus in the target language question; identifying one or more parallel passages wherein all core arguments are matched; for each parallel passage: identifying the presence or absence of oblique nominal arguments; and measuring the precision of the oblique nominal arguments in the parallel passages against those present in the target language question; and returning an answer to the target question in the target language based on a scoring of the parallel passages based on the accuracy of their respective oblique nominal arguments. |
20150142813 | 14100157 | 0 | 1. A method, comprising: accessing a standardized language tag repository to identify changes in repository language tags; searching records of a data source categorized using language tags to identify language tags present in records of the data source; determining whether the language tags present in the records of the data source are inconsistent with the repository language tags; responsive to determining that the language tags present in the records of the data source are inconsistent with the repository language tags, determining a language tag update policy for the data source; and performing a language tag update process for the records according to the language tag update policy to re-categorize the records based on the repository language tags. |
20090281982 | 12115588 | 0 | 1. A method for providing access to meta-data information for Java annotations, comprising: receiving a Java annotation declaration in a Java annotation model; receiving a Java annotation definition in the Java annotation model; receiving domain specific context rules in the Java annotation model; and providing access to combined meta-data information derived from the Java annotation declaration, the Java annotation definition, and the domain specific context rules through the Java annotation model. |
9135927 | 13460039 | 1 | 1. An apparatus comprising: at least one processor; memory storing computer program code; wherein the memory storing the computer program code is configured, with the processor, to cause the apparatus to perform actions comprising at least: determining sound characteristic based on at least one user input comprising at least one trace on an input device, wherein the sound characteristics include at least a duration during which a sound is recorded, and an apparent direction from which the sound is recorded, and wherein the duration during which the sound is recorded and the apparent direction from which the sound is recorded correspond to the length and direction, respectively, of the at least one trace; and performing at least one control operation to cause a sound phrase to exhibit the sound characteristic. |
20010023396 | 09777424 | 0 | 1. A hybrid speech encoding method, comprising the steps of: (a) classifying frames of speech signals as voiced, unvoiced, or transitory; (b) using harmonic coding to compress frames associated with at least one of said classes; and (c) coding frames classified as transitory using a coding technique selected from the group consisting of waveform coding, analysis-by-synthesis coding, codebook excited linear prediction analysis-by-synthesis coding, and multipulse analysis-by-synthesis coding. |
9517559 | 14498871 | 1 | 1. A robot control system including a robot that outputs an utterance content to a user in an exhibition hall placing an exhibition, comprising: a recording module that records to a memory device an inspection action of a user at a time of attendance of the user in the exhibition hall as a history, the history comprises the exhibition placed in the hall and an inspection time period spent by the user for inspection as to the exhibition; and a first output module that, when the user attends again, determines an appropriate utterance content by using the history associated with the attending user, and makes the robot output the determined utterance content. |