doc_id (string, length 7–11) | appl_id (string, length 8) | flag_patent (int64, 0 or 1) | claim_one (string, length 13–18.3k) |
---|---|---|---
8548135 | 13022869 | 1 | 1. A communication device comprising: (a.) a processor; and (b.) a memory coupled to said processor, said memory comprising: instructions executable by the processor for allowing access to a database comprising one or more visual Interactive Voice Response (IVR) menus associated with each of a plurality of phone numbers of one or more first party devices; means for receiving a call from a first party device from the one or more first party devices; means for comparing a phone number of the calling first party device with the plurality of phone numbers stored in the database, and retrieving from the database a visual IVR menu associated with the phone number of the calling first party device; and means for displaying the retrieved visual IVR menu. |
20050104864 | 10804311 | 0 | 1. A computer-implemented process for outputting whiteboard content, comprising the following process actions: inputting a sequence of image frames of content written on a whiteboard in real-time; dividing each of said image frames into cells; for each cell in each image frame, determining if there is a change in the cell image compared to a correspondingly located cell the immediately preceding image frame in said sequence of image frames; whenever it is determined that there is a color change, setting a cell age to a prescribed minimum value, and if there is no change increasing cell age by a prescribed increment value; determining if cell age is greater than a specified threshold value; whenever the cell age is not greater than the threshold value, not processing said cell any further; if said cell age is greater than the threshold value, computing the background color of the cell; updating a whiteboard color model; classifying each cell image as a foreground or whiteboard cell using said whiteboard color model; whenever a cell image is classified as a foreground cell not processing that cell any further; and whenever a cell image is classified as a whiteboard cell, outputting the cell image whenever it exhibits strokes not present in the correspondingly located cell in the preceding image frames or is missing strokes found in the preceding image frames in real-time. |
20150095032 | 14567969 | 0 | 1. A method of recognizing a keyword in a speech, comprising: on an electronic device: receiving a sequence of audio frames comprising a current frame and a subsequent frame that follows the current frame; determining a candidate keyword for the current frame using a predetermined decoding network that comprises keywords and filler words of multiple languages, associating the audio frame sequence with a confidence score that is partially determined according to the candidate keyword; identifying a word option for the subsequent frame using the candidate keyword and the predetermined decoding network; when the candidate keyword and the word option are associated with two distinct types of languages, updating the confidence score of the audio frame sequence based on a penalty factor that is predetermined according to the two distinct types of languages, the word option and an acoustic model of the subsequent frame; and determining that the audio frame sequence includes both the candidate keyword and the word option by evaluating the updated confidence score according to a keyword determination criterion. |
20170018308 | 15205813 | 0 | 1. A content addressable memory cell apparatus, comprising: a plurality of domain-wall-based magnetic tunnel junctions (DW-MTJs) interconnected to write complementary bits, wherein a write polarity on each of the plurality of DW-MTJs is controlled by modulating a direction of current; a plurality of transistors; a plurality of searchlines; a wordline; a bitline (BL); a sourceline (SrL); and a plurality of matchlines. |
4650423 | 06664227 | 1 | 1. An apparatus for teaching and transcription of language which comprises: a substrate upon which a plurality of language elements are arranged in a rectangular matrix of rows and columns, wherein there exists a regular reoccurrence of a language element pattern of long quality vowels, short quality vowels and triad of consonants wherein said pattern is fully contained in either a row or a column in said matrix. |
8037010 | 12039630 | 1 | 1. A computer-implemented hierarchical network comprising a plurality of spatio-temporal learning nodes, wherein each spatio-temporal learning node comprises: a spatial pooler adapted to: receive a sensed input pattern; generate a first set of spatial probabilities associated with a set of spatial co-occurrence patterns, wherein each spatial co-occurrence pattern represents a first set of one or more sensed input patterns and each spatial probability in the first set of spatial probabilities indicates the likelihood that the sensed input pattern has the same cause as a spatial co-occurrence pattern; a temporal pooler adapted to: receive the first set of spatial probabilities from the spatial pooler; generate a set of temporal probabilities associated with a set of temporal groups based at least in part the first set of spatial probabilities, wherein each temporal group comprises one or more temporally co-occurring input patterns and each temporal probability indicates the likelihood that the sensed input pattern has the same cause as the one or more temporally co-occurring input patterns in a temporal group; and transmit the set of temporal probabilities to a parent node in the hierarchical network of nodes. |
20050096907 | 10976378 | 0 | 1. A method of generating a language model using meta-data, the method comprising: identifying projections based on meta-data; and estimating a conditional language model using the identified projections. |
6036496 | 09167278 | 1 | 1. A method for screening a human to determine his/her ability to process spoken language, the method using target/distractor phonemes that are processed using a plurality of acoustic manipulations, each of the acoustic manipulations having a plurality of processing levels, the method comprising: a) presenting a target/distractor sequence of acoustically processed phonemes to the human; b) requiring the human to indicate recognition of an acoustically processed target phoneme within the sequence; c) recording the human's correct/incorrect indication, corresponding to the sequence; and d) repeating a)-c) for each of the plurality of processing levels, for each of the plurality of acoustic manipulations; e) wherein a)-d) develop an acoustic processing profile for the human. |
9030417 | 13415718 | 1 | 1. A method for assisting in avoiding incorrect input in a portable terminal, the method comprising: identifying an input character string and, among previously registered candidate words, searching among a plurality of candidate words recommended for the input character string; calculating a similarity between the input character string and each of the searched candidate words based at least partially upon a type of keypad used to input the input character string, wherein calculating the similarity between the input character string and each of the searched candidate words comprises: identifying the keypad type; determining each cost of an insert operation, a delete operation, and a replace operation corresponding to the identified keypad type; and calculating an edit distance using the operations according to the respective cost of each of the candidate words; and providing one or more of the searched candidate words according to the calculated similarity. |
20110125486 | 12626529 | 0 | 1. A method for translating oral statements, the method comprising: receiving, by a device, an oral statement; converting the oral statement into data; analyzing, by a processing unit connected to the device, the data to identify a particular language of the oral statement; and responsive to identifying the particular language of the oral statement, providing a translation of the oral statement. |
20150095012 | 14564671 | 0 | 1. A method comprising: gathering statistics from a plurality of interactions with a user, wherein the statistics are gathered periodically with a defined frequency, and wherein the statistics identify words in the plurality of interactions and languages associated with the words; identifying, via a processor and based on the statistics, a target language of the user, the target language being the language having a highest number of words used in the plurality of interactions; receiving a message for the user in a source language which is distinct from the target language; prior to presenting the message to the user, translating, via the processor, the message into the target language, to yield a translated message; and presenting the translated message in the target language to the user. |
5416696 | 07952412 | 1 | 1. An apparatus For translating original words in an original sentence written by a first language into words in a translated sentence of a second language, comprising: sentence structure memory means for storing parts of speech of the original words which are required to analyze the sentence structure of the original sentence; first language grammar memory means for storing the grammar of a first language of the original sentence; second language word memory means for storing the words of the second language, wherein each of the original words classified as monosemy linguistically and semantically corresponds to a word of the second language stored in the second language word memory means and each of the original words classified as polysemy linguistically corresponds to a plurality of words of the second language stored in the second language word memory means; second language grammar memory means for storing the grammar of the second language; inflection memory means for storing inflection information of the words of the second language; processing means for (1) operating an artificial neural network in which a plurality of artificial neurons are assigned the words of the second language stored in the second language word memory means, positive links through which the artificial neurons assigned the words semantically relevant to one another are interconnected are weighted with positive values to increase the output values of the artificial neurons in cases where an external input is provided to one of the artificial neurons, and negative links through which the artificial neurons assigned the words semantically irrelevant to one another are interconnected are weighted with negative values to decrease the output values of the artificial neurons in cases where an external input is provided to one of the artificial neurons, (2) providing external inputs Im to artificial neurons Nm which are assigned words Wm of the second language linguistically and semantically corresponding to the original words classified as the monosemy to increase the output values of artificial neurons Nms assigned the words semantically relevant to the words Wm and to decrease the output values of artificial neurons Nmi assigned the words semantically irrelevant to the words Wm, (3) providing an external input Ip, one after another, to each of the artificial neurons Nms to considerably increase the output values of artificial neurons Nmp which belongs to the artificial neurons Nms and artificial neurons Np assigned the words of the second language linguistically corresponding to the original words classified as the polysemy, the external input Ip provided to each of the artificial neurons Nms being stored therein, and the value of the external input Ip previously stored in the artificial neurons Nms being uniformly reduced to again provide to each of the artificial neurons Nms as past records each time the external input Ip is provided to each of the artificial neurons Nms, (4) repeatedly converging the output values of all of the artificial neurons each time the external input Ip is provided to each of the artificial neurons Nms, (5) adopting words Wmp assigned to the artificial neurons Nmp, as translated words, of which the output values are considerably increased, and (6) adopting the words Wm as the translated words, the translated sentence being composed of the word Wmp and the words Wm; and translation means for translating the original sentence into the translated sentence according to a translation process in which (1) the sentence structure of the original sentence is analyzed by referring the parts of speech of the original words stored in the sentence structure memory means and the grammar of the first language stored in the first language grammar memory means, (2) the original words are changed to a series of words which are composed of the words linguistically corresponding to the original words classified as the monosemy and the words Wmp, Wm adopted in the processing means, and (3) a series of words of the second language are changed to the translated sentence by referring the grammar of the second language stored in the second language grammar memory means and the inflection information of the words of the second language stored in the inflection memory means. |
8751498 | 13364244 | 1 | 1. A method for identifying documents referring to an entity, the entity being associated with a first set of features, the method comprising: at a computer having one or more processors and memory storing programs for execution by the one or more processors: identifying a first set of documents based on a first model and the first set of features, wherein the first model includes a first set of rules specifying at least one combination of features from the first set of features that are sufficient for identifying a document referring to the entity, and each document in the first set of documents includes a sufficient number of features in common with the first set of features to identify a document referring to the entity according to the first model; determining a second model based on features included in one or more documents in the first set of documents, wherein the second model includes a second set of rules specifying at least one combination of features from the first set of documents that are sufficient for identifying a document referring to the entity; identifying a second set of documents based on the second model, wherein each document in the second set of documents includes a sufficient number of features in common with the first set of features to identify a document referring to the entity according to the second model, and wherein the second set of documents includes at least one document not included in the first set of documents; and extracting one or more facts from the second set of documents and associating the extracted facts with the entity. |
20100114560 | 12265166 | 0 | 1. A method of evaluating a sequence of characters to determine the presence of a natural language word in the sequence, the method comprising: finding a subsequence of alphabetical characters in the sequence of characters; calculating a probability that the subsequence is a natural language word using a statistical model of a natural language; and determining if the subsequence is a natural language word based on the probability. |
20140210588 | 14242771 | 0 | 1. A multi-tuner radio further comprising a gesture pad for control of radio functions. |
8560325 | 13564596 | 1 | 1. A method for determining an intended action of a user of a computing system environment, the computing system environment comprising a voice system, the intended action being specified via a spoken input of the user, the method comprising: obtaining a decoding of the spoken input of the user; and extracting the intended action from the decoding of the spoken input using an iterative hierarchical extraction process comprising analyzing the decoding of the spoken input in multiple hierarchically dependent semantic stages, comprising: determining a first level of classification of the intended action from the decoding of the spoken input during a first semantic stage of the iterative hierarchical extraction process, the first level of classification having a plurality of sub-classifications associated with the first level of classification; and determining, from among the plurality of sub-classifications associated with the first level of classification, a second level of classification of the intended action from the same decoding of the spoken input during a second semantic stage of the iterative hierarchical extraction process, wherein determining the intended action further comprises utilizing information about the user or the user's environment. |
8385633 | 12961790 | 1 | 1. A computer-implemented method comprising: programmatically selecting, from a collection of images, a set of candidate images for a training set; providing the set of candidate images to a user for input; designating images from the set of candidate images for the training set based on input from the user; performing a recognition of an input image; wherein performing the recognition includes comparing the input image with one or more images of the training set, and determining, based at least in part on comparing the input image with the one or more images of the training set, that a confidence value of the recognition satisfies a defined threshold; providing a results of the recognition to the user, wherein the result includes a plurality of images that have a recognized object or face that is represented in an image of the training set; determining an error rate base on user feedback regarding the results, wherein the error rate is associated with a measurement of instances of erroneous recognition of the input image from the plurality of images provided with the result; and in response to the user feedback, adjusting the defined threshold based on the error rate. |
10089994 | 15893718 | 1 | 1. An automated method for extracting an acoustic sub-fingerprint from an audio signal fragment, said method comprising: using at least one computer processor to perform the steps of: a: dividing an audio signal into a plurality of time-separated signal frames (frames) of equal time lengths of at least 0.5 seconds, wherein all frames overlap in time by at least 50% with at least one other frame, but wherein at least some frames are non-overlapping in time with other frames; b: selecting a plurality of non-overlapping frames to produce at least one cluster of frames, each selected frame in a given cluster of frames thus being a cluster frame; wherein the minimal distance between centers of said cluster frames is equal or greater than a time-length of one frame; c: decomposing each cluster frame into a plurality of substantially overlapping frequency bands to produce a corresponding plurality of frequency band signals, wherein said frequency bands overlap in frequency by at least 50% with at least one other frequency band, and wherein at least some frequency bands are non-adjacent frequency bands that do not overlap in frequency with other frequency bands; d: for each cluster frame, calculating a quantitative value of a selected signal property of frequency band signals of selected frequency bands of that cluster frame, thus producing a plurality of calculated signal property values, said selected signal property being any of: average energy, peak energy, energy valley, zero crossing, and normalized energy; e: using a feature vector algorithm and said calculated signal property values of said cluster frames to produce a feature-vector of said cluster; f: using a sub-fingerprint algorithm to digitize said feature-vector of said cluster and produce said acoustic sub-fingerprint. |
7664644 | 11423212 | 1 | 1. A method of generating a spoken dialog system, the method causing a computing device to perform steps comprising: training individual models associated with labeled data for each of a plurality of applications; mapping call-types between the plurality of applications using the labeled data and the trained individual models; and retraining a first model using information based on the mapped call-types. |
7894596 | 11532018 | 1 | 1. A method of providing language interpretation comprising: providing a language interpretation number that services multiple languages, wherein the language interpretation number can be used to place a telephone call for language interpretation; receiving a language interpretation telephone call at a language interpretation provider from a caller speaking a first language, wherein the caller places the language interpretation telephone call by dialing the language interpretation number; identifying a business need of the caller; determining a group of business entities that can each satisfy the business need of the caller; selecting, from the group of business entities, a first business entity that is a subscriber for a subscription fee to the language interpretation provider for language interpretation for the first business entity over a second business entity that is not a subscriber for a subscription fee to the language interpretation provider for language interpretation for the second business entity; identifying an interpreter that can interpret between the first language and a second language spoken by an agent of the first business entity; and telephonically engaging the interpreter and the agent of the first business entity in the language interpretation telephone call, wherein the interpreter interprets a conversation between the caller and the agent of the first business entity. |
8965757 | 13295818 | 1 | 1. A system for suppressing noise in a primary input speech signal that comprises a first desired speech component and a first background noise component using a reference input speech signal that comprises a second desired speech component and a second background noise component, the system comprising: a blocking matrix configured to filter the primary input speech signal, in accordance with a first transfer function, to estimate the second desired speech component and to remove the estimate of the second desired speech component from the reference input speech signal to provide an adjusted second background noise component; an adaptive noise canceler configured to filter the adjusted second background noise component, in accordance with a second transfer function, to estimate the first background noise component and to remove the estimate of the first background noise component from the primary input speech signal to provide a noise suppressed primary input speech signal, wherein the first transfer function is determined based on statistics of the first desired speech component and the second desired speech component, and the second transfer function is determined based on statistics of the primary input speech signal and the adjusted second background noise component. |
20160054916 | 14933940 | 0 | 1. A method of presenting interactive media items, comprising: at a client device with one or more processors, memory, a touch-sensitive surface, and a display: receiving user selection of a previously generated interactive media item, the media item associated with an audio file, one or more visual media files, and one or more effects; in response to the user selection of the media item, presenting the media item on the display; and while presenting the media item: detecting a touch input gesture at a location on the touch-sensitive surface corresponding to at least a portion of the presented media item; and, in response to detecting the touch input gesture, applying at least one effect of the one or more effects to the presented media item based on one or more characteristics of the touch input gesture. |
20020082829 | 09417172 | 0 | 1. A speech recognition system for identifying words from a digital input signal, the system comprising: a feature extractor for extracting at least one feature from the digital input signal; a lexicon comprising at least one noise entry; a search engine capable of identifying a sequence of hypothesis terms based on at least one feature and at least one speech model, at least one of the hypothesis terms being a noise entry found in the lexicon and at least one of the hypothesis terms being a hypothesis word; and a noise rejection module capable of replacing a hypothesis word in the sequence of hypothesis terms with a noise marker by identifying noise based in part on a model of noise phones and at least one feature. |
20070061356 | 11225861 | 0 | 1. A method for generating a document summary, the method comprising: providing a document model for the document; calculating a score based on normalized probabilities for sentences of the document based on the document model; and selecting one or more sentences based on the scores to form the summary of the document. |
8787598 | 12471427 | 1 | 1. A dynamic sound enhancement system that produces extreme low frequency sound, in the bass and sub-bass range, 50-250 Hz and below 50 Hz respectively, from all speaker and earpiece types including standard speakers, small speakers, personal earpieces, and hearing aids, that substantially compensates for the natural inadequacy of the human ear at both low and high frequencies, below 250 Hz and above 10,000 Hz respectively, and that substantially compensates for the frequency roll-off effects of standard speakers and other sound producing devices, comprising: a low frequency adjustment sub-system operable to provide a variety of low frequency responses in response to adjustments of an adjustable element, embodied with a plurality of step settings, each setting producing a different low frequency response thus allowing or preventing preferred low frequencies from reaching the system gain sub-system; a gain sub-system operable to invert input signal polarity, adjust the magnitude of applied frequencies dynamically, integrate feedback signals from other sub-systems with newly arriving input signals concurrently and present the resulting composite signal at its output; a low frequency feedback control sub-system operable to provide low frequency responses in relation to applied frequency thereby determining the bandwidth and magnitude of low frequency feedback signals applied to the input of the gain subsystem and contributing to regulation of bass and sub-bass frequency signal magnitude at the gain sub-system output; a low frequency phase offset sub-system operable to provide variable plus and minus feedback time offsets for preferred bass frequencies in relation to non-preferred frequencies embodied to produce time offsets encompassing up to several milliseconds of time which occur before (plus offset) and after (minus offset) in relation to non-preferred feedback frequencies; a band pass feedback control subsystem operable to provide a broad pass frequency band while concurrently attenuating the magnitude of applied feedback frequencies prior to conveying those frequencies to the gain sub-system thereby contributing to the regulation of system dynamic gain; a low frequency damping subsystem operable to provide supplementary regulation of the magnitude and shape of low frequency output signals thereby controlling sound producing device distortion artifacts by the method of controlling decay rate of the final output signal magnitude after the signal reaches a threshold gain defined in conjunction with a setting of the low frequency adjustment sub-system; a high frequency adjustment sub-system operable to provide a variety of high frequency responses in response to adjustments of an adjustable element, embodied with a plurality of step settings, each setting producing a different high frequency cutoff thus allowing or preventing preferred high frequencies from reaching the system gain sub-system; a high frequency damping subsystem operable to provide supplementary regulation of the magnitude and shape of high frequency output signals thereby controlling sound producing device distortion artifacts by the method of controlling decay rate of the final output signal magnitude after the signal reaches a threshold gain defined in conjunction with a setting of the high frequency adjustment sub-system, and a high frequency phase offset sub-system operable to provide variable plus and minus feedback time offsets for preferred treble frequencies in relation to non-preferred frequencies embodied to produce time offsets encompassing up to several milliseconds of time which occur before (plus offset) and after (minus offset) in relation to non-preferred feedback frequencies. |
8352469 | 12555962 | 1 | 1. A computer-implemented method of generating a stop word list for information retrieval and analysis, the method comprising: providing a corpus of documents and a plurality of keywords; constructing a term list of all terms in the corpus of documents; determining keyword adjacency frequency of each term in the corpus of documents, wherein the keyword adjacency frequency comprises the number of times a term occurs adjacent to one of the keywords; determining keyword frequency of each term in the corpus of documents, wherein the keyword frequency comprises the number of times a term occurs within one of the keywords; excluding from the term list each term having a ratio of keyword adjacency frequency to keyword frequency that is less than a predetermined value; and truncating the term list based on a predetermined criteria to form the stop word list. |
20040243419 | 10447399 | 0 | 1. A computer-implemented method for interacting with a computer system, the method comprising: receiving input from a user and capturing the input for processing; and performing recognition on the input to ascertain semantic information pertaining to a first portion of the input and outputting a semantic object comprising data in a format to be processed by a computer application and being in accordance with the input that has been recognized and semantic information for the first portion, wherein performing recognition and outputting the semantic object are performed while capturing continues for subsequent portions of the input. |
20110224981 | 13115105 | 0 | 1. A system for facilitating free form dictation and constrained speech recognition and/or structured transcription among users having heterogeneous system protocols the system comprising: at least one system transaction manager using a uniform system protocol, adapted to receive a verified streamed speech information request from at least one user employing a first user legacy protocol, and configured to route a response to one or more users employing a second user legacy protocol, the speech information request comprised of free form dictation of spoken text and commands and the response comprised of a transcription of spoken text; a user interface capable of bi-directional communication with the system transaction manager and supporting dictation applications, including prompts to direct user dictation in response to user system protocol commands and system transaction manager commands the user interface being in bi-directional communication with the systems transaction manager; and, at least one speech recognition and/or transcription engine communicating with the system systems transaction manager wherein the speech recognition and/or transcription engine is configured to receive the speech information request containing spoken text and commands for constrained speech recognition transmitted by the systems transaction manager, to generate structured transcription in response to the speech information request, and to transmit the response comprised of structured transcription to the system transaction manager. |
20120150871 | 12965604 | 0 | 1. A computer-readable medium storing computer-executable instructions that, when executed, cause one or more processors to perform operations comprising: receiving a plurality of media objects that are captured by an electronic device during a trip session; receiving one or more geolocations of the electronic device at periodic intervals during the trip session; analyzing the one or more geolocations of the electronic device to determine a movement of the electronic device away from a visited site; and publishing a blog entry for a place of interest that corresponds to the visited site, the blog entry including at least one media object and auto-generated textual content that is based at least on one or more pre-stored knowledge items that include information about the place of interest. |
9640047 | 14808311 | 1 | 1. A method of generating a haptic signal with auditory saliency estimation from an audio signal, comprising: detecting audio characteristic information of a bass component and audio characteristic information of a treble component from a received audio signal; estimating auditory saliency with respect to the audio signal based on the two types of audio characteristic information; and calculating a tactile signal based on the auditory saliency, wherein the calculating the tactile signal includes calculating a peak pitch in the auditory saliency for each subband of the two types of audio characteristic information, calculating a perceptual magnitude of vibration of each subband in the peak pitch, and converting the perceptual magnitude into a physical vibration amplitude, and wherein the perceptual magnitude of vibration includes spectral components of each of a subband of a treble component and a subband of a bass component which are played in a vibration actuator at the same time. |
10115400 | 15229868 | 1 | 1. A networked microphone device comprising: one or more amplifiers configured to drive one or more speakers; a microphone array; a network interface; one or more processors; tangible, non-transitory computer-readable media having stored therein instructions executable by the one or more processors to cause the networked microphone device to perform a method comprising: continuously recording, via the microphone array, audio into a buffer; analyzing the recorded audio using multiple wake-word detection algorithms running concurrently on the one or more processors, each wake-word detection algorithm corresponding to a respective voice assistant service; when a particular wake-word detection algorithm of the multiple wake-word detection algorithms detects, in the recorded audio, a wake-word corresponding to a particular voice assistant service, querying, via the network interface, the particular voice assistant service with a voice command following the detected wake-word within the recorded audio, wherein the voice command comprises a search query; receiving, from one or more servers of the particular voice assistant service via the network interface in response to the query, data representing search results, the search results including audio tracks corresponding to the search query, wherein the search results are unique to the particular voice assistant service among the multiple voice assistant services; and playing back at least one audio track from the search results via the one or more amplifiers configured to drive the one or more speakers. |
10140978 | 15703033 | 1 | 1. A method comprising: obtaining, by one or more computers, acoustic data for an utterance; determining, by the one or more computers, speech recognition candidates for the utterance based on the acoustic data; obtaining, by the one or more computers, a ranking of the speech recognition candidates determined by a speech recognizer; selecting, by the one or more computers, a transcription for the acoustic data from among the speech recognition candidates; determining, by the one or more computers, feature scores from the ranking of the speech recognition candidates; generating, by the one or more computers, a classifier output for each of at least some of the speech recognition candidates, wherein each of the classifier outputs is an output that a trained machine learning classifier provided in response to receiving at least one of the feature scores as input; selecting, by the one or more computers, a subset of the speech recognition candidates based on the classifier outputs of the trained machine learning classifier; and providing, by the one or more computers and for display at a client device, data indicating (i) the transcription for the utterance and (ii) the subset of the speech recognition candidates as a set of alternative transcriptions for the utterance, wherein the one or more computers are configured to provide different quantities of alternative transcriptions for different utterances. |
20180046616 | 15791541 | 0 | 1. A computer system comprising: a touch screen; a processor communicatively connected to a memory; the memory comprising program instructions that when executed by the processor, cause the computer system to: display a mimicked view of an application graphical user interface (GUI) upon the touch screen in a simulation layer of a multilayered translation interface, wherein the mimicked view is a graphical reproduction of the application GUI with functionality of the one or more text objects of the application GUI disabled, and wherein the simulation layer comprises one or more text objects; subsequent to a user touch engaging a text object within the simulation layer, display a prompt for a user to enter a text translation of the touch engaged text object within a translation layer of the multilayered translation interface, wherein the prompt further comprises an accentuation object within the translation layer to visually accentuate the engaged text object; a text-editing object within the translation layer to receive the text translation of the engaged text object from the user via the touch screen, and a link object within the translation layer that visually connects the accentuation object and the text-editing object; receive the text translation of the touch engaged text object; and display the text translation within the mimicked view in the simulation layer in place of the engaged text object. |
20050256713 | 10844093 | 0 | 1. A method of modeling a data generating process, said method comprising: observing a data sequence comprising irregularly sampled data; obtaining an observation sequence based on the observed data sequence; assigning a time index sequence to said data sequence; obtaining a hidden state sequence of said data sequence; and decoding said data sequence based on a combination of said time index sequence and said hidden state sequence to model said data sequence. |
7769185 | 11132644 | 1 | 1. A method for testing a hearing assistance device, the method comprising the acts of: mounting the hearing assistance device proximal to an acoustic waveguide having a soundfield with acoustic waves propagating down the waveguide; placing a microphone of the hearing assistance device in the soundfield of the acoustic waveguide to increase a direct acoustic component and to reduce reflected acoustic components and scattered acoustic components of sound sensed by the microphone; and generating sound using a sound generator to propagate sound of desired frequencies down the waveguide. |
20060167931 | 11314835 | 0 | 1. A method for identifying knowledge, comprising the steps of: a. inputting one or more terms to be explored for additional knowledge; b. searching one or more sources of information to identify resources containing information about or information associated with said terms; c. decomposing resources identified during searching into nodes; d. storing nodes in a node pool; and e. from the node pool, construct correlations of nodes representing knowledge using information about relation types. |
20150054729 | 14216500 | 0 | 1. A system comprising: a processor coupled to a plurality of display devices; a gesture interface coupled to the processor, wherein the gesture interface detects a gesture of at least one object from gesture data received at the processor, identifies the gesture and translates the gesture to a gesture signal; a plurality of remote client devices coupled to the processor; a plurality of applications running on the processor, wherein the plurality of applications control content of the plurality of remote client devices simultaneously across the plurality of display devices and the plurality of remote client devices, wherein the control of the content comprises control via the gesture signal and inputs of the plurality of display devices. |
8738366 | 13923744 | 1 | 1. The related-word registration device comprising: a receiving means that receives a search query of a search word entered by the user; a search query storing means that stores the received search queries in accordance with reception order; a search query extracting means that extracts, from the search query storing means, a preceding search query whose reception order is earlier than that of the received search query on the basis of a preset search query extracting condition; a character string set storing means that stores, as a character string set, a preceding search word constructing the extracted preceding search query and a search word constructing the received search query; a character string extracting means that extracts a character string set having the search word which is the same or similar to the preceding search word from the character string set storing means in accordance with a preset character string set extraction start condition; a related-word specifying means that specifies a character set as a related word from the extracted character string set on the basis of a preset registration condition; and a related-word registering means that registers the specified character string set as related words into a related-word database. |
7779020 | 10090068 | 1 | 1. A method of managing a relational database on a pervasive computing device comprising: a. receiving queries on a pervasive computing device in SQL, the queries comprising a plurality of query terms; b. interpreting the queries on a pervasive computing device by associating at least one declarative language function with the query terms by converting the SQL to an intermediate tree representation corresponding to the declarative language function wherein the declarative language function is implemented in a declarative language that is chosen from the group consisting of ML, LISP, and HASKELL; c. converting the queries represented by at least one declarative language function to a plurality of JAVA statements on a pervasive computing device; and d. executing the JAVA statements. |
20110063192 | 12953169 | 0 | 1. A method, comprising: displaying an application interface for a device application on a first display that is integrated in a dual-display mobile device; receiving binding position data from a binding system that movably couples the first display to a second display that is integrated in the dual-display mobile device; receiving application context data associated with the device application; and generating feedback based on the binding position data and the application context data. |
9008416 | 14177174 | 1 | 1. A computer-implemented method comprising steps of: providing one or more cues to a first plurality of untrained providers for mimicking a predetermined expression; receiving from the first plurality of untrained providers images created in response to the step of providing one or more cues; sending requests to rate the images to a second plurality of untrained providers; receiving ratings of the images from the second plurality of untrained providers, in response to the step of sending requests; applying a first quality check to the images rated by the second plurality of untrained providers, the first quality check being based on the ratings of the second plurality of untrained providers, the step of applying the first quality check resulting in one or more images that passed the first quality check; sending the one or more images that passed the first quality check to one or more experts, for rating by the one or more experts; in response to the step of sending the one or more images that passed the first quality check, receiving one or more ratings from the one or more experts; and applying a second quality check to the images rated by the one or more experts, the second quality check being based on the one or more ratings of the one or more experts, the step of applying the second quality check resulting in one or more images that passed the second quality check. |
20120075083 | 12984365 | 0 | 1. A communication appliance comprising: a processor; a data store; a display screen; an interface to a wireless network; an interface for receiving audio signals; and software executable from a machine-readable physical medium; wherein the software provides: a function for comparing incoming audio signals with signals stored in the data store as trigger signals; and a function for initiating an action or sequence of actions in response to recognizing an incoming signal as a trigger signal. |
20130317825 | 13953527 | 0 | 1. A method comprising: obtaining, by a computer, non-verbal components of a plurality of speech signals and geographical locations respectively associated with the speech signals; deducing, by the computer, physiological and/or psychological conditions corresponding to the non-verbal components; and providing a geographical distribution of the deduced physiological and/or psychological conditions by associating the deduced physiological and/or psychological conditions with the respective geographical locations of the obtained speech signals from which the conditions were deduced. |
9865281 | 14843382 | 1 | 1. A computer program product comprising: one or more computer readable tangible storage media and program instructions stored on at least one of the one or more storage media, the program instructions comprising: program instructions to determine a meeting has initialized between a first user and a second user, wherein vocal and video recordings are produced for at least the first user; program instructions to receive the vocal and video recordings for the first user; program instructions to analyze the vocal and video recordings for the first user according to one or more parameters for speech and one or more parameters for gestures; program instructions to identify the one or more parameters for gestures, wherein the one or more parameters are selected from a group of measures including: folding arms across the chest, clenching fists, smiling, frowning, raising eyebrows, nodding, and flaring nostrils; program instructions to analyze the one or more parameters for gestures; program instructions to produce at least one output data point for each of the one or more parameters for gestures; program instructions to determine one or more emotions and a role in the meeting for the first user based at least on the analyzed vocal and video recordings; program instructions to identify one or more highest recurring output data points from the at least one output data point for each of the one or more parameters for gestures; program instructions to identify the one or more highest recurring output data points as the one or more emotions for the first user; program instructions to determine the role in the meeting for the first user based at least on the identified one or more emotions for the first user; and program instructions to send an output of analysis to at least one of the first user and the second user, wherein the output of analysis includes at least the determined one or more emotions and the role in the meeting for the first user. |
20060074668 | 10536239 | 0 | 1. An assignment device ( 1 ) with assignment means ( 4 ) for assigning supplementary information to one or more words of text information (ETI), characterized in that the assignment means ( 4 ) is designed to assign word class information (WKI) to one or more words of text information (ETI), and to deliver word-class sequence information (WK-AI) containing the assigned word-class information (WKI), and that linkage means ( 7 ) designed to detect the presence of at least two specific items of word-class information (WKI) in the word-class sequence information (WK-AI) and to deliver the corresponding linkage information (VI) is provided, and that action means ( 10 ) designed to activate an action (A) when specific linkage information (VI) or a specific combination of linkage information (VI) is delivered by the linkage means ( 7 ) is provided. |
20100123797 | 12612883 | 0 | 1. An imager comprising: an image-capturing device that captures a subject image and outputs an image; a voice detector that detects ambient sound; a voice recognition device that converts the ambient sound to characters; and a composer that composes the characters into the image. |
20100153412 | 12334913 | 0 | 1. A user interface for building a formulated query to search a database of structural data organized by classes, attributes of classes, literals of attributes, and structural relations between classes, and for displaying results of the formulated query, the user interface comprising, a structural query section to define constraints for the formulated query, the structural query section including one or more query elements to be populated and a means for adding one or more additional query elements, and wherein a relationship between query elements is expressible using a defined structural relation; and a query results section for displaying results of the formulated query after the formulated query is executed. |
20130035940 | 13603226 | 0 | 1. An electronic larynx speech reconstruction method, comprising the following steps: firstly, model parameters are extracted from the collected speech as a parameter base, then face images of the sounder are acquired and transmitted to an image analyzing and processing module, sounding start and stop times and vowel classes are obtained after the image analyzing and processing module analyzes and processes the images, then sounding start and stop times and the vowel classes are used to control a voice source synthesis module to synthesize a waveform of a voice source, finally, the waveform of the voice source is output by an electronic larynx vibration output module, wherein the electronic larynx vibration output module comprises a front end circuit and an electronic larynx vibrator; and characterized in that: the synthesis steps of the voice source synthesis module are as follows: 1) synthesize the waveform of the glottis voice source, that is, select the model parameters of the glottis voice source from the parameter base according to the individualized sounding features of the sounder, wherein the sounding start and stop times control the starting and the stopping of the synthesis of the voice source, and the synthesis of the glottis voice source adopts an LF model with the specific math expression as follows: u g ′(t)=E 0 e^(αt) sin(ω g t) (0≤t≤t e ); u g ′(t)=−(E e /(ε t a ))[e^(−ε(t−t e ))−e^(−ε(t c −t e ))] (t e ≤t≤t c ); in the above expression, E e is the amplitude parameter, t p , t e , t a and t c are all time parameters which respectively represent the maximum peak time, the maximum negative peak time, the exponential restore segment time constant and the base frequency period of airflow, E 0 is the amplitude parameter, U e is the airflow value at the t e time, α is the exponential equation coefficient of the open phase, ε is the exponential equation coefficient of the return phase, ω g is the angular frequency of the opening phase, and the rest of the parameters can be obtained from the above five parameters with reference to the following formulas: ε t a =1−e^(−ε(t c −t e )); ω g =π/t p ; U e =E 0 [e^(αt e )(α sin ω g t e −ω g cos ω g t e )+ω g ]/(α 2 +ω g 2 ); E e =−E 0 e^(αt e ) sin ω g t e ; U e =E e t a 2 K α ; K α =2.0 for R α <0.1, K α =2−2.34R α 2 +1.34R α 4 for 0.1≤R α ≤0.5, K α =2.16−1.32R α +0.64(R α −0.5) 2 for R α >0.5; R α =t a /(t c −t e ); 2) select shape parameters of the sound track according to the vowel classes, simulate sound transmission in the sound track by using a waveguide model, and calculate the waveform of the voice source according to the following formulas: u i+1 + =(1−r i )u i + −r i u i+1 − =u i + −r i (u i + +u i+1 − ); u i − =(1+r i )u i+1 − +r i u i + =u i+1 − +r i (u i + +u i+1 − ); r i =(A i −A i+1 )/(A i +A i+1 ); glottis: u 1 + =((1−r g )/2)u g −r g u 1 − =(1/2)u g −r g ((1/2)u g +u 1 − ), r g ≈−1; lips: u out =(1−r N )u N + =u N + −u N − , r N ≈−1; wherein the sound track is expressed by the cascading of a plurality of sound tubes with uniform sectional areas; in the above formulas, A i and A i+1 are area functions of the i th and (i+1) th sound tubes, u i + and u i − are respectively the forward sound pressure and reverse sound pressure in the i th sound tube, r i is the reflection coefficient of the adjacent interfaces of the i th and (i+1) th sound tubes, u g is the waveform of the glottal voice source obtained through the calculation of the LF model, u N is the sound pressure value of the lip end, N is the number of segments of tubes with different areas, wherein the vocal tract is expressed by N segments of connected tubes, and u out is the waveform of the speech of the lip end. |
9176958 | 13795602 | 1 | 1. A method for searching music, the method comprising: receiving a query comprising a pulse train representable by a plurality of query values defining a tempo of music to be searched; generating a tempo scale set based on the query, wherein the generating comprises: mapping each of the query values to a tempo scale representing a length of a musical note corresponding to the query value; wherein the tempo scale set is a set of tempo scales representing lengths of musical notes corresponding to the plurality of query values; constructing a tempo word set based on the generated tempo scale set, the tempo word set comprising one or more tempo words, wherein constructing each tempo word of the one or more tempo words comprises: collecting one or more tempo scales of the tempo scale set, wherein the two or more tempo scales are positioned in the tempo scale set at a predetermined interval from one another; and identifying the music based on the tempo word set. |
10116733 | 15337310 | 1 | 1. A method comprising: a first external application server system receiving first user-provided communication quality feedback from a first telephony communication endpoint system and providing the first user-provided communication quality feedback to a multi-tenant telephony communication platform system; at the multi-tenant telephony communication platform system: receiving the first user-provided communication quality feedback from the first external application server system, wherein the first user-provided communication quality feedback relates to a first communication route of a first telephony communication initiated on behalf of a first platform account that is associated with the first external application server system; storing the first user-provided communication quality feedback in association with information that indicates the first communication route and an account identifier of the first platform account; receiving second user-provided communication quality feedback from the first external application server system, wherein the second user-provided communication quality feedback relates to a second communication route of a second telephony communication initiated on behalf of the first platform account; storing the second user-provided communication quality feedback in association with information that indicates the second communication route and the account identifier of the first platform account; receiving from the first external application server system a RESTful first feedback application programming interface (API) call; responsive to the RESTful first feedback API call, the platform system providing the first external application server system with feedback information that includes the first user-provided communication quality feedback and the second user-provided communication quality feedback, wherein the first platform account is one of a plurality of platform accounts of the platform system. |
9536547 | 14875092 | 1 | 1. A speaker change detection device comprising: a processor configured to: extract features representing features of a human voice in each frame having a predetermined time length from a voice signal including a conversation between a plurality of speakers; set, for each of a plurality of different time points in the voice signal, a first analysis period before the time point and a second analysis period after the time point; generate, for each of the plurality of time points, a first speaker model representing features of voices of a group of at least two speakers speaking in the first analysis period on the basis of a distribution of the features of a plurality of frames included in the first analysis period and a second speaker model representing features of voices of a group of at least two speakers speaking in the second analysis period on the basis of a distribution of the features in a plurality of frames included in the second analysis period; calculate, for each of the plurality of time points, a matching score representing the likelihood of similarity of features between the group of speakers in the first analysis period and the group of speakers in the second analysis period by applying the features in a plurality of frames included in the second analysis period to the first speaker model and applying the features of a plurality of frames included in the first analysis period to the second speaker model; and detect a speaker change point at which a change from a group of speakers speaking before the speaker change point to another group of speakers speaking after the speaker change point occurs in the voice signal on the basis of the matching score for each of the plurality of time points. |
8078463 | 10996811 | 1 | 1. A computerized method for spotting an at least one call interaction out of a multiplicity of call interactions, in which an at least one target speaker participates, the method comprising: capturing at least one target speaker speech sample of the at least one target speaker by a speech capture device; generating by a computerized engine a multiplicity of speaker models based on a multiplicity of speaker speech samples from the at least one call interaction; matching by a computerized server the at least one target speaker speech sample with the multiplicity of speaker models to determine a target speaker model; determining a score for each call interaction of the multiplicity of call interactions according to a comparison between the target speaker model and the multiplicity of speaker models; and based on scores that are higher than a predetermined threshold, determining call interactions, of the multiplicity of call interactions, in which the at least one target speaker participates. |
20110270609 | 12771400 | 0 | 1. A computer system for providing real-time resources to participants in an audio conference session, the computer system comprising: a conference system for establishing an audio conference session between a plurality of computing devices connected via a communication network; and a server configured to communicate with the conference system and the plurality of computing devices via the communication network, the server comprising: a processor and a memory; a pre-processing engine stored in the memory and executed by the processor, the pre-processing engine comprising logic configured to: receive an audio stream associated with one or more of the computing devices, the audio stream comprising a speech signal; and extract the speech signal from the audio stream; a speech-to-text conversion engine stored in the memory and executed by the processor, the speech-to-text conversion engine comprising logic configured to extract words from the speech signal; and a relevance engine stored in the memory and executed by the processor, the relevance engine comprising logic configured to: receive the extracted words from the speech-to-text conversion engine; and determine a relevant keyword or topic being discussed in the audio conference session; and a resources engine stored in the memory and executed by the processor, the resources engine comprising logic configured to: identify a resource related to the relevant keyword or topic; and provide, via a graphical user interface, the resource to the one or more computing devices. |
9342180 | 12479573 | 1 | 1. A method of providing a control signal of a computing system including a touch sensitive surface, the method comprising: obtaining first positions of multiple contacts corresponding to touch objects concurrently on or near the surface at a first time, the first positions including a first position of a first contact and a first position of a second contact; obtaining second positions of the multiple contacts corresponding to the touch objects concurrently on or near the surface at a second time after the first time, the second positions including a second position of the first contact and a second position of the second contact; determining a single rotational velocity based on the first position of the first contact, the first position of the second contact, the second position of the first contact, and the second position of the second contact; and providing the rotational velocity as a control signal of the computing system. |
8655651 | 13384882 | 1 | 1. A method performed by a computer for speech quality estimation, wherein the computer comprises a processor performing the steps of: determining a coding distortion parameter (Q COD ), a bandwidth related distortion parameter (BW) and a presentation level distortion parameter (PL) of a speech signal; extracting a first coefficient (ω 1 ) and a second coefficient (ω 2 ), the first coefficient (ω 1 ) and the second coefficient (ω 2 ) being dependent on the coding distortion parameter (Q COD ); calculating a signal quality measure (Q), where the signal quality measure is calculated based on Q COD +ω 1 ·BW+ω 2 ·PL, and using the signal quality measure (Q) in a quality estimation of the speech signal. |
20030050785 | 10206669 | 0 | 1. System, comprising: detection means for detecting a visual field of a user being directed onto a display; speech recognition means for recognizing speech information of the user; control means for controlling the system; and means for generating a visual feedback signal relating to a processing status of the speech information of the user. |
20160063993 | 14475450 | 0 | 1. A computer-implemented process, comprising: applying a machine-learned facet model to evaluate a plurality of samples of sentiment-bearing content to identify conversational topics and facets associated with one or more segments of that content; identifying one or more of the facets that have a consensus based on two or more samples of the sentiment-bearing content; generating a plurality of conversational utterances about one or more of the identified facets that have a consensus; and wherein one or more of the conversational utterances are generated by fitting one or more of the facets to one or more predefined conversational frameworks. |
9411801 | 13725095 | 1 | 1. An electronic apparatus configured to translate words into a target language, the electronic apparatus comprising: an electronic display; an electronic processor; and instructions to cause the electronic apparatus to: acquire an image of text; detect a selection of a word or word combination in the image to be translated; perform optical character recognition (OCR) on the selected word or word combination using a character alphabet of a plurality of languages; generate a set of recognition variants for each word of the selected word or word combination; transmit each set of recognition variants to a set of language specific processors; eliminate language inappropriate variants from the set of recognition variants for each language, wherein the language inappropriate variants are the recognition variants which do not contain characters or symbols of the language; match each of remaining variants to a source language, wherein the remaining variants are the recognition variants minus the language inappropriate variants; confirm that at least one of the remaining variants is in at least one language specific word list; translate a confirmed word variant using a translation dictionary; and provide a translation of the confirmed word variant. |
20070136059 | 11607608 | 0 | 1. Speech recognition method comprising the steps: operating a plurality of speech recognition processes in parallel; determining the best scoring interim recognition hypothesis for each speech recognition process and the best overall score; and pruning of interim recognition hypotheses of the plurality of speech recognition processes based on the best overall score. |
9910865 | 13959417 | 1 | 1. A method for storing digital images, said method comprising: capturing an image using a digital camera system; capturing metadata, using a processor, said metadata associated with said image at a moment of capture of said image, wherein said metadata comprises sensory information regarding said moment of capture of said image; and storing said metadata in at least one field within a file format, wherein said file format defines a structure for storing said image, and wherein said at least one field is located within an extensible segment of said file format, and wherein a user of said digital camera system is provided an option to encrypt said metadata, wherein at least one type of said metadata is related image information, wherein said at least one type of said metadata is selected from a group consisting of: GPS data and time data, and wherein said storing comprises: linking said image to at least one related image, wherein said linking comprises using said at least one type of said metadata; and embedding information to perform said linking in a field within said file format, wherein said field is dedicated to storing said related image information. |
20110211759 | 13036875 | 0 | 1. A character recognition apparatus, comprising: a binarizer for binarizing an input image; a character extractor for extracting at least one character area from the binarized image; a character feature value extractor for calculating a slope value of the extracted at least one character area and setting the calculated slope value as a character feature value; and a character recognizer for recognizing a character by using a neural network for recognizing a plurality of characters by receiving the set character feature value. |
20110208507 | 12709129 | 0 | 1. A method for correcting one or more typed words on an electronic device, comprising: receiving one or more typed words from a text input device; generating one or more candidate words for the one or more typed words; receiving an audio stream at the electronic device that corresponds to the one or more typed words; translating the audio stream into text using the one or more candidate words, wherein the translating comprises assigning a confidence score to each of the one or more candidate words and selecting a candidate word among the one or more candidate words to represent each portion of the text based on the confidence score of the selected candidate word; and replacing a word from the one or more typed words with the corresponding selected candidate word when the confidence score of the selected candidate word is above a predetermined threshold value. |
20130297321 | 13801441 | 0 | 1. A computer-based method of determining a location, comprising: receiving a signal representing an utterance from the user, the utterance specifying a location attribute and a landmark; identifying a set of candidate locations based on the specified location attribute; identifying a set of landmarks based on the specified landmark; generating an associated kernel model for each landmark in the set of landmarks, each kernel model comprising a three-dimensional model centered on a map at the location of a landmark associated with the kernel model; ranking the candidate locations based on kernel model amplitudes at each candidate location; and selecting a location to provide to the user based on the ranked candidate locations. |
8989485 | 14053208 | 1 | 1. A method for detecting a junction in a received image of a line of text to update a junction list with descriptive data, the method comprising: creating a color histogram based on a number of color pixels in the received image of the line of text; detecting, based at least in part on the received image of the line of text, a rung within the received image of the line of text; identifying a horizontal position of the detected rung in the received image of the line of text; additionally identifying a gateway on the color histogram, wherein the identified gateway is associated with the detected rung; and updating the junction list with data including a description of the identified gateway. |
20100162880 | 12347463 | 0 | 1. A computer-implemented method, comprising: automatically switching a graphical view of one or more file components associated with an audio and displayed in a graphical user interface (GUI) during playback of the audio, the switching based at least in part on a Musical Instrument Digital Interface (MIDI) view-switching track belonging to a file; automatically switching an instrument displayed in the GUI during playback of the audio based at least in part on a MIDI instrument-switching track belonging to the file; and automatically switching a metronome beat associated with the audio between on and off during playback of the audio based at least in part on a MIDI metronome-switching track belonging to the file. |
20020038336 | 09849816 | 0 | 1. A method of processing an application request on an end user application and an application server including a transaction manager comprising the steps of: a) initiating the application request on the end user application in a first language with a first application program; b) transmitting the application request to the server and converting the application request from the first language of the first end user application to a form for the transaction manager running on the application server; c) processing said application request on the application server; d) transmitting a response to the application request from the application server to the end user application, and converting the response to the application request from the transaction manager running on the application server to the first language of the first end user application; and e) wherein the end user application and the application server have at least one connector therebetween, and the steps of (i) converting the application request from the first language of the first end user application as a source language to the language running on the application server as a target language, and (ii) converting a response to the application request from the language running on the application server as a source language to the first language of the first end user application as a target language, each comprise the steps of: 1) invoking connector metamodels of respective source language and target transaction manager; 2) populating the connector metamodels with metamodel data of each of the respective source language and target transaction manager, the metamodel data of the target transaction manager including control data, state data, and user data; and 3) converting the source language to the transaction manager. |
20130024197 | 13272352 | 0 | 1. An electronic system comprising: a display unit; a voice input unit; and a controller configured to: control display, on the display unit, of multiple, different types of content provided by multiple, different content sources, receive, from the voice input unit, a voice command, select, from among the multiple, different types of content provided by the multiple, different content sources, a first type of content to associate with the received voice command, the first type of content being provided by a first content source, and control output of the first type of content provided by the first content source based on the received voice command. |
20160299889 | 15190321 | 0 | 1. A method comprising: receiving a preferred language identification for each of a plurality of message accounts associated with a first user; associating each preferred language identification with a respective one of the plurality of message accounts, each of the plurality of message accounts having a different preferred language identification; determining that a source language of an electronic message sent via a particular one of the plurality of message accounts from the first user to a second user is different from a first preferred language associated with the particular message account; and translating the electronic message to the first preferred language. |
9442924 | 14974768 | 1 | 1. A method comprising: based at least in part on a user interaction with a game application that is executed on a client device, identifying a text string in the game application; receiving, from a server separate from the client device, a translation of the identified text string, the translation of the identified text string including a token; in an automated operation performed using one or more computer processor devices, substituting the token with a word in accordance with the user interaction with the game application, thereby providing an updated translation of the text string; and causing display of the updated translation of the text string on the client device. |
9946511 | 14721044 | 1 | 1. A method for user training of an information dialogue system being at least partially implemented on a computing device, the method comprising: activating a user input subsystem associated with the computing device, the user input subsystem including at least one of a voice record and recognition component and a keyboard; receiving, by the user input subsystem, a training request, the training request being entered by a user via at least one of the voice record and recognition component and the keyboard associated with the computing device, wherein the training request includes instructions to personalize a response of the information dialog system to a request synonym, wherein the training request further includes a user request and instructions to associate, by the information dialogue system, the user request comprising at least one word with a sequence of actions to be performed by the information dialogue system, wherein at least one of the actions includes accessing a website; converting, by the user input subsystem, the training request of the user into a first text; sending the first text of the training request obtained as a result of the converting to a dialogue module associated with the computing device; processing, by the dialogue module, the first text of the training request; forming and sending, by the dialogue module, a confirmation request to the user; providing the confirmation request to the user, wherein the providing the confirmation request includes displaying the confirmation request or reproducing the confirmation request; receiving, by the user input subsystem, a response to the confirmation request, the response to the confirmation request being entered by the user: converting, by the user input subsystem, the response to the confirmation request into a second text; sending the second text of the response to the confirmation request to the dialogue module; processing, by the dialogue module, the second text of the response to the confirmation request; confirming that the training request and the response to the confirmation request are accepted by the information dialogue system; determining, by the dialogue module, whether the training request conflicts with preliminary settings of the information dialogue system; based on the determining that the training request conflicts with the preliminary settings, modifying, by the dialogue module, the preliminary settings to avoid conflicting the training request; forming, by the dialogue module, a response to the training request, wherein the response to the training request is formed as a command executed by the user input subsystem and one or more of the following: a voice cue and a response text, the command executed by the user input subsystem being based on the instructions and including associating the user request with the sequence of actions, the response being personalized based on the instructions, wherein the personalizing includes establishing the request synonym for the one or more of the voice cue, the response text, and the action; sending the response to the training request to the user; automatic activating the user input subsystem after the response to the training request is sent to the user; receiving, by the user input subsystem, the user request from the user, wherein the user request is entered by the user via at least one of the voice record and recognition component and the keyboard; converting, by the user input subsystem, the user request into a third text and sending the third text to the dialogue module; processing, by the dialogue module, the third text; and based on the processing of the third text, sequentially performing, by the information dialogue system, the sequence of actions based on the instructions of the training request, wherein the at least one of the actions includes accessing, by the computing device the website. |
20080249760 | 11784161 | 0 | 1. A method for providing a translation service comprising: receiving a text string written in a source language from a member via a translation interface; selecting a domain-based translation engine, the domain-based translation engine associated with a source language, a target language, and a domain; translating the text string into the target language using, at least in part, the selected domain-based translation engine; and transmitting the translated text string to the member via the network. |
9464905 | 12823301 | 1 | 1. A method of updating a vehicle ECU, the method comprising: establishing communication between a data communications module of a vehicle and an update server via a cellular network; validating the vehicle using a key exchange protocol between the data communications module and the update server, wherein the key exchange protocol includes the data communications module sending a first security key to the update server, receiving a request for an updated security key from the update server after sending the first security key, and sending a second security key to the update server after receiving the request for the updated security key; and sending update information from the update server to the data communications module of the vehicle via the cellular network, the update information configured to be used to update the vehicle ECU. |
20080165132 | 11620557 | 0 | 1. At a computer system including a multi-touch input surface, a method for recognizing a multiple input point gesture, the method comprising: an act of receiving an ordered set of points, indicating at least that: contact with the multi-touch input surface was detected at a first location on the multi-touch input surface; contact with the multi-touch input surface was detected at a second location on the multi-touch input surface simultaneously with the detected contact at the first location, subsequent to detecting contact with the multi-touch input surface at the first location; and contact with the multi-touch input surface was detected at a third location on the multi-touch input surface simultaneously with the detected contact at the first location and at the second location, subsequent to detecting contact with the multi-touch input surface at the second location; an act of calculating a line segment between the first location and the second location; an act of determining that the third location is on a specified side of the line segment; and an act of recognizing an input gesture corresponding to detected contact at three or more locations on the multi-touch input surface based at least on the determination that the third location is on the specified side of the line segment. |
20080162127 | 11616351 | 0 | 1. A network entity for effectuating a conference session between participants at a plurality of locations, the network entity comprising: a processor configured to receive a plurality of signals representative of voice communication of the participants, the signals being received from a plurality of terminals of a respective plurality of participants at one of the locations, each of at least some of the terminals otherwise being configured for voice communication independent of at least some of the other terminals, wherein the processor is configured to classify speech activity of the conference session according to a speech pause, or one or more actively-speaking participants, during the conference session, and wherein the processor is configured to mix the signals of the respective participants into at least one mixed signal for output to one or more other participants at one or more other locations, the signals being mixed based upon classification of the speech activity. |
20100100816 | 12252418 | 0 | 1. A method for assessing textual widgets, comprising: entering a string expression into a document; invoking a spell-checker to check a spelling of the string expression; marking the string expression as misspelled; identifying a textual widget based on the misspelling of the string expression; evaluating the misspelled string expression using the identified textual widget, the identified textual widget returning at least one result of the evaluation; displaying the at least one result of the evaluation; selecting a result of the evaluation; and replacing the string expression in the document with the selected result of the evaluation. |
20030036906 | 10189156 | 0 | 1. A method of setting the voice personality of a voice service site, wherein a set of voice personality characterisers associated with a previously-visited voice service site is used in presenting the voice output of a currently-visited voice service site. |
20050015751 | 10620680 | 0 | 1. A computerized method for adding debugging statements to a computer source code having a plurality of lines of code comprising: creating an annotated source code; setting a verbosity level to a predetermined level; traversing through said computer source code by reading and analyzing a portion of said source code at a time, said reading and analyzing comprising: reading said portion of said source code, said portion comprising executable statements and comments; and if said portion comprises an executable statement, writing said executable statement to said annotated source code, constructing an output statement comprising at least an indicator of the location of said executable statement within said source code, and writing said output statement to said annotated source code; and causing said annotated source code to be executed in place of said computer source code. |
20090304215 | 12540925 | 0 | 1. A method of fitting a hearing aid to a sound environment, comprising selecting a setting for an initial hearing aid transfer function according to a general fitting rule, calculating an estimate of the sound environment by calculating the speech level and the noise level in each among a set of frequency bands, calculating a speech intelligibility index based on the estimate of the sound environment and the initial transfer function, and adapting the initial setting to provide a modified transfer function suitable for enhancing the speech intelligibility. |
10083173 | 14703018 | 1 | 1. An artificial intelligence system comprising: a storage device comprising a terminology database that stores (i) a plurality of terms utilized in a previous communication by a human user requesting a product and/or a service in a first spoken language, (ii) a plurality of responses in a second spoken language to the previous communication, and (iii) a plurality of outcomes that indicate accuracy of a correspondence between the plurality of responses in the second spoken language and the plurality of terms in the first spoken language, the second spoken language being distinct from the first spoken language; and a processor that (i) learns to generate responses associated with corresponding terms in a request based upon a statistical probability analysis of the plurality of outcomes from the terminology database, (ii) receives a request for a product and/or service in the first spoken language in a current communication in which a human language interpreter is not available, (iii) selects a phrase in the first spoken language from the terminology database based on an occurrence of a term in the first spoken language in the request and a substantial probability of the phrase provided in the first spoken language in conjunction with the term in the first spoken language eliciting particular follow-up data from the human user, (iv) provides the phrase in the second spoken language to an entity representative that participates in the current communication, (v) generates a response to the human user in the first spoken language to obtain the particular follow-up data from the human user in the first spoken language to facilitate ordering the product and/or service, (vi) provides the particular follow-up data received from the user in the second spoken language to the entity representative for the entity representative to order the product and/or service, and (vii) auto-populates at least one question for a requestor of the product and/or the service in the first spoken language and sends the at least one question to the requestor. |
9424241 | 14145168 | 1 | 1. An electronic device, comprising: a display for presenting paginated digital content to a user; and a user interface including an annotation mode, the annotation mode including multiple different note types for paginated digital content, the note types comprising: i) a sticky note that can be created in response to a tap or mouse click or selection made on the paginated digital content when the annotation mode is invoked, wherein the sticky note is represented by a movable graphic and selection of the graphic causes contents of the sticky note to be presented; and ii) a margin note that can be created by converting a previously created sticky note to a margin note, such that the margin note replaces the previously created sticky note and includes content from the previously created sticky note, wherein contents of the margin note are always presented and the margin note is configured to be placed anywhere on the paginated digital content. |
9326119 | 14092279 | 1 | 1. A mobile computer comprising: a processor configured to send a poll to a vehicle based computer in proximity to the mobile computer, the processor further configured to send the poll to the vehicle based computer via a Bluetooth link; a transceiver configured to receive a text message via a cellular network; circuitry configured to send the received text message to the vehicle based computer via the Bluetooth link, the sent text message configured to affect a radio volume associated with the vehicle based computer; and wherein the Bluetooth link between the mobile computer and the vehicle based computer is arranged in relation to the poll sent to the vehicle based computer from the mobile computer. |
20070005337 | 11344839 | 0 | 1. A method that converts a classifier in a host language to a classifier in a target language, the method comprising: marking target language examples of passages of text in order to obtain an initial classifier in the target language; re-classifying a plurality of target language examples in the initial classifier; questioning the marking used to obtain the initial classifier, the questioning being based on the re-classifying; isolating a high-quality set of target examples based on the results of the questioning; and using the high-quality set of target examples to prepare a high-quality classifier in the target language. |
9860647 | 15190202 | 1 | 1. A high sound quality piezoelectric speaker, comprising a moving coil speaker, a support frame, a vibration plate, and a piezoelectric ceramic plate, wherein the support frame is arranged on the moving coil speaker; the vibration plate is arranged on the support frame; the moving coil speaker has a sound emission direction in communication with the vibration plate; and the piezoelectric ceramic plate is arranged on the vibration plate, wherein the support frame is an annular frame, the annular frame comprising a receiving compartment formed in one side thereof, the annular frame being provided therein with a sound emission channel, the moving coil speaker being received in the receiving compartment, the moving coil speaker comprising a protection cover that is arranged opposite to the vibration plate, the protection cover having an outside surface on which a sealing board is disposed, the moving coil speaker comprising a frame in which fourth sound emission holes are formed, the sound emission channel being arranged between and in communication with the fourth sound emission holes and the vibration plate. |
20150378990 | 14319863 | 0 | 1. A method implemented by an information handling system that includes a memory and a processor, the method comprising: computing, by the processor, a leverage value of a language translation supply chain, wherein the leverage value corresponds to an amount of suggested translations, from a plurality of suggested translations, that are accepted by a user that results in a set of accepted translations; computing, by the processor, a factor value of the language translation supply chain, wherein the factor value indicates a productivity of the user to convert the set of accepted translation into a set of final translations; determining, by the processor, a performance efficiency of the language translation supply chain based upon the leverage value and the factor value; and evaluating the language translation supply chain based upon the performance efficiency. |
20150046884 | 13964961 | 0 | 1. A method comprising: displaying, by an electronic device, a user interface of an application; receiving, by the electronic device, touch input in a region of the displayed user interface; determining, by the electronic device, a context; performing, by the electronic device, a first action if the determined context is a first context; and performing, by the electronic device, a second action if the determined context is a second context different from the first context, wherein the second action is different from the first action. |
10074361 | 15278651 | 1 | 1. A speech recognition apparatus, the apparatus comprising: a processor configured to: extract select frames from all frames of a first speech of a user; calculate an acoustic score of a second speech, made up of the extracted select frames, by using a Neural Network (NN)-based acoustic model, and to calculate an acoustic score of frames, of the first speech, other than the select frames based on the calculated acoustic score of the second speech; and recognize the first speech based on the calculated acoustic score of the second speech and the calculated acoustic score of the frames other than the select frames. |
20020143534 | 10105498 | 0 | 1. A correction device ( 10 ) for correcting incorrect words in text information (ETI) recognized by a speech recognition device( 1 ) from speech information (SD), comprising reception means for receiving the speech information (SD), the associated recognized text information (ETI) and the link information (LI), which at each word of the recognized text information (ETI) marks the part of the speech information (SD) at which the word was recognized by the speech recognition device ( 1 ), and comprising editing means ( 11 ) for positioning a text cursor (TC) at an incorrect word of the recognized text information (ETI) and for editing the incorrect word according to editing information (EI) entered by a user and comprising synchronous playback means ( 12 ) to allow a synchronous playback mode, in which during acoustic playback of the speech information (SD) the word of the recognized text information (ETI) just played back and marked by the link information (LI) is marked synchronously, while the word just marked features the position of an audio cursor (AC) and the editing means ( 11 ) are designed for positioning the text cursor (TC) and for editing the incorrect word when the synchronous playback mode is active in the correction device ( 1 ). |
20050265541 | 10840171 | 0 | 1. A system for providing a voice dialogue in a telephone network, said system comprising: a switching point connected to a communication device; a service control point connected to said switching point; a voice extensible markup language browser connected to said switching point; and a converter connected to said service control point and said voice extensible markup language browser, wherein said converter communicates with said service control point using a call control protocol, and wherein said converter is adapted to convert said call control protocol to a voice extensible markup language. |
7734287 | 10165384 | 1 | 1. A system for facilitating diagnosis and maintenance of one or more control networks, comprising: an onboard vehicle control network located on a mobile conveyance; a wireless interface coupled to said control network; a wireless ground station configured to communicate over a wireless communication channel with said onboard vehicle control network via said wireless interface; a local area computer network in communication with said wireless ground station, said local area computer network comprising a server computer; a database comprising diagnostic information relating to said onboard vehicle control network; and a wide area network interface, whereby additional diagnostic information relating to said onboard vehicle control network is obtainable from one or more remote computers; and a portable handheld wireless diagnostic unit with a manual input interface and graphical display, said portable handheld wireless diagnostic unit having no physical connection to said onboard vehicle control network and configured to: communicate wirelessly with said onboard vehicle control network via said wireless interface; convey instructions wirelessly to the onboard vehicle control network in response to input from said manual input interface; and communicate wirelessly with said local area computer network via said wireless ground station, thereby receiving the diagnostic information from the local area computer network via said wireless interface pertaining to the onboard vehicle control network. |
20080221887 | 12120801 | 0 | 1. A method of performing speech recognition, the method comprising: receiving a voice request from a user at the device; determining an identity of the user; retrieving a speaker independent speech recognition model based on the user identification; and if the user is new to the device, dynamically adapting the device to provide speech recognition specific to the new user. |
20140365448 | 13910135 | 0 | 1. A method implemented at least in part by a computer, the method comprising: receiving trending data that includes a set of phrases to be used for text suggestions; installing the trending data as a new dataset of a client, the client including a local dataset that includes terms that have been inputted by user input on the client; assigning the new dataset a weight that is less than a weight assigned to the local dataset such that suggestions from the new dataset are suggested after higher weighted suggestions available from the local dataset when the suggestions from the local dataset and the suggestions from the new dataset are equally probable; and deleting, from the client, an old dataset that included previous trending data. |
9514130 | 14985300 | 1 | 1. A method comprising: receiving a first speech input from a first speaker; determining, by a speech translation system, a first recognized speech result based on the speech input; determining, by the speech translation system, whether there exists a recognition ambiguity in the first recognized speech result, wherein the recognition ambiguity indicates more than one possible match for the first recognized speech result; upon a determination that there is recognition ambiguity in the first recognized speech result of the first speaker, determining a confidence score based on the recognition ambiguity; and responsive to the confidence score being below a threshold, issuing a first disambiguation query to the first speaker via the speech translation system, wherein a response to the first disambiguation query resolves the recognition ambiguity. |
20120250858 | 13078740 | 0 | 1. A system for providing an application usage continuum across client devices, said system comprising: a first client device configured to execute a first instance of an application; a second client device configured to execute a second instance of said application; and wherein, said first client device is further configured to: receive an indication to transfer operation of said first instance of said application running on said first client device to said second instance of said application on said second client device; generate state information and data associated with execution of said first instance of said application on said first client device; and cause said state information to be sent to said second client device to enable said second instance of said application on said second client device to continue operation of said application on said second client device using said state information from said first client device. |
20030012558 | 10165427 | 0 | 1. A computer readable information storage medium read and executed by a medium layer, comprising: audio/video (AV) data providing a video picture when read and executed; and multiple markup documents information which represents text information in multiple languages to be displayed in one of the multiple languages and defines a display window to display the video picture corresponding to an AV data stream decoded and reproduced from the AV data when being read and executed; and multi-language markup document information representing a markup document to display the text information in the one of the multiple languages when being read and executed. |
7809575 | 11679279 | 1 | 1. A method for enabling global grammars for a particular multimodal application, the method implemented with a multimodal browser and a multimodal application operating on a multimodal device supporting multiple modes of user interaction with the multimodal application, the modes of user interaction including a voice mode and one or more non-voice modes, the method comprising: loading a multimodal web page; determining whether the loaded multimodal web page is one of a plurality of multimodal web pages of the particular multimodal application; if the loaded multimodal web page is one of the plurality of multimodal web pages of the particular multimodal application, loading any currently unloaded global grammars in the loaded multimodal web page and maintaining any previously loaded global grammars; and if the loaded multimodal web page is not one of the plurality of multimodal web pages of the particular multimodal application, unloading any currently loaded global grammars. |
20080226042 | 12049021 | 0 | 1. A method for facilitating use of an interactive voice response (IVR) system, comprising: transmitting a request for a webpage to a server; receiving the webpage, wherein the webpage includes a telephone number of the IVR system; displaying the webpage, wherein the webpage includes a control element associated with the telephone number; activating the control element; and in response to activation of the control element, displaying a visual representation of at least the first level of options of an IVR menu. |
8321196 | 12572602 | 1 | 1. A method programmed for execution in a computing device for providing prose text report generation from a patient consultation, the method comprising: capturing two or more orally provided knowledge information items pertaining to a patient or observations of a radiological image during the patient consultation, said two or more orally provided knowledge information items being provided in a memory of the computing device; and utilizing a prose text definition ontology having linguistic knowledge and a base report domain ontology to structure one or more sentences of a prose text report based on said knowledge information items for presentation to a user; said prose text definition ontology comprising: one or more references to said base report domain ontology; one or more connectors for modeling adjacent linguistic relations to connect concepts from said base report domain ontology; and one or more sequences for modeling relations among concepts from said base report domain ontology; wherein said prose text definition ontology is consulted with concepts from said base report domain ontology to provide prose text that expresses said concepts and relationships involved in the patient consultation; wherein said base report domain ontology imports said knowledge items pertaining to the patient, defines said knowledge items as report domain concepts of said base report domain ontology, identifies relationships between said report domain concepts, and identifies constraints on said report domain concepts, wherein said knowledge items are identified, classified and validated in the context of said base report domain ontology; and wherein said report domain concepts, said relationships between said report domain concepts and said constraints on said report domain concepts are presented to said prose text definition ontology to be expressed as prose text using said references, said connectors and said sequences in said one or more sentences. |
20040019485 | 10388107 | 0 | 1. A speech synthesis method comprising: a separating step of separating, from an input text, a singing data portion specified by a singing tag and the other text portion; a singing metrical data forming step of forming singing metrical data from said singing data; a speech symbol sequence forming step of forming a speech symbol sequence for said text portion; a metrical data forming step of forming metrical data from said speech symbol sequence; and a speech synthesis step of synthesizing the speech based on said singing metrical data or said metrical data. |
20040015547 | 10245918 | 0 | 1. A method for chat-based communications between a legacy wireless mobile terminal and a non-legacy wireless mobile terminal, comprising: receiving an outbound chat message from the non-legacy mobile terminal, the outbound message including a legacy address corresponding to the legacy mobile terminal; detecting the legacy address; upon detecting the legacy address, building an inbound message that includes an originator address; injecting the inbound message into an out-of-band messaging system for delivery to the legacy mobile terminal; and receiving a reply message at the non-legacy mobile terminal, sent from the legacy mobile terminal in response to the inbound message. |
4718092 | 06593893 | 1 | 1. In a speech recognition apparatus wherein speech units are each characterized by a sequence of template patterns, and having means for processing a speech input signal for repetitively deriving therefrom, at a frame repetition rate, a plurality of speech recognition acoustic parameters, and means responsive to said acoustic parameters for generating likelihood costs between said acoustic parameters and said speech template patterns, and for processing said likelihood costs for determining the speech units in said speech input signal, a method of template matching and cost processing for recognizing the correspondence of said speech input signal and said template patterns, said method comprising the steps of characterizing the allowable possible sequences of speech units as a grammar graph, said graph having a plurality of grammar nodes connected by a plurality of connecting arcs, each said arc having associated therewith at least one word, each word having at least one kernel, and each kernel having one template pattern, deactivating kernels of each said word having a plurality of kernels when a minimum cumulative score associated therewith exceeds a deactivation threshold, kernels which have not been deactivated being called active kernels, generating likelihood costs representing the similarity of said acoustic parameters and ones of said active kernels, determining, at each frame time, cumulative scores associated with said nodes, generating a speech recognition decision, and determining from said cumulative scores the identity of the speech units in said speech input signal. |