Columns:
  doc_id       string, length 7 to 11
  appl_id      string, length 8
  flag_patent  int64, values 0 or 1
  claim_one    string, length 13 to 18.3k
doc_id: 9691007 | appl_id: 14840109 | flag_patent: 1
1. An identification apparatus, comprising: data obtaining circuitry that obtains input data; feature quantity obtaining circuitry that obtains a feature quantity corresponding to the input data; a plurality of classifiers that receives input of the feature quantity, performs classification based on the input feature quantity, and outputs a single first class value, which is a value corresponding to a class obtained by the classification, respectively; identification circuitry that inputs the feature quantity into each of the classifiers, and generates a second class value, which is a single classification result, based on a plurality of the first class values obtained from the classifiers; and reliability generation circuitry that generates a reliability of the second class value based on variations across the plurality of the first class values, wherein the reliability generation circuitry generates the reliability so that the magnitude of the variation of the plurality of the first class values and the magnitude of the reliability of the second class value have a negative correlation.
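The reliability mechanism in the claim above reduces to a small ensemble-voting routine: the classifiers' outputs (the first class values) are combined into one result (the second class value), and the reliability falls as their disagreement grows. The following is a minimal illustrative sketch, not the patented implementation; the names and the specific map from variation to reliability are assumptions.

```python
import numpy as np
from collections import Counter

def classify_with_reliability(first_class_values):
    """Combine per-classifier class values into one result plus a reliability.

    first_class_values: list of class labels, one per classifier.
    Returns (second_class_value, reliability); the reliability falls as the
    spread of the individual class values grows (negative correlation).
    """
    votes = np.asarray(first_class_values, dtype=float)
    # Majority vote yields the single classification result.
    second_class_value = Counter(first_class_values).most_common(1)[0][0]
    # Variance across the first class values measures their disagreement.
    variation = votes.var()
    # Any monotonically decreasing map of the variation satisfies the
    # negative-correlation requirement; 1/(1+v) is one convenient choice.
    reliability = 1.0 / (1.0 + variation)
    return second_class_value, reliability

print(classify_with_reliability([2, 2, 2, 2]))  # no variation -> reliability 1.0
print(classify_with_reliability([0, 2, 1, 2]))  # high variation -> lower reliability
```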
doc_id: 20080243508 | appl_id: 12068600 | flag_patent: 0
1. A prosody-pattern generating apparatus comprising: an initial-prosody-pattern generating unit that generates an initial prosody pattern based on language information and a prosody model which is obtained by modeling prosody information in units of phonemes, syllables and words that constitute speech data; a normalization-parameter generating unit that generates, as normalization parameters, mean values and standard deviations of the initial prosody pattern and a prosody pattern of a training sentence included in a speech corpus, respectively; a normalization-parameter storing unit that stores the normalization parameters; and a prosody-pattern normalizing unit that normalizes a variance range or a variance width of the initial prosody pattern in accordance with the normalization parameters.
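The normalization step in the claim above is, in effect, z-score matching: the initial pattern's variance range is rescaled to the statistics of a training-corpus pattern using the stored means and standard deviations. A minimal sketch under that reading; the function and variable names are hypothetical, not the claimed units.

```python
import numpy as np

def normalize_prosody(initial_pattern, corpus_pattern):
    """Rescale the initial prosody pattern's variance width to the corpus.

    Both arguments are 1-D arrays of prosody values (e.g. F0 per phoneme).
    The normalization parameters are the mean and standard deviation of
    each pattern, as in the claim.
    """
    mu_init, sd_init = initial_pattern.mean(), initial_pattern.std()
    mu_ref, sd_ref = corpus_pattern.mean(), corpus_pattern.std()
    # Map to zero mean / unit variance, then to the corpus statistics.
    return (initial_pattern - mu_init) / sd_init * sd_ref + mu_ref

initial = np.array([120.0, 180.0, 150.0, 135.0])    # hypothetical F0 contour (Hz)
reference = np.array([100.0, 260.0, 200.0, 140.0])  # wider training-corpus contour
print(normalize_prosody(initial, reference))
```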
doc_id: 4624012 | appl_id: 06375434 | flag_patent: 1
1. A text-to-speech synthesis system for producing audible synthesized human speech of any one of a plurality of voice sounds simulating child-like, adult, aged and sexual characteristics from digital characters comprising: text reader means adapted to be exposed to text material and responsive thereto for generating information signals indicative of the substantive content thereof; converter means for receiving said information signals from said text reader means and generating digital character signals representative thereof; means for receiving said digital character signals from said converter means; memory means storing digital speech data including digital speech instructional rules and digital speech data representative of sound unit code signals; data processing means for searching said digital speech data stored in said memory means to locate digital speech data representative of a sound unit code corresponding to said digital character signals received from said converter means; speech memory means storing digital speech data representative of a plurality of sound units; concatenating controller means operably coupled to said speech memory means for selectively combining digital speech data representative of a plurality of sound units in a serial sequence to provide concatenated digital speech data representative of a word; speech synthesis controller means coupled to said data processing means and to said speech memory means for receiving digital speech signals representative of a sound unit code corresponding to said digital character signals and selectively accessing digital speech data representative of sound units corresponding to said sound unit code from said speech memory means; speech synthesizer means operably coupled to said concatenating controller means and said speech synthesis controller means for receiving selectively accessed serial sequences of digital speech data from said concatenating controller means to provide audio signals corresponding thereto and representative of synthesized human speech; voice characteristics conversion means interposed between said concatenating controller means and said speech synthesizer means and being coupled therebetween independently of the coupling between said concatenating controller means and said speech synthesizer means, said voice characteristics conversion means being operably coupled to said speech synthesis controller means and being responsive thereto to selectively modify the voice characteristics of said serially sequenced digital speech data output from said concatenating controller means, said voice characteristics conversion means including means for making a voice character selection of the synthesized speech to be derived from the digital speech data as selectively accessed from said speech memory means so as to simulate a voice sound differing in character with respect to the voice sound of the synthesized speech from the digital speech data of said speech memory means in the voice characteristics pertaining to the apparent age and/or sex of the speaker; said digital speech data as selectively accessed from said speech memory means having a predetermined pitch period, a predetermined vocal tract model and a predetermined speech rate; speech parameter control means for modifying the pitch period and speech rate in response to inputs from said voice character selection means to produce a modified pitch period and a modified speech rate, said speech parameter control means including sample rate control circuit means responsive to inputs from said voice character selection means for adjusting the sampling period of said digital speech data selectively accessed from said speech memory means in a manner altering the digital speech formants contained therein to a preselected degree and providing adjusted sampling period signals as an output; speech data reconstructing means operably associated with said speech parameter control means for combining the modified pitch period and the modified speech rate with the predetermined vocal tract model into a synthesized speech data format of speech data modified with respect to the original speech data from said speech memory means; said speech synthesizer means being coupled to said speech data reconstructing means and to the output of said sample rate control circuit means for receiving the modified speech data and the adjusted sampling period signals therefrom in providing said audio signals representative of human speech from the modified speech data; and audio means coupled to said speech synthesizer means for converting said audio signals into audible synthesized human speech in any one of a plurality of voice sounds from said digital speech data stored in said speech memory means as determined by said voice characteristics conversion means.
doc_id: 20170310821 | appl_id: 15644589 | flag_patent: 0
1. A method of deterring unsolicited telephone calls, the method comprising: in response to user input received by a telephone switch from a telephone at a destination telephone number during a first call from an originating telephone number and terminated to the destination telephone number by the telephone switch, capturing, with a computer system, call processing data associated with the first call; identifying, with the computer system and based at least in part on the call processing data, an originating entity of the first call; and adding, with the computer system, information about the originating entity to a database of telephone numbers that originate unsolicited telephone calls.
doc_id: 20140010363 | appl_id: 13920020 | flag_patent: 0
1. (canceled)
doc_id: 8374791 | appl_id: 12694405 | flag_patent: 1
1. A computer implemented method of operating a navigation system to provide a guidance message for traveling a route comprising a road segment, the method comprising: obtaining, by a processor, data from a geographic database associated with the navigation system identifying a feature visible from the road segment; obtaining, by the processor, preferred name data from the geographic database representing a preferred name of said feature visible from the road segment, wherein the preferred name provides a visual description of the feature in a first language; identifying, by the processor, a part-of-speech for at least two components of the preferred name; converting, by the processor, the at least two components of the preferred name into a text of a second language according to the part-of-speech for the at least two components and a grammar rule of the second language to provide the preferred name in the second language, wherein the second language is different from the first language; and providing the guidance message including the text of the second language.
doc_id: 20060085186 | appl_id: 10967957 | flag_patent: 0
1. An electronic device comprising: a memory for storing a word, a first transcription of the word, and a second transcription of the word; a microphone for capturing a speech utterance; a speech recognition engine, coupled to the microphone and the memory, for evaluating the speech utterance against the first transcription and the second transcription, for determining a first probability factor for the first transcription, and for determining a second probability factor for the second transcription; and a processor, coupled to the speech recognition engine, for inactivating the first transcription if the first probability factor is below a threshold.
doc_id: 20150169638 | appl_id: 14275067 | flag_patent: 0
1. A system for verifying a recognized object, comprising: a candidate database storing a plurality of candidate images; a verification engine communicatively coupled to the candidate database, configured to: receive a plurality of candidate results, wherein the candidate results comprise at least two of the plurality of candidate images and each of the plurality of candidate results corresponds to a potential match for a captured image; select a verification technique based on at least one candidate result from the plurality of candidate results; generate, using the selected verification technique, a match score for each of the at least one candidate result as a function of the captured image and the selected at least one candidate result; and classify the at least one candidate result based on the at least one generated match score.
doc_id: 7693951 | appl_id: 12144373 | flag_patent: 1
1. A data processing system for managing messages, the data processing system comprising: receiving means for receiving a request to store a plurality of instant messages from a particular chat session with a particular contact; determining means for determining whether the particular contact has a listing on a contact list comprising a plurality of contacts, wherein each contact of the plurality of contacts having a listing on the contact list has an associated folder in a set of folders for storing chat sessions of the associated contact, and wherein each folder in the set of folders is linked with the listing of its associated contact on the contact list through a graphical user interface; and selecting means, responsive to determining that the particular contact has a listing on the contact list, for selecting the listing of the particular contact from the contact list to store the plurality of instant messages from the particular chat session in the associated folder linked with the listing of the particular contact on the contact list.
doc_id: 20130339014 | appl_id: 13495509 | flag_patent: 0
1. A computer-implemented method employing at least one hardware implemented computer processor for performing cepstral mean normalization (CMN) in automatic speech recognition comprising: storing a current CMN function in a computer memory as a previous CMN function; updating the current CMN function based on a current audio input to produce an updated CMN function; using the updated CMN function to process the current audio input to produce a processed audio input; attempting to perform automatic speech recognition of the processed audio input to determine representative text; if the processed audio input is not recognized as representative text, replacing the updated CMN function with the previous CMN function.
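The claim above describes cepstral mean normalization with a rollback: snapshot the current CMN function, update it on the incoming audio, and restore the snapshot if recognition fails. A schematic sketch of that control flow, with a running-mean CMN and a stubbed recognizer standing in for the unspecified components:

```python
import numpy as np

class RollbackCMN:
    """Cepstral mean normalization whose update can be undone on ASR failure."""

    def __init__(self, dim):
        self.mean = np.zeros(dim)  # current CMN function (a running mean here)

    def process(self, cepstra, recognize):
        """cepstra: (frames, dim) array; recognize: callable -> text or None."""
        previous = self.mean.copy()                               # store current CMN as previous
        self.mean = 0.9 * self.mean + 0.1 * cepstra.mean(axis=0)  # update on current audio
        text = recognize(cepstra - self.mean)                     # process with updated CMN
        if text is None:                                          # not recognized as text:
            self.mean = previous                                  # replace update with previous CMN
        return text

cmn = RollbackCMN(dim=13)
frames = np.random.randn(50, 13)
print(cmn.process(frames, recognize=lambda x: None))  # failed ASR rolls the CMN back
```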
doc_id: 7684076 | appl_id: 10898046 | flag_patent: 1
1. A raster image processor comprising: means adapted for receiving image data, which image data is comprised of a linear bit stream representative of an associated image; segmenting means adapted for segmenting the image data into a plurality of chunks including M existing chunks and a remaining plurality of non-existing chunks, wherein each of the M existing chunks includes data corresponding to at least one pixel that has been used in connection with a prior rendering operation, wherein each of the plurality of existing chunks is comprised of N bits, and wherein M and N are integers greater than one; page map memory adapted for storing page map data, which page map data includes at least M entries, each entry corresponding to a single chunk from the plurality of existing chunks and non-existing chunks, each entry adapted for storing a first flag value representative of a first selected value of a chunk from the plurality of existing chunks and non-existing chunks corresponding thereto and at least a second flag value when the chunk corresponding thereto is other than the first selected value; means adapted for grouping the M existing chunks into a plurality of subsets thereof; and scan list memory adapted for storing scan list data, which scan list data is comprised of an indexed array wherein each element thereof functions as a pointer to an initial existing chunk in a subset thereof, and the initial existing chunk includes a pointer to a subsequent existing chunk in the subset.
doc_id: 20140250378 | appl_id: 13784449 | flag_patent: 0
1. A method for using a wizard control panel in a natural language (NL) conversational system, comprising: receiving a user interaction with the NL conversational system that includes the use of an Automated Speech Recognition (ASR) component, a Natural Language Understanding (NLU) component, a Dialog Manager (DM) component, and a Natural Language Generation (NLG) component during a dialog flow; displaying the wizard control panel including a display of elements that are used for affecting an automatic operation of at least one of the different components of the NL conversational system during the dialog flow; determining when an input is received that is associated with one of the elements of the wizard control panel; determining a current component of the dialog flow; and submitting to the current component updated results based on the input that modify results automatically determined by the current component of the NL conversational system.
doc_id: 8401212 | appl_id: 12251200 | flag_patent: 1
1. A communication device for use with an ear of a user, the ear comprising a pinna, an eardrum, an ear canal and an opening of the ear canal, the device comprising: an ear canal input transducer to detect high frequency localization cues of the pinna comprising high frequencies of sound above a resonance frequency of the ear canal when placed at least one of inside the ear canal or near the opening of the ear canal; an external input transducer to detect sound comprising frequencies of sound at or below the resonance frequency when placed outside the ear canal away from the ear canal opening; at least one output transducer sized for placement inside the ear canal to vibrate the eardrum of the user; and circuitry comprising a processor and amplifiers coupled to the ear canal input transducer, the external input transducer and the at least one output transducer, the processor configured to output the high frequencies of sound with a first high frequency gain from the ear canal input transducer and a second high frequency gain from the external input transducer, the first high frequency gain greater than the second high frequency gain in order to vibrate the eardrum with amplified high frequency localization cues of the pinna from the ear canal input transducer and wherein the processor outputs the frequencies of sound at or below the resonance frequency with a first gain from the ear canal input transducer and a second gain from the external input transducer, the second gain greater than the first gain to provide sound from the external input transducer to the user.
doc_id: 20110082683 | appl_id: 12572021 | flag_patent: 0
1. A method for credibly providing machine-generated translations, the method comprising: translating a document from a source language to a target language by executing a machine-translation engine stored in memory to obtain a machine-generated translation; predicting a trust level of the machine-generated translation by executing a quality-prediction engine stored in memory, the trust level associated with translational accuracy of the machine-generated translation; and outputting the machine-generated translation and the trust level.
doc_id: 9984679 | appl_id: 15212908 | flag_patent: 1
1. A method comprising: weighting a first automatic speech recognition model, to yield a weighted first automatic speech recognition model; weighting a second automatic speech recognition model, to yield a weighted second automatic speech recognition model; converting, via a processor, speech to text using the weighted first automatic speech recognition model, to yield a first transcript; converting, via the processor, the speech to text using the weighted second automatic speech recognition model, to yield a second transcript; receiving, from a user, a judgment of perceived accuracy of one of the first transcript and the second transcript; and updating, via the processor, one of the weighted first automatic speech recognition model and the weighted second automatic speech recognition model based on the judgment.
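The method above runs the same speech through two differently weighted recognizers and nudges the weights using the user's judgment of which transcript seemed more accurate. A toy sketch of just the weight update; the step size and normalization are assumptions, not part of the claim:

```python
def update_from_judgment(weights, preferred_index, step=0.1):
    """Shift weight toward the model whose transcript the user judged better.

    weights: [w1, w2] applied to the two ASR models.
    preferred_index: 0 or 1, the transcript the user preferred.
    """
    weights = list(weights)
    weights[preferred_index] += step
    weights[1 - preferred_index] -= step
    total = sum(weights)
    return [w / total for w in weights]  # keep the weights normalized

w = [0.5, 0.5]
# The two transcripts would come from decoding the same speech with each
# weighted model; here the user judged the second transcript more accurate.
w = update_from_judgment(w, preferred_index=1)
print(w)  # [0.4, 0.6]
```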
doc_id: 8428476 | appl_id: 12457747 | flag_patent: 1
1. An image forming apparatus comprising: an image carrier that has a visible image carried on a rotating peripheral surface thereof; a driving source that generates a driving force for rotating and driving the image carrier; a rotation detection unit that detects a rotational angle speed or a rotational angle displacement of the image carrier; and a control unit that performs fluctuation pattern recognition processing for detecting a speed fluctuation of the image carrier based on an output from the rotation detection unit while driving the driving source in a state in which a print job in accordance with a user's instruction is not performed, thereby recognizing a speed fluctuation pattern per integer rotation of the image carrier, control pattern construction processing for constructing a speed control pattern of the driving source that reduces a cyclic speed fluctuation of the image carrier based on the speed fluctuation pattern, speed fine-adjustment processing for finely adjusting a driving speed of the driving source in accordance with the speed control pattern during a transfer process including at least a process for transferring the visible image on the peripheral surface of the image carrier to a transfer body or a process for transferring a visible image on another image carrier to the peripheral surface of the image carrier, and remaining pattern recognition processing for detecting a remaining speed fluctuation remaining in the image carrier even after the speed fine-adjustment processing is performed, thereby recognizing a remaining speed fluctuation pattern per integer rotation of the image carrier; wherein the control unit is configured to perform control pattern correction processing for setting a frequency band of the remaining speed fluctuation to be detected by the remaining pattern recognition processing narrower than a frequency band of the speed fluctuation to be detected by the fluctuation pattern recognition processing and correcting the speed control pattern so as to be a pattern capable of reducing even the remaining speed fluctuation based on the remaining speed fluctuation pattern recognized by the remaining pattern recognition processing.
doc_id: 20130024820 | appl_id: 13117924 | flag_patent: 0
1. A method, comprising: activating, by a computing device, a graphical key of a graphical keyboard, wherein the graphical key is associated with a touch-target and is displayed at a touch-based interface of the computing device, wherein the computing device controls a graphical selector displayed at the touch-based interface using input received at the touch-target, and wherein the graphical selector comprises one of a text cursor or a pointer; upon activation of the graphical key, receiving gesture input corresponding to a directional gesture at the touch-based interface of the computing device, wherein the directional gesture is a swipe motion originating at the touch target; and moving the graphical selector from a first graphical location at the touch-based interface to a second graphical location at the touch-based interface by at least one selected increment, wherein the at least one selected increment is at least partially based on a speed of the gesture input originating at the touch target, wherein the at least one selected increment comprises a first increment if the speed of movement of the gesture input is below a speed threshold, and wherein the at least one selected increment comprises a second increment larger than the first increment if the speed of the gesture input is above or equal to the speed threshold.
doc_id: 9177558 | appl_id: 13755790 | flag_patent: 1
1. A computer-implemented method of assessing speech pronunciation, comprising: receiving speech for analysis via a computer-readable storage medium; performing automatic speech recognition on speech using a processor to generate word hypotheses for the speech, the word hypotheses identifying a set of words recognized by an automated speech recognizer in the speech using one or more data processors; performing time alignment between the speech and the word hypotheses using the automatic speech recognizer to associate the word hypotheses with corresponding sounds of the speech; calculating statistics regarding individual words and phonemes of the word hypotheses using the processor based on said alignment; calculating a plurality of features for use in assessing pronunciation of the speech based on the statistics using the processor; and calculating an assessment score based on one or more of the calculated features.
doc_id: 8280119 | appl_id: 12329346 | flag_patent: 1
1. A method for iris recognition comprising: locating an eye with a camera; obtaining an image of the eye with the camera; assessing the image of the eye with a set of image quality metrics with a processor; and segmenting the iris in the image of the eye with the processor; wherein the set of image quality metrics comprises: an offset measurement of the eye in the image of the eye; and a gaze measurement of the eye in the image of the eye; wherein a calibration of the segmenting of the iris is determined by the offset and gaze measurements; and wherein: if the offset and gaze measurements indicate offset or gaze of the eye in the image of the eye, then the segmenting of the iris is based on no circular calibration; and if the offset and gaze measurements indicate no offset or gaze of the eye, then the segmenting of the iris is based on circular calibration.
doc_id: 9922272 | appl_id: 14865565 | flag_patent: 1
1. A method for similarity metric learning for multimodal medical image data, the method comprising: receiving a first set of image data of a volume, wherein the first set of image data is captured with a first imaging modality; receiving a second set of image data of the volume, wherein the second set of image data is captured with a second imaging modality; aligning the first set of image data and the second set of image data; training a first set of parameters with a multimodal stacked denoising auto encoder to generate a shared feature representation of the first set of image data and the second set of image data, the multimodal stacked denoising auto encoder comprising a first layer with independent and parallel denoising auto encoders; training a second set of parameters with a denoising auto encoder to generate a transformation of the shared feature representation; initializing, using the first set of parameters and the second set of parameters, a neural network classifier; training, using training data from the aligned first set of image data and the second set of image data, the neural network classifier to generate a similarity metric for the first and second imaging modalities, the similarity metric identifying which voxels from the first set of image data correspond to the same position in the volume as voxels from the second set of image data; and performing image fusion on the first set of image data and the second set of image data using the identified voxels.
doc_id: 9263027 | appl_id: 13150669 | flag_patent: 1
1. A broadcast signal receiver comprising: a text data receiver configured to receive broadcast text data and to transmit the broadcast text data to a user interface, wherein the broadcast text data includes at least one word; a text-to-speech (TTS) converter configured to convert received text data into an audio speech sound, wherein the TTS converter is configured to: detect whether the at least one word is also included in a stored list of words, and when the at least one word is also included in the stored list of words, convert the at least one word according to a conversion defined by the stored list, and when the at least one word is not included in the stored list of words, convert the at least one word according to a set of predetermined conversion rules; a conversion memory configured to store the list of words as initial data; an update receiver configured to receive, from a conversion repository, and via a network connection, update data, wherein the update data includes updated words, associated conversions, and updated conversion rules, and configured to store, in the conversion memory, the update data; and a commander circuitry configured to control an operation of the broadcast signal receiver, wherein the commander circuitry is configured to receive a user control input, wherein the user control input indicates an incorrect conversion carried out by the TTS converter; and wherein the broadcast signal receiver is configured to, in response to the user control input, send a message to a data provider, and thereby request update data, wherein the message indicates a conversion problem and indicates text which was converted, by the TTS converter, into speech.
doc_id: 10050920 | appl_id: 15197032 | flag_patent: 1
1. A method comprising: determining a chat amount within a time window in an area of a virtual universe associated with an environmental chat associated with an avatar; determining a chat distance associated with the environmental chat based, at least in part, on the chat amount, wherein the chat distance comprises a radius defining a chat area around the avatar within which the avatar receives conversations between avatars in the area of the virtual universe; and modifying the chat distance in response to a change in the chat amount in the area of the virtual universe associated with the environmental chat, wherein the modified chat distance is based, at least in part, on the changed chat amount.
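The chat-distance modification above only requires the radius to track the chat amount; the claim does not fix a functional form. One plausible, purely illustrative choice shrinks the radius as traffic in the area grows:

```python
def chat_distance(chat_amount, base_radius=50.0, capacity=20.0):
    """Radius (in virtual-world units) within which the avatar hears chat.

    chat_amount: messages observed in the area during the time window.
    The inverse form below is an assumption; the claim only requires the
    distance to be modified in response to a change in the chat amount.
    """
    return base_radius / (1.0 + chat_amount / capacity)

print(chat_distance(chat_amount=5))    # quiet area -> wide radius
print(chat_distance(chat_amount=200))  # crowded area -> modified, narrower radius
```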
doc_id: 8117553 | appl_id: 10848027 | flag_patent: 1
1. A method to maintain a user interface context in a domain application, the method including: generating a user interface on a web client at a client device for the domain application hosted by a remote server from one or more user interface components that control the user interface, the domain application executing under the web client at the client device, each user interface component of the domain application user interface including a metadata definition of user interface elements including a definition of a layout of the user interface elements on the client device, wherein each user interface component includes user interface logic to control the user interface elements defined in its metadata definition, the user interface elements providing a graphical representation of data objects hosted by the server, each user interface component being either displayed in the user interface or hidden from view, where data and interface logic of user interface components hidden from view are included in a persistent layer stored in the user interface, but are not displayed in the user interface, wherein data automatically flows between data objects bound to the user interface elements and their corresponding user interface elements defined in the user interface component, and wherein the user interface includes an input field to receive data input from a user to execute a task of the domain application; receiving a request at one of the user interface components during execution of the domain application for a first user interface element that is hidden from view on the user interface and is related to information needed to perform a task in the domain application but is not displayed in the user interface when the request is received, where the first user interface element exists within the user interface component but is not displayed; displaying the first user interface element in the user interface context in response to receiving the request at the one user interface component including integrating user interface elements of the one user interface component into the user interface, the first user interface element to provide data elements as options to enter into the input field to execute the task of the domain application to affect data objects hosted by the server and bound to the user interface elements; receiving a user selection at the user interface component of one or more data elements of the first user interface element representing the data objects; and regenerating the user interface of the executing domain application to integrate the selected data elements into the input field of the user interface in response to receiving the user selection.
doc_id: 6104788 | appl_id: 08985388 | flag_patent: 1
1. A system for enabling a user to remotely access an electronic calendar using a digital telephone, comprising: an input interface configured to respond to a telephone network; an electronic calendar configured to store a plurality of schedules; a scheduler configured to access the plurality of schedules as a function of an output from the input interface; an output interface configured to transmit to the telephone network one or more output signals representative of textual information displayable on the digital telephone, wherein the textual information is derived from the stored schedules.
doc_id: 9529792 | appl_id: 14862981 | flag_patent: 1
1. A glossary management device comprising: a read circuit that reads a document; a storage circuit that has a storage area for a glossary to which text segments extracted from the document that is read by the read circuit are to be added as entry terms; an acquisition circuit that acquires text data of the document; an analysis circuit that performs analysis of the text data acquired by the acquisition circuit to identify a language of the document and parts of speech of text segments in the text data and extracts one or more text segments from the document based on the analysis; a term matching circuit that performs matching for each of the extracted text segments against a public dictionary containing entry terms registered therein; and a registration circuit that adds to the glossary, each extracted text segment that does not match any entry term in the public dictionary, wherein the analysis circuit determines whether or not each extracted text segment is a proper noun, if the analysis circuit determines that the extracted text segment is not a proper noun, the term matching circuit performs matching of the extracted text segment against the public dictionary, and the registration circuit adds the extracted text segment to the glossary if the extracted text segment does not match any entry term in the public dictionary, and if the analysis circuit determines that the extracted text segment is a proper noun, the registration circuit adds the extracted text segment to the glossary without the term matching circuit performing matching of the extracted text segment against the public dictionary.
doc_id: 20010018652 | appl_id: 09043171 | flag_patent: 0
1. A method of generating a synthetic waveform output corresponding to a sequence of substantially similar cycles, comprising the steps of (a) generating a synthetic waveform sample; (b) generating a successive waveform sample from said synthetic waveform sample and data defining the transformation followed by said cycles in the temporal vicinity of said synthetic waveform sample; (c) designating said successive waveform sample as a synthetic waveform sample and repeating step (b); (d) repeating step (c) a plurality of times to generate a sequence of said successive waveform samples corresponding to a plurality of said cycles; and (e) outputting the samples of said sequence to generate a waveform.
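Steps (a) through (e) above describe an iteration that grows a waveform one cycle at a time, each cycle derived from its predecessor by a local transformation. A compact sketch, treating one sample as one cycle and using a simple amplitude decay as a stand-in for the transformation data the claim leaves abstract:

```python
import numpy as np

def synthesize(initial_cycle, transform, n_cycles):
    """Steps (b)-(d) of the claim: repeatedly map each synthetic cycle to its
    successor, then concatenate the sequence into the output waveform."""
    cycles = [np.asarray(initial_cycle, dtype=float)]
    for _ in range(n_cycles - 1):
        # (b) generate the successor from the current cycle plus the
        # transformation followed by the cycles in its temporal vicinity;
        # (c) the successor becomes the current synthetic sample.
        cycles.append(transform(cycles[-1]))
    return np.concatenate(cycles)  # (e) output the samples of the sequence

t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
seed = np.sin(t)  # one synthetic waveform cycle
waveform = synthesize(seed, transform=lambda c: 0.98 * c, n_cycles=100)
print(waveform.shape)  # (6400,) -> 100 slowly decaying cycles
```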
doc_id: 20040088154 | appl_id: 10283770 | flag_patent: 0
1. A method for providing a user with information in audible form, said method comprising: determining content sources from which information is to be retrieved for responding to the request; retrieving grammar fragments corresponding to the content sources determined; providing a menu format including information for providing the grammar fragments to the user in audible form; and aggregating the grammar fragments retrieved such that the grammar fragments can be provided to the user in audible form in conformance with the menu format.
doc_id: 20080120112 | appl_id: 11933191 | flag_patent: 0
1. In a speech-enabled communications system for facilitating a digital information service, said communications system including television, a set top box, a speech input system, and a head-end, wherein a user activates said speech input system by activating a switch associated with operation of a speech input device, a method for providing a set of immediate speech feedback overlays to inform a user of said communications system's states, said method comprising the steps of: (a) checking if a current screen is speech-enabled when said switch is activated; (b) if the current screen is speech-enabled, displaying a first tab signaling that a speech input system is activated; (c) if the current screen is not speech-enabled, displaying a second tab signaling a non speech-enabled alert, said second tab staying on screen for a first interval; and (d) if said switch is re-activated, repeating Step (a).
doc_id: 20150262581 | appl_id: 14726943 | flag_patent: 0
1. (canceled)
doc_id: 8924197 | appl_id: 11929734 | flag_patent: 1
1. A non-transitory computer-readable storage medium storing a computer program including instructions for converting a natural language query into one or more logical queries, the instructions causing a general purpose computer to perform the steps comprising: storing a plurality of specialized tools to the non-transitory computer-readable storage medium, each of the plurality of specialized tools being adapted to perform a highly specific domain independent recognition task, each of the specialized tools operable independently from any of the other specialized tools, and each of the tools being adapted to operate independently of any particular data; receiving a natural language query; determining one or more knowledge bases for use by each of the specialized tools to perform its specific recognition task; and converting the natural language query to generate one or more logical queries in accordance with the one or more specialized tools.
doc_id: 5537647 | appl_id: 07972247 | flag_patent: 1
1. For use in a speech processing system having means for computing a plurality of temporal speech parameters including short-term parameters having time trajectories, a method for alleviating harmful effects of distortions of speech, the method comprising: performing a non-linear operation on a function of the short-term parameters of speech, the function being substantially linear for small values of the parameters and substantially logarithmic for large values of the parameters; and filtering data representing time trajectories of the short-term parameters of speech in a particular spectral domain to obtain a filtered spectrum and to minimize distortions due to convolutive noise and additive noise in speech.
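The claim above (a RASTA-style scheme) needs a compressive function that is roughly linear for small parameter values and logarithmic for large ones; log(1 + J·x) has exactly that behavior. A sketch of that nonlinearity followed by band-pass filtering of the parameter time trajectories; the filter coefficients below are placeholders, not the patented design:

```python
import numpy as np
from scipy.signal import lfilter

def lin_log(x, J=1e-6):
    """Substantially linear for small x (log(1+Jx) ~ Jx) and substantially
    logarithmic for large x, as the claim requires."""
    return np.log1p(J * x)

def rasta_like_filter(param_trajectories, J=1e-6):
    """Filter the time trajectories of short-term speech parameters.

    param_trajectories: (frames, bands) array, e.g. critical-band energies.
    The band-pass IIR below is a placeholder with the intended shape of
    behavior: suppressing very slow (convolutive) and very fast (additive
    noise-like) trajectory components.
    """
    compressed = lin_log(param_trajectories, J)
    b = np.array([0.2, 0.1, 0.0, -0.1, -0.2])  # FIR numerator (placeholder)
    a = np.array([1.0, -0.94])                 # IIR denominator (placeholder)
    return lfilter(b, a, compressed, axis=0)   # filter along the time axis

energies = np.abs(np.random.randn(200, 20)) * 1e6  # hypothetical band energies
print(rasta_like_filter(energies).shape)           # (200, 20)
```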
doc_id: 7962849 | appl_id: 11371810 | flag_patent: 1
1. A method for processing a user input character string entered by a user into a computer system that comprises a browser, said method comprising: receiving the user input character string, said user input character string conforming to a native character set and encoding of the browser for a language selected by the user; converting the user input character string to a converted character string consisting of characters of a Universal Character Set (UCS) which are independent of platform and language, wherein the converted character string comprises a plurality of leading whitespace characters, a plurality of trailing whitespace characters, and a middle character string comprising remaining whitespace characters that include at least one grouping of at least two consecutive whitespace characters, wherein the middle character string is disposed between the leading whitespace characters and the trailing whitespace characters, and wherein the leftmost character and the rightmost character of the middle character string are not whitespace characters; transforming the converted character string to a transformed character string by a first transformation or a second transformation, wherein said transforming the converted character string to the transformed character string by the first transformation comprises removing the leading whitespace characters and the trailing whitespace characters in the converted character string such that the transformed character string does not comprise any leading whitespace character, does not comprise any trailing whitespace character, and comprises the remaining whitespace characters; and wherein said transforming the converted character string to the transformed character string by the second transformation comprises removing the trailing whitespace characters in the converted character string such that the transformed character string does not comprise any trailing whitespace character and comprises both the leading whitespace characters and the remaining whitespace characters; and after said transforming, converting each grouping of the at least one grouping of at least two consecutive whitespace characters in the middle character string of the transformed character string to a single whitespace character, resulting in the transformed character string being converted to a resultant character string; wherein the method comprises modifying the user input character string to generate the resultant character string; wherein if said transforming consists of transforming the converted character string to the transformed character string by the first transformation, then said modifying consists of said converting the user input character string, said transforming the converted character string, and said converting each grouping; wherein if said transforming consists of transforming the converted character string to the transformed character string by the second transformation, then said modifying comprises said converting the user input character string, said transforming the converted character string, and said converting each grouping.
doc_id: 10068490 | appl_id: 14912988 | flag_patent: 1
1. A system for improving student learning comprising: learning material having content for presentation to a student; an EEG system configured to measure a cognitive load of the student as the learning material is presented in a learning session; a device configured to measure physiological data of the student as the learning material is presented in the learning session, wherein the data includes at least three of a brain activity of the student, a measurement of the time it takes the student to read the material, a response time of the student to a request or question posed, a correctness of a response by the student, a total time the student has been engaged in learning, a position of gaze of the student and a posture of the student; a cognitive assessment algorithm configured to determine a cognitive state of the student based on the cognitive load and the physiological data; and a learning action algorithm configured to modify a continued presentation of the learning material in real time based on the cognitive state of the student.
doc_id: 9870775 | appl_id: 15006226 | flag_patent: 1
1. An electronic device that performs voice recognition, comprising: a microphone configured to receive an input of a voice and generate a voice signal; a non-transitory storage unit configured to store data processed based on voice recognition; and a processor functionally connected to the microphone and the storage unit, wherein the processor is configured to: detect an input of the voice signal through the microphone, determine a direction of a speaker based on the voice signal, determine a beamforming direction of the microphone based on the direction of the speaker, determine whether the direction of the speaker corresponds to the beamforming direction, if the direction of the speaker corresponds to the beamforming direction, perform a voice recognition about the voice signal, if the direction of the speaker does not correspond to the beamforming direction, divide a voice recognition section for the voice recognition into a first section and a second section based on a predefined dividing method, process a voice recognition operation based on a first method for a first voice signal inputted during the first section, and process the voice recognition operation based on a second method for a second voice signal inputted during the second section.
doc_id: 20050288005 | appl_id: 11158994 | flag_patent: 0
1. A method of providing carrier services at a mobile device comprising: storing data in the mobile device that specifies a voice interface for the mobile device, including storing first data that specifies a set of interface states associated with a first set of functions, and storing second data that specifies an interface for accessing the carrier services; receiving a command associated with a request to access the carrier services; processing one or more inputs from the user according to the second data; performing actions based on the one or more inputs and the second data to provide one of the carrier services at the mobile device.
doc_id: 8676728 | appl_id: 13076201 | flag_patent: 1
1. A system comprising: a plurality of microphones configured in a pre-determined arrangement; a time-difference-of-arrival module configured to determine relative time-difference-of-arrival of an acoustic signal at the plurality of microphones; and a trained artificial neural network module configured to accept the determined time-difference-of-arrival and generate spatial coordinates of the acoustic source.
doc_id: 8036706 | appl_id: 11848671 | flag_patent: 1
1. A method for selecting at least one additional service in a mobile telephone comprising a keyboard for dialing a number to set up a telephone communication, the at least one additional service being provided by an Integrated Circuit (IC) Card in the mobile telephone, the method comprising: comparing a dialed number with at least one service number stored in the IC Card and associated to the at least one additional service; terminating the set up of the telephone communication; and triggering the associated additional service corresponding to the dialed number, the at least one additional service displaying a submenu on the mobile telephone, the submenu including a plurality of application entries associated to the at least one additional service.
doc_id: 20140106824 | appl_id: 14107762 | flag_patent: 0
1. A mobile telephone, comprising a radio-frequency connection to a cellular network; a speaker; a stored contact list including telephone numbers for the contacts; a DTMF tone generator coupled to the speaker; and coded intelligence executing from a non-transitory physical medium, enabling a user to select a contact, causing the DTMF tone generator to produce a DTMF tone sequence associated with the contact's telephone number audibly over the speaker.
doc_id: 20090043570 | appl_id: 11834964 | flag_patent: 0
1. A method for processing speech signal data of at least one speech signal through use of a computing apparatus, the time domain of each speech signal divided into a plurality of frames, each frame characterized by a frame number $T$ representing a unique interval of time, each speech signal characterized by a power spectrum with respect to frame $T$ and frequency band $\omega$ of a plurality of frequency bands into which a frequency range of each speech signal has been divided, said method comprising: computing a speech segment of a first speech signal, said speech segment consisting of a first set of frames of the plurality of frames of the first signal; determining a reverberation segment of the first speech signal, said reverberation segment consisting of a second set of frames of the plurality of frames of the first signal; computing $L$ filter coefficients $W(k)$ ($k = 1, 2, \ldots, L$) respectively corresponding to $L$ frames immediately preceding frame $T$ such that the $L$ filter coefficients minimize a function $\Phi$ in accordance with a set of equations for $\Phi$ consisting of: $\Phi = G_{\mathrm{Tail}} \cdot \varphi_{\mathrm{Tail}} + G_{\mathrm{Speech}} \cdot \varphi_{\mathrm{Speech}}$, $\varphi_{\mathrm{Tail}} = \sum_{T \in \mathrm{Tail}} \sum_{\omega} \left\{ X_{\omega}(T) - \sum_{k=1}^{L} W(k) \cdot X_{\omega}(T-k) \right\}^{2}$, $\varphi_{\mathrm{Speech}} = \sum_{T \in \mathrm{Speech}} \sum_{\omega} \left\{ \sum_{l=1}^{L} W(l) \cdot X_{\omega}(T-l) \right\}^{2}$, wherein $X_{\omega}(T)$ denotes a power spectrum of the first speech signal, wherein $G_{\mathrm{Tail}}$ and $G_{\mathrm{Speech}}$ are weighting coefficients, wherein the frames $T$ in the summation over $T \in \mathrm{Speech}$ encompass the first set of frames in the speech segment, wherein the frames $T$ in the summation over $T \in \mathrm{Tail}$ encompass the second set of frames in the reverberation segment, and wherein the frequency bands in the summation over $\omega$ encompass the plurality of frequency bands; and storing the computed $L$ filter coefficients within storage media of the computing apparatus.
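Because $\Phi$ above is quadratic in $W(1), \ldots, W(L)$, the minimizing filter is a weighted least-squares solve: reverberation-tail frames should be well predicted from the $L$ preceding frames, while speech frames should not be. A numpy sketch of that solve; the frame sets, weights, and names are illustrative assumptions:

```python
import numpy as np

def solve_filter(X, tail_frames, speech_frames, L, g_tail=1.0, g_speech=0.1):
    """Minimize Phi = g_tail*phi_tail + g_speech*phi_speech over W(1..L).

    X: (frames, bands) power spectrogram, X[T, band] = X_omega(T).
    Each (frame, band) pair contributes one least-squares row whose
    predictors are the L preceding frames in the same band; tail rows
    target the observed power, speech rows target zero.
    """
    rows, targets, weights = [], [], []
    for frames, target_is_X, g in ((tail_frames, True, g_tail),
                                   (speech_frames, False, g_speech)):
        for T in frames:
            for band in range(X.shape[1]):
                rows.append([X[T - k, band] for k in range(1, L + 1)])
                targets.append(X[T, band] if target_is_X else 0.0)
                weights.append(np.sqrt(g))
    A = np.asarray(rows) * np.asarray(weights)[:, None]
    b = np.asarray(targets) * np.asarray(weights)
    W, *_ = np.linalg.lstsq(A, b, rcond=None)
    return W  # W[k-1] corresponds to the claim's W(k)

X = np.abs(np.random.randn(100, 8))  # hypothetical power spectra
W = solve_filter(X, tail_frames=range(60, 100), speech_frames=range(10, 50), L=5)
print(W.shape)  # (5,)
```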
doc_id: 20090324060 | appl_id: 12485020 | flag_patent: 0
1. A learning apparatus for a pattern detector, which includes a plurality of weak classifiers and detects a specific pattern from input data by classifications of the plurality of weak classifiers, comprising: an acquisition unit configured to acquire a plurality of data for learning in each of which whether or not the specific pattern is included is given; a learning unit configured to make the plurality of weak classifiers learn by making the plurality of weak classifiers detect the specific pattern from the data for learning acquired by the acquisition unit; a selection unit configured to select a plurality of weak classifiers to be composited from the weak classifiers which have learned by the learning unit; and a composition unit configured to composite the plurality of weak classifiers selected by the selection unit into one composite weak classifier based on comparison between a performance of the composite weak classifier and performances of the plurality of weak classifiers.
doc_id: 20050220191 | appl_id: 11103588 | flag_patent: 0
1. An object activity modeling method comprising the steps of: (a) obtaining an optical flow vector from a video sequence; (b) obtaining a probability distribution of a feature vector for a plurality of video frames, using the optical flow vector, wherein the feature vector is a d×L dimensional vector, d being a number of dimensions and L being a number of pixels in a video frame or in a region of interest; (c) modeling states, using the probability distribution of the feature vector; and (d) expressing the activity of the object in the video sequence based on state transition.
doc_id: 4471459 | appl_id: 06307631 | flag_patent: 1
1. A method using a digital data processing means for separating words with acceptable spellings from words with nonacceptable spellings wherein each word comprises characters assigned character positions, character position in each word being assigned increasing values from one end of the word to the other, characters at the same number of positions from one end of each word being assigned the same value, the two words to be compared being called a query word and a candidate word, the method comprising the steps of: (a) comparing representation of a character in a given character position of the query word with representations of characters in the next lower character position, the same character position, and the next higher character position in the candidate word, and forming a compare type indication representing a match or a mismatch between such query word character and each of such candidate word characters under comparison; (b) changing the given character position of the query word under comparison in the preceding step of comparing to the next higher valued character position of the query word and repeating the preceding step of comparing at least once to form another compare type indication; and (c) processing and utilizing said compare type indications to thereby form a spelling classification indication for one of the words under comparison representing an acceptable spelling or a nonacceptable spelling.
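Steps (a) and (b) above slide each query character across a three-position window of the candidate word; step (c) turns the resulting compare-type indications into an accept/reject spelling classification, which the claim leaves unspecified. A small sketch of the window comparison, with a deliberately simple stand-in rule for step (c):

```python
def compare_types(query, candidate):
    """Steps (a) and (b): for each query position i, record match/mismatch
    indications against candidate positions i-1, i, and i+1."""
    types = []
    for i, ch in enumerate(query):
        window = [candidate[j] if 0 <= j < len(candidate) else None
                  for j in (i - 1, i, i + 1)]
        types.append(tuple(ch == c for c in window))
    return types

def acceptable_spelling(query, candidate, max_misses=1):
    """Step (c) stand-in: accept if nearly every query character matches
    somewhere in its three-position candidate window."""
    misses = sum(not any(t) for t in compare_types(query, candidate))
    return misses <= max_misses

print(acceptable_spelling("recieve", "receive"))  # True: transposition tolerated
print(acceptable_spelling("xyzzy", "receive"))    # False: no window matches
```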
doc_id: 20030048882 | appl_id: 09949363 | flag_patent: 0
1. A voice message capture and retrieval method comprising: receiving and recording an audio input message; encoding the audio input message into a machine-readable representation; translating the audio input message into a text representation; and outputting at least one of the machine-readable representation and the text representation to at least one storage medium.
doc_id: 20040049388 | appl_id: 10227653 | flag_patent: 0
1. A method of speech recognition comprising: providing a user interface which allows a user to select between generating a first and a second user input; responding to the generation of the first user input by performing large vocabulary recognizing on one or more utterances in a prior language context dependent mode, which recognizes at least the first word of such recognition depending in part on a language model context created by a previously recognized word; and responding to the generation of the second user input by performing large vocabulary recognizing on one or more utterances in a prior language context independent mode, which recognizes at least the first word of such recognition independently of a language model context created by any previously recognized word.
doc_id: 9990916 | appl_id: 15139141 | flag_patent: 1
1. A computer-implemented method comprising: receiving an input, the input including textual data; identifying a regional noun in the textual data; determining a user accent classification based on a context of the input; accessing a phonetic inventory stored in a database, in which multiple phonetic transcriptions are stored for each of a plurality of regional nouns, including the regional noun; executing a search of the database using the user accent classification and the regional noun as search parameters; determining a personalized phonetic transcription of the regional noun, selected from the multiple phonetic transcriptions stored for the regional noun as a search result of executing the search of the phonetic inventory stored in the database; outputting the personalized phonetic transcription; and synthesizing audio output of the textual data, based on the personalized phonetic transcription.
doc_id: 20080100579 | appl_id: 11853320 | flag_patent: 0
1. A text entry system comprising: (a) a user input device comprising an auto-correcting keyboard region comprising a plurality of the members of a character set, wherein locations having known coordinates in the auto-correcting keyboard region are associated with corresponding character set members, wherein user interaction with the user input device within the auto-correcting keyboard region determines a location associated with the user interaction and wherein the determined interaction location is added to a current input sequence of contact locations; (b) a memory containing a plurality of objects; (c) an output device with a text display area; and (d) a processor coupled to the user input device, memory, and output device, said processor comprising: (i) a distance value calculation component which, for each determined interaction location in the input sequence of interactions, calculates a set of distance values between the interaction locations and the known coordinate locations corresponding to one or a plurality of character set members within the auto-correcting keyboard region; (ii) an object evaluation component which, for each generated input sequence, identifies any candidate objects in memory, and for each of the identified candidate objects, evaluates each identified candidate object by calculating a matching metric based on the calculated distance values associated with the object, and ranks the evaluated candidate objects based on the calculated matching metric values if more than one object is identified; and (iii) a selection component for identifying one or more candidate objects according to an evaluated ranking, presenting at least one identified object to the user, and enabling the user to select from amongst said at least one presented object for output to the text display area on the output device.
doc_id: 20170125036 | appl_id: 15215670 | flag_patent: 0
1. A method for waking up an electronic apparatus using voice trigger, comprising: receiving a current voice signal; performing a voice trigger algorithm; receiving and determining a user feedback; and adjusting the voice trigger algorithm.
doc_id: 7577654 | appl_id: 10626856 | flag_patent: 1
1. A computer-implemented method of detecting new events comprising the steps of: determining at least one story characteristic based on an average story similarity story characteristic and a same event-same source story characteristic; determining a source-identified story corpus, each story associated with at least one event; determining a source-identified new story associated with at least one event; determining story-pairs based on the source-identified new-story and each story in the source-identified story corpus; determining at least one inter-story similarity metric for the story-pairs; wherein the inter-story similarity metrics are comprised of at least one story frequency model and at least one story characteristic frequency model combined using term weights; and wherein an event frequency is determined based on term $t$ and rule of interpretation (ROI) category $r_{\max}$ from the formula: $ef_{r_{\max}}(t) = \max_{r \in R}\left( ef(r, t) \right)$, wherein $r$ is an ROI category, $R$ is the set of all possible ROIs, and $ef(r, t)$ is the frequency of term $t$ in ROI category $r$; determining at least one adjustment to the inter-story similarity metrics based on at least one story characteristic; and outputting a new story event indicator if the event associated with the new story is of a determined similarity or dissimilarity to the events associated with the source-identified story corpus based on the inter-similarity metrics and adjustments.
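The one fully specified formula above, $ef_{r_{\max}}(t) = \max_{r \in R}(ef(r, t))$, is simply a maximum over ROI categories. A sketch with a hypothetical nested-dict layout for the frequency table:

```python
def ef_rmax(ef, t):
    """ef_rmax(t) = max over ROI categories r in R of ef(r, t).

    ef: dict mapping ROI category r -> {term: frequency}; t: the term.
    """
    return max(freqs.get(t, 0) for freqs in ef.values())

# Hypothetical term frequencies per ROI category.
ef = {
    "elections":        {"vote": 42, "quake": 0},
    "natural_disaster": {"vote": 1,  "quake": 87},
}
print(ef_rmax(ef, "quake"))  # 87, taken from the natural_disaster category
```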
doc_id: 9280908 | appl_id: 13840704 | flag_patent: 1
1. A method, in a data processing system comprising a processor and a memory, for modifying an operation of a question answering (QA) system, comprising: receiving, by the data processing system, an input question; processing, by the data processing system, the input question to generate at least one query to be applied to a corpus of information; applying, by the data processing system, the at least one query to the corpus of information to generate candidate answers to the input question; selecting, by the data processing system, a final answer from the candidate answers for output; and modifying, by a training engine associated with the data processing system, using a machine learning technique that compares the final answer to a known correct answer known to be a correct answer for the input question from a ground truth data storage, at least one of logic or configuration parameters of the QA system for at least one of the processing of the input question to generate the at least one query, applying of the at least one query to the corpus of information to generate the candidate answers, or the selecting of the final answer from the candidate answers.
doc_id: 4478523 | appl_id: 06360674 | flag_patent: 1
1. A timepiece capable of providing at least two pieces of time-related information comprising: speech generation means for providing an audible message; speech instruction key means for activating said speech generation means; display means for providing a visual display of at least one of said at least two pieces of information; detection means for detecting actuation of said speech instruction key means; and means for enabling said speech generation means to provide an audible message indicative of one of said at least two pieces of information and said visual display to provide a visual display of the other of said at least two pieces of information according to the state of said speech instruction key means as detected by said detection means.
doc_id: 6166780 | appl_id: 08954950 | flag_patent: 1
1. For use in connection with home television video recording, playback, and viewing equipment, apparatus for processing an electronic signal including audio portions and video portions corresponding to audible and visible portions of the electronic signal, with said audio portions containing a spoken component related to the audible portion and with said video portions containing an auxiliary information component providing a visible representation of a respective concurrent spoken component of said electronic signal, said apparatus comprising: a video input to receive video portion of an electronic signal with said video portion containing a synchronized auxiliary information component corresponding to a visible representation of a concurrent spoken component; an audio input to receive audio portion of an electronic signal with said audio portion corresponding to said video portion auxiliary information component; a video output by which the video portion of an electronic signal is made available to a user of the apparatus; an audio output by which the audio portion of an electronic signal is made available to a user of the apparatus; a programmed microcomputer including a data memory for receiving said auxiliary information component from said video portion; said microcomputer being programmed for analyzing said auxiliary information component in order to determine if said auxiliary information component contains undesirable words or phrases received in said memory; a switch for muting a corresponding audio portion having a concurrent spoken component if undesirable words or phrases are detected within an auxiliary information component segment; said microcomputer being programmed for removing or replacing with another word or phrase any detected undesirable word or phrase found within said auxiliary information segment; said switch being connected to disable mute at the conclusion of receipt of the modified auxiliary information component segment; and, an on-screen display and video combining unit connected to provide a modified auxiliary information component containing signal to said video output.
20160236690
14620742
0
1. An apparatus for adaptively interacting with a driver via voice interaction, the apparatus comprising: a computational models block configured to: receive driver related parameters, vehicle related parameters, and vehicle environment parameters from a plurality of sensors; generate a driver state model based on the driver related parameters; generate a vehicle state model based on the vehicle related parameters; and generate a vehicle environment state model based on the vehicle environment parameters; and an adaptive interactive voice system configured to generate a voice output based on a driver's situation and context as indicated by information included within at least one of the driver state model, the vehicle state model, and the vehicle environment state model.
9640186
14268459
1
1. A method, comprising: extracting deep scattering spectral features from an acoustic input signal to generate a deep scattering spectral feature representation of the acoustic input signal; inputting the deep scattering spectral feature representation to a speech recognition engine; decoding the acoustic input signal based on at least a portion of the deep scattering spectral feature representation input to the speech recognition engine; and outputting the decoded acoustic input signal; wherein the speech recognition engine utilizes a hybrid architecture comprising a combination of a deep neural network and a convolutional neural network to decode the acoustic input signal, and further wherein features in the deep scattering spectral feature representation that have a local correlation in frequency are fed into the convolutional neural network part of the hybrid architecture and features that remove the local correlation are fed into the deep neural network part of the hybrid architecture, and a set of probabilities output by the hybrid architecture are used to evaluate fit between a set of acoustic models and the acoustic input signal; and wherein a result of the evaluation of fit between the set of acoustic models and the acoustic input signal is output; wherein the extracting, inputting, decoding, and outputting steps are executed via a computer system comprising an acoustic signal processing unit and a memory.
8774423
12286995
1
1. A method for controlling adaptivity of signal modification, comprising: receiving a signal; updating a primary adaptation coefficient based on whether the primary adaptation coefficient satisfies an adaptation constraint; if the primary adaptation coefficient fails to satisfy the adaptation constraint: updating the primary adaptation coefficient based on whether a secondary adaptation coefficient satisfies the adaptation constraint of the signal, the primary and secondary adaptation coefficients both being based on the signal and updated with the same time constant; the secondary adaptation coefficient being a phantom coefficient such that the phantom secondary adaptation coefficient is not applied to the signal; the primary adaptation coefficient being updated toward a current observation if the phantom secondary adaptation coefficient satisfies the adaptation constraint of the signal; and the primary adaptation coefficient not being updated if the phantom secondary adaptation coefficient does not satisfy the adaptation constraint; generating a modified signal by applying the primary adaptation coefficient to the signal; and outputting the modified signal.
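The constrained-adaptation logic in this claim reduces to a small update rule. Below is a minimal Python sketch, assuming scalar coefficients, a shared leaky-integrator time constant `ALPHA`, and a simple magnitude bound as the adaptation constraint; all three are illustrative choices, not values from the claim.

```python
ALPHA = 0.9          # shared time constant (assumed)
MAX_MAGNITUDE = 1.0  # example adaptation constraint (assumed)

def satisfies_constraint(coeff: float) -> bool:
    return abs(coeff) <= MAX_MAGNITUDE

def update(primary: float, phantom: float, observation: float) -> tuple[float, float]:
    """One adaptation step; the phantom coefficient is never applied to the signal."""
    candidate = ALPHA * primary + (1.0 - ALPHA) * observation
    # The phantom coefficient always tracks the observation, with the same time constant.
    phantom = ALPHA * phantom + (1.0 - ALPHA) * observation
    if satisfies_constraint(candidate):
        primary = candidate   # primary passes the constraint: normal update
    elif satisfies_constraint(phantom):
        primary = candidate   # phantom passes: move primary toward the observation
    # otherwise primary is left unchanged, as the claim requires
    return primary, phantom
```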
8457968
12633315
1
1. A method comprising: receiving an N-best list of speech recognition candidates; receiving a list of current partitions and a belief for each of the current partitions, wherein a partition is a group of dialog states; in an outer loop, iterating over each of the speech recognition candidates in the N-best list; in an inner loop, performing a split, update, and recombination process via a processor to generate a fixed number of partitions after each speech recognition candidate in the N-best list; and recognizing speech based on the N-best list and the fixed number of partitions.
8913187
14188558
1
1. A system to detect garbled closed captioning data, comprising: a closed captioning data detector to detect closed captioning data in a video data stream; a word extractor/counter to extract individual words from the closed captioning data, to store a count of the total number of words in the closed captioning data in a memory, and to store a count of the total number of words having a desired word length or range of word lengths in the closed captioning data in the memory; a percentage threshold detector to determine a percentage of words having the desired length or range of lengths in the closed captioning data as a ratio of the count of the number of words in the closed captioning data having the desired length or range of lengths to the count of the total number of words in the closed captioning data; and an alert that is provided when the determined percentage exceeds a predetermined threshold.
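The detection statistic here is just a word-length ratio checked against a threshold. A minimal sketch, assuming the "desired" range targets implausibly long tokens (a common symptom of garbled captions); the length cutoff and threshold values are invented for illustration.

```python
def caption_garble_alert(words: list[str], desired_min_len: int = 10,
                         threshold: float = 0.3) -> bool:
    """Alert when the share of words in the desired length range exceeds the threshold."""
    if not words:
        return False
    count_desired = sum(1 for w in words if len(w) >= desired_min_len)
    return count_desired / len(words) > threshold
```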
9166714
12879141
1
1. A method of determining interaction analytics for user interactions with audio and video content items, the method comprising: retrieving, from a first database, a collection of content identifiers for identifying a plurality of content items based on characteristics of the plurality of content items; retrieving, from a second database, a catalog of the plurality of content items, each content item of the plurality of the content items of the catalog being associated with metacontent describing the respective content item; cross-referencing metacontent associated with a first content item of the plurality of content items of the catalog retrieved from the second database with the collection of the content identifiers retrieved from the first database; associating, based on the cross-referencing, a content identifier with the first content item based on the metacontent associated with the first content item; providing, to a third database, monitored user interactions with the plurality of content items of the catalog and historical records of the user interactions, the historical records including the content identifier associated with the first content item and the user interactions including media navigation associated with the plurality of content items of the catalog at a specified period of time; determining, using the third database, interaction analytics for a second content item of the plurality of content items of the catalog and a set of users having interacted with the second content item based on the monitored user interactions and the historical records; determining whether the set of users will perform a specified interaction with the second content item during a future period of time based on the interaction analytics, wherein the specified interaction is one of a pause operation, a rewind operation, and a fast-forward operation; and identifying an advertisement associated with the second content item of the plurality of content items of the catalog based on the determination whether the set of users will perform the specified interaction.
20070192093
10265862
0
1. A method for comparing a first audio data source with a plurality of audio data sources, wherein the first audio data source has an utterance spoken by a first person and the plurality of audio data sources have the same utterance spoken by a second person, the method comprising: performing a speech recognition function on the first audio data source to isolate at least one element of the first audio data source; comparing the isolated element with a corresponding element in the plurality of audio data sources; and determining whether the utterance spoken by the first person contained an error based on the comparison.
9245524
13883716
1
1. A speech recognition device comprising: a coefficient storage unit which stores a suppression coefficient representing an amount of noise suppression and an adaptation coefficient representing an amount of adaptation which is generated on the basis of a predetermined noise and is synthesized to a clean acoustic model generated on the basis of a voice which does not include noise, in a manner relating the suppression coefficient and the adaptation coefficient to each other; a noise estimation unit which estimates noise from an input signal; a noise suppression unit which suppresses a portion of the noise, specified by a suppression amount specified on the basis of the suppression coefficient, from among the noise estimated by said noise estimation unit, from the input signal; an acoustic model adaptation unit which generates an adapted acoustic model which is noise-adapted, by synthesizing the noise model, which is generated on the basis of the noise estimated by said noise estimation unit in accordance with an amount of adaptation specified on the basis of the adaptation coefficient, to the clean acoustic model; and a search unit which recognizes voice on the basis of the input signal with the noise suppressed by said noise suppression unit and the adapted acoustic model generated by said acoustic model adaptation unit.
20030115045
10017811
0
1. A method for reducing audio overhang in a wireless call comprising the steps of: receiving voice frames that convey voice information for the wireless call, wherein at least some of the frames, silent frames, indicate that a portion of the wireless call comprises low voice activity or no voice activity; monitoring the number of voice frames stored in a frame buffer after being received; and when the number of voice frames stored in the frame buffer exceeds a size threshold, deleting at least one silent frame that was received thereby preventing conversion of the at least one silent frame to audio.
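As a rough illustration of the buffering step, here is a Python sketch assuming frames are dicts with a `silent` flag and the size threshold is 8 frames; both the frame representation and the threshold are placeholders rather than values from the claim.

```python
from collections import deque

SIZE_THRESHOLD = 8  # assumed threshold, in frames

def enqueue_frame(buffer: deque, frame: dict) -> None:
    """Queue a received voice frame; drop one silent frame when the buffer grows too deep."""
    buffer.append(frame)
    if len(buffer) > SIZE_THRESHOLD:
        for i, f in enumerate(buffer):
            if f["silent"]:
                del buffer[i]  # the deleted silent frame is never converted to audio
                break
```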
20060195574
11381032
0
1. A server for controlling a plurality of clients, said server comprising: a control device, adapted to (a) select at least one client which has a privilege for controlling an image sensing device remotely, (b) prohibit unselected clients from controlling the image sensing device while the selected client is controlling the image sensing device, and (c) enable the unselected clients to receive video information captured by the image sensing device; and a notification device, adapted to notify a user captured by the image sensing device, as to which client is selected for controlling the image sensing device and which clients receive the video information captured by the image sensing device, in response to a request from the user.
4074244
05597207
1
1. A warning alarm system comprising a pair of different visual signals, audible signal means having a pair of different delivered signals corresponding respectively to each of the visual signals and having speaker means for delivery of said audible signals, signal input means responsive to a plurality of conditions, control means for said visual signals and said audible signal means responsive to one condition for maintaining said visual and audible signals in an inactive condition and to different predetermined conditions of warning for selectively simultaneously initiating one of said visual signals and delivery of a corresponding audible signal through said speaker means for one condition of warning and for selectively simultaneously initiating a different one of said visual signals and delivery of a corresponding different audible signal through said speaker means, said control means including members for providing a priority of one of said audible signals over the other, auxiliary voice input means including a microphone selectively connected to said speaker means, said voice input means including members for cutting off audible alarm signals to said speaker means and connecting said microphone to said speaker means thereby overriding said audible signal means without overriding said visual signals.
20040146218
10353508
0
1. A document scanning method comprising the steps of causing relative movement between a document and first and second imaging elements, such that each of a succession of scan lines of the document is exposed in turn to the imaging elements; generating by means of the first and second imaging elements respective first and second image data words representative of respective first and second overlapping portions of each scan line; and concatenating at least a portion of each of the first and second words to generate a third image data word representative of the scan line, the method being characterised by the steps of cross-correlating at least a portion of each of the first and second words to identify a portion of the second word that is included in the first word; discarding a portion of at least one of the first and second words; concatenating the first word or remainder thereof with the second word or remainder thereof to form the third image data word; and, if necessary, compressing or expanding the third word by linear interpolation so as to obtain an image data word of a predetermined length.
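The cross-correlation and concatenation steps of this claim map naturally to array code. A minimal numpy sketch, assuming 1-D grayscale scan lines and a bounded overlap search window; the normalized dot product used as the correlation score is one simple choice among many.

```python
import numpy as np

def stitch_scan_line(first: np.ndarray, second: np.ndarray,
                     max_overlap: int = 64) -> np.ndarray:
    """Find the duplicated head of `second` inside the tail of `first`, then concatenate."""
    best_overlap, best_score = 0, -np.inf
    for k in range(1, min(max_overlap, len(first), len(second)) + 1):
        score = float(np.dot(first[-k:], second[:k])) / k  # mean correlation at overlap k
        if score > best_score:
            best_score, best_overlap = score, k
    return np.concatenate([first, second[best_overlap:]])

def resize_word(word: np.ndarray, length: int) -> np.ndarray:
    """Linear interpolation to a fixed length, as in the final step of the claim."""
    return np.interp(np.linspace(0.0, 1.0, length),
                     np.linspace(0.0, 1.0, len(word)), word)
```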
7912703
11953572
1
1. A computer implemented method for unsupervised stemming schema learning and lexicon acquisition from corpora, the computer implemented method comprising: obtaining a corpus from corpora; analyzing the corpus to deduce a set of possible stemming schema, wherein analyzing the corpus to deduce the set of possible stemming schema further comprises: generating a first affix count in concepts, wherein each of the concepts is a word unique in the corpus; generating a second affix count in schemas, wherein each of the schemas contains a transformation from a first affix to a second affix for the word; and generating a schema score for each of the schemas from a combination of the first affix count and the second affix count for the possible stemming schemas to identify useful stemming schemas comprising: identifying a first number of occurrences of the first affix in the corpus for each kernel size; identifying a second number of occurrences of the second affix in the corpus for each kernel size; identifying a third number of occurrences of each kernel size in the corpus; and dividing a lesser of the first number of occurrences and the second number of occurrences by the third number of occurrences to form the schema score; reviewing and revising the set of possible stemming schema to create a pruned set of stemming schema; and deducing a lexicon from the corpus using the pruned set of stemming schema.
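The schema score spelled out above is the smaller of the two affix counts divided by the kernel count. A small sketch, treating affixes as suffixes on a shared kernel (stem) and representing the corpus as a flat word list; both are assumptions. For the schema "ed" -> "ing" and kernel "connect", the score compares how often "connected" and "connecting" occur relative to all "connect"-rooted tokens.

```python
def schema_score(corpus_words: list[str], first_affix: str,
                 second_affix: str, kernel: str) -> float:
    """min(occurrences of kernel+affix1, kernel+affix2) / occurrences of the kernel."""
    n_first = sum(1 for w in corpus_words if w == kernel + first_affix)
    n_second = sum(1 for w in corpus_words if w == kernel + second_affix)
    n_kernel = sum(1 for w in corpus_words if w.startswith(kernel))
    return min(n_first, n_second) / n_kernel if n_kernel else 0.0
```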
8930186
13676463
1
1. A system, comprising: a speech enhancement processor configured to receive an input signal and output a processed signal; and an encoder device coupled with the speech enhancement processor and configured to receive the processed signal from the speech enhancement processor, where the encoder device supports one or more spectral shapes to encode the processed signal for transmission over a communication channel; where the speech enhancement processor is configured to modify a spectral tilt of the input signal, based on a spectral tilt associated with at least one of the one or more spectral shapes supported by the encoder device, to generate the processed signal; and where the speech enhancement processor is configured to modify the spectral tilt of the input signal in response to a determination that an input noise tilt of the input signal surpasses a maximum tilt limitation that is based on one or more spectral shapes available at the encoder device.
20120317101
13557646
0
1. A method, comprising: ranking categories of a keyword detected in a query; ranking a category based on human assisted searches performed for queries indicating the keyword; choosing a human search assistant based on the keyword; providing content identifying the query, the keyword and the category to the human search assistant when the category is ranked highest; and qualifying the query based on the category and an action received when the human search assistant is performing a search.
7502741
11064343
1
1. A computer-implemented method, tangibly embodied as a computer program recorded in a computer-readable medium, the method comprising: (A) identifying a first portion of an original audio signal, the first portion representing sensitive content, comprising: (A)(1) generating a report, the report comprising: (a) content representing information in the original audio signal, and (b) a timestamp indicating a temporal position of the first portion of the original audio signal; (A)(2) identifying a first personally identifying concept in the report; (A)(3) identifying a first timestamp in the report corresponding to the first personally identifying concept; and (A)(4) identifying a portion of the original audio signal corresponding to the first personally identifying concept by using the first timestamp; and (B) producing a modified audio signal in which the first portion is protected against unauthorized disclosure.
20050014537
10622398
0
1. A mobile terminal comprising: a housing; an electronic circuit positioned in the housing; a first speaker positioned adjacent a first side of the electronic circuit; and a second speaker positioned adjacent the first speaker on the first side of the electronic circuit.
20060107823
10993109
0
1. A system for generating a set of coordinate vectors from a sparse graph of media object similarities, comprising using a computing device for: receiving a sparse graph of media object similarities; computing a set of coordinate vectors from each media object comprising a subset of media objects represented by the sparse graph; and updating the set of coordinate vectors by computing coordinate vectors for each remaining media object represented by the sparse graph which was not included in the subset of media objects.
8918309
13723160
1
1. A method of identifying word combinations in a source language and corresponding likely translations in a target language, performed by one or more processors, the method comprising: acquiring, by one or more processors, a text in the source language and a text in the target language, wherein the text in the source language is a translation of the text in the target language; performing, by one or more processors, semantic analysis on the acquired text in the source language to build deep semantic structures of one or more sentences of the acquired text in the source language, where the deep semantic structures comprise language-independent semantic classes and deep slots; performing, by one or more processors, semantic analysis on the acquired text in the target language to build deep semantic structures of one or more sentences of the acquired text in the target language, where the deep semantic structures comprise language-independent semantic classes and deep slots; matching, by one or more processors, the deep semantic structures of the sentences in the text in the source language to the deep semantic structures of the sentences in the text in the target language; determining, by one or more processors, a correspondence between deep structure elements for sentences with essentially matching deep structures; and identifying, by one or more processors, word combinations in the source and the target languages that substantially often match into each other through matching the deep structure of the source sentence to the deep structure of the target sentence as likely translations.
9836595
15412081
1
1. A processor-implemented method for determining a password strength, the method comprising: receiving, by a processor, a user-entered password through a plurality of user interactions with an input device associated with a user device, wherein the plurality of user interactions are selected from a group consisting of a plurality of key presses, a plurality of screen traces, and a plurality of spoken words, and wherein the input device is selected from a group consisting of a physical keyboard, a digital keyboard, and a microphone; identifying a keyboard layout type associated with a keyboard utilized to enter the received user-entered password; mapping each character within a plurality of characters in the received user-entered password to a corresponding location on a grid associated with the identified keyboard layout type; determining a coordinate sequence associated with the received user-entered password based on the mapped plurality of characters, wherein the coordinate sequence is on an x-y coordinate grid and calculated based on a plurality of measurements, and wherein the plurality of measurements is selected from a group consisting of a plurality of inches from a source, a plurality of centimeters from the source, and a plurality of millimeters from the source; applying a Hough transform pattern recognition algorithm to the determined coordinate sequence, wherein applying the Hough transform pattern recognition algorithm further analyzes a previous password coordinate sequence associated with a user account; and determining a password strength based on the applied Hough transform pattern recognition algorithm and comparing the determined coordinate sequence of the received user-entered password with the analyzed previous password coordinate sequence, wherein the determined password strength is displayed to a user as a word, a number, or a color, and wherein the determined password strength will be weak when the determined coordinate sequence is the same as the analyzed previous password coordinate sequence.
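The coordinate-sequence step of this claim can be illustrated without the Hough stage. A sketch assuming a plain QWERTY layout and a uniform key pitch in centimeters (both invented here); the grid positions stand in for the claim's measured distances from a source point, and the subsequent Hough transform pattern recognition is omitted.

```python
QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]  # assumed layout
KEY_PITCH_CM = 1.9                                    # assumed key spacing

def coordinate_sequence(password: str) -> list[tuple[float, float]]:
    """Map each password character to an (x, y) position on the keyboard grid."""
    coords = []
    for ch in password.lower():
        for row_idx, row in enumerate(QWERTY_ROWS):
            col_idx = row.find(ch)
            if col_idx >= 0:
                coords.append((col_idx * KEY_PITCH_CM, row_idx * KEY_PITCH_CM))
                break
    return coords
```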
20100216511
12393187
0
1. A mobile wireless communications device comprising: a housing; a wireless transceiver carried by said housing; at least one audio transducer carried by said housing; and a novelty voice alteration processor carried by said housing and coupled to said wireless transceiver and said at least one audio transducer and configured to alter voice communications.
7747445
11457057
1
1. A computer implemented method for a voice-enabled computing environment comprising: receiving a voice command related to an abstraction at at least one computer of the voice-enabled computing environment, wherein the voice command specifies an abstraction type; determining, from the voice command, which of a plurality of abstraction types supported by the voice-enabled computing environment is the abstraction type specified by the voice command, wherein each of the plurality of abstraction types is associated with an indication of whether any particular sequencing and/or timing is to be imposed; and responsive to the voice command, performing at least one programmatic action related to the abstraction, wherein the programmatic action is specific to the abstraction type specified by the voice command.
20120042230
13278528
0
1. A method for supplementing a web graph of linked web documents, the method comprising: identifying one or more experts for one of one or more categories; identifying one or more web documents tagged by the one or more experts for a given category; determining a corresponding category of the one or more web documents tagged by the one or more experts for the given category; assigning a proxy web document to each one of the one or more experts identified for the given category, the proxy web document representative of a corresponding one of the one or more experts; and linking one or more proxy web documents to the one or more web documents tagged by the one or more experts for the given category.
20140163981
13712032
0
1. A speech transcription system for producing a representative transcription text from one or more audio signals representing one or more speakers participating in a speech session, the system comprising: a preliminary transcription module for developing a preliminary transcription of the speech session using automatic speech recognition having a preliminary recognition accuracy performance; a speech selection module for user selection of one or more portions of the preliminary transcription to receive higher accuracy transcription processing; and a final transcription module responsive to the user selection for developing a final transcription output for the speech session having a final recognition accuracy performance for the selected one or more portions which is higher than the preliminary recognition accuracy performance.
9805703
15386840
1
1. A device for capturing vibrations produced by an object, the device comprising: a fixation element for fixing the device to an object; a sensor spaced apart from a surface of the object and located relative to the object; and a magnet adjacent the sensor; wherein the fixation element is configured to transmit vibrations from a fixation point on the object to the magnet, wherein the magnet is configured to transmit vibrations from the fixation point and vibrations from a surface of the object to the sensor, and wherein the sensor is a coil inductor.
20080292196
12110065
0
1. A method for classifying digital images, the method comprising: clustering optical parameters of the digital images into a set of meaningful clusters; associating the set of meaningful clusters to a set of associated classes used by a user; and classifying the digital images according to the set of associated classes.
7539618
11283909
1
1. A system for operating an electronic device using an animated character display comprising: a first electronic device runs agent software so as to display a simulated human animated character which converses with a user, recognizes speech obtained from that conversation by a voice recognition engine, prepares script reflecting the content of the conversation, and executes the prepared script to perform predetermined processing and a second electronic device runs agent software so as to display a simulated human animated character which converses with a user, recognizes speech obtained from that conversation by a voice recognition engine, prepares script reflecting the content of the conversation, and executes the prepared script to perform predetermined processing, wherein said first electronic device transfers the agent software and the voice recognition engine corresponding to said agent software to said second electronic device and said second electronic device, when the agent software and the voice recognition engine are transferred from said first electronic device, runs the transferred agent software so as to display a simulated human animated character which converses with a user, recognizes speech obtained from that conversation by the voice recognition engine, prepares script reflecting the content of the conversation, and executes the prepared script to perform predetermined processing, wherein said first electronic device changes the animated character corresponding to the agent software in the middle of transfer of the agent software to said second electronic device, said second electronic device changes the display of the animated character corresponding to the agent software in the middle of transfer of the agent software from said first electronic device, and further said first electronic device changes the display of the animated character corresponding to the agent software based on the amount of data transferred, and said second electronic device changes the display of the animated character corresponding to the agent software based on the amount of data transferred.
20080081697
11537040
0
1. Communication apparatus for an online gaming network comprising a voice input, voice recognition means, voice generation means and voice output means connectable to a voice bridge, the voice recognition means being operable to recognise voice received via the voice input and to generate voice data representative of recognised words, the voice generation means being operable to regenerate voice from the voice data using a predetermined voice type and to pass the regenerated voice to the voice output means.
8060366
11778884
1
1. A method for providing verbal control of a conference call in a conferencing system, comprising: bridging a plurality of conference call legs to form a bridged conference stream in the conferencing system; evaluating the bridged conference stream with a speech recognition algorithm; determining if a first hot word is identified in the bridged conference stream, wherein at least a caller name is used to populate a voice template for identifying the first hot word; responsive to determining the first hot word is in the bridged conference stream, invoking a conference feature associated with the first hot word; and suppressing the first hot word in the bridged stream prior to transmission of the bridged stream to conference participants.
9372541
13794335
1
1. A method for verifying the operability of a gesture recognition system that comprises an image capture device having a field of view, the method comprising the steps of: providing a test target separate from, and within the field of view of, the image capture device, the test target configured to generate a test stimulus that is recognizable by the gesture recognition system; receiving, by the image capture device, the test stimulus from the test target; processing, in a processor, the test stimulus received by the image capture device to generate a test response; verifying, in the processor, that the test response corresponds to the test stimulus; and inactivating the entire gesture recognition system when the test response does not correspond to the test stimulus, whereby the gesture recognition system no longer generates any system output commands.
20110029530
12844792
0
1. A method for displaying relationships between concepts to provide classification suggestions via injection, comprising: designating a reference set comprising concepts each associated with a classification code; designating clusters of uncoded concepts; comparing one or more of the uncoded concepts from at least one cluster to the reference set; identifying at least one of the concepts in the reference set that is similar to the one or more uncoded concepts; injecting the similar concepts into the at least one cluster; and visually depicting relationships between the uncoded concepts and the similar concepts in the at least one cluster as suggestions for classifying the uncoded concepts.
9946862
14956180
1
1. A method for generating a notification by an electronic device, comprising: receiving a speech phrase; recognizing, by a processor, the speech phrase as a command to generate the notification; detecting, by at least one sensor, context data of the electronic device; determining a context score associated with the context data; determining, by the processor, whether to generate the notification at least based on a comparison of the context score to a threshold value; and generating the notification based on the comparison.
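The decision step of this claim reduces to a single comparison. A minimal sketch, assuming the context score is a weighted sum of per-sensor readings; the combination rule, weights, and threshold are all assumptions, since the claim only requires some context score compared against a threshold value.

```python
THRESHOLD = 0.5  # assumed threshold value

def should_notify(sensor_scores: dict[str, float],
                  weights: dict[str, float]) -> bool:
    """Generate the notification only if the combined context score clears the threshold."""
    context_score = sum(weights.get(name, 0.0) * value
                        for name, value in sensor_scores.items())
    return context_score >= THRESHOLD
```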
6035270
09000270
1
1. A non-intrusive method of assessing the quality of a first signal carrying speech, said method comprising the steps of: analyzing said signal carrying speech to generate output parameters according to a spectral representation of an imperfect vocal tract model capable of generating coefficients that can parametrically represent both speech and distortion signal elements, and weighting the output parameters according to a network definition function to generate an output derived from the weighted output parameters, the network definition function being generated using a trainable process, using well-conditioned and/or ill-conditioned samples of a test signal, modeled by the imperfect vocal tract model.
20100158043
12592656
0
1. A method for retiming digital telecommunications data received by a digital logger from a plurality of T-carrier type telephone lines respectively having differing clock sources, the method for retiming comprising the steps of: a. extracting a single frame of digital audio data from each incoming T-carrier subchannel received by the digital logger; b. analyzing clock rates of digital audio data streams for all incoming T-carrier subchannels; c. without affecting T-carrier signaling data, increasing digital audio data of T-carrier subchannels determined to have a slow clock rate by appropriately adding a byte of digital audio data to digital audio data carried by such T-carrier subchannels; d. without affecting T-carrier signaling data, decreasing digital audio data of T-carrier subchannels determined to have a fast clock rate by appropriately removing a byte of digital audio data from digital audio data carried by such T-carrier subchannels; and e. after processing digital audio data for all received T-carrier subchannels, repackaging all frames of received digital audio data into a single T-carrier super-frame.
7580834
10505100
1
1. A CELP type speech decoder that receives an excitation gain code, an adaptive excitation vector code, and a fixed excitation vector code associated with encoded speech transmitted from a CELP type speech encoder and decodes the encoded speech, said CELP type speech decoder comprising: a quantized gain generating section that receives the excitation gain code from the CELP type speech encoder and decodes an adaptive excitation vector gain and a fixed excitation vector gain specified by the excitation gain code; an adaptive excitation codebook that receives the adaptive excitation vector code from the CELP type speech encoder and takes one frame of samples as an adaptive excitation vector from past excitation signal samples specified by the adaptive excitation vector code; a fixed excitation codebook that receives the fixed excitation vector code from the CELP type speech encoder and generates a fixed excitation vector specified by the fixed excitation vector code; an excitation vector generating section that generates an excitation vector by adding a vector obtained by multiplying the adaptive excitation vector gain and the adaptive excitation vector, and a vector obtained by multiplying the fixed excitation vector gain and the fixed excitation vector; a high-frequency emphasis section that performs high-frequency emphasis processing on the excitation vector generated by the excitation vector generating section; and a synthesis filter that performs filter synthesis of the excitation vector output from the high-frequency emphasis section employing a set of filter coefficients to output decoded speech data, wherein said fixed excitation codebook comprises: a comparing section that compares the shape of a pulse excitation vector with predetermined shapes to determine a predetermined shape which matches the shape of said pulse excitation vector; a storing section that stores sets of dispersion vectors that are designed exclusively for each of said predetermined shapes; a selecting section that selects a set of said dispersion vectors that are associated with the predetermined shape which matches the shape of said pulse excitation vector; and a convolving section that convolves said pulse excitation vector with one of the dispersion vectors in the selected set to obtain the fixed excitation vector.
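The excitation-vector and high-frequency-emphasis stages of this decoder are compact enough to show directly. A numpy sketch, assuming one frame per call and a first-order pre-emphasis filter with an invented coefficient; a real CELP decoder's emphasis filter may differ.

```python
import numpy as np

def build_excitation(adaptive_vec: np.ndarray, fixed_vec: np.ndarray,
                     adaptive_gain: float, fixed_gain: float) -> np.ndarray:
    """Excitation = gain-scaled adaptive vector plus gain-scaled fixed vector."""
    return adaptive_gain * adaptive_vec + fixed_gain * fixed_vec

def high_freq_emphasis(x: np.ndarray, mu: float = 0.68) -> np.ndarray:
    """First-order emphasis y[n] = x[n] - mu * x[n-1]; mu is an assumed value."""
    y = x.copy()
    y[1:] -= mu * x[:-1]
    return y
```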
5519443
08212119
1
1. In decoder apparatus for decoding data encoded in caption layer data of a television data stream, a method for decoding the encoded data characterized by steps of: receiving the encoded data as service blocks, each service block comprising a header and at least two data bytes, retrieving data packets from the service blocks, decoding the data packets, each data packet comprising a plurality of data bytes, the data packets indicated by particular bit indications appearing in one of the plurality of bytes of each data packet, storing a plurality of letter components in memory for at least two languages, a first language having the Roman alphabet and a second language having a syllabic alphabet, determining if the data packet represents a non-printable character, and mapping remaining bit positions of the data packet for a printable character into a number of letter component pointer bytes less than or equal to the number of bytes comprising the data packets, the letter component pointer bytes for pointing to one of a Roman alphabet letter and a syllabic alphabet letter component in memory for display.
20150205783
14588695
0
1. A method comprising: creating, by a processing device, an initial population comprising a vector of parameters for elements of syntactic and semantic descriptions of a source sentence; using a natural language compiler (NLC) system to translate the source sentence into a resulting translation based on universal syntactic and semantic descriptions of the source sentence; generating a vector of quality ratings, wherein each quality rating in the vector of quality ratings is of a corresponding parameter in the vector of parameters; and replacing a number of parameters in the vector of parameters with adjusted parameters, wherein replacing the number of parameters comprises: randomly selecting a first parameter from the vector of parameters and adjusting the first parameter to create a first adjusted parameter; computing a quality rating of a translation, corresponding to the first adjusted parameter; comparing the quality rating of the translation, corresponding to the first adjusted parameter to a quality rating of a translation, corresponding to the first parameter; and replacing the first parameter with the first adjusted parameter if the quality rating of the translation, corresponding to the first adjusted parameter is better than the quality rating of the translation, corresponding to the first parameter.
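The replacement loop in this claim is essentially a hill climb over the parameter vector: perturb one randomly chosen parameter, keep it only if the translation quality rating improves. A simplified sketch, assuming a `quality` callback that runs the NLC translation and returns its rating; the perturbation size and iteration count are invented.

```python
import random
from typing import Callable, List

def tune_parameters(params: List[float], quality: Callable[[List[float]], float],
                    iterations: int = 100, step: float = 0.1) -> List[float]:
    """Randomly adjust one parameter at a time, keeping it only if quality improves."""
    best = quality(params)
    for _ in range(iterations):
        i = random.randrange(len(params))
        old_value = params[i]
        params[i] = old_value + random.uniform(-step, step)
        score = quality(params)
        if score > best:
            best = score           # keep the adjusted parameter
        else:
            params[i] = old_value  # revert, per the comparison step in the claim
    return params
```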
20090003703
11821858
0
1. A computer-readable medium having computer-executable instructions, which when executed perform steps, comprising: a) extracting features from a selected sample of a plurality of samples of digital ink training data, wherein the training data corresponds to digital ink representative of at least two different types of digital ink input, and the selected sample is associated with a recognition value as its label; b) processing a feature dataset of the selected sample into a recognition model, including by adjusting the combined feature data of the class to which the selected sample belongs and maintaining data representative of the features in association with the recognition value; c) selecting another sample from the plurality and repeating steps a) and b) until each sample of the plurality has been processed; and d) providing a unified recognizer that recognizes an input item of either of the two different types of digital ink input without mode selection or recognition parameter input, including by extracting features of the input item and determining which data representative of the features of a class the features of the input item best match, and outputting the recognition value associated with that data.
8355349
12643005
1
1. A method for establishing a voice call between a plurality of chat participants comprising: receiving a first message from a first chat participant requesting establishment of a voice call; sending a response message to the first chat participant providing voice call session information and an authorization code; receiving a second message comprising the voice call session information and the authorization code from a second chat participant requesting participation in the voice call, wherein the first chat participant passes the voice call session information and the authorization code as a private message in a chat conversation to the second chat participant; setting up a first voice link to the first chat participant; setting up a second voice link to the second chat participant, wherein one of the first voice link and the second voice link is a public switched telephone network link; bridging the first voice link and the second voice link so as to establish the voice call between the first chat participant and the second chat participant; and wherein if the public switched telephone network link is unavailable: determining if a subscriber line portion of the public switched telephone network link is in current use in an internet session through a respective internet service provider access server, wherein the public switched telephone network link is capable of supporting one of a text chat and an other data connection; forwarding the voice call to a network node having access to the internet session; sending a third message to the internet session indicating an arrival of an incoming voice call; and converting the voice call to one of a streaming audio and an internet voice call.
9318103
13773190
1
1. An automatic speech recognition system for recognizing a user voice command in a noisy environment, comprising: matching means for matching elements retrieved from speech units forming the command with templates stored in a template library; processing means for determining a sequence of templates that minimizes a distance between the elements and the templates, wherein the templates are posterior templates, the elements retrieved from the speech units are posterior vectors, and the posterior templates and the posterior vectors are generated with a MultiLayer Perceptron; calculating means for automatically selecting a subset of the posterior templates, the selection of the subset of the posterior templates including: (i) determining Gabriel or relative neighbors of the selected subset of the posterior templates by calculating a matrix of distances between all of the posterior templates, (ii) visiting each template of the subset of posterior templates, (iii) marking a template of the subset of the posterior templates if all of its neighbors are of a same phone class as the template; and (iv) deleting all marked posterior templates, wherein the remaining posterior templates constitute the selected subset of the posterior templates; and a dynamic time warping (DTW) decoder for matching the posterior vectors with the selected subset of posterior templates, wherein the DTW decoder receives input, the input comprising a sequence of posterior vectors to be recognized, a posterior template library, a dictionary and optionally a grammar, and the DTW decoder outputs one or more sequences of recognized words, time information and confidence measures.
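The matching stage at the end of this claim is ordinary dynamic time warping over posterior vectors. A plain DTW sketch, assuming Euclidean local distances and no pruning; the Gabriel-neighbor template selection described above is omitted here.

```python
import numpy as np

def dtw_distance(frames: np.ndarray, template: np.ndarray) -> float:
    """Alignment cost between an utterance (frames x dims) and one posterior template."""
    n, m = len(frames), len(template)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = float(np.linalg.norm(frames[i - 1] - template[j - 1]))
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])
```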
20120197770
13388522
0
1. A system for real time streaming of a text string which is periodically updated comprising: streaming means adapted to communicate with a server and to process a first text string inputted via first text input means and update the first text string in real time with additional or edited text string portions inputted via second text input means to thereby generate a second text string; control means adapted to periodically monitor a state of at least the second text string and generate an output text string therefrom, the output text string being communicated to a server for streaming to a text viewing means, wherein the output text string is updated to substantially correspond to the second text string, such that the output text string represents the updated text string, thereby facilitating editing of text string portions already communicated to the server.
20100268979
12425054
0
1. A method of recovering from syntax errors encountered by a parser during a parsing procedure, wherein the parser includes a tuple defining at least one of a head, a middle, an end, and a synchronization point, the method comprising: receiving a token M that corresponds to the head of the tuple, recording an indentation characteristic of the token M, and placing a state corresponding to the token M in a stack; receiving a token N characterized by being first on a line; examining the stack to find a most recent head state that has not been matched with a corresponding end state, and comparing the recorded indentation characteristic of the most recent unmatched head state to an indentation characteristic of the token N; proceeding with the parsing procedure when the indentation characteristic of the token N is greater than the recorded indentation characteristic of the most recent unmatched head state; placing in the stack a token corresponding to the end of the tuple, and inserting token N in the stack when the indentation characteristic of the token N is less than the recorded indentation characteristic of the most recent unmatched head state; proceeding with the parsing procedure when the indentation characteristic of the token N is equal to the recorded indentation characteristic of the most recent unmatched head state, and the token N corresponds to the middle or end of the tuple; placing in the stack a token corresponding to the end of the tuple, and inserting token N in the stack when the indentation characteristic of the token N is equal to the recorded indentation characteristic of the most recent unmatched head state, and the token N corresponds to neither the middle nor the end of the tuple; receiving a token P that corresponds to the synchronization point of the tuple, examining the stack to detect any unmatched head states, and placing in the stack one or more tokens corresponding to the end of the tuple necessary to match the unmatched head states; and, placing the token P on the stack.
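The indentation comparisons in this recovery scheme come down to three cases. A schematic sketch, assuming tokens and stack entries are small dicts and that the caller feeds any returned synthetic end tokens back into the parse; the field names are invented.

```python
def handle_line_start(stack: list[dict], token: dict) -> list[str]:
    """Compare a line-initial token's indentation against the newest unmatched head."""
    head = next((s for s in reversed(stack) if not s.get("matched")), None)
    if head is None:
        return []                  # nothing open: proceed normally
    if token["indent"] > head["indent"]:
        return []                  # deeper indentation: proceed with the parse
    if token["indent"] < head["indent"]:
        return ["END"]             # close the construct, then re-insert the token
    if token["kind"] in ("middle", "end"):
        return []                  # same level, legal continuation
    return ["END"]                 # same level but neither middle nor end
```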
20130066723
13654678
0
1. A computer-implemented method for targeted delivery of advertising of a first or second sponsor via a cellular telephony infrastructure provided by a carrier to a plurality of cellular phones of a first or second type by establishing a competitive bid auction process, the method comprising the steps of: (a) receiving a credit card datum relating to a credit card of a respective user of the plurality of cellular phones, wherein the credit card datum includes a current balance or credit limit associated with the credit card; (b) presenting to the first and second sponsor data corresponding to: (1) the credit card datum; and (2) the first type and the second type of cellular phone, wherein a rendering capability of the first type of cellular phone is different from a rendering capability of the second type of cellular phone; (c) receiving a first advertising content associated with the credit card datum and a second advertising content associated with the credit card datum each from the first sponsor and the second sponsor, wherein the first advertising content requires the rendering capability of the first type of cellular phone to be rendered thereon and wherein the second advertising content requires the rendering capability of the second type of cellular phone to be rendered thereon, wherein the first advertising content is incompatible with the second type of cellular phone and the second advertising content is incompatible with the first type of cellular phone; (d) receiving a bid from the first sponsor, wherein the bid includes: (1) a selection by the first sponsor of the first type of cellular phone; and (2) an amount offered for delivery of the advertising of the first sponsor to the plurality of cellular phones of the first type; (e) receiving a bid from the second sponsor, wherein the bid includes: (1) a selection by the second sponsor of the first type of cellular phone; and (2) an amount offered for delivery of the advertising of the second sponsor to the plurality of cellular phones of the first type; (f) attributing a priority to the delivery of the advertising of the first sponsor over the delivery of the advertising of the second sponsor based upon a determination that a resultant yield based on the amount of the first sponsor is greater than a resultant yield based on the amount of the second sponsor; (g) receiving an advertising request associated with the first type of cellular phone; (h) determining that the relevance to the advertising request of the first advertising content and second advertising content is the same; (i) determining that the first type of cellular phone can render the first advertising content and cannot render the second advertising content; and (j) transmitting via the cellular telephony infrastructure the first advertising content instead of the second advertising content to the plurality of cellular phones of the first type.
20130124191
13295661
0
1. A method comprising: processing multiple resources to build a word dictionary configured to enable summarizing a plurality of microblogs; using the word dictionary to create concepts, at least some individual concepts comprising a semantic tag comprising multiple words; assigning a plurality of microblogs to a plurality of the concepts effective to form potential clusters; computing a membership score for each microblog/cluster pairing; and using the membership score to assign a microblog to a cluster.
20160320907
15207260
0
1. A method, comprising: at an electronic device with one or more processors, memory, and a touch-sensitive display: receiving data relating to device movement and data relating to device orientation; receiving a touch input on the touch-sensitive display; processing the received data relating to device movement and the received data relating to device orientation to determine whether the electronic device is in one of a first or second state based on the received data relating to device movement and data relating to device orientation, wherein the first state occurs when a user is looking at the electronic device; if it is determined that the electronic device is in the first state, processing the touch input as an intentional touch input; if it is determined that the electronic device is in the second state, processing the touch input as an unintentional touch input; and in response to detecting a first change in the received data relating to device movement and the received data relating to device orientation, determining that the device has changed from the second state to the first state.
20030236672
10210667
0
1. An apparatus for testing speech recognition in a vehicle, said apparatus comprising: a speaker arrangement which propagates speech output; a testing arrangement adapted to test the accuracy of speech input associated with the speech output propagated by said speaker arrangement; wherein said speaker arrangement is configured to simulate the propagation of a human voice.
7917364
10668141
1
1. A method of automatic speech recognition (ASR), comprising: receiving a speech utterance from a user; assessing resources, by a processor, by monitoring both port utilization and processing utilization of each of a plurality of different ASR engines to determine which of the plurality of different ASR engines are busy serving users; assigning the speech utterance to a single ASR engine when the plurality of different ASR engines are busy such that the port and processing utilizations are within a set of threshold values; assigning the speech utterance to the plurality of different ASR engines when the plurality of different ASR engines are not busy such that the port and processing utilizations are within another set of threshold values; and generating text of the speech utterance with either the single ASR engine or the plurality of different ASR engines; wherein the speech utterance is assigned to a single ASR engine if its processing utilization is within a threshold value and its port utilization is lower than a port utilization threshold of 80%.
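The assignment policy can be phrased as a few utilization checks. A sketch assuming each engine reports `port` and `cpu` utilization in [0, 1]; the 0.80 port limit comes from the claim, while the processing limit is an assumed placeholder.

```python
PORT_LIMIT = 0.80  # port utilization threshold stated in the claim
CPU_LIMIT = 0.75   # processing utilization threshold (assumed)

def assign_engines(engines: list[dict]) -> list[dict]:
    """Return the engine(s) that should receive the next speech utterance."""
    idle = [e for e in engines if e["port"] < PORT_LIMIT and e["cpu"] < CPU_LIMIT]
    if not idle:
        # Pool is busy: send the utterance to the single least-loaded engine.
        return [min(engines, key=lambda e: (e["port"], e["cpu"]))]
    return idle  # pool not busy: fan the utterance out to every available engine
```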
20090055333
11843642
0
1. A system, comprising: a reception component that obtains at least one output of an artificial neuron network; and a gather component that selects data for presentment based upon received output of the artificial neuron network, wherein artificial neuron network output is at least in part an estimation of information appropriateness for presentment.
20040019585
10465596
0
1. A memo image managing apparatus connectable to an information appliance capable of manipulating a memo image, comprising: a memo image accumulating section adapted to accumulate memo images; a memo image retrieving section adapted to retrieve a predetermined memo image from said accumulated memo images in response to a request from said information appliance or an information appliance other than said information appliance; a character information recognizing section adapted to recognize and extract character information from said memo image; and a memo image distributing section adapted to distribute to said information appliance said memo image retrieved by said memo image retrieving section or said character information extracted by said character information recognizing section.