patent_num (int64, 3.93M–10.2M) | claim_num1 (int64, 1–519) | claim_num2 (int64, 2–520) | sentence1 (string, lengths 40–15.9k) | sentence2 (string, lengths 88–20k) | label (float64, 0.5–0.99) |
---|---|---|---|---|---|
8,014,500 | 4 | 5 | 4. The method of claim 3 further comprising: recognizing the second voice using a voice recognition technology to obtain an answer corresponding to the second voice; and determining whether the answer conforms to the specific answer. | 4. The method of claim 3 further comprising: recognizing the second voice using a voice recognition technology to obtain an answer corresponding to the second voice; and determining whether the answer conforms to the specific answer. 5. The method of claim 4 further comprising recording the second voice. | 0.5 |
9,224,172 | 1 | 3 | 1. A method comprising: detecting, by a computing device, the presence of a first user of an online social network; delivering, by a computing device, content to the first user, the content allowing enhancement by the first user; receiving, by the computing device, one or more enhancements to the content by the first user; detecting, by the computing device, the presence of a second user on the social network, the second user different than the first user, the detecting the presence of the second user comprising detecting that a first client node to which the content may be displayed is geographically proximate to a second client node associated with the second user; determining, by the computing device, a social context of the second user, the social context comprising data associated with the second user with respect to the social network, the data associated with the second user with respect to the social network comprising data related to an interaction with the particular content by friends or contacts of the second user; modifying, by the computing device, the content, the modifying based on the determined social context of the second user and the one or more enhancements to the content by the first user; notifying, by the computing device, the second user that the content has been modified by the first user, the notifying comprising displaying at least a portion of the modified content; and delivering, by the computing device, the modified content to the second user, the delivering comprising displaying the modified content to the first client node in response to the determination that the first client node is geographically proximate to the second client node. | 1. A method comprising: detecting, by a computing device, the presence of a first user of an online social network; delivering, by a computing device, content to the first user, the content allowing enhancement by the first user; receiving, by the computing device, one or more enhancements to the content by the first user; detecting, by the computing device, the presence of a second user on the social network, the second user different than the first user, the detecting the presence of the second user comprising detecting that a first client node to which the content may be displayed is geographically proximate to a second client node associated with the second user; determining, by the computing device, a social context of the second user, the social context comprising data associated with the second user with respect to the social network, the data associated with the second user with respect to the social network comprising data related to an interaction with the particular content by friends or contacts of the second user; modifying, by the computing device, the content, the modifying based on the determined social context of the second user and the one or more enhancements to the content by the first user; notifying, by the computing device, the second user that the content has been modified by the first user, the notifying comprising displaying at least a portion of the modified content; and delivering, by the computing device, the modified content to the second user, the delivering comprising displaying the modified content to the first client node in response to the determination that the first client node is geographically proximate to the second client node. 3. The method of claim 1 , wherein the content includes an interactive advertisement. | 0.914315 |
8,799,234 | 15 | 16 | 15. The process of claim 1 , wherein the one or more input items of the same type as the input items of the input-output examples are received in conjunction with receiving the input-output examples, and wherein the process action of parsing the input items and the output items to produce a weighted set of parses, comprises an action of parsing the input and output items associated with the input-output examples, as well as the received input items of the same type as the input items of the input-output examples. | 15. The process of claim 1 , wherein the one or more input items of the same type as the input items of the input-output examples are received in conjunction with receiving the input-output examples, and wherein the process action of parsing the input items and the output items to produce a weighted set of parses, comprises an action of parsing the input and output items associated with the input-output examples, as well as the received input items of the same type as the input items of the input-output examples. 16. The process of claim 15 , wherein input-output examples and the one or more input items of the same type as the input items of the input-output examples comprise spreadsheet data. | 0.5 |
8,140,463 | 2 | 13 | 2. A system as recited in claim 1 , wherein said analysis engine comprising: a metadata generator; an information aggregation heuristics unit; an information retrieval harness; and a database; wherein said analysis engine identifies said object; wherein said analysis engine identifies said first metadata; wherein said analysis engine identifies said one or more taxonomies; wherein said analysis engine identifies information aggregation heuristics; wherein said analysis engine identifies one or more information retrieval routines that are used; wherein said analysis engine identifies contextual information that is used; wherein said analysis engine identifies third metadata for said one or more sub-objects that are previously stored in said database; wherein said information retrieval harness collects data from a plurality of said one or more information retrieval routines; wherein said information retrieval harness passes collected said data to said information aggregation heuristics unit; wherein said information aggregation heuristics unit processes said data and sends the processed data to said metadata generator; wherein said metadata generator generates said second metadata according to configured or stored taxonomies, metadata schema, organizational policies, and semantic spaces, and sends it to said service layer; and wherein said configuration module stores said one or more information retrieval routines, said taxonomies, said organizational policies, said metadata schema, and said semantic spaces used by said analysis engine and said information retrieval routines. | 2. A system as recited in claim 1 , wherein said analysis engine comprising: a metadata generator; an information aggregation heuristics unit; an information retrieval harness; and a database; wherein said analysis engine identifies said object; wherein said analysis engine identifies said first metadata; wherein said analysis engine identifies said one or more taxonomies; wherein said analysis engine identifies information aggregation heuristics; wherein said analysis engine identifies one or more information retrieval routines that are used; wherein said analysis engine identifies contextual information that is used; wherein said analysis engine identifies third metadata for said one or more sub-objects that are previously stored in said database; wherein said information retrieval harness collects data from a plurality of said one or more information retrieval routines; wherein said information retrieval harness passes collected said data to said information aggregation heuristics unit; wherein said information aggregation heuristics unit processes said data and sends the processed data to said metadata generator; wherein said metadata generator generates said second metadata according to configured or stored taxonomies, metadata schema, organizational policies, and semantic spaces, and sends it to said service layer; and wherein said configuration module stores said one or more information retrieval routines, said taxonomies, said organizational policies, said metadata schema, and said semantic spaces used by said analysis engine and said information retrieval routines. 13. A system as stated in claim 2 , wherein said system caches data and recognizes the relationships among said objects and said one or more sub-objects. | 0.921377 |
9,916,300 | 9 | 12 | 9. A method comprising: receiving, by use of a processor, a handwritten sub-character, the handwritten sub-character comprising one or more character strokes; determining a hint list based on the handwritten sub-character, the hint list comprising at least one entry, each entry being a character that comprises the handwritten sub-character; identifying a number of post-character strokes corresponding to each entry in the hint list, a post-character stroke being a stroke added to the sub-character to form the entry; receiving the at least one indication of an additional stroke relating to the handwritten sub-character, wherein the at least one indication of an additional stroke is a different input than a character stroke, said indication selected from the group consisting of: a tap that represents an additional character stroke and does not indicate a specific character stroke at a location of the touch-sensitive input panel adjacent to a location of the handwritten sub-character, a button press that represents an additional character stroke and does not indicate a specific character stroke; and updating the hint list based on a number of additional strokes determined using the at least one indication of an additional stroke, wherein updating the hint list based on the number of additional strokes comprises removing each entry in the hint list whose number of post-character strokes is less than the number of additional strokes. | 9. A method comprising: receiving, by use of a processor, a handwritten sub-character, the handwritten sub-character comprising one or more character strokes; determining a hint list based on the handwritten sub-character, the hint list comprising at least one entry, each entry being a character that comprises the handwritten sub-character; identifying a number of post-character strokes corresponding to each entry in the hint list, a post-character stroke being a stroke added to the sub-character to form the entry; receiving the at least one indication of an additional stroke relating to the handwritten sub-character, wherein the at least one indication of an additional stroke is a different input than a character stroke, said indication selected from the group consisting of: a tap that represents an additional character stroke and does not indicate a specific character stroke at a location of the touch-sensitive input panel adjacent to a location of the handwritten sub-character, a button press that represents an additional character stroke and does not indicate a specific character stroke; and updating the hint list based on a number of additional strokes determined using the at least one indication of an additional stroke, wherein updating the hint list based on the number of additional strokes comprises removing each entry in the hint list whose number of post-character strokes is less than the number of additional strokes. 12. The method of claim 9 , wherein receiving the at least one indication of an additional stroke relating to the handwritten sub-character comprises receiving at least one tap that represents an additional character stroke and does not indicate a specific character stroke, said at least one tap received at a location of the touch-sensitive input panel adjacent to an input location of the sub-character, wherein updating the hint list based on the number of additional strokes comprises updating the hint list based on a received number of taps. | 0.5 |
8,024,337 | 6 | 9 | 6. The method of claim 5 , wherein the second query distribution comprises a volume of the instances of the second query per unit of time over the time period. | 6. The method of claim 5 , wherein the second query distribution comprises a volume of the instances of the second query per unit of time over the time period. 9. The method of claim 6 , wherein comparing the first and second query distributions comprises: comparing the volume of the instances of the first query per unit of time with the volume of the instances of the second query per unit of time over the time period. | 0.5 |
9,152,632 | 8 | 9 | 8. The method of claim 1 , wherein associating the one or more matching chemical structures with the respective source file further comprises associating a respective hierarchical level of each chemical structure of the one or more matching chemical structures with the respective source file, wherein the respective hierarchical level relates to a hierarchical level of the respective chemical structure within a particular dictionary of the at least one dictionary in which the respective chemical structure resides. | 8. The method of claim 1 , wherein associating the one or more matching chemical structures with the respective source file further comprises associating a respective hierarchical level of each chemical structure of the one or more matching chemical structures with the respective source file, wherein the respective hierarchical level relates to a hierarchical level of the respective chemical structure within a particular dictionary of the at least one dictionary in which the respective chemical structure resides. 9. The method of claim 8 , wherein generating the first virtual relational network comprises generating a hierarchical network corresponding to the hierarchical levels of the one or more matching chemical structures of each respective source file of the first collection of source files. | 0.5 |
8,799,661 | 15 | 16 | 15. The system of claim 14 wherein said cryptographic token is expressed as symbols consisting of symbols recognized by said document object model. | 15. The system of claim 14 wherein said cryptographic token is expressed as symbols consisting of symbols recognized by said document object model. 16. The system of claim 15 wherein said cryptographic token is expressed as symbols purposefully imitative of markup language recognized as functional by said document object model. | 0.5 |
10,135,887 | 21 | 23 | 21. A non-transitory computer readable medium containing computer executable instructions which when executed by a computer perform a method comprising: receiving a piece of source video content; receiving a selection to share annotations with a group of users who have access to the source video content; receiving an indication to record a first annotation comprising a video annotation having an audio track associated with a location in the source video content, wherein the location comprises an annotation linkage location; creating metadata associating the first annotation to the location in the source video content; receiving a new annotation associated with the first annotation and the location in the source video content; creating metadata associating the new annotation with the first annotation and the location in the source video content; converting the audio track of the video annotation to text to enable display of the text of the converted audio track and preclude simultaneous audio track playback; notifying one or more users of the group of users that the annotations are shared and that the first annotation and the new annotation are associated with the source video content; controlling interaction with the annotations according to varying levels of control including controlling which of the one or more users can access the annotations and which of the one or more users can create associated annotations; requesting a list of annotation linkage locations for annotations associated with the source video content prior to playback of the source video content including annotation linkage locations associated with the new annotation and the first annotation; enabling a display of the first annotation and the new annotation synchronously with playback of the source video content with the simultaneous audio track playback according to an optimal viewing format; enabling a display of the first annotation and the new annotation synchronously with playback of the source video content absent the simultaneous audio track playback according to a different optimal viewing format; storing the first annotation, the new annotation, and the associated metadata, wherein the associated metadata includes annotation metadata comprising a list of points in the source video content and a link to one or more annotations associated with each point including a timestamp, a frame marker, a chapter marker, or another annotation; and receiving an indication of a selection to view the source video content and the associated annotations. | 21. A non-transitory computer readable medium containing computer executable instructions which when executed by a computer perform a method comprising: receiving a piece of source video content; receiving a selection to share annotations with a group of users who have access to the source video content; receiving an indication to record a first annotation comprising a video annotation having an audio track associated with a location in the source video content, wherein the location comprises an annotation linkage location; creating metadata associating the first annotation to the location in the source video content; receiving a new annotation associated with the first annotation and the location in the source video content; creating metadata associating the new annotation with the first annotation and the location in the source video content; converting the audio track of the video annotation to text to enable display of the text of the converted audio track and preclude simultaneous audio track playback; notifying one or more users of the group of users that the annotations are shared and that the first annotation and the new annotation are associated with the source video content; controlling interaction with the annotations according to varying levels of control including controlling which of the one or more users can access the annotations and which of the one or more users can create associated annotations; requesting a list of annotation linkage locations for annotations associated with the source video content prior to playback of the source video content including annotation linkage locations associated with the new annotation and the first annotation; enabling a display of the first annotation and the new annotation synchronously with playback of the source video content with the simultaneous audio track playback according to an optimal viewing format; enabling a display of the first annotation and the new annotation synchronously with playback of the source video content absent the simultaneous audio track playback according to a different optimal viewing format; storing the first annotation, the new annotation, and the associated metadata, wherein the associated metadata includes annotation metadata comprising a list of points in the source video content and a link to one or more annotations associated with each point including a timestamp, a frame marker, a chapter marker, or another annotation; and receiving an indication of a selection to view the source video content and the associated annotations. 23. The computer readable medium of claim 21 , further comprising receiving an indication of a selection to view the source video content and the associated annotations on a same device or on separate devices. | 0.846324 |
7,788,266 | 1 | 2 | 1. A user-interface method for searching a relatively large set of content items in response to unresolved keystroke entry by a user from a keypad with overloaded keys in which a given key is in fixed association with a plurality of alphabetical and numerical symbols and the entry has relatively few keystrokes so that a subset of targeted content item results is quickly presented, the method comprising: using an ordering criteria to rank and associate subsets of content items with corresponding strings of one or more unresolved keystrokes for overloaded keys so that the subsets of content items are directly mapped to the corresponding strings of unresolved keystrokes; subsequent to ranking and associating the content items with strings of unresolved keystrokes, receiving a first unresolved keystroke from a user, wherein one of the plurality of alphabetical and numerical symbols in fixed association with the first unresolved keystroke is a symbol the user is using to search for desired content items; selecting and presenting the subset of content items that is associated with the first unresolved keystroke based on the direct mapping of unresolved keystrokes to the subsets of content items; subsequent to receiving the first unresolved keystroke, receiving subsequent unresolved keystrokes from the user and forming a string of unresolved keystrokes including the first unresolved keystroke and the subsequent unresolved keystrokes in the order received; an selecting and presenting the subset of content items that is associated with the string of unresolved keystrokes received based on the direct mapping of unresolved keystrokes to the subsets of content items; wherein at least one of selecting the subset of content items associated with the first unresolved keystroke and selecting the subset of content items associated with the string of unresolved keystrokes is performed using a data structure or a term intersection process or a combination thereof, the data structure including a first storage structure and a second storage structure, the first storage structure including a plurality of subsets of content items, each subset being associated with a corresponding string of unresolved keystrokes, wherein using the data structure to select a subset of content items includes returning the subset of content items of the first storage structure that is associated with the string of unresolved keystrokes entered by the user and retrieving additional content items from the second storage structure if the desired content items are not present in the first storage structure. | 1. A user-interface method for searching a relatively large set of content items in response to unresolved keystroke entry by a user from a keypad with overloaded keys in which a given key is in fixed association with a plurality of alphabetical and numerical symbols and the entry has relatively few keystrokes so that a subset of targeted content item results is quickly presented, the method comprising: using an ordering criteria to rank and associate subsets of content items with corresponding strings of one or more unresolved keystrokes for overloaded keys so that the subsets of content items are directly mapped to the corresponding strings of unresolved keystrokes; subsequent to ranking and associating the content items with strings of unresolved keystrokes, receiving a first unresolved keystroke from a user, wherein one of the plurality of alphabetical and numerical symbols in fixed association with the first unresolved keystroke is a symbol the user is using to search for desired content items; selecting and presenting the subset of content items that is associated with the first unresolved keystroke based on the direct mapping of unresolved keystrokes to the subsets of content items; subsequent to receiving the first unresolved keystroke, receiving subsequent unresolved keystrokes from the user and forming a string of unresolved keystrokes including the first unresolved keystroke and the subsequent unresolved keystrokes in the order received; an selecting and presenting the subset of content items that is associated with the string of unresolved keystrokes received based on the direct mapping of unresolved keystrokes to the subsets of content items; wherein at least one of selecting the subset of content items associated with the first unresolved keystroke and selecting the subset of content items associated with the string of unresolved keystrokes is performed using a data structure or a term intersection process or a combination thereof, the data structure including a first storage structure and a second storage structure, the first storage structure including a plurality of subsets of content items, each subset being associated with a corresponding string of unresolved keystrokes, wherein using the data structure to select a subset of content items includes returning the subset of content items of the first storage structure that is associated with the string of unresolved keystrokes entered by the user and retrieving additional content items from the second storage structure if the desired content items are not present in the first storage structure. 2. The method of claim 1 wherein said ordering criteria include one or more of: temporal relevance, location relevance, popularity, personal preferences and character count. | 0.741791 |
8,688,448 | 1 | 5 | 1. A system comprising at least one processor programmed to: segment an unstructured text into a plurality of text sections; identify a portion of text that fully or partially identifies a section heading for a first text section of the plurality of text sections; remove, from the first text section, the portion of text that fully or partially identifies the section heading; create a structured text comprising the first text section and the section heading for the first text section, wherein the portion of text that fully or partially identifies the section heading has been removed from the first text section; and provide the structured text to a user. | 1. A system comprising at least one processor programmed to: segment an unstructured text into a plurality of text sections; identify a portion of text that fully or partially identifies a section heading for a first text section of the plurality of text sections; remove, from the first text section, the portion of text that fully or partially identifies the section heading; create a structured text comprising the first text section and the section heading for the first text section, wherein the portion of text that fully or partially identifies the section heading has been removed from the first text section; and provide the structured text to a user. 5. The system of claim 1 , wherein the at least one processor is further programmed to: receive user input indicative of the user wishing to move a border between a second text section and a third text section from a first position in the structured text to a second position in the structured text; and provide to the user a second structured text in which the border between the second text section and the third text section has been moved to the second position indicated by the user. | 0.5 |
8,583,448 | 19 | 23 | 19. The search engine method in claim 17 wherein: said providing said plurality of results is performed at least partly according to a first relevance-ranking methodology, said first relevance-ranking methodology being selected from the group consisting of (i) a link-based methodology, (ii) a payment-based methodology, (iii) a vote-based methodology, and (iv) a freshness-based methodology. | 19. The search engine method in claim 17 wherein: said providing said plurality of results is performed at least partly according to a first relevance-ranking methodology, said first relevance-ranking methodology being selected from the group consisting of (i) a link-based methodology, (ii) a payment-based methodology, (iii) a vote-based methodology, and (iv) a freshness-based methodology. 23. The search engine method in claim 19 wherein: said website analytic data comprises at least a measure of performance, said measure of performance pertaining to a first URL. | 0.670412 |
10,140,977 | 1 | 11 | 1. A system comprising one or more computers and one or more storage devices storing instructions that when executed by the one or more computers cause the one or more computers to perform operations comprising: obtaining, during operation of a computer-implemented dialogue system comprising a natural language understanding engine, data identifying (i) a first input conversational turn that was provided as input to the natural language understanding engine during a dialogue between a user and the computer-implemented dialogue system and (ii) a first annotation of the first input conversational turn generated by the natural language understanding engine, wherein the natural language understanding engine has been trained on a first set of training data comprising a plurality of training conversational turns; determining that the first annotation accurately characterized the first input conversational turn; determining, based on the training conversational turns in the first set of training data, that the natural language understanding engine is likely to generate inaccurate annotations of other conversational turns that are similar to the first input conversational turn; in response to determining that (i) the first annotation accurately characterized the first input conversational turn but (ii) the natural language understanding engine is likely to generate inaccurate annotations of other conversational turns that are similar to the first input conversational turn: obtaining one or more first paraphrases of the first input conversational turn; and generating, for each of the one or more first paraphrases, a respective first training example that identifies the first annotation as the correct annotation for the first paraphrase; and training the natural language understanding engine on at least the first training examples. | 1. A system comprising one or more computers and one or more storage devices storing instructions that when executed by the one or more computers cause the one or more computers to perform operations comprising: obtaining, during operation of a computer-implemented dialogue system comprising a natural language understanding engine, data identifying (i) a first input conversational turn that was provided as input to the natural language understanding engine during a dialogue between a user and the computer-implemented dialogue system and (ii) a first annotation of the first input conversational turn generated by the natural language understanding engine, wherein the natural language understanding engine has been trained on a first set of training data comprising a plurality of training conversational turns; determining that the first annotation accurately characterized the first input conversational turn; determining, based on the training conversational turns in the first set of training data, that the natural language understanding engine is likely to generate inaccurate annotations of other conversational turns that are similar to the first input conversational turn; in response to determining that (i) the first annotation accurately characterized the first input conversational turn but (ii) the natural language understanding engine is likely to generate inaccurate annotations of other conversational turns that are similar to the first input conversational turn: obtaining one or more first paraphrases of the first input conversational turn; and generating, for each of the one or more first paraphrases, a respective first training example that identifies the first annotation as the correct annotation for the first paraphrase; and training the natural language understanding engine on at least the first training examples. 11. The system of claim 1 , the operations further comprising: obtaining, during operation of the computer-implemented dialogue system, data identifying (i) a second input conversational turn that was provided as input to the natural language understanding engine during the dialogue between the user and the computer-implemented dialogue system and (ii) a second annotation of the second input conversational turn generated by the natural language understanding engine; determining that the second annotation did not accurately characterize the second input conversational turn; in response to determining that the second annotation did not accurately characterize the second input conversational turn: determining a correct annotation for the second conversational turn; obtaining one or more second paraphrases of the second input conversational turn; and generating, for each of the one or more second paraphrases, a respective second training example that identifies the correct annotation for the second conversational turn as the correct annotation for the second paraphrase; and training the natural language understanding engine on at least the second training examples. | 0.5 |
9,275,064 | 12 | 16 | 12. A computer system comprising a computer processor coupled to a computer-readable memory unit, said memory unit comprising instructions that when executed by the computer processor implements a method comprising: computing, by said computer processor, a term frequency-inverse document frequency (tf-idf) associated with n-grams of an n-gram model of a domain; determining, by said computer processor based on said tf-idf, a frequently occurring group of n-grams of said n-grams; generating, by said computer processor executing a deep parser component of said computing system with respect to said frequently occurring group of n-grams, a deep parse output comprising results of said executing said deep parser component with respect to said frequently occurring group of n-grams; storing, by said computer processor in a database cache, said deep parse output; indexing, by said computer processor executing said frequently occurring group of n-grams in said database cache, said deep parse output; and verifying, by said computer processor, if a pre-computed specified text word sequence of said deep parse output is available in said database cache, wherein said verifying comprises: retrieving from said deep parse output, a plurality of tokens of said deep parser output, wherein said plurality of tokens are associated with a portion of said pre-computed specified text word sequence, wherein said plurality of tokens comprise suffixes associated with structures of said deep parser output, and wherein said plurality of tokens comprise a version token; and determining based on said plurality of tokens, variations associated with said pre-computed specified text word sequence. | 12. A computer system comprising a computer processor coupled to a computer-readable memory unit, said memory unit comprising instructions that when executed by the computer processor implements a method comprising: computing, by said computer processor, a term frequency-inverse document frequency (tf-idf) associated with n-grams of an n-gram model of a domain; determining, by said computer processor based on said tf-idf, a frequently occurring group of n-grams of said n-grams; generating, by said computer processor executing a deep parser component of said computing system with respect to said frequently occurring group of n-grams, a deep parse output comprising results of said executing said deep parser component with respect to said frequently occurring group of n-grams; storing, by said computer processor in a database cache, said deep parse output; indexing, by said computer processor executing said frequently occurring group of n-grams in said database cache, said deep parse output; and verifying, by said computer processor, if a pre-computed specified text word sequence of said deep parse output is available in said database cache, wherein said verifying comprises: retrieving from said deep parse output, a plurality of tokens of said deep parser output, wherein said plurality of tokens are associated with a portion of said pre-computed specified text word sequence, wherein said plurality of tokens comprise suffixes associated with structures of said deep parser output, and wherein said plurality of tokens comprise a version token; and determining based on said plurality of tokens, variations associated with said pre-computed specified text word sequence. 16. The computer system of claim 12 , wherein deep parse output comprises a cache value. | 0.887755 |
8,949,170 | 1 | 12 | 1. A method for analyzing ambiguities in language for natural language processing, said method comprising: an input device receiving a first sentence or phrase from a source; wherein a vocabulary database stores words or phrases; wherein a language grammar template database stores language grammar templates; an analyzer module segmenting said first sentence or phrase, using words or phrases obtained from said vocabulary database and language grammar templates obtained from said language grammar template database; said analyzer module parsing said first sentence or phrase into one or more sentence or phrase components; said analyzer module determining Z-valuation for said one or more sentence or phrase components as a value of an attribute for said one or more sentence or phrase components; wherein said Z-valuation for said one or more sentence or phrase components are based on one or more parameters with unsharp class boundary or fuzzy membership function; said analyzer module processing language ambiguities in said first sentence or phrase for natural language processing, using said Z-valuation for said one or more sentence or phrase components. | 1. A method for analyzing ambiguities in language for natural language processing, said method comprising: an input device receiving a first sentence or phrase from a source; wherein a vocabulary database stores words or phrases; wherein a language grammar template database stores language grammar templates; an analyzer module segmenting said first sentence or phrase, using words or phrases obtained from said vocabulary database and language grammar templates obtained from said language grammar template database; said analyzer module parsing said first sentence or phrase into one or more sentence or phrase components; said analyzer module determining Z-valuation for said one or more sentence or phrase components as a value of an attribute for said one or more sentence or phrase components; wherein said Z-valuation for said one or more sentence or phrase components are based on one or more parameters with unsharp class boundary or fuzzy membership function; said analyzer module processing language ambiguities in said first sentence or phrase for natural language processing, using said Z-valuation for said one or more sentence or phrase components. 12. The method for analyzing ambiguities in language for natural language processing as recited in claim 1 , wherein said method comprises: applying a similar-sound database. | 0.57971 |
7,475,010 | 9 | 10 | 9. The method for resolving natural language ambiguities within text documents of claim 1 , further comprising resolving anaphora references of said text documents using an anaphora resolution module whereby additional contextual features are extracted. | 9. The method for resolving natural language ambiguities within text documents of claim 1 , further comprising resolving anaphora references of said text documents using an anaphora resolution module whereby additional contextual features are extracted. 10. The method for resolving anaphora references of claim 9 , comprising the following steps of: training a probabilistic anaphora-alignment classifier using training data containing anaphora to antecedent annotations; determining an anaphor to antecedent alignment for each anaphor in said text documents by maximizing the probability computed using said probabilistic anaphora-alignment classifier based on contextual features; and integrating additional contextual features as generated by one or more of the following natural language processing modules into said probabilistic classifiers whereby said measure of confidence is improved: using a word sense disambiguation module to determine word senses and the associated measure of confidence for each word; using a chunking module to identify multi-word phrases and the associated measure of confidence for each phrase; using a named-entity recognition module to identify named entities and the associated measure of confidence for each entity; using a syntactic parsing module to construct sentential parse trees and the associated measure of confidence for each tree; using a discourse categorization module to determine document categories and the associated measure of confidence for each category; using a discourse structure analysis module to determine discourse structures and the associated measure of confidence for each structure. | 0.5 |
8,358,703 | 26 | 27 | 26. The method of claim 25 wherein replacing the first coded portion is part of watermarking the coded bitstream, and the accessed piece of information comprises payload information. | 26. The method of claim 25 wherein replacing the first coded portion is part of watermarking the coded bitstream, and the accessed piece of information comprises payload information. 27. The method of claim 26 wherein the value of the payload information dictates whether or not the first coded portion is to be replaced. | 0.701299 |
5,537,526 | 1 | 2 | 1. An apparatus for document processing for use in a computer system having a processor, a storage and a display under control of the processor, the apparatus comprising: (a) a document framework stored in the storage, the document framework defining a plurality of model classes, each one of the plurality of model classes defining means for referencing data stored in the storage, means for creating a container object to hold a plurality of objects instantiated from one or more of the plurality of model classes and program logic means for processing the data and objects held in the container object; (b) means for instantiating a root model object from one of the plurality of model classes, the root model object containing a reference to data of a first type; (c) means for instantiating a plurality of additionally model objects from the plurality of model classes, each one of the plurality of additional model objects containing a reference to data of a type different from the first type; (d) means for creating a compound document from the root model object by adding at least one additional model object instantiated from the plurality of additional model objects to a container in the root model object, wherein the root model object and each one of the at least one additional model objects provide a hierarchy of model objects which represent a containership hierarchy of the compound document; and (e) means for processing the compound document by processing the root model object, which applies the processing to the at least one additional model object in the container in the root model object. | 1. An apparatus for document processing for use in a computer system having a processor, a storage and a display under control of the processor, the apparatus comprising: (a) a document framework stored in the storage, the document framework defining a plurality of model classes, each one of the plurality of model classes defining means for referencing data stored in the storage, means for creating a container object to hold a plurality of objects instantiated from one or more of the plurality of model classes and program logic means for processing the data and objects held in the container object; (b) means for instantiating a root model object from one of the plurality of model classes, the root model object containing a reference to data of a first type; (c) means for instantiating a plurality of additionally model objects from the plurality of model classes, each one of the plurality of additional model objects containing a reference to data of a type different from the first type; (d) means for creating a compound document from the root model object by adding at least one additional model object instantiated from the plurality of additional model objects to a container in the root model object, wherein the root model object and each one of the at least one additional model objects provide a hierarchy of model objects which represent a containership hierarchy of the compound document; and (e) means for processing the compound document by processing the root model object, which applies the processing to the at least one additional model object in the container in the root model object. 2. The apparatus of claim 1, wherein the document framework stored in the storage includes means for streaming, wherein in response to a first model being streamed, the means for streaming streams the first model and each of a plurality of embedded models contained within the first model. | 0.5 |
7,509,581 | 10 | 11 | 10. The method of claim 9 , wherein said segment group information includes a level information. | 10. The method of claim 9 , wherein said segment group information includes a level information. 11. The method of claim 10 , wherein said level information defines multiple levels. | 0.5 |
9,317,869 | 10 | 13 | 10. A method comprising: receiving, from an entity associated with a brand page, one or more tags associated with content posted to the brand page by the entity, the entity and the brand page stored in a social networking system; presenting the content posted on the brand page to users of the social networking system connected to the brand page; receiving interactions with the content posted on the brand page from a plurality of the users of the social networking system connected to the brand page; generating, by the social networking system, a group of users, of the plurality of the users, who performed the received interactions with the content associated with one or more tags; selecting, by the social networking system, additional content for the group of users, the additional content selected based on the additional content having a tag matching to the one or more tags associated with the content interacted with by the group of users of the social networking system; and sending, by the social networking system, the additional content in a story in a news feed to a client device associated with a viewing user included in the group of users. | 10. A method comprising: receiving, from an entity associated with a brand page, one or more tags associated with content posted to the brand page by the entity, the entity and the brand page stored in a social networking system; presenting the content posted on the brand page to users of the social networking system connected to the brand page; receiving interactions with the content posted on the brand page from a plurality of the users of the social networking system connected to the brand page; generating, by the social networking system, a group of users, of the plurality of the users, who performed the received interactions with the content associated with one or more tags; selecting, by the social networking system, additional content for the group of users, the additional content selected based on the additional content having a tag matching to the one or more tags associated with the content interacted with by the group of users of the social networking system; and sending, by the social networking system, the additional content in a story in a news feed to a client device associated with a viewing user included in the group of users. 13. The method of claim 10 , wherein an interaction with the content posted on the brand page comprises at least one of: sharing the content with another user, indicating a preference for the content or sharing a link to the content with another user. | 0.5 |
8,396,878 | 15 | 16 | 15. A non-transitory computer-readable medium having sets of instructions stored thereon which, when executed by a computer, cause the computer to: receive one or more manually generated tags associated with a video file; based at least in part on the one or more manually entered tags, determine a preliminary category for the video file; based on the preliminary category, generate a targeted transcript of the video file, wherein the targeted transcript includes a plurality of words; generate an ontology of the plurality of words based on the targeted transcript; rank the plurality of words in the ontology based on a plurality of scoring factors; based on the ranking of the plurality of words, generate one or more automated tags associated with the video file; and generate a heat map for the video file, wherein the heat map comprises a graphical display which indicates offset locations of words within the video file with the highest rankings, wherein the plurality of scoring factors consists of two or more of: proximity of words relative to other words, distribution of words throughout the targeted transcript of the video file, words related to the plurality of words throughout the targeted transcript of the video file, occurrence age of the related words, information associated with the one or more manually entered tags, vernacular meaning of the plurality of words, or colloquial considerations of the meaning of the plurality of words. | 15. A non-transitory computer-readable medium having sets of instructions stored thereon which, when executed by a computer, cause the computer to: receive one or more manually generated tags associated with a video file; based at least in part on the one or more manually entered tags, determine a preliminary category for the video file; based on the preliminary category, generate a targeted transcript of the video file, wherein the targeted transcript includes a plurality of words; generate an ontology of the plurality of words based on the targeted transcript; rank the plurality of words in the ontology based on a plurality of scoring factors; based on the ranking of the plurality of words, generate one or more automated tags associated with the video file; and generate a heat map for the video file, wherein the heat map comprises a graphical display which indicates offset locations of words within the video file with the highest rankings, wherein the plurality of scoring factors consists of two or more of: proximity of words relative to other words, distribution of words throughout the targeted transcript of the video file, words related to the plurality of words throughout the targeted transcript of the video file, occurrence age of the related words, information associated with the one or more manually entered tags, vernacular meaning of the plurality of words, or colloquial considerations of the meaning of the plurality of words. 16. The computer-readable medium of claim 15 , wherein the sets of instructions when further executed by the computer cause the computer to: generate a web page which includes an embedded widget; based on the one or more automated tags searching web content to return a list of videos, blogs, audio, and web pages that match the one or more automated tags; and providing within the widget a view of each of the returned results. | 0.5 |
9,331,712 | 1 | 9 | 1. A method for compressing an input stream of a plurality of input words, the method comprising: for each successive input word of the input stream, determining whether the input word matches an entry in a lookback table, the lookback table storing a plurality of entries; updating the lookback table in response to the input word; generating a codeword by entropy encoding a data type corresponding to the input word, each input word being one of a plurality of data types, the plurality of data types including at least a first data type indicating full matching between the input word and an entry in the lookback table and a second data type indicating partial matching between the input word and an entry in the lookback table; and generating an output stream, the output stream including codewords ordered correspondingly to the input words of the input stream. | 1. A method for compressing an input stream of a plurality of input words, the method comprising: for each successive input word of the input stream, determining whether the input word matches an entry in a lookback table, the lookback table storing a plurality of entries; updating the lookback table in response to the input word; generating a codeword by entropy encoding a data type corresponding to the input word, each input word being one of a plurality of data types, the plurality of data types including at least a first data type indicating full matching between the input word and an entry in the lookback table and a second data type indicating partial matching between the input word and an entry in the lookback table; and generating an output stream, the output stream including codewords ordered correspondingly to the input words of the input stream. 9. The method of claim 1 , wherein full matching between an input word and an entry in the lookback table comprises bit-wise matching of all bits of the input word and all bits of the entry in the lookback table. | 0.822742 |
9,020,931 | 1 | 8 | 1. A system for searching multiple contacts from a single query having multiple sub-queries, the system comprising: a delimiter module configured to divide the single query into multiple sub-queries based upon predefined rules; a parser module configured to search for multiple contacts corresponding to each of the multiple sub-queries; a display module configured to display the multiple contacts corresponding to the multiple sub-queries; and a processor configured to perform one of establishing a conference call and sending a message to at least one of the multiple contacts based on a user's preference. | 1. A system for searching multiple contacts from a single query having multiple sub-queries, the system comprising: a delimiter module configured to divide the single query into multiple sub-queries based upon predefined rules; a parser module configured to search for multiple contacts corresponding to each of the multiple sub-queries; a display module configured to display the multiple contacts corresponding to the multiple sub-queries; and a processor configured to perform one of establishing a conference call and sending a message to at least one of the multiple contacts based on a user's preference. 8. The system of claim 1 , wherein the single query comprises the multiple sub-queries for adding multiple recipients into a conference call. | 0.645729 |
8,971,644 | 1 | 2 | 1. A method of determining an annotation for a particular image, the method comprising: determining a plurality of images related to the particular image, the plurality of images stored in one or more computer systems; identifying a plurality of annotations associated with the plurality of images; generating an ontology for the particular image wherein the ontology comprises: a plurality of terms, the plurality of annotations, and the plurality of images arranged in a hierarchy with the plurality of annotations being downstream from the plurality of terms associated with a highest level of the hierarchy, and the plurality of images being downstream from the plurality of annotations, and a plurality of links defining relationships between respective terms, annotations, or images, wherein each link is associated with a respective relevance value indicating a measure of relevance between two respective terms, annotations, or images connected by a respective link; determining a total relevance value for each term associated with the highest level, wherein for each term associated with the highest level, the total relevance value is a sum of relevance values of links downstream from the term; and associating one of the plurality of terms having a highest total relevance value with the particular image as an image annotation. | 1. A method of determining an annotation for a particular image, the method comprising: determining a plurality of images related to the particular image, the plurality of images stored in one or more computer systems; identifying a plurality of annotations associated with the plurality of images; generating an ontology for the particular image wherein the ontology comprises: a plurality of terms, the plurality of annotations, and the plurality of images arranged in a hierarchy with the plurality of annotations being downstream from the plurality of terms associated with a highest level of the hierarchy, and the plurality of images being downstream from the plurality of annotations, and a plurality of links defining relationships between respective terms, annotations, or images, wherein each link is associated with a respective relevance value indicating a measure of relevance between two respective terms, annotations, or images connected by a respective link; determining a total relevance value for each term associated with the highest level, wherein for each term associated with the highest level, the total relevance value is a sum of relevance values of links downstream from the term; and associating one of the plurality of terms having a highest total relevance value with the particular image as an image annotation. 2. The method of claim 1 , wherein the step of determining a plurality of images related to the particular image further comprises: generating a fingerprint associated with the particular image; and determining the plurality of images related to the particular image, based on the fingerprint associated with the particular image and fingerprints associated with the plurality of images. | 0.5 |
9,619,450 | 6 | 8 | 6. A computer program product comprising a non-transitory computer usable medium including a computer readable program, wherein the computer readable program when executed on a computer causes the computer to: learn sets of equivalent syntactic patterns from a corpus of documents; map the sets of equivalent syntactic patterns to corresponding items in a knowledge graph; receive a set of one or more input documents; process the set of one or more input documents for one or more expressions matching a first set of equivalent syntactic patterns from among the sets of equivalent syntactic patterns; process the one or more expressions to determine one or more entities; determine a set of entities that are relevant to a main event described by the set of one or more input documents from the one or more entities; identify entity types for the set of entities; generate a refined set of equivalent syntactic patterns by excluding the equivalent syntactic patterns with a relevance score below a predefined threshold; select an equivalent syntactic pattern from among the refined set of equivalent syntactic patterns for a headline, the selected equivalent syntactic pattern reflecting the main event described by the set of one or more input documents; generate the headline by populating the selected equivalent syntactic pattern with the one or more entities, wherein an order of entities in the headline is based on the entity types of the one or more entities; determine one or more entries in the knowledge graph corresponding to the one or more entities described by the one or more expressions; and update the one or more entries in the knowledge graph to reflect the main event using the headline. | 6. A computer program product comprising a non-transitory computer usable medium including a computer readable program, wherein the computer readable program when executed on a computer causes the computer to: learn sets of equivalent syntactic patterns from a corpus of documents; map the sets of equivalent syntactic patterns to corresponding items in a knowledge graph; receive a set of one or more input documents; process the set of one or more input documents for one or more expressions matching a first set of equivalent syntactic patterns from among the sets of equivalent syntactic patterns; process the one or more expressions to determine one or more entities; determine a set of entities that are relevant to a main event described by the set of one or more input documents from the one or more entities; identify entity types for the set of entities; generate a refined set of equivalent syntactic patterns by excluding the equivalent syntactic patterns with a relevance score below a predefined threshold; select an equivalent syntactic pattern from among the refined set of equivalent syntactic patterns for a headline, the selected equivalent syntactic pattern reflecting the main event described by the set of one or more input documents; generate the headline by populating the selected equivalent syntactic pattern with the one or more entities, wherein an order of entities in the headline is based on the entity types of the one or more entities; determine one or more entries in the knowledge graph corresponding to the one or more entities described by the one or more expressions; and update the one or more entries in the knowledge graph to reflect the main event using the headline. 8. The computer program product of claim 6 , wherein to learn the sets of equivalent syntactic patterns further includes: receiving sets of related documents; determining, for each of the sets of related documents, expressions involving corresponding information; determining sets of equivalent syntactic patterns based on the expressions; and storing the sets of equivalent syntactic patterns in a data store. | 0.5 |
9,020,244 | 1 | 8 | 1. A method comprising: generating a plurality of model-generated scores; wherein each model-generated score of the plurality of model-generated scores corresponds to a candidate image from a plurality of candidate images for a particular video item; wherein generating the plurality of model-generated scores includes, for each candidate image of the plurality of candidate images, using a set of input parameter values with a trained machine learning engine to produce the model-generated score that corresponds to the candidate image, wherein the set of input parameter values include at least one input parameter value for an activity feature that reflects one or more actions that one or more users have performed, during playback of the particular video item, relative to a frame that corresponds to the particular candidate image; establishing a ranking of the candidate images, from the plurality of candidate images, for the particular video item based, at least in part, on the model-generated scores that correspond to the candidate images; selecting a candidate image, from the plurality of candidate images, as a representative image for the particular video item based, at least in part, on the ranking; wherein the method is performed by one or more computing devices. | 1. A method comprising: generating a plurality of model-generated scores; wherein each model-generated score of the plurality of model-generated scores corresponds to a candidate image from a plurality of candidate images for a particular video item; wherein generating the plurality of model-generated scores includes, for each candidate image of the plurality of candidate images, using a set of input parameter values with a trained machine learning engine to produce the model-generated score that corresponds to the candidate image, wherein the set of input parameter values include at least one input parameter value for an activity feature that reflects one or more actions that one or more users have performed, during playback of the particular video item, relative to a frame that corresponds to the particular candidate image; establishing a ranking of the candidate images, from the plurality of candidate images, for the particular video item based, at least in part, on the model-generated scores that correspond to the candidate images; selecting a candidate image, from the plurality of candidate images, as a representative image for the particular video item based, at least in part, on the ranking; wherein the method is performed by one or more computing devices. 8. The method of claim 1 wherein using a set of input parameter values with the trained machine learning engine for a particular candidate image includes using with the trained machine learning engine at least one input parameter for a feature that reflects how similar the particular candidate image is to images in other video items that belong to a collection to which the particular video item belongs. | 0.504878 |
4,427,848 | 47 | 48 | 47. The system of claim 36, wherein said system further includes means, responsive to said memory means output signals, for generating audible feedback signals to said telephone set, indicative of said symbol code transmitted to said computer. | 47. The system of claim 36, wherein said system further includes means, responsive to said memory means output signals, for generating audible feedback signals to said telephone set, indicative of said symbol code transmitted to said computer. 48. The system of claim 47 wherein said audible feedback signals constitute a synthesized speech signal corresponding to said symbol. | 0.5 |
7,506,322 | 6 | 7 | 6. A hardware component that facilitates executing an interpretive language in a system, the system including processing component and a memory component, wherein the hardware component provides an interface between the processing component and the memory component, the hardware component comprising: a first multiplexer for receiving an address from the processing component and providing an output to the memory component; an interpreter language program counter for providing inputs of the first multiplexer; a decoding component for: receiving the address, comparing the received address to stored addresses, the stored addresses including a fixed instruction fetch address and a plurality of fixed operand fetch addresses, and controlling the output of the first multiplexer based on a result of the comparing; a second multiplexer for receiving data from the memory component and providing an output to the processing component; an instruction jump address generator component for receiving the data and providing inputs to the second multiplexer; an operand storing component for receiving the data, storing any operands of the data, and providing inputs to the second multiplexer, wherein the decoding component controls the second multiplexer based on the result of the comparing; and a counter component for receiving an input from the decoding component and providing outputs to the second multiplexer and the decoding component. | 6. A hardware component that facilitates executing an interpretive language in a system, the system including processing component and a memory component, wherein the hardware component provides an interface between the processing component and the memory component, the hardware component comprising: a first multiplexer for receiving an address from the processing component and providing an output to the memory component; an interpreter language program counter for providing inputs of the first multiplexer; a decoding component for: receiving the address, comparing the received address to stored addresses, the stored addresses including a fixed instruction fetch address and a plurality of fixed operand fetch addresses, and controlling the output of the first multiplexer based on a result of the comparing; a second multiplexer for receiving data from the memory component and providing an output to the processing component; an instruction jump address generator component for receiving the data and providing inputs to the second multiplexer; an operand storing component for receiving the data, storing any operands of the data, and providing inputs to the second multiplexer, wherein the decoding component controls the second multiplexer based on the result of the comparing; and a counter component for receiving an input from the decoding component and providing outputs to the second multiplexer and the decoding component. 7. The hardware component of claim 6 , wherein the decoding component sets the first multiplexer to provide the received address as the output of the first multiplexer when the received address fails to match any stored address. | 0.766393 |
7,490,073 | 1 | 30 | 1. A method of encoding a diagnostic troubleshooting procedure for detecting problems in a deployment of a software application, comprising: providing a human-readable diagnostic procedure configured to be used to detect problems in the deployment, the diagnostic procedure represented as a machine-readable tree comprising decision factors, at least some of the decision factors having “yes” and “no” paths; encoding the decision factors into a machine-readable format, including encoding input and output information for each encoded decision factor; using a computer-implemented tool to automatically convert the tree into a plurality of machine-readable rules, each of the rules corresponding to one of the problems, each of the rules comprising a specific navigation of a plurality of the decision factors and a plurality of “yes” or “no” paths of the tree, each of said navigations comprising a navigation from a root node of the tree to one of a plurality of terminal nodes of the tree; and storing the rules in computer storage, in a format in which the rules can be used by a computer to automatically detect said problems in the deployment of the software application. | 1. A method of encoding a diagnostic troubleshooting procedure for detecting problems in a deployment of a software application, comprising: providing a human-readable diagnostic procedure configured to be used to detect problems in the deployment, the diagnostic procedure represented as a machine-readable tree comprising decision factors, at least some of the decision factors having “yes” and “no” paths; encoding the decision factors into a machine-readable format, including encoding input and output information for each encoded decision factor; using a computer-implemented tool to automatically convert the tree into a plurality of machine-readable rules, each of the rules corresponding to one of the problems, each of the rules comprising a specific navigation of a plurality of the decision factors and a plurality of “yes” or “no” paths of the tree, each of said navigations comprising a navigation from a root node of the tree to one of a plurality of terminal nodes of the tree; and storing the rules in computer storage, in a format in which the rules can be used by a computer to automatically detect said problems in the deployment of the software application. 30. The method of claim 1 , the tool comprising a Microsoft Visio™ plugin. | 0.852 |
8,566,790 | 15 | 17 | 15. The computer executable method of claim 14 , wherein a type definition of the data representation language schema is used to interpret the script. | 15. The computer executable method of claim 14 , wherein a type definition of the data representation language schema is used to interpret the script. 17. The computer executable method of claim 15 , wherein a function in the script interprets data according to the type definition in the data representation language schema. | 0.5 |
9,753,737 | 6 | 7 | 6. A non-transitory, computer-readable medium storing computer-executable code for developing applications that provide data security when executed by one or more processors associated with one or more computer systems, the non-transitory, computer-readable medium comprising: code configured to define a view object of an application development framework that presents data stored in datasources according to a predetermined view in applications built on the datasources using the application development framework, and store the view object in memory; code configured to receive information mapping each of a plurality of attributes of the view object to data stored in at least one datasource of the datasources; code to configure attribute security with respect to at least two attributes of the view object at least in part by: receiving expressions that restrict output in the applications of the data mapped to the at least two attributes of the plurality of attributes of the view object in the predetermined view, wherein a first expression of the expressions corresponds to a first role and a first privilege of an identified user, and a second expression of the expressions corresponds to a second role and a second privilege of the identified user; and defining a first security property and a second security property of the at least two attributes, and storing values for the first security property and the second security property in a table of properties defined for the view object; code configured to generate an application with the application development framework based at least in part on the view object; code configured to, responsive to a request corresponding to the view object, and based at least in part on the first security property and the second security property and the values stored in the table of the properties defined for the view object, restricting access, allowed to the identified user according to the first role, to first data mapped to the at least two attributes of the view object, and restrict output, via the application, of second data mapped to the at least two attributes of the view object in the predetermined view according to the second role so that data corresponding to the at least two attributes is excluded from a record and/or a row that is output; wherein each attribute of the plurality of attributes corresponds to one or both of a respective field of the record and a respective column of the row. | 6. A non-transitory, computer-readable medium storing computer-executable code for developing applications that provide data security when executed by one or more processors associated with one or more computer systems, the non-transitory, computer-readable medium comprising: code configured to define a view object of an application development framework that presents data stored in datasources according to a predetermined view in applications built on the datasources using the application development framework, and store the view object in memory; code configured to receive information mapping each of a plurality of attributes of the view object to data stored in at least one datasource of the datasources; code to configure attribute security with respect to at least two attributes of the view object at least in part by: receiving expressions that restrict output in the applications of the data mapped to the at least two attributes of the plurality of attributes of the view object in the predetermined view, wherein a first expression of the expressions corresponds to a first role and a first privilege of an identified user, and a second expression of the expressions corresponds to a second role and a second privilege of the identified user; and defining a first security property and a second security property of the at least two attributes, and storing values for the first security property and the second security property in a table of properties defined for the view object; code configured to generate an application with the application development framework based at least in part on the view object; code configured to, responsive to a request corresponding to the view object, and based at least in part on the first security property and the second security property and the values stored in the table of the properties defined for the view object, restricting access, allowed to the identified user according to the first role, to first data mapped to the at least two attributes of the view object, and restrict output, via the application, of second data mapped to the at least two attributes of the view object in the predetermined view according to the second role so that data corresponding to the at least two attributes is excluded from a record and/or a row that is output; wherein each attribute of the plurality of attributes corresponds to one or both of a respective field of the record and a respective column of the row. 7. The non-transitory, computer-readable medium of claim 6 , further comprising: code configured to receive information adding a predetermined named property designated by the application development framework for attribute-based security. | 0.542146 |
9,171,253 | 16 | 18 | 16. A computer system for selecting a classifier for production, the computer system comprising: a processor; a computer-readable storage medium including executable code, the code when executed by the processor performs steps comprising: identifying a plurality of classifiers; selecting a set of test cases based on time; grouping the set of test cases into a plurality of datasets based on time, each of the plurality of datasets associated with a different interval of time; applying each of the plurality of classifiers to each of the plurality of datasets to generate classifications for test cases in each of the plurality of datasets; determining, for each of the plurality of classifiers, a classification performance score for each of the plurality of datasets based on the classifications generated for the test cases of each dataset; determining, for each of the plurality of classifiers, a variance across the classification performance scores for the plurality of data sets; and selecting a classifier from among the plurality of classifiers for production, the selected classifier having a least amount of variance across the classification performance scores for the plurality of datasets associated with the different intervals of time. | 16. A computer system for selecting a classifier for production, the computer system comprising: a processor; a computer-readable storage medium including executable code, the code when executed by the processor performs steps comprising: identifying a plurality of classifiers; selecting a set of test cases based on time; grouping the set of test cases into a plurality of datasets based on time, each of the plurality of datasets associated with a different interval of time; applying each of the plurality of classifiers to each of the plurality of datasets to generate classifications for test cases in each of the plurality of datasets; determining, for each of the plurality of classifiers, a classification performance score for each of the plurality of datasets based on the classifications generated for the test cases of each dataset; determining, for each of the plurality of classifiers, a variance across the classification performance scores for the plurality of data sets; and selecting a classifier from among the plurality of classifiers for production, the selected classifier having a least amount of variance across the classification performance scores for the plurality of datasets associated with the different intervals of time. 18. The computer system of claim 16 , wherein determining the classification performance score of each of the plurality of classifiers comprises: identifying, for the plurality of datasets, a plurality of first labels for the test cases in each dataset, each first label describing an accurate classification of a corresponding test case as being associated with a first classification or a second classification; comparing, for each of the plurality of classifiers, the generated classifications of test cases for each of the plurality of datasets with the plurality of first labels corresponding to the test cases in each of the plurality of datasets; and determining, for each of the plurality of classifiers, a classification performance score associated with each of the plurality of datasets based on the comparison. | 0.5 |
8,886,856 | 9 | 11 | 9. The integrated circuit of claim 1 , wherein the lanes of the multi-lane link have measurable latencies, and wherein the designated lane is configured for low-latency. | 9. The integrated circuit of claim 1 , wherein the lanes of the multi-lane link have measurable latencies, and wherein the designated lane is configured for low-latency. 11. The integrated circuit of claim 9 , wherein the designated lane is configured for low-latency by preferential routing to reduce skew. | 0.5 |
7,664,644 | 14 | 15 | 14. The method of claim 13 , further comprising: applying active multitask active learning to select re-use data for use in the step of retraining a respective model. | 14. The method of claim 13 , further comprising: applying active multitask active learning to select re-use data for use in the step of retraining a respective model. 15. The method of claim 14 , wherein the selected re-use data is existing labeled data. | 0.5 |
10,146,766 | 1 | 5 | 1. An email suggestor system for interfacing each of two or more affiliate merchant devices including at least a first merchant device and a second merchant device, with a payment application to reduce a transaction time for consumer-facing operations in a retail environment, the email suggestor system comprising an apparatus, the apparatus comprising at least one processor and at least one memory including program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to: receive, from the first merchant device, during a first transaction, a first name and a last name of a consumer and an associated payment method; subsequent to the first transaction, receive, during a second transaction, with the second merchant device, identity information, the second merchant device being different than the first merchant device; determine that the consumer previously interacted with an affiliate merchant device, the affiliate merchant device being the first merchant device; subsequent to the determination that the consumer previously interacted with the affiliate merchant device, match the identity information received during the second transaction to the first and last name of the consumer received during the first transaction; identify, based on the match of the identity information received during the second transaction to the first and last name of the consumer received during the first transaction, a consumer profile associated with the payment method, the consumer profile associated with the email suggestor system; identify, from the consumer profile, an associated email address associated with the consumer profile; provide, during the second transaction, an interface to a third-party payment application configured to reduce the transaction time, subsequent to the identification of the associated email address associated with the consumer profile; and cause, during the second transaction, display of the email address associated with the consumer profile associated with the payment method. | 1. An email suggestor system for interfacing each of two or more affiliate merchant devices including at least a first merchant device and a second merchant device, with a payment application to reduce a transaction time for consumer-facing operations in a retail environment, the email suggestor system comprising an apparatus, the apparatus comprising at least one processor and at least one memory including program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to: receive, from the first merchant device, during a first transaction, a first name and a last name of a consumer and an associated payment method; subsequent to the first transaction, receive, during a second transaction, with the second merchant device, identity information, the second merchant device being different than the first merchant device; determine that the consumer previously interacted with an affiliate merchant device, the affiliate merchant device being the first merchant device; subsequent to the determination that the consumer previously interacted with the affiliate merchant device, match the identity information received during the second transaction to the first and last name of the consumer received during the first transaction; identify, based on the match of the identity information received during the second transaction to the first and last name of the consumer received during the first transaction, a consumer profile associated with the payment method, the consumer profile associated with the email suggestor system; identify, from the consumer profile, an associated email address associated with the consumer profile; provide, during the second transaction, an interface to a third-party payment application configured to reduce the transaction time, subsequent to the identification of the associated email address associated with the consumer profile; and cause, during the second transaction, display of the email address associated with the consumer profile associated with the payment method. 5. The email suggestor system according to claim 1 , wherein the receiving identity information during the second transaction comprises: receiving, in an instance in which the consumer has previously visited a website associated with the first merchant device or an affiliated merchant device, the first name and the last name from the merchant during the second transaction. | 0.642176 |
7,814,475 | 1 | 2 | 1. A computer-implemented method for automatically deploying objects into a networked display, comprising the steps of: receiving an object to be deployed as a client-side rendered component at a networked portal system, said object having a name and a class; and after said receiving of said object having said name and said class: (i) automatically incorporating said name and said class in a markup language document; (ii) automatically creating a view component that enables an embedding of said object in a portal iView, which is displayable by a browser; and (iii) thereafter automatically transmitting said object, said markup language document, and said view component to a server where the view component is registered into a portal as client-side rendered components. | 1. A computer-implemented method for automatically deploying objects into a networked display, comprising the steps of: receiving an object to be deployed as a client-side rendered component at a networked portal system, said object having a name and a class; and after said receiving of said object having said name and said class: (i) automatically incorporating said name and said class in a markup language document; (ii) automatically creating a view component that enables an embedding of said object in a portal iView, which is displayable by a browser; and (iii) thereafter automatically transmitting said object, said markup language document, and said view component to a server where the view component is registered into a portal as client-side rendered components. 2. The method according to claim 1 , further comprising the step of wrapping said object prior to performing said step of transmitting. | 0.633152 |
7,735,144 | 1 | 10 | 1. A computer implemented method, comprising: reading an electronic document, the electronic document comprising content rules, an existing signed state, and existing content items; authenticating the electronic document one or more times, each authenticating comprising: classifying the existing content items of the electronic document into existing invariant content items and existing modifiable content items according to the content rules; receiving one or more user actions for one or more of the existing content items; determining whether the one or more user actions are permitted by the content rules; reclassifying one or more existing modifiable content items into new invariant content items in response to the one or more user actions; generating an object digest for an aggregation of the existing invariant content items and the new invariant content items by digesting the existing invariant content items and the new invariant content items, wherein the aggregation includes a simple content item, a semi-complex content item and a complex content item, the object digest is generated according to their complexity and generating the object digest includes ignoring one or more existing modifiable content items in the electronic document that are not reclassified into new invariant content items; generating a saved state of the electronic document; and adding a new signed state to the electronic document, the new signed state comprising the object digest, the saved state, and an electronic signature wherein generating an object digest is preformed by a processor of an electronic document reader. | 1. A computer implemented method, comprising: reading an electronic document, the electronic document comprising content rules, an existing signed state, and existing content items; authenticating the electronic document one or more times, each authenticating comprising: classifying the existing content items of the electronic document into existing invariant content items and existing modifiable content items according to the content rules; receiving one or more user actions for one or more of the existing content items; determining whether the one or more user actions are permitted by the content rules; reclassifying one or more existing modifiable content items into new invariant content items in response to the one or more user actions; generating an object digest for an aggregation of the existing invariant content items and the new invariant content items by digesting the existing invariant content items and the new invariant content items, wherein the aggregation includes a simple content item, a semi-complex content item and a complex content item, the object digest is generated according to their complexity and generating the object digest includes ignoring one or more existing modifiable content items in the electronic document that are not reclassified into new invariant content items; generating a saved state of the electronic document; and adding a new signed state to the electronic document, the new signed state comprising the object digest, the saved state, and an electronic signature wherein generating an object digest is preformed by a processor of an electronic document reader. 10. The method of claim 1 , the authenticating further comprising: saving the electronic document with the new signed state to a computer-readable medium. | 0.891243 |
7,694,285 | 10 | 11 | 10. One or more computer-readable storage media having stored thereon computer-executable instructions that, when executed by a processor, cause a computing system to perform the following: access source code written in a programming language, wherein such access is performed by a Common Runtime Language (CRL) environment; use the CRL environment to process the source code, wherein processing the source code includes: identifying an impermissible expression in the source code, the impermissible expression improperly defining a delegate construction according to the programming language; inserting a stub in place of the impermissible expression defining an improper delegate construction, wherein the stub provides a proper exact match in the programming language, and wherein the stub has a form of: Function Y exact. (X As Z) As Z; and defining a constructor by a delegate class passed a specification of an object method to make the stub convert a currently impermissible expression to a currently permissible delegate construction; and moving a requirement of the CRL environment that a check for an exact match be performed at delegate creation, such that the CRL environment instead checks for an exact match at a function call. | 10. One or more computer-readable storage media having stored thereon computer-executable instructions that, when executed by a processor, cause a computing system to perform the following: access source code written in a programming language, wherein such access is performed by a Common Runtime Language (CRL) environment; use the CRL environment to process the source code, wherein processing the source code includes: identifying an impermissible expression in the source code, the impermissible expression improperly defining a delegate construction according to the programming language; inserting a stub in place of the impermissible expression defining an improper delegate construction, wherein the stub provides a proper exact match in the programming language, and wherein the stub has a form of: Function Y exact. (X As Z) As Z; and defining a constructor by a delegate class passed a specification of an object method to make the stub convert a currently impermissible expression to a currently permissible delegate construction; and moving a requirement of the CRL environment that a check for an exact match be performed at delegate creation, such that the CRL environment instead checks for an exact match at a function call. 11. The one or more computer-readable storage media of claim 10 , wherein the computer-executable instructions further cause the computing system to: create a delegate from a shared function. | 0.642322 |
8,140,549 | 1 | 5 | 1. A computer program product for performing operations via a spreadsheet, the computer program product comprising: one or more computer-readable, tangible storage devices; program instructions, stored on at least one of the one or more storage devices, to create in the spreadsheet a multidimensional array object, wherein: at least one element of the multidimensional array object constitutes an array with a plurality of elements, wherein the multidimensional array object comprise a set of elements, one element for each distinct list of coordinates of the multidimensional array object, the list of coordinates comprising a coordinate for each dimension of the multidimensional array object; program instructions, stored on at least one of the one or more storage devices, to access the elements of the multidimensional array object, the accessing comprising displaying the elements of the multidimensional array object as cells of the spreadsheet; and program instructions, stored on at least one of the one or more storage devices, to modify the elements of the multidimensional array object via modifying the contents of the cells of the spreadsheet. | 1. A computer program product for performing operations via a spreadsheet, the computer program product comprising: one or more computer-readable, tangible storage devices; program instructions, stored on at least one of the one or more storage devices, to create in the spreadsheet a multidimensional array object, wherein: at least one element of the multidimensional array object constitutes an array with a plurality of elements, wherein the multidimensional array object comprise a set of elements, one element for each distinct list of coordinates of the multidimensional array object, the list of coordinates comprising a coordinate for each dimension of the multidimensional array object; program instructions, stored on at least one of the one or more storage devices, to access the elements of the multidimensional array object, the accessing comprising displaying the elements of the multidimensional array object as cells of the spreadsheet; and program instructions, stored on at least one of the one or more storage devices, to modify the elements of the multidimensional array object via modifying the contents of the cells of the spreadsheet. 5. The computer program product of claim 1 , further comprising: program instructions, stored on at least one of the one or more storage devices, to link a multidimensional array object of a first workbook to a multidimensional array object of a second workbook, the linking comprising: specifying a path to the second workbook; specifying a name of the second workbook; specifying a name of the multidimensional array object of the second workbook; and specifying an external array object name; and program instructions, stored on at least one of the one or more storage devices, to refer to the multidimensional array object of the second workbook via a reference to the multidimensional array object of the first workbook. | 0.538217 |
8,826,338 | 1 | 7 | 1. A method for providing media assets in multiple languages, the method comprising: receiving a first user selection of a language for interacting with an interactive media guidance application; transmitting media guidance data associated with the selected language to a user device; receiving a second user selection of a media asset, wherein the media asset has a plurality of associated tracks having content in at least two different languages, wherein each track is associated with a single language, and wherein the first and the second user selections are separate selections; and in response to the second user selection, transmitting only a track associated with the selected language to the user device. | 1. A method for providing media assets in multiple languages, the method comprising: receiving a first user selection of a language for interacting with an interactive media guidance application; transmitting media guidance data associated with the selected language to a user device; receiving a second user selection of a media asset, wherein the media asset has a plurality of associated tracks having content in at least two different languages, wherein each track is associated with a single language, and wherein the first and the second user selections are separate selections; and in response to the second user selection, transmitting only a track associated with the selected language to the user device. 7. The method of claim 1 , wherein the media asset is a video-on-demand media asset with a plurality of associated tracks having content in at least two different languages. | 0.763661 |
8,116,567 | 1 | 2 | 1. A method for performing page verification of a document, comprising the steps of: performing a recognition technique on a document to recognize one or more objects in the document; excluding the one or more recognized objects from the document; and performing page verification of the document, wherein page verification comprises visual inspection of the document excluding the one or more recognized objects and further comprises facilitating viewing potentially neglected content in the document and deciding whether the potentially neglected content should be addressed or not; wherein at least one of the steps is carried out by a computer device. | 1. A method for performing page verification of a document, comprising the steps of: performing a recognition technique on a document to recognize one or more objects in the document; excluding the one or more recognized objects from the document; and performing page verification of the document, wherein page verification comprises visual inspection of the document excluding the one or more recognized objects and further comprises facilitating viewing potentially neglected content in the document and deciding whether the potentially neglected content should be addressed or not; wherein at least one of the steps is carried out by a computer device. 2. The method of claim 1 , wherein page verification is performed before a manual validation of the recognition technique. | 0.723982 |
8,005,843 | 5 | 8 | 5. A computer program product, residing on a computer readable storage medium, for creating a distinguishing identifier of a collection of data comprising a primary document and one or more auxiliary documents, comprising instructions for causing a computer to: digest each auxiliary document to create a respective auxiliary document digest; and create a distinguishing identifier by digesting a concatenation of the primary document with all auxiliary document digests. | 5. A computer program product, residing on a computer readable storage medium, for creating a distinguishing identifier of a collection of data comprising a primary document and one or more auxiliary documents, comprising instructions for causing a computer to: digest each auxiliary document to create a respective auxiliary document digest; and create a distinguishing identifier by digesting a concatenation of the primary document with all auxiliary document digests. 8. The computer program product of claim 5 , wherein the primary document includes one or more references to the auxiliary documents, and wherein interpretation of the references causes content from the auxiliary documents to be displayed as part of the primary document. | 0.5 |
9,286,527 | 13 | 21 | 13. A system comprising: one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising: obtaining, for a sequence of strokes that represent a handwritten input, cut point data indicating one or more particular candidate cut points that are identified within the sequence of strokes; obtaining, for the one or more of the particular candidate cut points, feature data indicating one or more features of the particular candidate cut point; for each of the one or more particular candidate cut points, providing the feature data to a classifier that is trained to predict, based on one or more features of a candidate cut point, a likelihood of the candidate cut point being a correct cut point; for each of the one or more particular candidate cut points, receiving, from the classifier, data indicating the likelihood that the particular candidate cut point is a correct cut point; selecting a set of one or more of the particular candidate cut points whose respective likelihoods satisfy a threshold; and using the set of candidate cut points to segment the sequence of strokes. | 13. A system comprising: one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising: obtaining, for a sequence of strokes that represent a handwritten input, cut point data indicating one or more particular candidate cut points that are identified within the sequence of strokes; obtaining, for the one or more of the particular candidate cut points, feature data indicating one or more features of the particular candidate cut point; for each of the one or more particular candidate cut points, providing the feature data to a classifier that is trained to predict, based on one or more features of a candidate cut point, a likelihood of the candidate cut point being a correct cut point; for each of the one or more particular candidate cut points, receiving, from the classifier, data indicating the likelihood that the particular candidate cut point is a correct cut point; selecting a set of one or more of the particular candidate cut points whose respective likelihoods satisfy a threshold; and using the set of candidate cut points to segment the sequence of strokes. 21. The system of claim 13 , wherein the operations further comprise: determining the threshold based on a user setting, a default setting, a number of the one or more particular candidate cut points, or a distribution of the likelihoods that the particular candidate cut points are correct cut points. | 0.551929 |
8,671,364 | 1 | 7 | 1. An apparatus, comprising: a logic device; and an information visualization application operative on the logic device, the information visualization application comprising a multivariable presentation component arranged to generate a multivariable decomposition visualization to present hierarchical information for a response variable and multiple reporting variables defined for the response variable in a single user interface view, the multivariable decomposition visualization comprising multiple graphical user interface (GUI) elements each representing a reporting variable value of multiple reporting variables for multiple hierarchical levels, with a GUI element of a reporting variable value of a reporting variable of a hierarchical level selectable for decomposition into multiple GUI elements representing reporting variable values of a different reporting variable for a different hierarchical level, the selectable GUI element of the hierarchical level positioned adjacent to the decomposed GUI elements of the different hierarchical level when the selectable GUI element is selected for decomposition. | 1. An apparatus, comprising: a logic device; and an information visualization application operative on the logic device, the information visualization application comprising a multivariable presentation component arranged to generate a multivariable decomposition visualization to present hierarchical information for a response variable and multiple reporting variables defined for the response variable in a single user interface view, the multivariable decomposition visualization comprising multiple graphical user interface (GUI) elements each representing a reporting variable value of multiple reporting variables for multiple hierarchical levels, with a GUI element of a reporting variable value of a reporting variable of a hierarchical level selectable for decomposition into multiple GUI elements representing reporting variable values of a different reporting variable for a different hierarchical level, the selectable GUI element of the hierarchical level positioned adjacent to the decomposed GUI elements of the different hierarchical level when the selectable GUI element is selected for decomposition. 7. The apparatus of claim 1 , the multivariable presentation component operative to generate the multivariable decomposition visualization with the selectable GUI element and the decomposed GUI elements each having an arrow shape comprising a line end and a pointer end, with a pointer end of the selectable GUI element positioned adjacent to a line end for each of the decomposed GUI elements when the selectable GUI element is selected for decomposition. | 0.646512 |
8,139,900 | 1 | 2 | 1. A computer-implemented method comprising: supplementing an image with metadata that identifies an object in the image; providing the metadata with the image when the image is rendered so that at least a portion of the image depicting the object is interactive, in order to enable a user to enter a selection input that corresponds to the object; and performing an action in response to detecting the selection input that corresponds to the object depicted in the image. | 1. A computer-implemented method comprising: supplementing an image with metadata that identifies an object in the image; providing the metadata with the image when the image is rendered so that at least a portion of the image depicting the object is interactive, in order to enable a user to enter a selection input that corresponds to the object; and performing an action in response to detecting the selection input that corresponds to the object depicted in the image. 2. The method of claim 1 , wherein performing an action includes specifying a content for display in response to detecting the selection input. | 0.58908 |
8,674,850 | 6 | 7 | 6. The computer-implemented method of claim 5 , further comprising determining a display and signaling level code corresponding to the relevance code, wherein providing the notification associated with the weather information according to the level of relevance for the target phase of flight comprises providing the notification associated with the weather information according to the display and signaling level code. | 6. The computer-implemented method of claim 5 , further comprising determining a display and signaling level code corresponding to the relevance code, wherein providing the notification associated with the weather information according to the level of relevance for the target phase of flight comprises providing the notification associated with the weather information according to the display and signaling level code. 7. The computer-implemented method of claim 6 , wherein the notification comprises a textual portion and a graphical portion, wherein at least a portion of the weather information is presented as text in the textual portion and concurrently presented as a graphical representation in the graphical portion, and wherein the graphical representation is visibly identifiable as representing the text. | 0.5 |
10,083,176 | 2 | 3 | 2. The method of claim 1 , wherein the term is selected from a plurality of terms from a corpus of the plurality of documents. | 2. The method of claim 1 , wherein the term is selected from a plurality of terms from a corpus of the plurality of documents. 3. The method of claim 2 , wherein the plurality of terms do not include terms with low Inverse Document Frequency and do not include terms with low global term frequency associated with the plurality of documents. | 0.5 |
8,897,151 | 3 | 4 | 3. The system of claim 1 further comprises a grammar optimizer configured to receive the extraction specification which is incomplete in relation to the context-free grammar and operable to translate the extraction specification to a complete extraction specification using the context-free grammar. | 3. The system of claim 1 further comprises a grammar optimizer configured to receive the extraction specification which is incomplete in relation to the context-free grammar and operable to translate the extraction specification to a complete extraction specification using the context-free grammar. 4. The system of claim 3 wherein the grammar optimizer transforms the complete extraction specification to an equivalent regular grammar. | 0.857588 |
7,539,669 | 13 | 15 | 13. The computer-implemented method of claim 9 , wherein the object types include a first object type that is in a structured format and a second object type that is in an unstructured format. | 13. The computer-implemented method of claim 9 , wherein the object types include a first object type that is in a structured format and a second object type that is in an unstructured format. 15. The computer-implemented method of claim 13 , wherein the first object type includes objects that are associated with a structured database. | 0.789474 |
5,434,929 | 9 | 10 | 9. A method as recited in claim 8 wherein said step of positioning further comprises the substeps of: determining a probability weight associated with a first variant group; shading said first variant group in said first character space position in accordance with said probability weight; defining a next character space position; and positioning a next variant group within said next character space position. | 9. A method as recited in claim 8 wherein said step of positioning further comprises the substeps of: determining a probability weight associated with a first variant group; shading said first variant group in said first character space position in accordance with said probability weight; defining a next character space position; and positioning a next variant group within said next character space position. 10. A method as recited in claim 9 wherein said step of positioning is performed for each variant group. | 0.5 |
4,320,451 | 32 | 33 | 32. A trap semaphore as recited in claim 31 including eighth means responsive to said seventh means for dequeueing said message notification from said second means. | 32. A trap semaphore as recited in claim 31 including eighth means responsive to said seventh means for dequeueing said message notification from said second means. 33. A trap semaphore as recited in claim 32 including ninth means responsive to said eighth means for assigning said dequeued message notification to said preferential mode of said second process. | 0.5 |
9,881,616 | 8 | 9 | 8. The method according to claim 6 , wherein said method comprises: at predefined intervals during said providing, by the voice biometrics system, determining that sound within the received at least one microphone output signal matches the voice model. | 8. The method according to claim 6 , wherein said method comprises: at predefined intervals during said providing, by the voice biometrics system, determining that sound within the received at least one microphone output signal matches the voice model. 9. The method according to claim 8 , wherein said predefined intervals are predetermined numbers of words. | 0.5 |
9,535,895 | 16 | 19 | 16. An electronic book reader, comprising: a display upon which to display electronic content of different languages; one or more processors; memory containing instructions that are executable by the one or more processors to perform actions comprising: displaying electronic content on the display, the electronic content including text; identifying multiple n-grams of at least a portion of the electronic content; for a first language: calculating a first probability based at least in part on a frequency of occurrence, in the first language, of a first sample n-gram of the multiple n-grams; calculating a second probability based at least in part on a frequency of occurrence, in the first language, of a second sample n-gram of the multiple n-grams; generating a first average based at least in part on the first probability and the second probability; for a second language: calculating a third probability based at least in part on a frequency of occurrence, in the second language, of the first sample n-gram of the multiple sample n-grams; calculating a fourth probability based at least in part on a frequency of occurrence, in the second language, of the second sample n-gram of the multiple n-grams; generating a second average based at least in part on the third probability and the fourth probability; determining a language of the sample electronic text based at least in part on comparing at least the first average and the second average; receiving designation of a first word within the electronic content; looking up a meaning of the designated first word in a dictionary of the determined language; and presenting the meaning of the designated word to the user. | 16. An electronic book reader, comprising: a display upon which to display electronic content of different languages; one or more processors; memory containing instructions that are executable by the one or more processors to perform actions comprising: displaying electronic content on the display, the electronic content including text; identifying multiple n-grams of at least a portion of the electronic content; for a first language: calculating a first probability based at least in part on a frequency of occurrence, in the first language, of a first sample n-gram of the multiple n-grams; calculating a second probability based at least in part on a frequency of occurrence, in the first language, of a second sample n-gram of the multiple n-grams; generating a first average based at least in part on the first probability and the second probability; for a second language: calculating a third probability based at least in part on a frequency of occurrence, in the second language, of the first sample n-gram of the multiple sample n-grams; calculating a fourth probability based at least in part on a frequency of occurrence, in the second language, of the second sample n-gram of the multiple n-grams; generating a second average based at least in part on the third probability and the fourth probability; determining a language of the sample electronic text based at least in part on comparing at least the first average and the second average; receiving designation of a first word within the electronic content; looking up a meaning of the designated first word in a dictionary of the determined language; and presenting the meaning of the designated word to the user. 19. The electronic book reader of claim 16 , wherein the at least a portion of the electronic content comprises at least a paragraph that contains the designated first word. | 0.810722 |
9,633,661 | 3 | 12 | 3. A portable device comprising: a microphone; a talk actuator; a power detector configured to detect a first power state and a second power state of the portable device; the portable device being configured to operate in a first mode when in the first power state and a second mode when in the second power state; wherein operating in the first mode comprises: detecting actuation of the talk actuator; generating, based at least in part on the actuation of the talk actuator, first audio data corresponding to first speech input; sending the first audio data to a speech support service server that is external to the portable device; receiving second audio data from the speech support service server, wherein the second audio data is based at least in part on the first audio data; and outputting audible content corresponding to the second audio data; and wherein operating in the second mode comprises: receiving second speech input; generating third audio data corresponding to the second speech input; and analyzing the third audio data. | 3. A portable device comprising: a microphone; a talk actuator; a power detector configured to detect a first power state and a second power state of the portable device; the portable device being configured to operate in a first mode when in the first power state and a second mode when in the second power state; wherein operating in the first mode comprises: detecting actuation of the talk actuator; generating, based at least in part on the actuation of the talk actuator, first audio data corresponding to first speech input; sending the first audio data to a speech support service server that is external to the portable device; receiving second audio data from the speech support service server, wherein the second audio data is based at least in part on the first audio data; and outputting audible content corresponding to the second audio data; and wherein operating in the second mode comprises: receiving second speech input; generating third audio data corresponding to the second speech input; and analyzing the third audio data. 12. The portable device of claim 3 , wherein operating the portable device in the second mode further comprises: detecting an utterance of a trigger expression in the second speech input; and generating fourth audio data corresponding to third speech input based at least in part on detecting the utterance of the trigger expression. | 0.615473 |
9,569,438 | 1 | 4 | 1. A computer-implemented method, comprising: accessing, by at least one processor, a corpus of documents; determining, by the at least one processor, that a particular document by a particular author and in the corpus of documents includes two or more different content pieces that each occur in at least one of one or more other documents in the corpus of documents; determining, by the at least one processor, a quantity of (i) other documents in the corpus of documents whose content pieces are included in the particular document by the particular author and in the corpus of documents, or (ii) authors associated with the other documents in the corpus of documents whose content pieces are included in the particular document by the particular author and in the corpus of documents; adjusting, by the at least one processor, a rank of the particular author in relation to other authors based at least in part on the quantity of (i) other documents in the corpus of documents whose content pieces are included in the particular document by the particular author and in the corpus of documents, or (ii) authors associated with the other documents in the corpus of documents whose content pieces are included in the particular document by the particular author and in the corpus of documents; and indexing, by the at least one processor, a quantity of the particular document and other documents by the particular author at a greater frequency than a quantity of documents by another author who is ranked lower than the particular author, wherein the quantity of the particular document and other documents by the particular author is greater than the quantity of documents by the other author who is ranked lower than the particular author. | 1. A computer-implemented method, comprising: accessing, by at least one processor, a corpus of documents; determining, by the at least one processor, that a particular document by a particular author and in the corpus of documents includes two or more different content pieces that each occur in at least one of one or more other documents in the corpus of documents; determining, by the at least one processor, a quantity of (i) other documents in the corpus of documents whose content pieces are included in the particular document by the particular author and in the corpus of documents, or (ii) authors associated with the other documents in the corpus of documents whose content pieces are included in the particular document by the particular author and in the corpus of documents; adjusting, by the at least one processor, a rank of the particular author in relation to other authors based at least in part on the quantity of (i) other documents in the corpus of documents whose content pieces are included in the particular document by the particular author and in the corpus of documents, or (ii) authors associated with the other documents in the corpus of documents whose content pieces are included in the particular document by the particular author and in the corpus of documents; and indexing, by the at least one processor, a quantity of the particular document and other documents by the particular author at a greater frequency than a quantity of documents by another author who is ranked lower than the particular author, wherein the quantity of the particular document and other documents by the particular author is greater than the quantity of documents by the other author who is ranked lower than the particular author. 4. The method of claim 1 , wherein adjusting the rank of the particular author comprises adjusting the rank of the particular author in relation to other authors based on the quantity of (i) other documents in the corpus of documents whose content pieces are included in the particular document by the particular author and in the corpus of documents, or (ii) authors associated with the other documents in the corpus of documents whose content pieces are included in the particular document by the particular author and in the corpus of documents. | 0.5 |
8,731,918 | 1 | 10 | 1. A method for categorizing interactions in a call center of an organization, comprising: capturing in the call center at least one vocal interaction and at least one non-vocal interaction, using logging or capturing devices, wherein the at least one vocal interaction and the at least one non-vocal interaction are captured in accordance with pre-defined rules that regulate which interaction is to be captured, and wherein the at least one vocal interaction and the at least one non-vocal interaction having dissimilar contents of common semantic context; retrieving at least one first word from the at least one vocal interaction; retrieving at least one second word from the at least one non-vocal interaction; assigning the at least one vocal interaction into a first category, wherein the first category is based on technical data of the at least one vocal interaction, using the at least one first word; assigning the at least one non-vocal interaction into a second category, wherein the second category is based on technical data of the at least one non-vocal interaction, using the at least one second word; and associating the first category and the second category into a multi-channel category based on the common semantic context thereof, thus aggregating the at least one vocal interaction and the at least one non-vocal interaction. | 1. A method for categorizing interactions in a call center of an organization, comprising: capturing in the call center at least one vocal interaction and at least one non-vocal interaction, using logging or capturing devices, wherein the at least one vocal interaction and the at least one non-vocal interaction are captured in accordance with pre-defined rules that regulate which interaction is to be captured, and wherein the at least one vocal interaction and the at least one non-vocal interaction having dissimilar contents of common semantic context; retrieving at least one first word from the at least one vocal interaction; retrieving at least one second word from the at least one non-vocal interaction; assigning the at least one vocal interaction into a first category, wherein the first category is based on technical data of the at least one vocal interaction, using the at least one first word; assigning the at least one non-vocal interaction into a second category, wherein the second category is based on technical data of the at least one non-vocal interaction, using the at least one second word; and associating the first category and the second category into a multi-channel category based on the common semantic context thereof, thus aggregating the at least one vocal interaction and the at least one non-vocal interaction. 10. The method of claim 1 further comprising normalization and injection of the at least one vocal interaction or the least one non-vocal interaction into a unified format. | 0.627706 |
10,102,851 | 21 | 26 | 21. One or more non-transitory computer readable media comprising executable code that, when executed, cause one or more computing devices to perform a process comprising: generating first speech recognition results using a first portion of a plurality of sequential portions of audio data, wherein the audio data represents a user utterance, and wherein the first speech recognition results are generated without using any sequential portion, of the plurality of sequential portions of audio data, following the first portion; generating a first semantic representation of the user utterance using the first speech recognition results and without using speech recognition results representing a second portion, of the plurality of sequential portions of audio data, following the first portion; determining a first score indicating a degree of confidence in the first semantic representation; determining that the first score satisfies a first threshold; in response to determining that the first score satisfies the first threshold: identifying a content item being requested by the user utterance; and presenting a notification indicating the content item is ready for presentation; generating second speech recognition results using the first portion and the second portion of the plurality of sequential portions of audio data; generating a second semantic representation of the user utterance using the second speech recognition results; determining a second score indicating a degree to which the second semantic representation is the same as the first semantic representation; determining that the second score satisfies a second threshold; and in response to determining that the second score satisfies the second threshold, initiating presentation of the content item. | 21. One or more non-transitory computer readable media comprising executable code that, when executed, cause one or more computing devices to perform a process comprising: generating first speech recognition results using a first portion of a plurality of sequential portions of audio data, wherein the audio data represents a user utterance, and wherein the first speech recognition results are generated without using any sequential portion, of the plurality of sequential portions of audio data, following the first portion; generating a first semantic representation of the user utterance using the first speech recognition results and without using speech recognition results representing a second portion, of the plurality of sequential portions of audio data, following the first portion; determining a first score indicating a degree of confidence in the first semantic representation; determining that the first score satisfies a first threshold; in response to determining that the first score satisfies the first threshold: identifying a content item being requested by the user utterance; and presenting a notification indicating the content item is ready for presentation; generating second speech recognition results using the first portion and the second portion of the plurality of sequential portions of audio data; generating a second semantic representation of the user utterance using the second speech recognition results; determining a second score indicating a degree to which the second semantic representation is the same as the first semantic representation; determining that the second score satisfies a second threshold; and in response to determining that the second score satisfies the second threshold, initiating presentation of the content item. 26. The one or more non-transitory computer readable media of claim 21 , wherein presenting the notification indicating the content item is ready for presentation comprises presenting a user interface of a content presentation application. | 0.606908 |
7,761,462 | 3 | 18 | 3. A computer readable medium including instructions stored thereon that when executed by a processor result in: partitioning a plurality of queries into subsets of queries and at least one database into subdatabases; designating searching tasks by associating each of the subsets of queries with one or more of the subdatabases; assigning at least one searching task to at least one computer of a group of computers operating in parallel; designating two or more subtasks as related tasks on a virtual shared memory bulletin board; executing the at least one searching task using the at least one computer of the group of computers operating in parallel; and generating a search result responsive to the executing. | 3. A computer readable medium including instructions stored thereon that when executed by a processor result in: partitioning a plurality of queries into subsets of queries and at least one database into subdatabases; designating searching tasks by associating each of the subsets of queries with one or more of the subdatabases; assigning at least one searching task to at least one computer of a group of computers operating in parallel; designating two or more subtasks as related tasks on a virtual shared memory bulletin board; executing the at least one searching task using the at least one computer of the group of computers operating in parallel; and generating a search result responsive to the executing. 18. The computer readable medium of claim 3 where the generating of the search results occurs at a single computer of the group of computers operating in parallel. | 0.682879 |
9,396,270 | 9 | 14 | 9. A computer-implemented method, comprising: providing one or more recommendations to a user in response to a query related to the user by integrating contextual information of a context related to the user in a recommendation model while considering a granular structure of the context and the contextual information thereof, structural elements of the granular structure including at least a multiple of granular elements for location arranged in a hierarchy of levels along a location context granularity path, and a multiple of granular elements for time arranged in a hierarchy of levels along a time context granularity path, wherein context combination paths of the context are formed by a cross product of all context granularity paths in the granular structure, and wherein the providing includes: detecting the context of the user, determining the granular structure of the context including one or more structural elements of the granular structure based on the contextual information that characterizes the information related to the user, pre-filtering the contextual information according one or more context combination paths between different structural element of the granular structure, selecting historical network navigation data related to the user for comparison with each of the context combination paths, modeling each context combination path for the query in the recommendation model by calculating a recommendation performance for each different context combination path as compared with the selected historical network navigation data while considering the granular structure of the context and the contextual information thereof, and determining at least one of the context combination paths as having a best recommendation performance to provide the one or more recommendations to the user in response to the query. | 9. A computer-implemented method, comprising: providing one or more recommendations to a user in response to a query related to the user by integrating contextual information of a context related to the user in a recommendation model while considering a granular structure of the context and the contextual information thereof, structural elements of the granular structure including at least a multiple of granular elements for location arranged in a hierarchy of levels along a location context granularity path, and a multiple of granular elements for time arranged in a hierarchy of levels along a time context granularity path, wherein context combination paths of the context are formed by a cross product of all context granularity paths in the granular structure, and wherein the providing includes: detecting the context of the user, determining the granular structure of the context including one or more structural elements of the granular structure based on the contextual information that characterizes the information related to the user, pre-filtering the contextual information according one or more context combination paths between different structural element of the granular structure, selecting historical network navigation data related to the user for comparison with each of the context combination paths, modeling each context combination path for the query in the recommendation model by calculating a recommendation performance for each different context combination path as compared with the selected historical network navigation data while considering the granular structure of the context and the contextual information thereof, and determining at least one of the context combination paths as having a best recommendation performance to provide the one or more recommendations to the user in response to the query. 14. The method of claim 9 , the method further comprising: automatically detecting the context related to the user based on the historical network navigation data related to the user; automatically detecting dependencies between the structural elements of the context and the contextual information related to the user including dependencies between one or more granular levels and one or more granular components within each granular level; automatically discovering one or more potentially predictive relationships and dependencies between the structural elements of the granular structure of the context and the contextual information related to the user; and automatically detecting the one or more best context combinations for the query as a context aware recommendation provided to the user. | 0.5 |
9,170,785 | 11 | 16 | 11. A system for generating a parameter value for an executable statement, comprising: a processor and a memory coupled to the processor, the memory configured to store program code executable by the processor; the program code configured, when executed by the processor, to identify, in a plurality of program statements, an input statement, wherein the input statement comprises input information; the program code configured, when executed by the processor, to identify, in the plurality of program statements, an output statement associated with the input statement; wherein: the output statement comprises a reference to a temporary data set, and another of the plurality of program statements also includes a parameter reference to the temporary data set; the program code configured, when executed by the processor, to modify the input information by performing symbolic substitution on at least a portion of the input information to produce modified input information; and the program code configured, when executed by the processor, to output according to the output statement the modified input information to the temporary data set. | 11. A system for generating a parameter value for an executable statement, comprising: a processor and a memory coupled to the processor, the memory configured to store program code executable by the processor; the program code configured, when executed by the processor, to identify, in a plurality of program statements, an input statement, wherein the input statement comprises input information; the program code configured, when executed by the processor, to identify, in the plurality of program statements, an output statement associated with the input statement; wherein: the output statement comprises a reference to a temporary data set, and another of the plurality of program statements also includes a parameter reference to the temporary data set; the program code configured, when executed by the processor, to modify the input information by performing symbolic substitution on at least a portion of the input information to produce modified input information; and the program code configured, when executed by the processor, to output according to the output statement the modified input information to the temporary data set. 16. The system of claim 11 , wherein the input information includes: a first portion that does not comprise a symbolic variable; and a second portion comprising a symbolic variable; and wherein the performing symbolic substitution comprises performing symbolic substitution of the symbolic variable. | 0.5 |
9,953,088 | 13 | 16 | 13. A non-transitory computer-readable medium storing instructions, the instructions, when executed by one or more processors, cause the processors to perform operations comprising: receiving a user request, the user request including at least a speech input and seeks an informational answer or performance of a task, wherein: the user request is associated with a detected failure to provide a satisfactory response to the user request; and one or more crowd sourcing information sources relevant to the user request are queried in response to detecting the failure to provide a satisfactory response to the user request; and generating a response to the user request based on the one or more answers obtained from querying the one or more crowd sourcing information sources. | 13. A non-transitory computer-readable medium storing instructions, the instructions, when executed by one or more processors, cause the processors to perform operations comprising: receiving a user request, the user request including at least a speech input and seeks an informational answer or performance of a task, wherein: the user request is associated with a detected failure to provide a satisfactory response to the user request; and one or more crowd sourcing information sources relevant to the user request are queried in response to detecting the failure to provide a satisfactory response to the user request; and generating a response to the user request based on the one or more answers obtained from querying the one or more crowd sourcing information sources. 16. The non-transitory computer-readable medium of claim 13 , wherein the operations further comprise: prior to the one or more crowd sourcing information sources being queried: requesting user permission to send the information contained in the user request to the one or more crowd sourcing information sources; and receiving user permission to send the information contained in the user request to the one or more crowd sourcing information sources. | 0.5 |
8,943,395 | 22 | 23 | 22. The computer program product of claim 16 , further comprising instructions configured to cause a data processing apparatus to: searching, by the electronic device, for the search term within the plurality of documents, and displaying a plurality of pages, wherein the plurality of pages is selected so as to include pages from a plurality of different documents that include the search term. | 22. The computer program product of claim 16 , further comprising instructions configured to cause a data processing apparatus to: searching, by the electronic device, for the search term within the plurality of documents, and displaying a plurality of pages, wherein the plurality of pages is selected so as to include pages from a plurality of different documents that include the search term. 23. The computer program product of claim 22 , further comprising instructions configured to cause a data processing apparatus to: receive a selection of a displayed page of the plurality of pages, and, in response to receiving the selection of the displayed page, display a plurality of pages from a document associated with the selected displayed page. | 0.5 |
8,745,148 | 1 | 2 | 1. A method of providing user communications, the method comprising: providing an application software program for installation on a mobile computing device associated with a user; providing, by a computer system comprising a computing device and a network interface, a communication service to a web page of the user; receiving, at the computer system, from a visitor to the web page of the user a communication request to communicate with the user via a communication interface displayed in association with the web page of the user, the communication interface including a text entry field configured to receive a text message from the visitor for the user; causing, at least in part by the communication service, at least a first system to transmit a text message entered by the visitor into the text entry field to the application software program, wherein the application software program is installed on the user mobile computing device, without the visitor providing, and without revealing to the visitor, a mobile communication device phone address of the user; creating a contact record in a contact database accessible by the application software program; determining by the computer system if the user has a first account; and if the user does not have a first account, requesting that the user provide at least a first type of registration information prior to enabling the application software program to be provided to the user mobile computing device. | 1. A method of providing user communications, the method comprising: providing an application software program for installation on a mobile computing device associated with a user; providing, by a computer system comprising a computing device and a network interface, a communication service to a web page of the user; receiving, at the computer system, from a visitor to the web page of the user a communication request to communicate with the user via a communication interface displayed in association with the web page of the user, the communication interface including a text entry field configured to receive a text message from the visitor for the user; causing, at least in part by the communication service, at least a first system to transmit a text message entered by the visitor into the text entry field to the application software program, wherein the application software program is installed on the user mobile computing device, without the visitor providing, and without revealing to the visitor, a mobile communication device phone address of the user; creating a contact record in a contact database accessible by the application software program; determining by the computer system if the user has a first account; and if the user does not have a first account, requesting that the user provide at least a first type of registration information prior to enabling the application software program to be provided to the user mobile computing device. 2. The method as defined in claim 1 , the method further comprising: providing via the communication interface a voice control; and in response to a visitor selecting the voice control, causing, at least in part by the communication service, a phone call between the visitor and the user to be established without the visitor providing, and without revealing to the visitor, the mobile communication device phone address of the user. | 0.5 |
10,140,293 | 1 | 5 | 1. A computer-implemented method, comprising: receiving, at a viewing application executing in a foreground at a computing device, a touch input from a user, the touch input comprising: (i) a first portion indicating a selection of a single selected word in a document in a source language, the document being displayed in the viewing application, wherein the viewing application is not configured to perform language translation; and (ii) a second portion following the first portion and indicating a trigger command for obtaining a translation of the selected word from the source language to the target language; and in response to receiving the touch input: obtaining, by an operating system executing at the computing device, contextual information from at least a portion of a full screen capture of the document, wherein the full screen capture includes an entire viewable area of the computing device, wherein the portion of the full screen capture is associated with the selected word, and wherein the contextual information is indicative of a context of the selected word as it is used in the document; extracting, by the computing device, contextual features of the selected word using the contextual information, each contextual feature being a machine-learned feature indicative of a use of the selected word, wherein the contextual features include both (i) one or more first words from at least the portion of the full screen capture and (ii) an image from at least the portion of the full screen capture, wherein the image does not include the one or more first words and the selected word; providing, from the operating system and to a different translation application, the selected word and its contextual information, wherein receipt of the selected word and its contextual information causes the translation application to obtain and output potential translated words to the operating system, wherein the translation application (i) is distinct from the viewing application, (ii) is launched by the operating system in a background at the computing device or is already executing in the background at the computing device, and (iii) is configured to obtain the potential translated words using the selected word and its extracted contextual features; providing, from the operating system to the viewing application, the potential translated words, each potential translated word being a potential translation of the selected word to a different target language that is preferred by the user; and displaying, by the viewing application, the potential translated words. | 1. A computer-implemented method, comprising: receiving, at a viewing application executing in a foreground at a computing device, a touch input from a user, the touch input comprising: (i) a first portion indicating a selection of a single selected word in a document in a source language, the document being displayed in the viewing application, wherein the viewing application is not configured to perform language translation; and (ii) a second portion following the first portion and indicating a trigger command for obtaining a translation of the selected word from the source language to the target language; and in response to receiving the touch input: obtaining, by an operating system executing at the computing device, contextual information from at least a portion of a full screen capture of the document, wherein the full screen capture includes an entire viewable area of the computing device, wherein the portion of the full screen capture is associated with the selected word, and wherein the contextual information is indicative of a context of the selected word as it is used in the document; extracting, by the computing device, contextual features of the selected word using the contextual information, each contextual feature being a machine-learned feature indicative of a use of the selected word, wherein the contextual features include both (i) one or more first words from at least the portion of the full screen capture and (ii) an image from at least the portion of the full screen capture, wherein the image does not include the one or more first words and the selected word; providing, from the operating system and to a different translation application, the selected word and its contextual information, wherein receipt of the selected word and its contextual information causes the translation application to obtain and output potential translated words to the operating system, wherein the translation application (i) is distinct from the viewing application, (ii) is launched by the operating system in a background at the computing device or is already executing in the background at the computing device, and (iii) is configured to obtain the potential translated words using the selected word and its extracted contextual features; providing, from the operating system to the viewing application, the potential translated words, each potential translated word being a potential translation of the selected word to a different target language that is preferred by the user; and displaying, by the viewing application, the potential translated words. 5. The computer-implemented method of claim 1 , wherein the second portion of the touch input is a slide input in a specific direction. | 0.81405 |
9,665,628 | 1 | 6 | 1. A data classification system, comprising: an input interface configured to receive documents comprising data entries, at least some of the data entries having associated features represented directly in the documents; a data warehouse backed by a non-transitory computer readable storage medium and configured to store curated and classified data elements; a model registry storing a plurality of different model stacks, each model stack including at least one classification model and at least one confidence model that is separate from the at least classification model in the respective model stack; and processing resources including at least one processor and a memory, the memory storing instructions, the instructions being executed by the at least one processor to at least: inspect documents received via the input interface to identify, as heterogeneous input data, data entries and associated features located in the inspected documents; segment the heterogeneous input data into different, respectively homogenous processing groups, the different processing groups having associated levels of information uncertainty; for each different processing group, starting with the processing group associated with a lowest level of information uncertainty and moving upwardly: (a) identify one or more model stacks from the model registry to be executed on the respective processing group; (b) execute each identified model stack on the respective processing group to arrive at a classification result and a confidence level for each data entry in the respective processing group using the classification and confidence models in the respective model stack, wherein classification results map features from the data entries to predefined concepts associated with the classification models; (c) ensemble results from the execution of each identified model stack, using the classification results and the confidence levels, to group the data entries in the processing group into one of first and second classification type groups, the first classification type group corresponding to a confirmed classification and the second classification type group corresponding to an unconfirmed classification; (d) move each data entry in the first classification type group to a final result set; and (e) for the second classification type group: determine, for each data entry in the second classification type group, the processing group from among those processing groups not yet processed that is most closely related to it; and move each data entry in the second classification type group to the corresponding determined most closely related processing group; once all of the different processing groups have been processed in accordance with (a) through (e), treat as unclassified any data entries remaining in the second classification type group; store each data entry in the final result set, with or without additional processing, to the data warehouse, in accordance with the corresponding arrived at classification result; and reference records in the data warehouse in response to queries from a computer terminal. | 1. A data classification system, comprising: an input interface configured to receive documents comprising data entries, at least some of the data entries having associated features represented directly in the documents; a data warehouse backed by a non-transitory computer readable storage medium and configured to store curated and classified data elements; a model registry storing a plurality of different model stacks, each model stack including at least one classification model and at least one confidence model that is separate from the at least classification model in the respective model stack; and processing resources including at least one processor and a memory, the memory storing instructions, the instructions being executed by the at least one processor to at least: inspect documents received via the input interface to identify, as heterogeneous input data, data entries and associated features located in the inspected documents; segment the heterogeneous input data into different, respectively homogenous processing groups, the different processing groups having associated levels of information uncertainty; for each different processing group, starting with the processing group associated with a lowest level of information uncertainty and moving upwardly: (a) identify one or more model stacks from the model registry to be executed on the respective processing group; (b) execute each identified model stack on the respective processing group to arrive at a classification result and a confidence level for each data entry in the respective processing group using the classification and confidence models in the respective model stack, wherein classification results map features from the data entries to predefined concepts associated with the classification models; (c) ensemble results from the execution of each identified model stack, using the classification results and the confidence levels, to group the data entries in the processing group into one of first and second classification type groups, the first classification type group corresponding to a confirmed classification and the second classification type group corresponding to an unconfirmed classification; (d) move each data entry in the first classification type group to a final result set; and (e) for the second classification type group: determine, for each data entry in the second classification type group, the processing group from among those processing groups not yet processed that is most closely related to it; and move each data entry in the second classification type group to the corresponding determined most closely related processing group; once all of the different processing groups have been processed in accordance with (a) through (e), treat as unclassified any data entries remaining in the second classification type group; store each data entry in the final result set, with or without additional processing, to the data warehouse, in accordance with the corresponding arrived at classification result; and reference records in the data warehouse in response to queries from a computer terminal. 6. The system of claim 1 , wherein the classification results are structured to comport with at least one predefined hierarchical taxonomy. | 0.862919 |
8,571,187 | 1 | 9 | 1. A method comprising: detecting an electronic message; detecting a message term within the electronic message; searching for a match between the message term and a stored term in a database, wherein the database stores a plurality of stored terms and corresponding definitions; in response to finding a match between the message term and the stored term in the database, displaying a definition from the database that corresponds to the stored term, wherein the definition defines the message term; in response to not finding a match between the message term and the stored term in the database, determining whether a new definition corresponding to the message term should be added to the database based on whether the message term appears a threshold number of times in one or more electronic messages; in response to determining that the new definition should be added based on the message term appearing a threshold number of times in the one or more electronic messages, forming the new definition based on analysis of a context of the message term within the electronic message and of a context of the message term when used within one or more electronic messages other than the message; and, adding the new definition corresponding to the message term to the database. | 1. A method comprising: detecting an electronic message; detecting a message term within the electronic message; searching for a match between the message term and a stored term in a database, wherein the database stores a plurality of stored terms and corresponding definitions; in response to finding a match between the message term and the stored term in the database, displaying a definition from the database that corresponds to the stored term, wherein the definition defines the message term; in response to not finding a match between the message term and the stored term in the database, determining whether a new definition corresponding to the message term should be added to the database based on whether the message term appears a threshold number of times in one or more electronic messages; in response to determining that the new definition should be added based on the message term appearing a threshold number of times in the one or more electronic messages, forming the new definition based on analysis of a context of the message term within the electronic message and of a context of the message term when used within one or more electronic messages other than the message; and, adding the new definition corresponding to the message term to the database. 9. The method according to claim 1 wherein the electronic message is an electronic mail message. | 0.710843 |
6,144,939 | 1 | 7 | 1. A concatenative speech synthesizer, comprising: a database containing (a) demi-syllable waveform data associated with a plurality of demi-syllables and (b) filter parameter data associated with said plurality of demi-syllables; a unit selection system for extracting selected demi-syllable waveform data and filter parameters from said database that correspond to an input string to be synthesized; a waveform cross fade mechanism for joining pairs of extracted demi-syllable waveform data into syllable waveform signals; a filter parameter cross fade mechanism for defining a set of syllable-level filter data by interpolating said extracted filter parameters; and a filter module receptive of said set of syllable-level filter data and operative to process said syllable waveform signals to generate synthesized speech. | 1. A concatenative speech synthesizer, comprising: a database containing (a) demi-syllable waveform data associated with a plurality of demi-syllables and (b) filter parameter data associated with said plurality of demi-syllables; a unit selection system for extracting selected demi-syllable waveform data and filter parameters from said database that correspond to an input string to be synthesized; a waveform cross fade mechanism for joining pairs of extracted demi-syllable waveform data into syllable waveform signals; a filter parameter cross fade mechanism for defining a set of syllable-level filter data by interpolating said extracted filter parameters; and a filter module receptive of said set of syllable-level filter data and operative to process said syllable waveform signals to generate synthesized speech. 7. The synthesizer of claim 1 wherein said filter parameter cross fade mechanism performs sigmoidal interpolation between the respective extracted filter parameters of two demi-syllables. | 0.5 |
7,535,844 | 15 | 16 | 15. A circuit to interface communications channel comprising: serial-to-parallel receivers configured to receive serial data signals of data lanes of the communications channel and to recover characters of parallel format from the data lanes; decoders configured to determine character types of the characters recovered by the serial-to-parallel receivers of from the data lanes; detectors configured to detect a start-of-frame (SOF) character and an end-of-frame (EOF) character; a parser configured to parse the characters recovered by the serial-to-parallel receivers based on the character types determined by the decoders and based on placement of the characters relative to the SOF character detected for separating out valid data and invalid data from the characters recovered; a packer configured to group the valid data from the characters parsed to provide a valid data group; and a generator configured to provide the SOF character as a first sideband signal associated with the valid data group for sideband encapsulation. | 15. A circuit to interface communications channel comprising: serial-to-parallel receivers configured to receive serial data signals of data lanes of the communications channel and to recover characters of parallel format from the data lanes; decoders configured to determine character types of the characters recovered by the serial-to-parallel receivers of from the data lanes; detectors configured to detect a start-of-frame (SOF) character and an end-of-frame (EOF) character; a parser configured to parse the characters recovered by the serial-to-parallel receivers based on the character types determined by the decoders and based on placement of the characters relative to the SOF character detected for separating out valid data and invalid data from the characters recovered; a packer configured to group the valid data from the characters parsed to provide a valid data group; and a generator configured to provide the SOF character as a first sideband signal associated with the valid data group for sideband encapsulation. 16. The circuit of claim 15 , wherein the decoders resolve the character types from a group consisting of the SOF character, the EOF character, valid data characters, and an idle character; and the parser invalidates those of the characters determined to be outside a frame delineated by at least one of the SOF character and the EOF character detected. | 0.5 |
8,843,377 | 1 | 4 | 1. A foreign language processing system, comprising: a user control device; a processing device operatively connected to said user input device; a microphone for sensing a spoken word from the user and operatively connected to said processing device; and a display operatively connected to said processing device; wherein: said processing device executes computer readable code to select a foreign language word which corresponds to the meaning of a native language word entered by a user using said user control device; wherein: said processing device executes computer readable code to create a first visual representation of frequency relationships within said foreign language word for output on said display; and wherein: said first visual representation is generated according to a method comprising the steps of: (a) placing twelve labels in a pattern of a circle, said twelve labels corresponding to twelve respective frequencies, such that moving clockwise or counter-clockwise between adjacent ones of said labels represents a first frequency interval; (b) identifying an occurrence of a first frequency within the foreign language word; (c) identifying an occurrence of a second frequency within the foreign language word; (d) identifying a first label corresponding to the first frequency; (e) identifying a second label corresponding to the second frequency; (f) creating a first line connecting the first label and the second label, wherein: (1) the first line is a first color if the first frequency and the second frequency are separated by the first frequency interval: (2) the first line is a second color if the first frequency and the second frequency are separated by a first multiple of the first frequency interval; (3) the first line is a third color if the first frequency and the second frequency are separated by a second multiple of the first frequency interval; (4) the first line is a fourth color if the first frequency and the second frequency are separated by a third multiple of the first frequency interval; (5) the first line is a fifth color if the first frequency and the second frequency are separated by a fourth multiple of the first frequency interval; and (6) the first line is a sixth color if the first frequency and the second frequency are separated by a fifth multiple of the first frequency interval. | 1. A foreign language processing system, comprising: a user control device; a processing device operatively connected to said user input device; a microphone for sensing a spoken word from the user and operatively connected to said processing device; and a display operatively connected to said processing device; wherein: said processing device executes computer readable code to select a foreign language word which corresponds to the meaning of a native language word entered by a user using said user control device; wherein: said processing device executes computer readable code to create a first visual representation of frequency relationships within said foreign language word for output on said display; and wherein: said first visual representation is generated according to a method comprising the steps of: (a) placing twelve labels in a pattern of a circle, said twelve labels corresponding to twelve respective frequencies, such that moving clockwise or counter-clockwise between adjacent ones of said labels represents a first frequency interval; (b) identifying an occurrence of a first frequency within the foreign language word; (c) identifying an occurrence of a second frequency within the foreign language word; (d) identifying a first label corresponding to the first frequency; (e) identifying a second label corresponding to the second frequency; (f) creating a first line connecting the first label and the second label, wherein: (1) the first line is a first color if the first frequency and the second frequency are separated by the first frequency interval: (2) the first line is a second color if the first frequency and the second frequency are separated by a first multiple of the first frequency interval; (3) the first line is a third color if the first frequency and the second frequency are separated by a second multiple of the first frequency interval; (4) the first line is a fourth color if the first frequency and the second frequency are separated by a third multiple of the first frequency interval; (5) the first line is a fifth color if the first frequency and the second frequency are separated by a fourth multiple of the first frequency interval; and (6) the first line is a sixth color if the first frequency and the second frequency are separated by a fifth multiple of the first frequency interval. 4. The system of claim 1 , wherein the first color is red, the second color is orange, the third color is yellow, the fourth color is green, the fifth color is blue and the sixth color is purple. | 0.780405 |
8,930,182 | 1 | 4 | 1. A method for voice transformation, comprising: transforming a source speech of a person using transformation parameters, wherein the transforming comprises modifying the source speech to sound as if the source speech were spoken by a different person; and encoding information on the transformation parameters in an output speech using steganography, wherein the source speech can be reconstructed using the output speech and the information on the transformation parameters, and wherein at least one of the transforming and the encoding is performed by a processor. | 1. A method for voice transformation, comprising: transforming a source speech of a person using transformation parameters, wherein the transforming comprises modifying the source speech to sound as if the source speech were spoken by a different person; and encoding information on the transformation parameters in an output speech using steganography, wherein the source speech can be reconstructed using the output speech and the information on the transformation parameters, and wherein at least one of the transforming and the encoding is performed by a processor. 4. The method as claimed in claim 1 , wherein the information on the transformation parameters is usable to reconstruct the output speech to a close approximation to the source speech. | 0.836299 |
8,463,769 | 14 | 17 | 14. The system of claim 12 , further comprising logic that takes remedial measures if the at least one search term is identified as a potential missing at least one search term. | 14. The system of claim 12 , further comprising logic that takes remedial measures if the at least one search term is identified as a potential missing at least one search term. 17. The system of claim 14 , wherein the logic that takes remedial measures if the search of the electronic repository does not associate the at least one search term with the item further comprises logic that associates the at least one search term with at least one attribute associated with the item in the electronic repository. | 0.521614 |
7,502,731 | 1 | 18 | 1. A system for performing a speech recognition procedure, comprising: a sound transducer device that captures and converts a spoken utterance into input speech data for performing said speech recognition procedure; a recognizer configured to compare said input speech data to dictionary entries from a dictionary that is implemented by utilizing a mixed-language technique that incorporates multiple different languages into said dictionary entries, said dictionary being implemented to include dictionary entries that represent phone strings of a Cantonese language without utilizing corresponding tonal information as part of said phone strings; and a processor configured to control said recognizer to thereby perform said speech recognition procedure to generate and output one or more recognized words as a speech recognition result. | 1. A system for performing a speech recognition procedure, comprising: a sound transducer device that captures and converts a spoken utterance into input speech data for performing said speech recognition procedure; a recognizer configured to compare said input speech data to dictionary entries from a dictionary that is implemented by utilizing a mixed-language technique that incorporates multiple different languages into said dictionary entries, said dictionary being implemented to include dictionary entries that represent phone strings of a Cantonese language without utilizing corresponding tonal information as part of said phone strings; and a processor configured to control said recognizer to thereby perform said speech recognition procedure to generate and output one or more recognized words as a speech recognition result. 18. The system of claim 1 wherein said dictionary entries of said dictionary are divided into a Cantonese category, an English category, and a mixed Cantonese-English category. | 0.752113 |
9,158,746 | 13 | 16 | 13. A computer system for managing concurrent editing in a collaborative editing environment, the computer system comprising: one or more processors, one or more computer-readable memories and one or more computer-readable storage media, and program instructions, stored on at least one of the one or more storage media for execution by at least one of the one or more processors via at least one of the one or more computer-readable memories, the program instructions comprising: program instructions to receive an input to edit an electronic document from a first editor through a first user interface; program instructions to track a cursor within the electronic document associated with the first user interface; program instructions to lock content of the electronic document within a proximity of the cursor associated with the first user interface to prevent access to the content of the electronic document within the proximity of the cursor by one or more second editors, wherein scope of the proximity and a length of a delay associated with the cursor are based, at least in part, on one or more dynamic rules, wherein the scope of the proximity of the cursor and the length of the delay are based, at least in part, on a nested relationship of content of the electronic document, wherein a length of the delay at a word-level is greater than a length of the delay at a sentence-level, and the length of the delay at a sentence-level is greater than a length of the delay at a paragraph-level; and program instructions to, responsive to the cursor moving to a new location within the electronic document, unlock the content no longer in the proximity of the cursor. | 13. A computer system for managing concurrent editing in a collaborative editing environment, the computer system comprising: one or more processors, one or more computer-readable memories and one or more computer-readable storage media, and program instructions, stored on at least one of the one or more storage media for execution by at least one of the one or more processors via at least one of the one or more computer-readable memories, the program instructions comprising: program instructions to receive an input to edit an electronic document from a first editor through a first user interface; program instructions to track a cursor within the electronic document associated with the first user interface; program instructions to lock content of the electronic document within a proximity of the cursor associated with the first user interface to prevent access to the content of the electronic document within the proximity of the cursor by one or more second editors, wherein scope of the proximity and a length of a delay associated with the cursor are based, at least in part, on one or more dynamic rules, wherein the scope of the proximity of the cursor and the length of the delay are based, at least in part, on a nested relationship of content of the electronic document, wherein a length of the delay at a word-level is greater than a length of the delay at a sentence-level, and the length of the delay at a sentence-level is greater than a length of the delay at a paragraph-level; and program instructions to, responsive to the cursor moving to a new location within the electronic document, unlock the content no longer in the proximity of the cursor. 16. The computer system of claim 13 , the scope of the proximity of the cursor and the length of the delay are based, at least in part, on a classified type of electronic document that is being edited. | 0.690769 |
9,829,984 | 1 | 39 | 1. A computer-implemented method for recognizing a visual gesture, the method comprising: receiving a visual gesture formed by a part of a human body, the visual gesture being captured in a video having a plurality of video frames; determining a region of interest (ROI) in the plurality of video frames of the video based on motion vectors associated with the part of the human body, a centroid of the ROI aligned to be a centroid of a cluster of the motion vectors; selecting a visual gesture recognition process based on a user selection of a visual gesture recognition process from a plurality of visual gesture recognition processes; applying the selected visual gesture recognition process to the plurality of video frames to recognize the visual gesture; determining variations in the centroid, shape, and size of an object within the ROI of the plurality of video frames, the centroid, shape, and size of the object changing according to motion of the object in the plurality of video frames in an affine motion model, wherein said determination of the variations in the centroid, shape and size of the object within the ROI is performed by a track-learning-detection-type (TLD-type) process, wherein the TLD-type process is a signal processing scheme in which following functions are performed simultaneously: object tracking, by use of motion estimation in the affine motion model, either using optical flow, or block-based motion estimation and employing estimation error metrics comprising a sum of absolute differences (SAD) and normalized correlation coefficient (NCC); object feature learning, which automatically learns features of objects within the ROI, the features including size, centroids, statistics and edges; and object detection comprising: feature extraction employing edge analysis, spatial transforms, and background subtraction, feature analysis employing clustering and vector quantization, and feature matching employing signal matching using similarity metrics, neural networks, support vector machines, and maximum posteriori probability; and deriving three or more dimensional information and relationships of objects contained in the visual gesture from the plurality of video frames capturing the visual gesture based on the analysis of the variations in the centroid, shape, and size of the object within the ROI. | 1. A computer-implemented method for recognizing a visual gesture, the method comprising: receiving a visual gesture formed by a part of a human body, the visual gesture being captured in a video having a plurality of video frames; determining a region of interest (ROI) in the plurality of video frames of the video based on motion vectors associated with the part of the human body, a centroid of the ROI aligned to be a centroid of a cluster of the motion vectors; selecting a visual gesture recognition process based on a user selection of a visual gesture recognition process from a plurality of visual gesture recognition processes; applying the selected visual gesture recognition process to the plurality of video frames to recognize the visual gesture; determining variations in the centroid, shape, and size of an object within the ROI of the plurality of video frames, the centroid, shape, and size of the object changing according to motion of the object in the plurality of video frames in an affine motion model, wherein said determination of the variations in the centroid, shape and size of the object within the ROI is performed by a track-learning-detection-type (TLD-type) process, wherein the TLD-type process is a signal processing scheme in which following functions are performed simultaneously: object tracking, by use of motion estimation in the affine motion model, either using optical flow, or block-based motion estimation and employing estimation error metrics comprising a sum of absolute differences (SAD) and normalized correlation coefficient (NCC); object feature learning, which automatically learns features of objects within the ROI, the features including size, centroids, statistics and edges; and object detection comprising: feature extraction employing edge analysis, spatial transforms, and background subtraction, feature analysis employing clustering and vector quantization, and feature matching employing signal matching using similarity metrics, neural networks, support vector machines, and maximum posteriori probability; and deriving three or more dimensional information and relationships of objects contained in the visual gesture from the plurality of video frames capturing the visual gesture based on the analysis of the variations in the centroid, shape, and size of the object within the ROI. 39. The method of claim 1 , wherein the visual gesture is a visual code representing one of a plurality of visual gestures and corresponding user input commands. | 0.831942 |
9,418,566 | 11 | 12 | 11. The apparatus of claim 8 , wherein the comprehensiveness metric comprises a difficulty metric, wherein determining the value for the comprehensiveness metric comprises: mapping the sets of concepts to topics in the syllabus; for each given question in the question paper, building a tree of topics comprising a root node representing a central topic, at least one child node representing a topic having concepts that help in understanding concepts of the central topic, and at least one leaf node representing a topic having fundamental concepts; and determining a value of a difficulty metric for the given question to be equal to a depth of the tree. | 11. The apparatus of claim 8 , wherein the comprehensiveness metric comprises a difficulty metric, wherein determining the value for the comprehensiveness metric comprises: mapping the sets of concepts to topics in the syllabus; for each given question in the question paper, building a tree of topics comprising a root node representing a central topic, at least one child node representing a topic having concepts that help in understanding concepts of the central topic, and at least one leaf node representing a topic having fundamental concepts; and determining a value of a difficulty metric for the given question to be equal to a depth of the tree. 12. The apparatus of claim 11 , wherein determining the value for the comprehensiveness metric further comprises: building a forest of the trees corresponding to the questions of the question paper; and determining value of the difficulty metric for the question paper to be equal to a depth of the forest. | 0.5 |
9,830,044 | 9 | 13 | 9. One or more non-transitory computer storage media storing computer-readable instructions that, when executed, instruct one or more processors to perform operations comprising: analyzing a previous conversation with a first virtual assistant to identify a topic that has been discussed in the previous conversation more than a predetermined number of times; identifying a second virtual assistant that is not currently associated with an account of a user and that is configured to perform one or more tasks that are relevant to the topic that has been discussed in the previous conversation more than the predetermined number of times; providing, to a device associated with the user, a suggestion to add the second virtual assistant to a group of virtual assistants that are associated with the account of the user, the group of virtual assistants being configured with different personas; receiving user input indicating a selection of the second virtual assistant to be added to the group of virtual assistants; and based at least in part on the user input, adding the second virtual assistant to the group of virtual assistants by associating the second virtual assistant with the account of the user. | 9. One or more non-transitory computer storage media storing computer-readable instructions that, when executed, instruct one or more processors to perform operations comprising: analyzing a previous conversation with a first virtual assistant to identify a topic that has been discussed in the previous conversation more than a predetermined number of times; identifying a second virtual assistant that is not currently associated with an account of a user and that is configured to perform one or more tasks that are relevant to the topic that has been discussed in the previous conversation more than the predetermined number of times; providing, to a device associated with the user, a suggestion to add the second virtual assistant to a group of virtual assistants that are associated with the account of the user, the group of virtual assistants being configured with different personas; receiving user input indicating a selection of the second virtual assistant to be added to the group of virtual assistants; and based at least in part on the user input, adding the second virtual assistant to the group of virtual assistants by associating the second virtual assistant with the account of the user. 13. The one or more non-transitory computer storage media of claim 9 , wherein the operations further comprise: analyzing location information describing a previous or future location of the device associated with the user; wherein the identifying the second virtual assistant is based at least in part on the analysis of the location information. | 0.51264 |
9,824,331 | 6 | 7 | 6. A system for handling social media inputs in an existing multi-channel converged CSTA-based infrastructure, the system comprising: a crawler to obtain social media posts as social media inputs; a sentiment analyzer using the social media inputs capable to generate sentiment analysis inputs based on textual and non-textual content including images describing facial expression and emotions; an indexer to determine a priority and potential churn index from the sentiment analysis inputs based on predetermined parameters and to translate the priority and potential churn index into qualifiers using predetermined combinations of a severity index and an anticipated churn index; an adapter capable to adapt the qualifiers for the social media posts into CSTA specifications to create adapted posts; a router capable to dynamically route the adapted posts, wherein the router routes the adapted posts by leveraging “one number service” of the CSTA-based infrastructure resulting in routed posts; a social media interworking gateway to adapt the routed posts to the CSTA specification; a CSTA adaptation stack used by the social media interworking gateway to construct an enhanced payload; a CSTA protocol stack capable to receive service execution instructions from the CSTA adaptation stack through established SIP-CSTA sessions or TCP-based CSTA sessions with an option to choose ASN.1 encoding instead of XML encoding; an outbound campaign identifier to identify specific churn treatment based on the enhanced payload by instructing the social media interworking gateway through a REST based mechanism; and a graphical interface to configure prioritization related fields, weightage, and various thresholds for the churn index. | 6. A system for handling social media inputs in an existing multi-channel converged CSTA-based infrastructure, the system comprising: a crawler to obtain social media posts as social media inputs; a sentiment analyzer using the social media inputs capable to generate sentiment analysis inputs based on textual and non-textual content including images describing facial expression and emotions; an indexer to determine a priority and potential churn index from the sentiment analysis inputs based on predetermined parameters and to translate the priority and potential churn index into qualifiers using predetermined combinations of a severity index and an anticipated churn index; an adapter capable to adapt the qualifiers for the social media posts into CSTA specifications to create adapted posts; a router capable to dynamically route the adapted posts, wherein the router routes the adapted posts by leveraging “one number service” of the CSTA-based infrastructure resulting in routed posts; a social media interworking gateway to adapt the routed posts to the CSTA specification; a CSTA adaptation stack used by the social media interworking gateway to construct an enhanced payload; a CSTA protocol stack capable to receive service execution instructions from the CSTA adaptation stack through established SIP-CSTA sessions or TCP-based CSTA sessions with an option to choose ASN.1 encoding instead of XML encoding; an outbound campaign identifier to identify specific churn treatment based on the enhanced payload by instructing the social media interworking gateway through a REST based mechanism; and a graphical interface to configure prioritization related fields, weightage, and various thresholds for the churn index. 7. The system of claim 6 , wherein the qualifiers are selected from a group consisting of a high severity index and a high anticipated churn index, a high severity index and a medium anticipated churn index, severity index and a low anticipated churn index, and combinations thereof. | 0.598011 |
9,495,424 | 1 | 3 | 1. A method comprising: receiving a user-defined parameter for named entity recognition, wherein the user-defined parameter comprises a beginning position and one of a length or an ending position to define a section of a written work on which the named entity recognition is to be performed, wherein an individual position or the length are measured in one of chapters, pages, paragraphs, or words; recognizing, based at least in part on the user-defined parameter, one or more textual strings within the section of the written work, wherein a textual string of the one or more textual strings is associated with a named entity of a plurality of named entities within the portion of the written work; calculating, by one or more hardware processors, a significance value based at least in part on a number of the one or more textual strings; selecting a primary textual string from the one or more textual strings; and providing an ordered list of at least a portion of the plurality of named entities, wherein a position of the primary textual string within the ordered list is based at least in part on the significance value. | 1. A method comprising: receiving a user-defined parameter for named entity recognition, wherein the user-defined parameter comprises a beginning position and one of a length or an ending position to define a section of a written work on which the named entity recognition is to be performed, wherein an individual position or the length are measured in one of chapters, pages, paragraphs, or words; recognizing, based at least in part on the user-defined parameter, one or more textual strings within the section of the written work, wherein a textual string of the one or more textual strings is associated with a named entity of a plurality of named entities within the portion of the written work; calculating, by one or more hardware processors, a significance value based at least in part on a number of the one or more textual strings; selecting a primary textual string from the one or more textual strings; and providing an ordered list of at least a portion of the plurality of named entities, wherein a position of the primary textual string within the ordered list is based at least in part on the significance value. 3. The method of claim 1 , further comprising: determining a position of the textual string within the section of the written work; and calculating the significance value based at least in part on the position of the textual string within the section of the written work. | 0.717119 |
4,320,451 | 53 | 54 | 53. The method as recited in claim 52 including still a further step for determining whether or not said second of said plurality of processes is operating in said preferential mode. | 53. The method as recited in claim 52 including still a further step for determining whether or not said second of said plurality of processes is operating in said preferential mode. 54. The method for interprocess communication and synchronization in a general purpose computer system as recited in claim 53 including still another step for queueing said message of said trap-event occurrence on said second of said plurality of processes when said second of said plurality of processes is operating in said first preferential mode. | 0.5 |
7,711,812 | 21 | 23 | 21. The web service development system according to claim 16 , wherein the monitor tag inserter includes a reference to a monitor document containing a web service description language document for a monitoring service instantiation that executes on a monitoring server. | 21. The web service development system according to claim 16 , wherein the monitor tag inserter includes a reference to a monitor document containing a web service description language document for a monitoring service instantiation that executes on a monitoring server. 23. The web service development system according to claim 21 , wherein the web service development tool defines the functional web service to execute on the monitoring web server. | 0.556931 |
9,767,092 | 18 | 19 | 18. The computer program product claim 13 , wherein the computer usable program code further causes the computer hardware system to perform: selecting an alternate value for at least one feature of the complex information target from a plurality of candidate values based upon a confidence score. | 18. The computer program product claim 13 , wherein the computer usable program code further causes the computer hardware system to perform: selecting an alternate value for at least one feature of the complex information target from a plurality of candidate values based upon a confidence score. 19. The computer program product claim 18 , wherein the alternate value conforms to an allowable complex information target. | 0.5 |
7,890,499 | 10 | 11 | 10. A system comprising: one or more computers; and a computer-readable medium coupled to the one or more computers having instructions stored thereon which, when executed by the one or more computers, cause the one or more computers to perform operations comprising: (A) receiving a search query; (B) providing a first user interface, the first user interface displaying first search results specifying resources that a search engine has identified as being responsive to the search query, the first user interface further displaying a subject matter link identifying a particular subject matter that the search engine has identified based on the first search results, the particular subject matter being associated with a particular collection of records that the search engine has selected from among multiple collections of records, all records in the particular collection having a common attribute structure of data elements that pertain to the particular subject matter, (C) wherein the first user interface further displays a second subject matter link identifying a second subject matter that the search engine has also identified based on the first search results, the second subject matter being associated with a second collection of records that the search engine has selected from among the multiple collections of records, all records in the second collection having a common attribute structure of data elements that pertain to the second subject matter, the common attribute structure of the second collection being different than the common attribute structure of the particular collection; (D) receiving a selection of the subject matter link; (E) formatting second search results based on a template that is associated with the particular subject matter, each of the second search results specifying a respective record, each record being in the particular collection of records associated with the particular subject matter; and (F) providing a second user interface that displays the second search results and an interface element that is associated with the template. | 10. A system comprising: one or more computers; and a computer-readable medium coupled to the one or more computers having instructions stored thereon which, when executed by the one or more computers, cause the one or more computers to perform operations comprising: (A) receiving a search query; (B) providing a first user interface, the first user interface displaying first search results specifying resources that a search engine has identified as being responsive to the search query, the first user interface further displaying a subject matter link identifying a particular subject matter that the search engine has identified based on the first search results, the particular subject matter being associated with a particular collection of records that the search engine has selected from among multiple collections of records, all records in the particular collection having a common attribute structure of data elements that pertain to the particular subject matter, (C) wherein the first user interface further displays a second subject matter link identifying a second subject matter that the search engine has also identified based on the first search results, the second subject matter being associated with a second collection of records that the search engine has selected from among the multiple collections of records, all records in the second collection having a common attribute structure of data elements that pertain to the second subject matter, the common attribute structure of the second collection being different than the common attribute structure of the particular collection; (D) receiving a selection of the subject matter link; (E) formatting second search results based on a template that is associated with the particular subject matter, each of the second search results specifying a respective record, each record being in the particular collection of records associated with the particular subject matter; and (F) providing a second user interface that displays the second search results and an interface element that is associated with the template. 11. The system of claim 10 , wherein the second user interface displays the second search results in a table, wherein each row of the table refers to a single resource and each column of the table refers to a single attribute from the common attribute structure of the data elements. | 0.5 |
9,443,509 | 8 | 12 | 8. At least one non-transitory computer-readable storage medium having encoded thereon computer-executable instructions that, when executed by at least one computer, cause the at least one computer to carry out a method of processing results of a recognition by an automatic speech recognition (ASR) system on an utterance, the results comprising two or more results identified by the ASR system as likely to be accurate recognition results for the utterance, the two or more results comprising a first result and a second result, wherein the first result is identified by the ASR system as most likely among the two or more results to be an accurate recognition result, the method comprising: evaluating the first result using a medical fact extractor to extract a first set of one or more medical facts; evaluating the second result using the medical fact extractor to extract a second set of one or more medical facts; determining whether the first set of one or more medical facts has a meaning that is different in a medically significant way from a meaning of the second set of one or more medical facts; and in response to determining that the first set of one or more medical facts has a meaning that is different in a medically significant way from the meaning of the second set of one or more medical facts, triggering presentation of an alert to a user via a user interface. | 8. At least one non-transitory computer-readable storage medium having encoded thereon computer-executable instructions that, when executed by at least one computer, cause the at least one computer to carry out a method of processing results of a recognition by an automatic speech recognition (ASR) system on an utterance, the results comprising two or more results identified by the ASR system as likely to be accurate recognition results for the utterance, the two or more results comprising a first result and a second result, wherein the first result is identified by the ASR system as most likely among the two or more results to be an accurate recognition result, the method comprising: evaluating the first result using a medical fact extractor to extract a first set of one or more medical facts; evaluating the second result using the medical fact extractor to extract a second set of one or more medical facts; determining whether the first set of one or more medical facts has a meaning that is different in a medically significant way from a meaning of the second set of one or more medical facts; and in response to determining that the first set of one or more medical facts has a meaning that is different in a medically significant way from the meaning of the second set of one or more medical facts, triggering presentation of an alert to a user via a user interface. 12. The at least one computer-readable storage medium of claim 8 , wherein the method further comprises, prior to evaluating the first result or the second result using the medical fact extractor, processing a lattice produced by the ASR system representing the two or more results identified by the ASR system to generate a string of words for the first result and a string of words for the second result. | 0.753641 |
7,949,651 | 14 | 17 | 14. A directory assistance system comprising: an input device for receiving at least one first search term related to a residential listing to be found; a computer readable storage medium comprising a directory database configured to store at least a plurality of residential listings; a computer processor for: executing a search engine configured to search the directory database for a first set of residential listings using the at least one first search term, each of the first set of residential listings including an address or phone number; performing a reverse address or phone number search on the directory database using the addresses or phone numbers of the first set of residential listings to find a second set of residential listings having the same address or phone number as listings in the first set of residential listings; an indexer aggregating listings in the second set of residential listings with listings in the first set of residential listing that share the same address or phone number to form a third set of residential listings; a module configured to receive a second search term different than the first search term which is a cohabitant name of the residential listing to be found and to select at least one of the residential listings from the third set of residential listings using the second search term; and an output device for outputting the at least one selected listing that satisfies the second search term. | 14. A directory assistance system comprising: an input device for receiving at least one first search term related to a residential listing to be found; a computer readable storage medium comprising a directory database configured to store at least a plurality of residential listings; a computer processor for: executing a search engine configured to search the directory database for a first set of residential listings using the at least one first search term, each of the first set of residential listings including an address or phone number; performing a reverse address or phone number search on the directory database using the addresses or phone numbers of the first set of residential listings to find a second set of residential listings having the same address or phone number as listings in the first set of residential listings; an indexer aggregating listings in the second set of residential listings with listings in the first set of residential listing that share the same address or phone number to form a third set of residential listings; a module configured to receive a second search term different than the first search term which is a cohabitant name of the residential listing to be found and to select at least one of the residential listings from the third set of residential listings using the second search term; and an output device for outputting the at least one selected listing that satisfies the second search term. 17. The directory assistance system of claim 14 , further comprising a text-to-speech engine configured to provide the at least one listing that satisfies the second search term. | 0.768831 |
9,659,069 | 15 | 16 | 15. The non-transitory machine-readable storage medium of claim 12 , further including instructions that, when executed on the device, cause the device to: display, in response to the first user input and concurrently with highlighting the subset of the plurality of items, a pop-up menu comprising a list of at least one item having metadata that at least partially matches the first user input; wherein each item listed in the pop-up menu corresponds to one of the highlighted items. | 15. The non-transitory machine-readable storage medium of claim 12 , further including instructions that, when executed on the device, cause the device to: display, in response to the first user input and concurrently with highlighting the subset of the plurality of items, a pop-up menu comprising a list of at least one item having metadata that at least partially matches the first user input; wherein each item listed in the pop-up menu corresponds to one of the highlighted items. 16. The non-transitory machine-readable storage medium of claim 15 , further including instructions that, when executed on the device, cause the device to: receive a third user input selecting one of the items from the pop-up menu; and activate, in response to the user selecting one of the items from the pop-up menu, the corresponding item. | 0.5 |
9,166,714 | 7 | 8 | 7. The method of claim 1 , wherein determining the interaction analytics includes determining a playback position in a third content item of the plurality of content items of the catalog at which a user of the set of users was interacting with the third content item during the specified period of time, and wherein the third content item was specified by at least one specified content identifier. | 7. The method of claim 1 , wherein determining the interaction analytics includes determining a playback position in a third content item of the plurality of content items of the catalog at which a user of the set of users was interacting with the third content item during the specified period of time, and wherein the third content item was specified by at least one specified content identifier. 8. The method of claim 7 , wherein the specified period of time is a current period of time. | 0.678322 |
7,844,466 | 32 | 35 | 32. A system adapted to process language, comprising: an input; at least one processor adapted to: (a) identify known words received through the input; (b) generating a set of candidate parts of speech from each continuous sequence of identified known words; (c) permuting at least a portion of the candidate parts of speech from each continuous sequence; (d) producing a plurality of potentially valid syntactic structures from the permuted candidate parts of speech; (e) for at least one of the potentially valid syntactic structures, generating a conceptual representation thereof based on a database of conceptual information comprising parts of speech and a dictionary; (f) determining at least one anomaly criterion for each conceptual representation; and (g) generating a response to the input, based on one or more of the conceptual representations, in dependence on the determined at least one anomaly criterion, in a conceptually appropriate manner; and at least one of an output presenting a signal sensitive to the response and the processor being adapted to process the response. | 32. A system adapted to process language, comprising: an input; at least one processor adapted to: (a) identify known words received through the input; (b) generating a set of candidate parts of speech from each continuous sequence of identified known words; (c) permuting at least a portion of the candidate parts of speech from each continuous sequence; (d) producing a plurality of potentially valid syntactic structures from the permuted candidate parts of speech; (e) for at least one of the potentially valid syntactic structures, generating a conceptual representation thereof based on a database of conceptual information comprising parts of speech and a dictionary; (f) determining at least one anomaly criterion for each conceptual representation; and (g) generating a response to the input, based on one or more of the conceptual representations, in dependence on the determined at least one anomaly criterion, in a conceptually appropriate manner; and at least one of an output presenting a signal sensitive to the response and the processor being adapted to process the response. 35. The system of claim 32 , wherein the processor is further adapted to use the conceptual representation to formulate a response to a language sample comprising an inquiry. | 0.772846 |
5,383,120 | 1 | 4 | 1. A method for performing thematic part-of-speech tagging for collocations having content-word pairs in a natural language text processing system comprising the steps of: identifying collocations of content-word pairs in a large corpus of text; calculating, for each of said collocation content-word pair identified, a variability factor which is a measure of variability of said collocation content-word pairs occurring in said text; storing said collocation content word pairs and associated variability factors in a collocation database; and using said database to tag collocation content-word pairs according to said variability factors, wherein collocation content-word pairs with high variability factors are tagged as having a verb and a noun thereat and collocation content-word pairs with low variability factors are tagged as having an adjective and a noun thereat or a noun and noun thereat. | 1. A method for performing thematic part-of-speech tagging for collocations having content-word pairs in a natural language text processing system comprising the steps of: identifying collocations of content-word pairs in a large corpus of text; calculating, for each of said collocation content-word pair identified, a variability factor which is a measure of variability of said collocation content-word pairs occurring in said text; storing said collocation content word pairs and associated variability factors in a collocation database; and using said database to tag collocation content-word pairs according to said variability factors, wherein collocation content-word pairs with high variability factors are tagged as having a verb and a noun thereat and collocation content-word pairs with low variability factors are tagged as having an adjective and a noun thereat or a noun and noun thereat. 4. The method of claim 1 comprising the additional step of using local context analysis to tag collocation content word pairs before using said collocation database. | 0.641304 |
9,619,360 | 1 | 3 | 1. A method for creating a library method stub in source code form corresponding to an original library call in machine-executable form, said method comprising: creating, by a computer processor, the library method stub in a predefined programming language by use of a library method signature associated with the original library call, at least one idiom sentence, and a call invoking the original library call, wherein said creating the library method stub comprises composing source code of the library method stub by matching the at least one idiom sentence with idiom-stub mappings predefined for each basic idiom of at least one basic idiom, wherein the original library call appears in sequential code, wherein the library method signature specifies formal arguments of the original library call, wherein the at least one idiom sentence summarizes memory operations performed by the original library call on the formal arguments, and wherein a sentence S of the at least one basic idiom provides at least one rule for generating a composition of literals to generate a complex idiom; and said processor storing the created library method stub in a database. | 1. A method for creating a library method stub in source code form corresponding to an original library call in machine-executable form, said method comprising: creating, by a computer processor, the library method stub in a predefined programming language by use of a library method signature associated with the original library call, at least one idiom sentence, and a call invoking the original library call, wherein said creating the library method stub comprises composing source code of the library method stub by matching the at least one idiom sentence with idiom-stub mappings predefined for each basic idiom of at least one basic idiom, wherein the original library call appears in sequential code, wherein the library method signature specifies formal arguments of the original library call, wherein the at least one idiom sentence summarizes memory operations performed by the original library call on the formal arguments, and wherein a sentence S of the at least one basic idiom provides at least one rule for generating a composition of literals to generate a complex idiom; and said processor storing the created library method stub in a database. 3. The method of claim 1 , said method further comprising: said processor modifying the sequential code by replacing the original library call in the sequential code with a respective call for the library method stub such that the modified sequential code is utilized in profiling the sequential code for data dependency analysis to enable pipeline-parallelization of the sequential code. | 0.817669 |
9,213,693 | 15 | 18 | 15. A system comprising: a reception module that receives, at a language interpretation system, a request for a real time interpretation performed by a human language interpreter of a voice communication between a first voice communication participant speaking a first language and a second voice communication participant speaking a second language during the voice communication, the request being received from the first voice communication participant; a routing module that provides, at the language interpretation system, the request to a human language interpreter; a machine language interpreter that translates the voice communication into a set of text data; and a transmission module that sends the set of text data to a display device that displays the text during a verbal human language interpretation of the voice communication performed by the human language interpreter in real time during the voice communication so that the human language interpreter utilizes the set of text data to perform the verbal human language interpretation, the human language interpretation being communicated by the human language interpreter to the second voice communication participant without the machine language interpreter, the verbal human language interpretation being unmodified prior to and during the communication of the human language interpreter to the second voice communication participant. | 15. A system comprising: a reception module that receives, at a language interpretation system, a request for a real time interpretation performed by a human language interpreter of a voice communication between a first voice communication participant speaking a first language and a second voice communication participant speaking a second language during the voice communication, the request being received from the first voice communication participant; a routing module that provides, at the language interpretation system, the request to a human language interpreter; a machine language interpreter that translates the voice communication into a set of text data; and a transmission module that sends the set of text data to a display device that displays the text during a verbal human language interpretation of the voice communication performed by the human language interpreter in real time during the voice communication so that the human language interpreter utilizes the set of text data to perform the verbal human language interpretation, the human language interpretation being communicated by the human language interpreter to the second voice communication participant without the machine language interpreter, the verbal human language interpretation being unmodified prior to and during the communication of the human language interpreter to the second voice communication participant. 18. The system of claim 15 , wherein the transmission module sends additional text to the display device, the additional text including contextual language corresponding to the voice communication. | 0.514778 |
8,850,415 | 2 | 3 | 2. The processor performed operations of claim 1 , wherein the model checking techniques involve static analysis of the source code. | 2. The processor performed operations of claim 1 , wherein the model checking techniques involve static analysis of the source code. 3. The processor performed operations of claim 2 , wherein static analysis of the source code comprises: identifying presence or absence of a class of software bugs in the source code; finding security vulnerabilities of the source code; and performing worst case execution timing analysis of the source code. | 0.5 |
4,582,441 | 10 | 13 | 10. A keyboard entry device having a display device for displaying characters entered at the keyboard, an entry point moveable relative to said display, comprising: means for moving said display point relative to said display under the control of and responsive to keyboard entry commands, means for recording a series of keyboard commands to control said means for moving, means for recording a series of keyboard commands representative of a description of data to be entered on said keyboard, means for replaying said two series of keyboard commands in the order recorded, voice synthesis means responsive to said second series of keyboard commands upon playback to vocalize said second series of keyboard commands such that the vocalization is understandable as a verbal prompt. | 10. A keyboard entry device having a display device for displaying characters entered at the keyboard, an entry point moveable relative to said display, comprising: means for moving said display point relative to said display under the control of and responsive to keyboard entry commands, means for recording a series of keyboard commands to control said means for moving, means for recording a series of keyboard commands representative of a description of data to be entered on said keyboard, means for replaying said two series of keyboard commands in the order recorded, voice synthesis means responsive to said second series of keyboard commands upon playback to vocalize said second series of keyboard commands such that the vocalization is understandable as a verbal prompt. 13. The keyboard entry device of claim 10 wherein said means for replaying is responsive to further keyboard commands. | 0.695876 |
9,940,932 | 1 | 7 | 1. A method for performing speech to text conversion, the method comprising: receiving, via a processor, an audio data and a video data of a user while the user is speaking; generating, via the processor, a first raw text based on the audio data using a language model and an acoustic model in conjunction with a Hidden Markov Model; generating, via the processor, a second raw text based on the video data using Karhunen-Loeve Transform (KLT) in conjunction with the Hidden Markov Model; determining, via the processor, a plurality of errors by comparing the first raw text and the second raw text, wherein determining the one or more errors comprises comparing a sequence of phonemes in the first raw text with a corresponding sequence of visemes in the second raw text for one or more mismatches; correcting, via the processor, the plurality of errors by applying one or more rules, wherein the one or more rules employ at least one of a domain specific word database, a context of conversation, and a prior communication history; generating a correction to an error of the plurality of errors; automatically generating a rule based on the error, the correction and training; and applying the one or more rules to another error of the plurality of errors to obtain a final text. | 1. A method for performing speech to text conversion, the method comprising: receiving, via a processor, an audio data and a video data of a user while the user is speaking; generating, via the processor, a first raw text based on the audio data using a language model and an acoustic model in conjunction with a Hidden Markov Model; generating, via the processor, a second raw text based on the video data using Karhunen-Loeve Transform (KLT) in conjunction with the Hidden Markov Model; determining, via the processor, a plurality of errors by comparing the first raw text and the second raw text, wherein determining the one or more errors comprises comparing a sequence of phonemes in the first raw text with a corresponding sequence of visemes in the second raw text for one or more mismatches; correcting, via the processor, the plurality of errors by applying one or more rules, wherein the one or more rules employ at least one of a domain specific word database, a context of conversation, and a prior communication history; generating a correction to an error of the plurality of errors; automatically generating a rule based on the error, the correction and training; and applying the one or more rules to another error of the plurality of errors to obtain a final text. 7. The method of claim 1 , wherein correcting the one or more errors comprises applying the one or more rules in a pre-defined order. | 0.789557 |
8,307,279 | 13 | 19 | 13. A method, comprising: receiving a structured document defining a plurality of display elements, the plurality of display elements including a resizable container element and a scalable element defined to be located at least partially within the resizable container element; executing a rendering function that calculates a display position for each of the plurality of display elements; producing rendered content, the rendered content based at least in part on the display position for each of the plurality of display elements; outputting a viewable area of the rendered content; receiving a scaling input; redefining the size of the scalable element according to the scaling input; and selectively redefining the size of the resizable container element based on the display position of the resizable container element with respect to the viewable area of the rendered content by determining whether the resizable container element will be located within the viewable area if resized to completely contain the scalable element, resizing the resizable container element if it will be located within the viewable area, and maintaining the size of the resizable container element if it will not be located within the viewable area. | 13. A method, comprising: receiving a structured document defining a plurality of display elements, the plurality of display elements including a resizable container element and a scalable element defined to be located at least partially within the resizable container element; executing a rendering function that calculates a display position for each of the plurality of display elements; producing rendered content, the rendered content based at least in part on the display position for each of the plurality of display elements; outputting a viewable area of the rendered content; receiving a scaling input; redefining the size of the scalable element according to the scaling input; and selectively redefining the size of the resizable container element based on the display position of the resizable container element with respect to the viewable area of the rendered content by determining whether the resizable container element will be located within the viewable area if resized to completely contain the scalable element, resizing the resizable container element if it will be located within the viewable area, and maintaining the size of the resizable container element if it will not be located within the viewable area. 19. The method of claim 13 , wherein resizing the resizable container element changes the size of the resizable container element with respect to the structured document. | 0.791155 |
9,881,003 | 15 | 16 | 15. A non-transitory computer-readable storage medium storing for instructions executable by at least one processor for: receiving digital graphic novel content; producing, a numerical map that represents an image extracted from the digital graphic novel content; responsive to inputting the numerical map into a first artificial neural network of a machine learning model configured to determine regions of the digital graphic novel content that are likely to include speech bubbles, receiving, from the first artificial neural network, a plurality of candidate regions of the digital graphic novel content that are likely to include speech bubbles; and responsive to inputting the plurality of candidate regions into a second artificial neural network of the machine learning model, receiving, from the second artificial neural network, features of the digital graphic novel content that include a plurality of speech bubbles containing text; generating, based on the identified features of the digital graphic novel content, contextual information associated with the features of the digital graphic novel content, the contextual information including the text of the plurality of speech bubbles in an intended reading order of the plurality of speech bubbles; and automatically translating, based at least in part on the contextual information, from a first natural language to a second natural language, the text contained in the plurality of speech bubbles to produce translated text. | 15. A non-transitory computer-readable storage medium storing for instructions executable by at least one processor for: receiving digital graphic novel content; producing, a numerical map that represents an image extracted from the digital graphic novel content; responsive to inputting the numerical map into a first artificial neural network of a machine learning model configured to determine regions of the digital graphic novel content that are likely to include speech bubbles, receiving, from the first artificial neural network, a plurality of candidate regions of the digital graphic novel content that are likely to include speech bubbles; and responsive to inputting the plurality of candidate regions into a second artificial neural network of the machine learning model, receiving, from the second artificial neural network, features of the digital graphic novel content that include a plurality of speech bubbles containing text; generating, based on the identified features of the digital graphic novel content, contextual information associated with the features of the digital graphic novel content, the contextual information including the text of the plurality of speech bubbles in an intended reading order of the plurality of speech bubbles; and automatically translating, based at least in part on the contextual information, from a first natural language to a second natural language, the text contained in the plurality of speech bubbles to produce translated text. 16. The non-transitory computer-readable storage medium of claim 15 , wherein automatically translating the text comprises: after extracting the text contained in the plurality of speech bubbles, compiling the text into a single piece of text based on the intended reading order; and translating the single piece of text from the first natural language to the second natural language to produce the translated text. | 0.691679 |
9,594,750 | 1 | 12 | 1. A method for language translation comprising: providing program code to launch a translation window associated with a primary window, wherein when the primary window is displayed on a screen, the translation window will be positioned so that the translation window does not overlap the primary window; in the translation window, indicating input information in a first language; translating the input information from the first language to information in a second language; in the translation window, displaying the information in the second language; and permitting scrolling of the primary window independently from the translation window. | 1. A method for language translation comprising: providing program code to launch a translation window associated with a primary window, wherein when the primary window is displayed on a screen, the translation window will be positioned so that the translation window does not overlap the primary window; in the translation window, indicating input information in a first language; translating the input information from the first language to information in a second language; in the translation window, displaying the information in the second language; and permitting scrolling of the primary window independently from the translation window. 12. The method of claim 1 comprising: displaying a translation direction field in the translation window, wherein the translation direction field provides a selection between at least two options and is implemented using a radio button graphical user interface element. | 0.596096 |
8,032,418 | 15 | 16 | 15. The method according to claim 14 , further comprising storing the logos in a plurality of different formats. | 15. The method according to claim 14 , further comprising storing the logos in a plurality of different formats. 16. The method according to claim 15 , wherein the formats are appropriate for at least one of the group comprising a personal computer, a web television, a mobile phone, a hand-held computer and any combination thereof. | 0.5 |
9,143,603 | 1 | 7 | 1. A method employing a portable user device having at least one microphone that captures audio, and at least one image sensor for capturing imagery, the method comprising the acts: (a) capturing imagery with the image sensor, the captured image depicting one or more physical subjects within an environment of said user, and capturing user speech with the microphone; (b) sending, to a speech recognition module, audio data corresponding to the user speech, and receiving recognized user speech data corresponding thereto; (c) applying a computer-implemented cognition process to the imagery, said cognition process also employing information from the recognized user speech data as a clue to help identify a physical subject within the captured imagery that is of interest to said user; and (d) presenting a set of plural response options to the user, for user selection therebetween; wherein the set of plural response options presented to the user varies based on said identified physical subject. | 1. A method employing a portable user device having at least one microphone that captures audio, and at least one image sensor for capturing imagery, the method comprising the acts: (a) capturing imagery with the image sensor, the captured image depicting one or more physical subjects within an environment of said user, and capturing user speech with the microphone; (b) sending, to a speech recognition module, audio data corresponding to the user speech, and receiving recognized user speech data corresponding thereto; (c) applying a computer-implemented cognition process to the imagery, said cognition process also employing information from the recognized user speech data as a clue to help identify a physical subject within the captured imagery that is of interest to said user; and (d) presenting a set of plural response options to the user, for user selection therebetween; wherein the set of plural response options presented to the user varies based on said identified physical subject. 7. The method of claim 1 in which the cognition process comprises analysis of captured imagery to recognize a vehicle depicted therein. | 0.850664 |
8,065,143 | 5 | 6 | 5. The system of claim 1 , wherein retrieving text data from the speech input includes communicating with a speech recognition module located remotely from the computing devices. | 5. The system of claim 1 , wherein retrieving text data from the speech input includes communicating with a speech recognition module located remotely from the computing devices. 6. The system of claim 5 , further comprising a communications interface, the communications interface operable to provide a wireless connection to the speech recognition module. | 0.5 |
5,555,409 | 12 | 13 | 12. A method as defined in claim 11 which includes the process that allows for export of data in said array comprises the steps of: tracing a hierarchy path consisting of all directly related data sets in said array; expanding the hierarchy path into a set of all paths between the key data set and the related data sets, each said path being a list of key and related data sets with only one data set at any level of the hierarchy; creating a relational table with a column heading for each of the data types in said data set array; decomposing the data sets in said set of paths path-by-path into a data type and a data value; and entering the data values into the relation table under the appropriate heading. | 12. A method as defined in claim 11 which includes the process that allows for export of data in said array comprises the steps of: tracing a hierarchy path consisting of all directly related data sets in said array; expanding the hierarchy path into a set of all paths between the key data set and the related data sets, each said path being a list of key and related data sets with only one data set at any level of the hierarchy; creating a relational table with a column heading for each of the data types in said data set array; decomposing the data sets in said set of paths path-by-path into a data type and a data value; and entering the data values into the relation table under the appropriate heading. 13. A method as defined in claim 12 which includes the step of converting the entries in each row of the non-normalized table into a concatenated key of non-null values such that said values define a set of fully normalized tables corresponding to the columns in the non-normalized table. | 0.5 |
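
Each row above pairs two claims drawn from the same patent and ends with a similarity score between 0 and 1. As a minimal, hypothetical sketch of working with such a claim-pair table (the file name and column names below are assumptions for illustration, not taken from this dataset's published schema), the rows could be loaded and filtered by score like this:

```python
# Hypothetical sketch: load a patent claim-pair similarity table like the preview above
# and keep only the most similar pairs. The file path and column names are assumed.
import pandas as pd

# Assume the preview rows have been exported to a CSV with one claim pair per line.
pairs = pd.read_csv("claim_pairs.csv")  # hypothetical export of the table above

# Keep pairs whose similarity score clears a chosen threshold.
high_similarity = pairs[pairs["label"] >= 0.75]  # "label" is an assumed column name

# Inspect one retained pair: patent number, the two claim indices, and both claim texts.
row = high_similarity.iloc[0]
print(row["patent_num"], row["claim_num1"], row["claim_num2"])  # assumed column names
print(row["sentence1"][:200])
print(row["sentence2"][:200])
```

A pandas filter is used here only because the preview is tabular; the same selection could equally be done with the Hugging Face `datasets` library via `load_dataset` and `filter`.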