patent_num (int64, 3.93M–10.2M) | claim_num1 (int64, 1–519) | claim_num2 (int64, 2–520) | sentence1 (string, lengths 40–15.9k) | sentence2 (string, lengths 88–20k) | label (float64, 0.5–0.99) |
---|---|---|---|---|---|
8,429,619 | 1 | 4 | 1. A system, comprising a processor coupled to a memory for storing program instructions for execution by the processor, wherein the program instructions comprise program instructions for: controlling execution of an instruction stream that invokes a sequence of high-level procedures automatically and without user intervention, wherein the executable instruction stream is a binary instruction stream and the invocation of the sequence of high-level procedures is a sequence of function calls performed by the executable instruction stream; and generating a trace file during execution of the executable instruction stream that represents invocations of the high-level procedure invocations as a human-readable script that can be executed to repeat the invocation of the sequence of high-level procedures, wherein the trace file is generated such that return parameters supplied from the high-level procedures to the executable instruction stream are documented in the trace as comments; editing the trace file to alter a behavior specified by the trace file to alter the sequence of the high-level procedures; and responsive to a subsequent command, executing the trace file as a script to reproduce at least part of the behavior of the executable instruction stream as modified by the editing of the trace file. | 1. A system, comprising a processor coupled to a memory for storing program instructions for execution by the processor, wherein the program instructions comprise program instructions for: controlling execution of an instruction stream that invokes a sequence of high-level procedures automatically and without user intervention, wherein the executable instruction stream is a binary instruction stream and the invocation of the sequence of high-level procedures is a sequence of function calls performed by the executable instruction stream; and generating a trace file during execution of the executable instruction stream that represents invocations of the high-level procedure invocations as a human-readable script that can be executed to repeat the invocation of the sequence of high-level procedures, wherein the trace file is generated such that return parameters supplied from the high-level procedures to the executable instruction stream are documented in the trace as comments; editing the trace file to alter a behavior specified by the trace file to alter the sequence of the high-level procedures; and responsive to a subsequent command, executing the trace file as a script to reproduce at least part of the behavior of the executable instruction stream as modified by the editing of the trace file. 4. The system of claim 1 , further comprising program instructions for receiving a sequence of user input commands invoking the high-level procedures via a command interface, wherein the program instructions for controlling execution of the executable instruction stream are executed in response to the sequence of user input commands, and wherein the program instructions for generating generate a record of the sequence of user input commands that is subsequently repeated by the program instructions for executing controlling execution of the trace file. | 0.5 |
9,984,165 | 5 | 6 | 5. The method of claim 1 further comprising: validating the pre-computed priced travel recommendations for which the confidence factor is less than the given threshold by querying the primary data source for valid database query results; and returning the validated pre-computed priced travel recommendations associated with confidence factor values less than the given threshold to the client. | 5. The method of claim 1 further comprising: validating the pre-computed priced travel recommendations for which the confidence factor is less than the given threshold by querying the primary data source for valid database query results; and returning the validated pre-computed priced travel recommendations associated with confidence factor values less than the given threshold to the client. 6. The method of claim 5 wherein the set of pre-computed priced travel recommendations is returned to the client before the set of pre-computed priced travel recommendations is validated with the primary data source, and further comprising: updating the set of pre-computed priced travel recommendations at the client with the validated pre-computed priced travel recommendations. | 0.5 |
4,841,387 | 4 | 5 | 4. An arrangement according to claim 3, wherein the areas on the writing surface corresponding to each correlation vector are of a predetermined size and shape. | 4. An arrangement according to claim 3, wherein the areas on the writing surface corresponding to each correlation vector are of a predetermined size and shape. 5. An arrangement according to claim 4, wherein said areas are rectangular in shape. | 0.655738 |
9,838,485 | 8 | 9 | 8. The method of claim 1 , the method further comprising: generating, by the computer system, a third request formatted specifically for the first social media content provider, the third request specifying a third geographically definable location; transmitting, by the computer system, the third request to the first social media content provider; obtaining, by the computer system, third content from the first social media content provider based on the third request, wherein the third content does not include ambient condition information; automatically determining, by the computer system, third ambient condition information after the third content is received from the first social media content provider, wherein the third ambient condition information indicates at least a value associated with a third ambient condition that existed when the third content was created and/or posted; and communicating, by the computer system, the third ambient condition in association with the third location. | 8. The method of claim 1 , the method further comprising: generating, by the computer system, a third request formatted specifically for the first social media content provider, the third request specifying a third geographically definable location; transmitting, by the computer system, the third request to the first social media content provider; obtaining, by the computer system, third content from the first social media content provider based on the third request, wherein the third content does not include ambient condition information; automatically determining, by the computer system, third ambient condition information after the third content is received from the first social media content provider, wherein the third ambient condition information indicates at least a value associated with a third ambient condition that existed when the third content was created and/or posted; and communicating, by the computer system, the third ambient condition in association with the third location. 9. The method of claim 8 , wherein automatically determining the third ambient condition information comprises: identifying, by the computer system, a date and/or time associated with the third content; identifying, by the computer system, a geo-location associated with the third content; and determining, by the computer system, at least a value associated with an ambient condition that existed at the identified geo-location during the identified date and/or time. | 0.5 |
8,775,447 | 1 | 2 | 1. A method for processing related datasets, the method including: receiving over an input device or port records from multiple datasets, the records of a given dataset having one or more values for one or more respective fields; and processing records from each of the multiple datasets in a data processing system, the processing including analyzing at least one constraint specification stored in a data storage system to determine a processing order for the multiple datasets, the constraint specification specifying one or more constraints for preserving referential integrity or statistical consistency among a group of related datasets that includes the multiple datasets, applying one or more transformations to records from each of the multiple datasets in the determined processing order, where the transformations are applied to records from a first dataset of the multiple datasets before the transformations are applied to records from a second dataset of the multiple datasets, and the transformations applied to the records from the second dataset are applied based at least in part on results of applying the transformations to the records from the first dataset and at least one constraint between the first dataset and the second dataset specified by the constraint specification, and storing or outputting results of the transformations to the records from each of the multiple datasets. | 1. A method for processing related datasets, the method including: receiving over an input device or port records from multiple datasets, the records of a given dataset having one or more values for one or more respective fields; and processing records from each of the multiple datasets in a data processing system, the processing including analyzing at least one constraint specification stored in a data storage system to determine a processing order for the multiple datasets, the constraint specification specifying one or more constraints for preserving referential integrity or statistical consistency among a group of related datasets that includes the multiple datasets, applying one or more transformations to records from each of the multiple datasets in the determined processing order, where the transformations are applied to records from a first dataset of the multiple datasets before the transformations are applied to records from a second dataset of the multiple datasets, and the transformations applied to the records from the second dataset are applied based at least in part on results of applying the transformations to the records from the first dataset and at least one constraint between the first dataset and the second dataset specified by the constraint specification, and storing or outputting results of the transformations to the records from each of the multiple datasets. 2. The method of claim 1 , wherein at least one constraint for preserving referential integrity specified by the constraint specification is based on dependence of values for a field of the second dataset on values for a field of the first dataset. | 0.897605 |
7,616,793 | 4 | 5 | 4. The workstation of claim 1 , wherein said medical image is a mammogram, and wherein said preselected set of computed features includes one or more of size, spiculatedness, margin sharpness, eccentricity, sphericity, average grey level, contrast, cluster characteristics, and breast density characteristics. | 4. The workstation of claim 1 , wherein said medical image is a mammogram, and wherein said preselected set of computed features includes one or more of size, spiculatedness, margin sharpness, eccentricity, sphericity, average grey level, contrast, cluster characteristics, and breast density characteristics. 5. The workstation of claim 4 , wherein said preselected set of features may be altered by the user. | 0.5 |
10,013,482 | 1 | 2 | 1. A method comprising using at least one hardware processor for: receiving a context, wherein the context is a digital text segment; identifying evidence with respect to the context in at least one content resource, wherein the at least one content resource comprises at least one of unstructured digital text data and free-text digital corpora, and wherein the identifying comprises: a) identifying context-free features in the at least one content resource, that generally characterize certain one or more digital text segments in the at least one content resource as evidence, wherein the context-free features comprise at least one of (i) numbers and (ii) keywords that indicate a quantitative analysis of data, and wherein the identifying of the context-free features comprises applying a first machine learning classifier to the at least one content resource, wherein the first machine learning classifier is trained on digital texts in which evidence was manually labeled, and b) identifying, in the certain one or more digital text segments, context features indicative of the relevance of at least some of the certain one or more digital text segments to the context, wherein the relevance is defined as direct support of or opposition to the context, and wherein the identifying of the context features comprises applying a second machine learning classifier to the certain one or more digital text segments, wherein the second machine learning classifier is trained on digital texts that were output by the first machine learning classifier and were manually labeled as evidence that supports or opposes a training context; and outputting a list of said identified evidence. | 1. A method comprising using at least one hardware processor for: receiving a context, wherein the context is a digital text segment; identifying evidence with respect to the context in at least one content resource, wherein the at least one content resource comprises at least one of unstructured digital text data and free-text digital corpora, and wherein the identifying comprises: a) identifying context-free features in the at least one content resource, that generally characterize certain one or more digital text segments in the at least one content resource as evidence, wherein the context-free features comprise at least one of (i) numbers and (ii) keywords that indicate a quantitative analysis of data, and wherein the identifying of the context-free features comprises applying a first machine learning classifier to the at least one content resource, wherein the first machine learning classifier is trained on digital texts in which evidence was manually labeled, and b) identifying, in the certain one or more digital text segments, context features indicative of the relevance of at least some of the certain one or more digital text segments to the context, wherein the relevance is defined as direct support of or opposition to the context, and wherein the identifying of the context features comprises applying a second machine learning classifier to the certain one or more digital text segments, wherein the second machine learning classifier is trained on digital texts that were output by the first machine learning classifier and were manually labeled as evidence that supports or opposes a training context; and outputting a list of said identified evidence. 2. The method of claim 1 , wherein the context comprises at least one of a claim and a Topic Under Consideration (TUC), wherein the claim is a concise textual statement with respect to a topic, and wherein the TUC is a single free-text sentence. | 0.726563 |
8,683,436 | 10 | 11 | 10. A non-transitory computer-readable medium comprising instructions that when performed by a computer result in operations comprising: receiving, by at least one processor, an input representative of a temporal constraint for a task of a graph-process model, the temporal constraint defining at least one of a commencement delay, a completion delay, a commencement deadline, and a completion deadline; associating, by the at least one processor, the task with the temporal constraint created based on the received input, the temporal constraint defined to have a placement on the task of the graph-process model based on a type of temporal constraint, wherein the placement of the temporal constraint is based on a graphical element, the graphical element comprising a left border, a right border, a top border and a bottom border, wherein the left border is configured to accept placement of the temporal constraint when the temporal constraint corresponds to the commencement delay, wherein the right border is configured to accept placement of the temporal constraint when the temporal constraint corresponds to the completion delay, wherein the top border is configured to accept placement of the temporal constraint when the temporal constraint corresponds to the commencement deadline, and wherein the bottom border is configured to accept placement of the temporal constraint when the temporal constraint corresponds to the completion deadline; and providing, by the at least one processor, the task and the temporal constraint to configure the graph-process model. | 10. A non-transitory computer-readable medium comprising instructions that when performed by a computer result in operations comprising: receiving, by at least one processor, an input representative of a temporal constraint for a task of a graph-process model, the temporal constraint defining at least one of a commencement delay, a completion delay, a commencement deadline, and a completion deadline; associating, by the at least one processor, the task with the temporal constraint created based on the received input, the temporal constraint defined to have a placement on the task of the graph-process model based on a type of temporal constraint, wherein the placement of the temporal constraint is based on a graphical element, the graphical element comprising a left border, a right border, a top border and a bottom border, wherein the left border is configured to accept placement of the temporal constraint when the temporal constraint corresponds to the commencement delay, wherein the right border is configured to accept placement of the temporal constraint when the temporal constraint corresponds to the completion delay, wherein the top border is configured to accept placement of the temporal constraint when the temporal constraint corresponds to the commencement deadline, and wherein the bottom border is configured to accept placement of the temporal constraint when the temporal constraint corresponds to the completion deadline; and providing, by the at least one processor, the task and the temporal constraint to configure the graph-process model. 11. The non-transitory computer-readable medium of claim 10 , wherein receiving further comprises: receiving, from a user interface, the input representative of the temporal constraint. | 0.5 |
7,849,148 | 72 | 134 | 72. A computer program product embodied on a non-transitory computer readable medium, comprising: computer code for displaying at least one window in connection with a website; computer code for displaying, utilizing the at least one window, a stock-related field; computer code for receiving a plurality of characters of text from a user as the user is typing the text utilizing the stock-related field; computer code for dynamically determining, after the user types each character in the received text, whether the characters typed so far match any of n text strings in one of a plurality of n-tuples including n>l text strings, each of the plurality of n-tuples including first text representing a stock ticker symbol and second text representing a company name corresponding to the stock ticker symbol; computer code for indicating to the user that a match has been found, utilizing the at least one window, if it is determined that the characters typed so far match any of the n text strings in the one of the plurality of n-tuples; computer code for displaying, utilizing the at least one window, a first set of representations of a first set of hyperlinks; computer code for receiving first input from the user indicating a selection of one of the first set of hyperlink representations; computer code for displaying a second set of representations of a second set of hyperlinks, utilizing the at least one window, in response to receiving the first input; computer code for receiving second input from the user indicating a selection of one of the second set of hyperlink representations; and computer code for navigating to a destination specified by the selected one of the second set of hyperlink representations, in response to receiving the second input. | 72. A computer program product embodied on a non-transitory computer readable medium, comprising: computer code for displaying at least one window in connection with a website; computer code for displaying, utilizing the at least one window, a stock-related field; computer code for receiving a plurality of characters of text from a user as the user is typing the text utilizing the stock-related field; computer code for dynamically determining, after the user types each character in the received text, whether the characters typed so far match any of n text strings in one of a plurality of n-tuples including n>l text strings, each of the plurality of n-tuples including first text representing a stock ticker symbol and second text representing a company name corresponding to the stock ticker symbol; computer code for indicating to the user that a match has been found, utilizing the at least one window, if it is determined that the characters typed so far match any of the n text strings in the one of the plurality of n-tuples; computer code for displaying, utilizing the at least one window, a first set of representations of a first set of hyperlinks; computer code for receiving first input from the user indicating a selection of one of the first set of hyperlink representations; computer code for displaying a second set of representations of a second set of hyperlinks, utilizing the at least one window, in response to receiving the first input; computer code for receiving second input from the user indicating a selection of one of the second set of hyperlink representations; and computer code for navigating to a destination specified by the selected one of the second set of hyperlink representations, in response to receiving the second input. 134. The computer program product of claim 72 , wherein the computer program product is configured such that the receiving the second input from the user comprises receiving input from the user indicating a mouse click on the selected one of the second set of hyperlink representations. | 0.733706 |
8,019,750 | 1 | 5 | 1. A method of tuning a database query, comprising: allowing a user to select a query of a database; parsing the selected database query to determine that the database query includes a first operator; selecting an optimization mode from a plurality of available optimization modes, wherein: a first optimization mode is automatically selected if one or more statistics exist for a table identified by the database query, and a second optimization mode is selected if one or more statistics do not exist for the table identified by the database query; tuning automatically the selected database query based on the structure of the database and the selected optimization mode; and displaying the tuned database query, and wherein automatically tuning the selected database query comprises automatically rewriting the selected database query by converting the first operator to a second operator. | 1. A method of tuning a database query, comprising: allowing a user to select a query of a database; parsing the selected database query to determine that the database query includes a first operator; selecting an optimization mode from a plurality of available optimization modes, wherein: a first optimization mode is automatically selected if one or more statistics exist for a table identified by the database query, and a second optimization mode is selected if one or more statistics do not exist for the table identified by the database query; tuning automatically the selected database query based on the structure of the database and the selected optimization mode; and displaying the tuned database query, and wherein automatically tuning the selected database query comprises automatically rewriting the selected database query by converting the first operator to a second operator. 5. The method as recited in claim 1 , further comprising determining a cost associated with using the tuned database query. | 0.733766 |
8,538,912 | 1 | 8 | 1. An automatic information integration flow optimizer apparatus comprising: an input/output port configured to connect the information integration flow optimizer to extract-transform-load (ETL) tools, and to receive a tool-specific input file; and a processor configured to execute computer-readable instructions, the computer-readable instructions comprising: a parser unit configured to parse the tool-specific input file into semantics, and to create a tool-agnostic input file containing rich semantics of at least one of datasets, implementations, schema, operators, database management systems, or ETL tools; a converter unit configured to transform the tool-agnostic input file into an input directed acyclic graph (DAG); and a quality objective (QoX) driven optimizer unit configured to apply one or more heuristic algorithms to the tool-agnostic input DAG to develop an optimum information integration flow design based on the rich semantics. | 1. An automatic information integration flow optimizer apparatus comprising: an input/output port configured to connect the information integration flow optimizer to extract-transform-load (ETL) tools, and to receive a tool-specific input file; and a processor configured to execute computer-readable instructions, the computer-readable instructions comprising: a parser unit configured to parse the tool-specific input file into semantics, and to create a tool-agnostic input file containing rich semantics of at least one of datasets, implementations, schema, operators, database management systems, or ETL tools; a converter unit configured to transform the tool-agnostic input file into an input directed acyclic graph (DAG); and a quality objective (QoX) driven optimizer unit configured to apply one or more heuristic algorithms to the tool-agnostic input DAG to develop an optimum information integration flow design based on the rich semantics. 8. The apparatus of claim 1 , wherein the QoX-driven optimizer unit is configured to partition a dataset to be processed by the physical information integration flow based on schema properties. | 0.563348 |
8,938,463 | 10 | 13 | 10. A system comprising: data processing apparatus programmed to perform operations comprising: obtaining information regarding selections of search results provided in response to a plurality of search queries, the obtained information for one or more of the selected search results comprising one or more presentation bias features of a presentation of the search result and one or more relevancy features of the search result, wherein at least one of the presentation bias features is a rank of the search result in the search results; training a model using the obtained information, wherein the model is trained to predict a click through rate based on input comprising the one or more presentation bias features and the one or more relevancy features; and providing the model for use with a search engine, wherein the search engine is configured to provide presentation bias and relevancy features of given search results as input to the model and to use predictive outputs of the model to reduce presentation bias in a presentation of the given search results by determining a quality score for each of the given search results and factoring out independent effects of presentation bias from the quality scores using the predictive outputs of the model, wherein the predictive outputs used to reduce the presentation bias in the presentation of the given search results include a predicted click through rate predicted based on the presentation bias and relevancy features of the given search results and the model. | 10. A system comprising: data processing apparatus programmed to perform operations comprising: obtaining information regarding selections of search results provided in response to a plurality of search queries, the obtained information for one or more of the selected search results comprising one or more presentation bias features of a presentation of the search result and one or more relevancy features of the search result, wherein at least one of the presentation bias features is a rank of the search result in the search results; training a model using the obtained information, wherein the model is trained to predict a click through rate based on input comprising the one or more presentation bias features and the one or more relevancy features; and providing the model for use with a search engine, wherein the search engine is configured to provide presentation bias and relevancy features of given search results as input to the model and to use predictive outputs of the model to reduce presentation bias in a presentation of the given search results by determining a quality score for each of the given search results and factoring out independent effects of presentation bias from the quality scores using the predictive outputs of the model, wherein the predictive outputs used to reduce the presentation bias in the presentation of the given search results include a predicted click through rate predicted based on the presentation bias and relevancy features of the given search results and the model. 13. The system of claim 10 wherein the one or more relevancy features include an information retrieval score of the search result, an information retrieval score of another search result returned along with the search result, a language of the query, or a count of words in the query. | 0.640506 |
8,660,836 | 14 | 15 | 14. The method of claim 13 , wherein the conditional value at risk computing step further comprises generating one or more scores based on the risk index. | 14. The method of claim 13 , wherein the conditional value at risk computing step further comprises generating one or more scores based on the risk index. 15. The method of claim 14 , wherein the optimizing step further comprises using the one or more scores to improve the measure of quality of the output of the natural language processing system for subsequently presented data of the first type while maintaining the given measure of quality of the output for subsequently presented data of the second type. | 0.5 |
9,754,020 | 1 | 5 | 1. A device for determining a measure of relevancy for a seed word-keyword pair (d,k) comprising: a word identifier for identifying each unique word in the set of documents as a search word; a word pair identifier in communication with said word identifier and for combining the identified search words to define search word pairs; a unit portioner for portioning the set of documents into user-definable units, said unit portioner in communication with said interface and with said word pair identifier; a co-occurrence matrix generator in communication with said unit portioner and said word pair identifier for determining, for each defined search word pair, the number of units in which the identified search word pair occurs and for storing the number of occurrences in a co-occurrence matrix a probability matrix generator in communication with said co-occurrence matrix generator and for generating a probability matrix as a function of the co-occurrence matrix, a calibrating column vector generator, ψ, in communication with said co-occurrence matrix generator for generating a calibrating column vector; a matrix normalizer in communication with said probability matrix generator for normalizing the probability matrix to form a transition matrix, R; a word pair selector in communication with said word pair identifier for selecting the seed word-keyword pair to be measured, wherein said word pair selector provides a first column vector, {right arrow over (Γ)}(d), relating to the seed word and a second column vector, {right arrow over (Γ)}(k), relating to the keyword; an expected search distance generator in communication with said word pair selector, said probability matrix generator and said matrix normalizer, for calculating the expected search distance of the seed word-keyword pair; a weighted average expected search distance generator in communication with said expected search distance generator, said probability matrix generator, said matrix normalizer, and said calibrating column vector, said weighted average expected search distance generator for determining a weighted average expected search distance for the keyword; and a calibrator in communication with said expected search distance generator and said weighted average expected search distance generator, wherein said calibrator determines the relevancy, s d,k , of the seed word to the key word, based upon said expected search distance and said weighted averaged expected search distance. | 1. A device for determining a measure of relevancy for a seed word-keyword pair (d,k) comprising: a word identifier for identifying each unique word in the set of documents as a search word; a word pair identifier in communication with said word identifier and for combining the identified search words to define search word pairs; a unit portioner for portioning the set of documents into user-definable units, said unit portioner in communication with said interface and with said word pair identifier; a co-occurrence matrix generator in communication with said unit portioner and said word pair identifier for determining, for each defined search word pair, the number of units in which the identified search word pair occurs and for storing the number of occurrences in a co-occurrence matrix a probability matrix generator in communication with said co-occurrence matrix generator and for generating a probability matrix as a function of the co-occurrence matrix, a calibrating column vector generator, ψ, in communication with said co-occurrence matrix generator for generating a calibrating column vector; a matrix normalizer in communication with said probability matrix generator for normalizing the probability matrix to form a transition matrix, R; a word pair selector in communication with said word pair identifier for selecting the seed word-keyword pair to be measured, wherein said word pair selector provides a first column vector, {right arrow over (Γ)}(d), relating to the seed word and a second column vector, {right arrow over (Γ)}(k), relating to the keyword; an expected search distance generator in communication with said word pair selector, said probability matrix generator and said matrix normalizer, for calculating the expected search distance of the seed word-keyword pair; a weighted average expected search distance generator in communication with said expected search distance generator, said probability matrix generator, said matrix normalizer, and said calibrating column vector, said weighted average expected search distance generator for determining a weighted average expected search distance for the keyword; and a calibrator in communication with said expected search distance generator and said weighted average expected search distance generator, wherein said calibrator determines the relevancy, s d,k , of the seed word to the key word, based upon said expected search distance and said weighted averaged expected search distance. 5. The device of claim 1 , wherein said unit portioner portions the set of documents into sentences. | 0.924925 |
9,495,425 | 9 | 10 | 9. A computer program product comprising a non-transitory computer-readable storage medium containing computer program code for: identifying, by a computer system, a plurality of comments associated with a media content item; generating, by the computer system for each of the plurality of comments, a sentiment score indicating a likelihood that the comment expresses a type of sentiment; adjusting, by the computer system, the sentiment score generated for a comment from the plurality of comments based on information associated with a user that provided the comment from the plurality of comments, the information describing sentiment expressed by the user in additional comments for additional media content items; determining, by the computer system, an aggregate score for the media content item based on the sentiment scores for the plurality of comments; receiving, by the computer system from a device, a search query searching for media content associated with the type of sentiment; responsive to receiving the search query, identifying, by the computer system, the media content item based on the aggregate score indicating that comments associated with the media content item express the type of sentiment; and providing, by the computer system to the device, search results including the media content item. | 9. A computer program product comprising a non-transitory computer-readable storage medium containing computer program code for: identifying, by a computer system, a plurality of comments associated with a media content item; generating, by the computer system for each of the plurality of comments, a sentiment score indicating a likelihood that the comment expresses a type of sentiment; adjusting, by the computer system, the sentiment score generated for a comment from the plurality of comments based on information associated with a user that provided the comment from the plurality of comments, the information describing sentiment expressed by the user in additional comments for additional media content items; determining, by the computer system, an aggregate score for the media content item based on the sentiment scores for the plurality of comments; receiving, by the computer system from a device, a search query searching for media content associated with the type of sentiment; responsive to receiving the search query, identifying, by the computer system, the media content item based on the aggregate score indicating that comments associated with the media content item express the type of sentiment; and providing, by the computer system to the device, search results including the media content item. 10. The computer program product of claim 9 , wherein adjusting the sentiment score comprises: responsive to the information associated with the user indicating a high frequency of the type of sentiment in the additional comments, reducing the sentiment score generated for the comment from the plurality of comments. | 0.501572 |
9,575,980 | 6 | 7 | 6. The method of claim 1 further comprising, for each source file in the collection, transforming the respective source file to a respective archive file of a common file type, wherein the respective archive file includes at least all content extracted from the respective source file as tag content. | 6. The method of claim 1 further comprising, for each source file in the collection, transforming the respective source file to a respective archive file of a common file type, wherein the respective archive file includes at least all content extracted from the respective source file as tag content. 7. The method of claim 6 wherein the common file type is a text-readable Portable Document Format (PDF). | 0.5 |
9,536,524 | 7 | 11 | 7. A system comprising: a processor; and a computer-readable storage medium having instructions stored which, when executed by the processor, cause the processor to perform operations comprising: generating a user identifier using a voice request, the voice request received from a device; estimating, via successive comparisons, a transducer noise parameter of the device, wherein a delay between the successive comparisons is increased when successive changes do not exceed a threshold value; comparing stored user identities to the user identifier, to yield a comparison; when, based on the comparison, a user is associated with the device: retrieving a parameterizable speech recognition model associated with the user identifier; and adapting the parameterizable speech recognition model based on the transducer noise parameter to yield an adapted parameterizable speech recognition model; and performing speech recognition on the voice request using the adapted parameterizable speech recognition model. | 7. A system comprising: a processor; and a computer-readable storage medium having instructions stored which, when executed by the processor, cause the processor to perform operations comprising: generating a user identifier using a voice request, the voice request received from a device; estimating, via successive comparisons, a transducer noise parameter of the device, wherein a delay between the successive comparisons is increased when successive changes do not exceed a threshold value; comparing stored user identities to the user identifier, to yield a comparison; when, based on the comparison, a user is associated with the device: retrieving a parameterizable speech recognition model associated with the user identifier; and adapting the parameterizable speech recognition model based on the transducer noise parameter to yield an adapted parameterizable speech recognition model; and performing speech recognition on the voice request using the adapted parameterizable speech recognition model. 11. The system of claim 7 , the computer-readable storage medium having additional instructions which result in operations comprising estimating, via successive comparisons, a background noise parameter. | 0.5 |
8,386,485 | 7 | 8 | 7. A computer-implemented search method, the method comprising: a. receiving at least one query from at least one user; b. accepting at least one user preference from the at least one user using at least one user processor; c. refining the at least one query using the at least one user preference using a refining processor; d. reformulating the at least one query through semantic mediation information using at least one ontology processor; e. retrieving at least one user query case similar to the at least one query from at least one user query case base using at least one case management processor; f. maintaining at least one ontology-based index using the at least one case management processor; g. storing at least one pre-compiled user query, or at least one artifact, or a combination thereof, in at least one repository; h. decomposing the at least one query into at least one subquery using the semantic mediation information utilizing at least one query formulation processor; and i. executing the at least one subquery in order to obtain at least one search result; and wherein the at least one case management processor retrieves the at least one user query case similar to the at least one query from the at least one user query case base using at least one algorithm using at least one ontology-based index; and wherein the at least one algorithm limits a number of user query cases retrieved to a predefined maximum. | 7. A computer-implemented search method, the method comprising: a. receiving at least one query from at least one user; b. accepting at least one user preference from the at least one user using at least one user processor; c. refining the at least one query using the at least one user preference using a refining processor; d. reformulating the at least one query through semantic mediation information using at least one ontology processor; e. retrieving at least one user query case similar to the at least one query from at least one user query case base using at least one case management processor; f. maintaining at least one ontology-based index using the at least one case management processor; g. storing at least one pre-compiled user query, or at least one artifact, or a combination thereof, in at least one repository; h. decomposing the at least one query into at least one subquery using the semantic mediation information utilizing at least one query formulation processor; and i. executing the at least one subquery in order to obtain at least one search result; and wherein the at least one case management processor retrieves the at least one user query case similar to the at least one query from the at least one user query case base using at least one algorithm using at least one ontology-based index; and wherein the at least one algorithm limits a number of user query cases retrieved to a predefined maximum. 8. The computer-implemented search method of claim 7 , the method further comprising: a. dispatching at least one subquery to at least one data source using at least one web services processor; b. managing the at least one user preference using a preferences processor; c. ranking at least one subquery result according to at least one user preference using a ranking processor; and d. transmitting the at least one search result to the at least one user for display on at least one graphical user interface. | 0.5 |
8,105,368 | 8 | 11 | 8. The improvement of claim 1 wherein the inner core has a first longitudinal section and an integral second longitudinal section, the first longitudinal section having the slit, the first longitudinal section extending between the first and second bone attachment structures and the second longitudinal section extending between the second bone attachment structure and a third bone attachment structure. | 8. The improvement of claim 1 wherein the inner core has a first longitudinal section and an integral second longitudinal section, the first longitudinal section having the slit, the first longitudinal section extending between the first and second bone attachment structures and the second longitudinal section extending between the second bone attachment structure and a third bone attachment structure. 11. The improvement of claim 8 wherein the inner core member has a third longitudinal section extending between the third bone attachment structure and a fourth bone attachment structure. | 0.5 |
7,970,808 | 8 | 12 | 8. A system that classifies entities, the system comprising: a volatile random-access memory; an entity recognizer that identifies occurrence of a first entity in a document; a feature extractor that identifies a feature that occurs in a context of said occurrence of said first entity; a first entity-feature store that is located in said memory and that stores entity-feature pairs; a second entity-feature store that is not located in said memory and that stores entity-feature pairs; and a store manager that receives a first entity-feature pair obtained from a document processed by collective action of said entity recognizer and said feature extractor, wherein said store manager maintains a first list of entity-feature pairs and defines a subset of said first list as being those entity feature pairs that are estimated to occur at least with a first estimated frequency, a second entity-feature pair being stored in said first entity-feature store and being in said subset, and wherein said store manager determines whether to store said first entity-feature pair in said first entity-feature store or said second entity-feature store based on a first estimated frequency of said second entity-feature pair. | 8. A system that classifies entities, the system comprising: a volatile random-access memory; an entity recognizer that identifies occurrence of a first entity in a document; a feature extractor that identifies a feature that occurs in a context of said occurrence of said first entity; a first entity-feature store that is located in said memory and that stores entity-feature pairs; a second entity-feature store that is not located in said memory and that stores entity-feature pairs; and a store manager that receives a first entity-feature pair obtained from a document processed by collective action of said entity recognizer and said feature extractor, wherein said store manager maintains a first list of entity-feature pairs and defines a subset of said first list as being those entity feature pairs that are estimated to occur at least with a first estimated frequency, a second entity-feature pair being stored in said first entity-feature store and being in said subset, and wherein said store manager determines whether to store said first entity-feature pair in said first entity-feature store or said second entity-feature store based on a first estimated frequency of said second entity-feature pair. 12. The system of claim 8 , wherein said first entity-feature pair comprises said first entity, e wherein said first entity does not appear in said first entity-feature store, and wherein said store manager determines that there is an entity, e′, in said first entity-feature store that has a lower estimated frequency than e, there being k entity-feature pairs in said entity-feature store having feature e′, said store manager determining that a sum of frequencies of the entity-feature pairs in which e′ is the feature is greater than a sum of the k lowest-frequency features in the first entity-feature store, said second entity-feature pair being one of said k lowest-frequency features, said store manager storing said first entity-feature pair in said first entity-feature store and evicting said second entity-feature pair from said first entity-feature store. | 0.5 |
7,574,362 | 1 | 12 | 1. A method for sentence planning in a task classification system that interacts with a user, comprising: recognizing symbols in a user's single input communication to a task classification system; determining whether the user's input communication can be understood, wherein if the user's communication can be understood, understanding data is generated; generating a plurality of communicative goals based on the recognized symbols and understanding data, the generated plurality of communicative goals being related to information needed to be obtained from the user; in response to information from the user's single input communication: generating a plurality of sentence plans based on the plurality of generated communicative goals, each sentence plan in the plurality of sentence plans being a realization comprising elementary speech acts each corresponding to a respective communicative goal and combined into at least one complete sentence that accomplishes the plurality of communicative goals, and wherein each sentence plan of the plurality of sentence plans is a viable and potentially usable prompt in response to the user's single input communication; independent of the user, ranking the plurality of generated sentence plans; and outputting at least one of the ranked sentence plans to the user as a response to the user's single input communication such that one dialog turn occurs starting with the user's single input communication and ending with the outputted sentence plan. | 1. A method for sentence planning in a task classification system that interacts with a user, comprising: recognizing symbols in a user's single input communication to a task classification system; determining whether the user's input communication can be understood, wherein if the user's communication can be understood, understanding data is generated; generating a plurality of communicative goals based on the recognized symbols and understanding data, the generated plurality of communicative goals being related to information needed to be obtained from the user; in response to information from the user's single input communication: generating a plurality of sentence plans based on the plurality of generated communicative goals, each sentence plan in the plurality of sentence plans being a realization comprising elementary speech acts each corresponding to a respective communicative goal and combined into at least one complete sentence that accomplishes the plurality of communicative goals, and wherein each sentence plan of the plurality of sentence plans is a viable and potentially usable prompt in response to the user's single input communication; independent of the user, ranking the plurality of generated sentence plans; and outputting at least one of the ranked sentence plans to the user as a response to the user's single input communication such that one dialog turn occurs starting with the user's single input communication and ending with the outputted sentence plan. 12. The method of claim 1 , further comprising: converting at least one of the ranked sentence plans from text to speech. | 0.814417 |
8,977,642 | 17 | 21 | 17. A non-transitory computer readable storage medium comprising software instructions for providing a keyword recommendation for a user to access content from a global textsite platform (GTP), that when executed, comprise functionality for: obtaining a first registered unique keyword, of a plurality of registered unique keywords, from a user message sent to the GTP by the user, wherein the plurality of registered unique keywords are used by a plurality of GTP users to access content from the GTP based on a text messaging service (TMS); selecting a keyword recommendation algorithm from a plurality of keyword recommendation algorithms based on a pre-determined selection sequence assigned to the plurality of keyword recommendation algorithms and a previously selected keyword recommendation algorithm; analyzing, using the keyword recommendation algorithm and based at least on the first registered unique keyword, a GTP usage pattern to select a recommended keyword from the plurality of registered unique keywords, wherein the GTP usage pattern comprises statistical information of the plurality of GTP users using the plurality of registered unique keywords to access content from the GTP; and sending the recommended keyword to the GTP, wherein the GTP sends, to the user in response to the user message, a GTP message comprising a keyword recommendation that identifies the recommended keyword, and wherein the user message and the GTP message comprise a TMS message. | 17. A non-transitory computer readable storage medium comprising software instructions for providing a keyword recommendation for a user to access content from a global textsite platform (GTP), that when executed, comprise functionality for: obtaining a first registered unique keyword, of a plurality of registered unique keywords, from a user message sent to the GTP by the user, wherein the plurality of registered unique keywords are used by a plurality of GTP users to access content from the GTP based on a text messaging service (TMS); selecting a keyword recommendation algorithm from a plurality of keyword recommendation algorithms based on a pre-determined selection sequence assigned to the plurality of keyword recommendation algorithms and a previously selected keyword recommendation algorithm; analyzing, using the keyword recommendation algorithm and based at least on the first registered unique keyword, a GTP usage pattern to select a recommended keyword from the plurality of registered unique keywords, wherein the GTP usage pattern comprises statistical information of the plurality of GTP users using the plurality of registered unique keywords to access content from the GTP; and sending the recommended keyword to the GTP, wherein the GTP sends, to the user in response to the user message, a GTP message comprising a keyword recommendation that identifies the recommended keyword, and wherein the user message and the GTP message comprise a TMS message. 21. The non-transitory computer readable storage medium of claim 17 , the instructions, when executed, comprise functionality for: generating an emerging usage pattern of the plurality of GTP users using the plurality of registered unique keywords to access content from the GTP, wherein the GTP usage pattern further comprises the emerging usage pattern; wherein analyzing the GTP usage pattern comprises: determining, based on the emerging usage pattern, a rate of usage increase for a registered unique keyword of the plurality of registered unique keywords, wherein the rate of usage increase represents increase in usage levels of the plurality of GTP users using the registered unique keyword over a pre-determined recent time period; and selecting, in response to the rate of usage increase meeting a pre-determined criterion, the registered unique keyword as the recommended keyword. | 0.55 |
9,606,987 | 8 | 9 | 8. The method of claim 1 , further comprising: generating a raw sentence from the internationalized sentence syntax; exporting the raw sentence to a third party for translations from the primary language to the one or more languages; and importing the translations from the third party. | 8. The method of claim 1 , further comprising: generating a raw sentence from the internationalized sentence syntax; exporting the raw sentence to a third party for translations from the primary language to the one or more languages; and importing the translations from the third party. 9. The method of claim 8 , wherein the translations imported from the third party include translations to the variations of the internationalized sentence syntax. | 0.5 |
8,375,312 | 1 | 9 | 1. A computer-implemented method for classifying digital content, the method comprising: displaying one or more poster frames in a user interface, wherein a poster frame corresponds to an item of digital content; in response to receiving an input, displaying a plurality of first level classification panes adjacent to a poster frame corresponding to an item of digital content, wherein each of the first level classification panes is associated with a corresponding keyword; detecting a selection and positioning of the poster frame at an at least partially common location with a classification pane of the plurality of first level classification panes; and in response to the detecting, associating the item of digital content to which the selected poster frame corresponds with a keyword associated with the first level classification pane on which the selected poster frame that corresponds to the item of digital content is positioned. | 1. A computer-implemented method for classifying digital content, the method comprising: displaying one or more poster frames in a user interface, wherein a poster frame corresponds to an item of digital content; in response to receiving an input, displaying a plurality of first level classification panes adjacent to a poster frame corresponding to an item of digital content, wherein each of the first level classification panes is associated with a corresponding keyword; detecting a selection and positioning of the poster frame at an at least partially common location with a classification pane of the plurality of first level classification panes; and in response to the detecting, associating the item of digital content to which the selected poster frame corresponds with a keyword associated with the first level classification pane on which the selected poster frame that corresponds to the item of digital content is positioned. 9. The method of claim 1 wherein associating the keyword with the item comprises including the keyword in metadata associated with the item. | 0.925134 |
8,099,287 | 1 | 9 | 1. A method comprising: operating at least one processor programmed to perform receiving an original command for a user-defined speech command; determining whether the original command is likely to be confused with a set of existing speech commands; when confusion is unlikely, automatically storing the original command as the user-defined speech command; and when confusion is likely, automatically determining at least one substitute command that is unlikely to be confused with the set, presenting the substitute command as an alternative to the original command, and selectively storing the substitute command as the user-defined speech command. | 1. A method comprising: operating at least one processor programmed to perform receiving an original command for a user-defined speech command; determining whether the original command is likely to be confused with a set of existing speech commands; when confusion is unlikely, automatically storing the original command as the user-defined speech command; and when confusion is likely, automatically determining at least one substitute command that is unlikely to be confused with the set, presenting the substitute command as an alternative to the original command, and selectively storing the substitute command as the user-defined speech command. 9. The method of claim 1 , wherein the steps of claim 1 are performed by at least one of a service agent and a computing device, comprising the at least one processor, manipulated by the service agents, the steps being performed in response to a service request. | 0.654354 |
7,885,905 | 9 | 11 | 9. A system that determines a number of non-spurious arcs associated with a learned graphical model, comprising: means for utilizing learning algorithms and datasets to generate the learned graphical model; means for creating multiple null distribution datasets based on the datasets; means for utilizing the learning algorithms and the multiple null distribution datasets to provide multiple graphical models associated with the multiple null distribution datasets; means for ascertaining an average number of arcs associated with the multiple graphical models associated with the multiple null distribution datasets; means for enumerating a total number of arcs associated with the learned graphical model; and means for presenting the average number of arcs and the total number of arcs, the total number of arcs provides a denominator value. | 9. A system that determines a number of non-spurious arcs associated with a learned graphical model, comprising: means for utilizing learning algorithms and datasets to generate the learned graphical model; means for creating multiple null distribution datasets based on the datasets; means for utilizing the learning algorithms and the multiple null distribution datasets to provide multiple graphical models associated with the multiple null distribution datasets; means for ascertaining an average number of arcs associated with the multiple graphical models associated with the multiple null distribution datasets; means for enumerating a total number of arcs associated with the learned graphical model; and means for presenting the average number of arcs and the total number of arcs, the total number of arcs provides a denominator value. 11. The system of claim 9 , the learned graphical model and the multiple models represented as bi-partite graphs with noisy-OR distributions. | 0.5 |
7,917,497 | 1 | 5 | 1. A computer-based method of transforming a natural language query into a representation of the natural language query wherein the representation is usable for purposes of an input into a search system that extracts answers based on the input, the computer-based method comprising: utilizing a computer processor for comparing the natural language query against common terms of a core knowledge pack to identifying semantic units in the natural language query such that each semantic unit is a respective portion of the natural language query; utilizing the computer processor for associating a token with each uniquely identified semantic unit by recognizing the respective uniquely identified semantic unit in a dictionary having the token associated with the uniquely identified semantic unit; utilizing the computer processor for identifying a stem for at least a first one of the tokens as part of a token processing operation, the stem being identified by replacing the first token with a stem corresponding to the token in the dictionary wherein the token associated with the stem is also associated with a plurality of semantic units in the dictionary; utilizing the computer processor for identifying a lexical phrase for at least a second one of the tokens as part of the token processing operation, wherein the lexical phrase is obtained by combining one of the uniquely identified semantic units with the second token; and utilizing the computer processor for representing the query as an ordered combination of the identified stems and lexical phrases identified in the token processing operation. | 1. A computer-based method of transforming a natural language query into a representation of the natural language query wherein the representation is usable for purposes of an input into a search system that extracts answers based on the input, the computer-based method comprising: utilizing a computer processor for comparing the natural language query against common terms of a core knowledge pack to identifying semantic units in the natural language query such that each semantic unit is a respective portion of the natural language query; utilizing the computer processor for associating a token with each uniquely identified semantic unit by recognizing the respective uniquely identified semantic unit in a dictionary having the token associated with the uniquely identified semantic unit; utilizing the computer processor for identifying a stem for at least a first one of the tokens as part of a token processing operation, the stem being identified by replacing the first token with a stem corresponding to the token in the dictionary wherein the token associated with the stem is also associated with a plurality of semantic units in the dictionary; utilizing the computer processor for identifying a lexical phrase for at least a second one of the tokens as part of the token processing operation, wherein the lexical phrase is obtained by combining one of the uniquely identified semantic units with the second token; and utilizing the computer processor for representing the query as an ordered combination of the identified stems and lexical phrases identified in the token processing operation. 5. The method of claim 1 , wherein the act of identifying lexical phrases comprises identifying misspellings, synonyms or hyponyms for at least one of the identified semantic units. | 0.5 |
8,832,731 | 3 | 4 | 3. A method in accordance with claim 2 , wherein the plurality of messages is interleaved in the single channel. | 3. A method in accordance with claim 2 , wherein the plurality of messages is interleaved in the single channel. 4. A method in accordance with claim 3 , wherein each location indicator denotes a respective location of a corresponding one of the plurality of messages within the single channel. | 0.5 |
7,716,580 | 4 | 5 | 4. The method of claim 1 , wherein removing the at least one word further comprises, removing at least one word from an end portion of the current title until the title fits the display area. | 4. The method of claim 1 , wherein removing the at least one word further comprises, removing at least one word from an end portion of the current title until the title fits the display area. 5. The method of claim 4 further comprising inserting a pre-determined indicator in the end portion to indicate the at least one removed word. | 0.5 |
6,052,666 | 1 | 2 | 1. A method for controlling a plurality of consumer electronic target devices interconnected via a digital bus to a central authority device, said central authority performing the steps of: receiving a speech command from a user; evaluating said received speech command to determine whether one of said target devices is identified; issuing, in response to said identified target device, a corresponding command from said central authority to said target device; evaluating said received speech command to determine whether said received speech command is unique to a specific target device; issuing, in response to said unique received speech command, a corresponding command from said central device to said specific target device; and thereafter generating, in response to said speech command not identifying said target device and not being unique to one of said target devices, a list of target devices capable of being controlled by said received speech command. | 1. A method for controlling a plurality of consumer electronic target devices interconnected via a digital bus to a central authority device, said central authority performing the steps of: receiving a speech command from a user; evaluating said received speech command to determine whether one of said target devices is identified; issuing, in response to said identified target device, a corresponding command from said central authority to said target device; evaluating said received speech command to determine whether said received speech command is unique to a specific target device; issuing, in response to said unique received speech command, a corresponding command from said central device to said specific target device; and thereafter generating, in response to said speech command not identifying said target device and not being unique to one of said target devices, a list of target devices capable of being controlled by said received speech command. 2. The method according to claim 1 wherein each of said target devices identified on said list performs the step of: generating uniquely identifiable speech information in response to said received speech command. | 0.5 |
9,471,604 | 1 | 2 | 1. A computer-implemented method comprising: performing a first database image similarity search for images similar to a first query image; providing to a client machine a first plurality of images that result from the first database search, the first plurality of images selectable for providing a second query image; responsive to selection of one of the first plurality of images, performing a second database image similarity search for images similar to the second query image; providing to the client machine the second plurality of images that result from the second image similarity search and at least one selectable icon to find more images of products like at least one of the second plurality of images of products; responsive to selection of one of the at least one selectable icon, performing a third database image similarity search for images similar to the at least one of the second plurality of images of products; and providing to the client machine the third plurality of images that result from the third database image similarity search. | 1. A computer-implemented method comprising: performing a first database image similarity search for images similar to a first query image; providing to a client machine a first plurality of images that result from the first database search, the first plurality of images selectable for providing a second query image; responsive to selection of one of the first plurality of images, performing a second database image similarity search for images similar to the second query image; providing to the client machine the second plurality of images that result from the second image similarity search and at least one selectable icon to find more images of products like at least one of the second plurality of images of products; responsive to selection of one of the at least one selectable icon, performing a third database image similarity search for images similar to the at least one of the second plurality of images of products; and providing to the client machine the third plurality of images that result from the third database image similarity search. 2. The method of claim 1 wherein one of the first query image, the second query image or the third query image is provided by a user. | 0.869094 |
8,560,372 | 1 | 2 | 1. A computer-implemented method for implementation by one or more data processors comprising: receiving, by at least one of the data processors, data characterizing a workflow of a process; and generating, by at least one of the data processors, a network representation of event-condition-action rules representing the workflow; wherein the network representation of event-condition-action rules comprises a combination of source nodes representing events, operator nodes representing conditions, and action nodes representing transactions, and wherein the events of the source nodes are represented as types of objects of a type language. | 1. A computer-implemented method for implementation by one or more data processors comprising: receiving, by at least one of the data processors, data characterizing a workflow of a process; and generating, by at least one of the data processors, a network representation of event-condition-action rules representing the workflow; wherein the network representation of event-condition-action rules comprises a combination of source nodes representing events, operator nodes representing conditions, and action nodes representing transactions, and wherein the events of the source nodes are represented as types of objects of a type language. 2. The method of claim 1 , wherein the workflow of the process is modeled in accordance with a modeling language. | 0.808475 |
7,644,354 | 12 | 13 | 12. The computer readable medium of claim 11 wherein the structural phase includes aggregation of content, filtering of content and schema transformations. | 12. The computer readable medium of claim 11 wherein the structural phase includes aggregation of content, filtering of content and schema transformations. 13. The computer readable medium of claim 12 , wherein the presentation phase renders the transformed help content into one of rich text, plain text and DHTML (“Dynamic Hypertext Markup Language”). | 0.5 |
8,856,125 | 1 | 3 | 1. A method performed by data processing apparatus, the method comprising: identifying a non-text content item that is associated with each of a plurality of web pages; receiving label data that includes a set of initial labels for the non-text content item, wherein each initial label includes one or more words; grouping, for each of two or more sets of matching web pages among the plurality of web pages, initial labels that are associated with the set of matching web pages into a label group, the initial labels for different set of matching web pages being grouped to different label groups; grouping different sets of matching labels from the set of initial labels into different label groups; and selecting, as a final label for the non-text content item, an n-gram of one or more words that is included in at least a threshold number of different label groups. | 1. A method performed by data processing apparatus, the method comprising: identifying a non-text content item that is associated with each of a plurality of web pages; receiving label data that includes a set of initial labels for the non-text content item, wherein each initial label includes one or more words; grouping, for each of two or more sets of matching web pages among the plurality of web pages, initial labels that are associated with the set of matching web pages into a label group, the initial labels for different set of matching web pages being grouped to different label groups; grouping different sets of matching labels from the set of initial labels into different label groups; and selecting, as a final label for the non-text content item, an n-gram of one or more words that is included in at least a threshold number of different label groups. 3. The method of claim 1 , wherein grouping initial labels that are associated with the set of matching web pages into a label group comprises: identifying, among the set of initial labels, a first set of at least two initial labels that were obtained from a same domain; grouping the first set of at least two initial labels into a label group; identifying, among the set of initial labels, a second set of at least two initial labels that were obtained from a same domain; and grouping the second set of at least two initial labels into a label group. | 0.546721 |
9,286,575 | 1 | 8 | 1. A computer-implemented method comprising: determining, by a computer, a plurality of demographic groups of users of a social networking system, the determining comprising, for each demographic group of users, selecting as plurality of users based on demographic characteristics of the users; for each demographic group of users, generating a model configured to rank news feed stories for presentation to users from the demographic group, the model configured to receive as input, one or more user attributes describing a viewing user and ranking newsfeed stories for the viewing user based on the one or more user attributes, the generating of the model comprising: selecting a set of features for the demographic group based on the characteristics of the users of the demographic group; training the model the training utilizing training sets obtained from the demographic group of users, the model comprising the selected set of features; identifying stories for presentation to a user belonging to a demographic group; providing one or more attributes describing the user as input to the model; and ranking the stories identified for presentation to the user using the model and sending the stories for presentation to the user based on the ranking. | 1. A computer-implemented method comprising: determining, by a computer, a plurality of demographic groups of users of a social networking system, the determining comprising, for each demographic group of users, selecting as plurality of users based on demographic characteristics of the users; for each demographic group of users, generating a model configured to rank news feed stories for presentation to users from the demographic group, the model configured to receive as input, one or more user attributes describing a viewing user and ranking newsfeed stories for the viewing user based on the one or more user attributes, the generating of the model comprising: selecting a set of features for the demographic group based on the characteristics of the users of the demographic group; training the model the training utilizing training sets obtained from the demographic group of users, the model comprising the selected set of features; identifying stories for presentation to a user belonging to a demographic group; providing one or more attributes describing the user as input to the model; and ranking the stories identified for presentation to the user using the model and sending the stories for presentation to the user based on the ranking. 8. The method of claim 1 , wherein the features used for the model comprise interactions of users of the demographic group with news feed stories presented to the users of the demographic group. | 0.578261 |
8,070,775 | 10 | 14 | 10. A spine implant comprising: an anchor adapted to be inserted into the bone of a patient the anchor having a longitudinal axis; an anchor head extending from said anchor; said anchor head including a deflection cavity aligned with the longitudinal axis of the anchor; said deflection cavity having a deflection guide cavity wall, a first end which opens through the anchor head, and a second end internal to the anchor head; a deflection rod provided in said deflection cavity; said deflection rod having a distal portion secured within the deflection guide cavity and a proximal portion extending out of the opening of the deflection guide cavity; wherein the spine implant is configured such that, in the absence of a load applied to the proximal portion of the deflection rod, the deflection rod is aligned with the longitudinal axis of the anchor; wherein the spine implant is configured such that a load applied to the proximal portion of the deflection rod causes resilient deflection of the proximal portion of the deflection rod away from alignment with the longitudinal axis of the bone anchor; and wherein resilient deflection of the proximal end of the deflection rod is controlled by contact between the proximal portion of the deflection rod and the deflection guide cavity wall. | 10. A spine implant comprising: an anchor adapted to be inserted into the bone of a patient the anchor having a longitudinal axis; an anchor head extending from said anchor; said anchor head including a deflection cavity aligned with the longitudinal axis of the anchor; said deflection cavity having a deflection guide cavity wall, a first end which opens through the anchor head, and a second end internal to the anchor head; a deflection rod provided in said deflection cavity; said deflection rod having a distal portion secured within the deflection guide cavity and a proximal portion extending out of the opening of the deflection guide cavity; wherein the spine implant is configured such that, in the absence of a load applied to the proximal portion of the deflection rod, the deflection rod is aligned with the longitudinal axis of the anchor; wherein the spine implant is configured such that a load applied to the proximal portion of the deflection rod causes resilient deflection of the proximal portion of the deflection rod away from alignment with the longitudinal axis of the bone anchor; and wherein resilient deflection of the proximal end of the deflection rod is controlled by contact between the proximal portion of the deflection rod and the deflection guide cavity wall. 14. The implant of claim 10 wherein said deflection rod comprises a inner metal rod and an outer polymer shell. | 0.731884 |
8,347,267 | 19 | 20 | 19. The method of claim 15 further comprising generating, from the test thread tree, a data structure enumerating a plurality of test cases indicative of the windows, data elements and objects in the AUT process, the data structure enabling different values for objects and data elements in the process and different types of actions for respective objects; and wherein automatically generating the test description comprises generating a test description for each of multiple rows of the data structure relevant to the scenario. | 19. The method of claim 15 further comprising generating, from the test thread tree, a data structure enumerating a plurality of test cases indicative of the windows, data elements and objects in the AUT process, the data structure enabling different values for objects and data elements in the process and different types of actions for respective objects; and wherein automatically generating the test description comprises generating a test description for each of multiple rows of the data structure relevant to the scenario. 20. The method of claim 19 wherein generating a parameterized script further includes creating a variable name in the parameterized script for string variables referring to the respective objects, and automatically linking the string variable to an array represented by a column in the data structure. | 0.5 |
8,745,725 | 5 | 10 | 5. The system of claim 1 , wherein said transfer determining module configured to determine that a computing device, that was presenting an item, has been transferred from a first user to a second user, the transfer determining module including at least a visual cue detecting module configured to determine that the computing device has been transferred from the first user to the second user when the visual cue detecting module at least detects presence or absence of one or more visual cues in proximate vicinity of the computing device that when detected as occurring at least suggested transfer of the computing device between the first and second users, the visual cue detecting module including at least a gesture detecting module configured to detect the presence or absence of the one or more visual cues in the proximate vicinity of the computing device when the gesture detecting module at least detects visually one or more gestures exhibited by the first user that when detected as occurring at least suggested transfer of the computing device from the first user to the second user at least in part by the first user moving the computing device at least in part with the one or more gestures comprises: a particular movement detecting module configured to determine that the computing device has been transferred from the first user to the second user when the particular movement detecting module at least detects that the computing device has moved in a particular manner that when detected as occurring at least suggested that the computing device has been transferred between the first and second users. | 5. The system of claim 1 , wherein said transfer determining module configured to determine that a computing device, that was presenting an item, has been transferred from a first user to a second user, the transfer determining module including at least a visual cue detecting module configured to determine that the computing device has been transferred from the first user to the second user when the visual cue detecting module at least detects presence or absence of one or more visual cues in proximate vicinity of the computing device that when detected as occurring at least suggested transfer of the computing device between the first and second users, the visual cue detecting module including at least a gesture detecting module configured to detect the presence or absence of the one or more visual cues in the proximate vicinity of the computing device when the gesture detecting module at least detects visually one or more gestures exhibited by the first user that when detected as occurring at least suggested transfer of the computing device from the first user to the second user at least in part by the first user moving the computing device at least in part with the one or more gestures comprises: a particular movement detecting module configured to determine that the computing device has been transferred from the first user to the second user when the particular movement detecting module at least detects that the computing device has moved in a particular manner that when detected as occurring at least suggested that the computing device has been transferred between the first and second users. 10. The system of claim 5 , wherein said particular movement detecting module comprises: a spin rotation detecting module configured to detect that the computing device has moved in the particular manner when the spin rotation detecting module at least detects that the computing device has been spin rotated from a first orientation to a second orientation, the first orientation being an orientation associated with the computing device when the computing device was in possession of the first user prior to said transfer. | 0.5 |
9,377,999 | 5 | 6 | 5. The development system of claim 4 , wherein the user search query identifies a character string and the particular element type. | 5. The development system of claim 4 , wherein the user search query identifies a character string and the particular element type. 6. The development system of claim 5 , wherein the returned element comprises an element of the particular element type that has a property value that matches the character string. | 0.5 |
9,785,712 | 9 | 12 | 9. A system, comprising: a memory that stores computer-executable instructions; and a processor configured to access the memory and execute the computer-executable instructions to collectively at least: obtain first search results for a search query using a first search index; determine a second search index from among a plurality of indices based at least in part on the first search results; obtain second search results, from among the first search results, using the second search index; rank at least a part of the second search results with respect to each other based at least in part on a ranking algorithm of the second search index; and modify the second search index based at least in part on the ranked part of the second search results, wherein the plurality of indices are modified more frequently than the first search index. | 9. A system, comprising: a memory that stores computer-executable instructions; and a processor configured to access the memory and execute the computer-executable instructions to collectively at least: obtain first search results for a search query using a first search index; determine a second search index from among a plurality of indices based at least in part on the first search results; obtain second search results, from among the first search results, using the second search index; rank at least a part of the second search results with respect to each other based at least in part on a ranking algorithm of the second search index; and modify the second search index based at least in part on the ranked part of the second search results, wherein the plurality of indices are modified more frequently than the first search index. 12. The system of claim 9 , wherein the first search index includes a set of document identifiers, and each of the plurality of indices includes a subset of the document identifiers included in the first search index. | 0.526201 |
7,801,912 | 35 | 38 | 35. A computer-implemented method, comprising: receiving service requests from a plurality of client applications on a web service interface to a searchable data service, wherein the service requests comprise query requests and storage requests, and wherein said receiving comprises receiving the query requests and storage requests at a common message endpoint provided by the web service interface to the plurality of client applications to send the query requests and storage requests; forwarding each service request from the web service interface to one of a plurality of nodes configured to participate in the searchable data service; processing received storage requests on the plurality of nodes to store searchable data service objects specified in the storage requests in respective searchable indexes for a plurality of independent data stores used by the client applications, wherein the searchable indexes are on the plurality of nodes, wherein the data stores are on one or more storage devices each on a network and separate from the one or more computer devices that implement the plurality of nodes configured to participate in the searchable data service, wherein each searchable index stores searchable data service objects for a particular one of the plurality of independent data stores such that each searchable index provides a complete index for only one of the independent data stores, wherein each searchable data service object specifies two or more attributes of a particular entity in a particular data store, and wherein the attributes include a unique entity identifier for locating the particular entity in the particular data store; processing each received query request on the plurality of nodes to locate a set of one or more searchable data service objects from the searchable indexes that satisfy the query request, wherein the received query requests specify one of the searchable indexes; and returning at least the entity identifiers from the set of one or more searchable data service objects that satisfy the query request to the client applications in accordance with the web service interface. | 35. A computer-implemented method, comprising: receiving service requests from a plurality of client applications on a web service interface to a searchable data service, wherein the service requests comprise query requests and storage requests, and wherein said receiving comprises receiving the query requests and storage requests at a common message endpoint provided by the web service interface to the plurality of client applications to send the query requests and storage requests; forwarding each service request from the web service interface to one of a plurality of nodes configured to participate in the searchable data service; processing received storage requests on the plurality of nodes to store searchable data service objects specified in the storage requests in respective searchable indexes for a plurality of independent data stores used by the client applications, wherein the searchable indexes are on the plurality of nodes, wherein the data stores are on one or more storage devices each on a network and separate from the one or more computer devices that implement the plurality of nodes configured to participate in the searchable data service, wherein each searchable index stores searchable data service objects for a particular one of the plurality of independent data stores such that each searchable index provides a complete index for only one of the independent data stores, wherein each searchable data service object specifies two or more attributes of a particular entity in a particular data store, and wherein the attributes include a unique entity identifier for locating the particular entity in the particular data store; processing each received query request on the plurality of nodes to locate a set of one or more searchable data service objects from the searchable indexes that satisfy the query request, wherein the received query requests specify one of the searchable indexes; and returning at least the entity identifiers from the set of one or more searchable data service objects that satisfy the query request to the client applications in accordance with the web service interface. 38. The computer-implemented method as recited in claim 35 , wherein the plurality of nodes comprises one or more query nodes each configured to maintain a local query cache of responses to previous query requests. | 0.896818 |
8,886,676 | 13 | 19 | 13. A non-transitory machine readable medium storing a program which when executed by at least one processing unit analyzes a document comprising a plurality of primitive elements, the program comprising sets of instructions for: identifying different sets of lists for different columns of the document, each column ordered within the document based on a reading order; identifying a first list in a first column of the document that has an open end state; identifying a second list, in a second column of the document subsequent to the first column in the reading order, that has an open start state; determining that the first list in the first column continues as the second list in the second column of the document; and storing the first list and the second list as a single list structure associated with the document. | 13. A non-transitory machine readable medium storing a program which when executed by at least one processing unit analyzes a document comprising a plurality of primitive elements, the program comprising sets of instructions for: identifying different sets of lists for different columns of the document, each column ordered within the document based on a reading order; identifying a first list in a first column of the document that has an open end state; identifying a second list, in a second column of the document subsequent to the first column in the reading order, that has an open start state; determining that the first list in the first column continues as the second list in the second column of the document; and storing the first list and the second list as a single list structure associated with the document. 19. The non-transitory machine readable medium of claim 13 , wherein the program further comprises a set of instructions for determining that a third list in the first column continues as a fourth list in the second column. | 0.779208 |
7,853,866 | 1 | 7 | 1. A document conversion apparatus for converting document image data to an electronic document, said document conversion apparatus comprising: a memory that stores program instructions to perform a document conversion function; and a processor that executes the program instructions stored in the memory to perform the document conversion function; wherein the processor executes the program instructions as functional sections, the functional sections including: a character region extraction section that extracts character regions from the document image data, a table of contents data generation section that generates table of contents data based on the extracted character regions and page numbers of the character regions, an electronic document generation section that generates the electronic document having a table of contents based on the document image data and the generated table of contents data, a character recognition section that performs character recognition on the extracted character regions, a keyword extraction section that extracts keywords from a result of the character recognition, and an index data generation section that generates index data based on the extracted keywords and page numbers thereof, wherein said table of contents data generation section comprises a table of contents link information adding section that adds link information to respective ones of items in the generated table of contents data for linking the items in the generated table of contents data with corresponding positions in the electronic document in which the items are described, wherein said index data generation section comprises an index link information adding section that adds links information to respective ones of items in the index data for linking the items in the generated index data with corresponding positions in the electronic document in which these items are described, and wherein said electronic document generation section generates the electronic document having the table of contents and an index based on the document image data, the table of contents data, and the index data. | 1. A document conversion apparatus for converting document image data to an electronic document, said document conversion apparatus comprising: a memory that stores program instructions to perform a document conversion function; and a processor that executes the program instructions stored in the memory to perform the document conversion function; wherein the processor executes the program instructions as functional sections, the functional sections including: a character region extraction section that extracts character regions from the document image data, a table of contents data generation section that generates table of contents data based on the extracted character regions and page numbers of the character regions, an electronic document generation section that generates the electronic document having a table of contents based on the document image data and the generated table of contents data, a character recognition section that performs character recognition on the extracted character regions, a keyword extraction section that extracts keywords from a result of the character recognition, and an index data generation section that generates index data based on the extracted keywords and page numbers thereof, wherein said table of contents data generation section comprises a table of contents link information adding section that adds link information to respective ones of items in the generated table of contents data for linking the items in the generated table of contents data with corresponding positions in the electronic document in which the items are described, wherein said index data generation section comprises an index link information adding section that adds links information to respective ones of items in the index data for linking the items in the generated index data with corresponding positions in the electronic document in which these items are described, and wherein said electronic document generation section generates the electronic document having the table of contents and an index based on the document image data, the table of contents data, and the index data. 7. The document conversion apparatus according to claim 1 , wherein the generated electronic document has a data structure that presents the table of contents, document pages, and the index in this order when the electronic document is opened by an application. | 0.816197 |
9,929,747 | 19 | 22 | 19. One or more computer-readable storage media comprising a plurality of instructions that in response to being executed cause a computing device to: update an index data structure based on an input data stream, wherein the index data structure includes index data associated with offsets in the input data stream; process a plurality of chunks of the input data stream in parallel to generate a plurality of token streams using the index data, wherein each chunk has a first length and each chunk overlaps a previous chunk by a second length, and wherein each token stream is generated from a corresponding disjoint subset of the plurality of chunks; and merge the plurality of token streams to generate an output token stream. | 19. One or more computer-readable storage media comprising a plurality of instructions that in response to being executed cause a computing device to: update an index data structure based on an input data stream, wherein the index data structure includes index data associated with offsets in the input data stream; process a plurality of chunks of the input data stream in parallel to generate a plurality of token streams using the index data, wherein each chunk has a first length and each chunk overlaps a previous chunk by a second length, and wherein each token stream is generated from a corresponding disjoint subset of the plurality of chunks; and merge the plurality of token streams to generate an output token stream. 22. The one or more computer-readable storage media of claim 19 , wherein to merge the plurality of token streams to generate the output token stream comprises to: read a previous token and a next token from the plurality of token streams, wherein the previous token and the next token are consecutive with respect to the input data stream; determine whether the previous token and the next token originate from the same token stream; output the previous token to the output token stream in response to determining that the previous token and the next token originate from the same token stream; copy the next token to the previous token in response to outputting the previous token; read the next token from the plurality of token streams in response to copying the next token; and merge the previous token and the next token to generate one or more synchronized tokens in response to determining that the previous token and the next token do not originate from the same token stream. | 0.5 |
9,171,097 | 30 | 31 | 30. The computing device of claim 25 , wherein the processor is configured with processor-executable instructions to perform operations further comprising: completing HTML code computations for the generated DOM tree when it is determined that the generated DOM tree is not isomorphic with any of the one or more portions of the particular stored DOM tree; and storing the results of the HTML computations indexed with the generated DOM tree in the memory. | 30. The computing device of claim 25 , wherein the processor is configured with processor-executable instructions to perform operations further comprising: completing HTML code computations for the generated DOM tree when it is determined that the generated DOM tree is not isomorphic with any of the one or more portions of the particular stored DOM tree; and storing the results of the HTML computations indexed with the generated DOM tree in the memory. 31. The computing device of claim 30 , wherein the processor is configured with processor-executable instructions such that storing the generated DOM tree comprises storing at least a portion of the generated DOM tree in a key-value data structure in which DOM tree elements are stored in association with the corresponding HTML computation results. | 0.5 |
10,133,551 | 3 | 4 | 3. The method of claim 1 , further comprising the step of storing an indication of said selected prediction algorithm. | 3. The method of claim 1 , further comprising the step of storing an indication of said selected prediction algorithm. 4. The method of claim 3 , wherein said indication comprises a disambiguation index identifying said selected prediction algorithm among a set of potential prediction algorithms that potentially generated said selected prediction. | 0.5 |
9,292,621 | 1 | 2 | 1. A computer-implemented method of managing automatically corrected text, comprising: determining a set of terms specific to an environment; processing each term in the set of terms using at least one text-correction algorithm to determine a subset of terms that are able to be auto-corrected by the at least one text-correction algorithm, as well as autocorrected terms for the subset of terms that were generated by the at least one text-correction algorithm, wherein an autocorrected term is generated by autocorrecting a term using the at least one text-correction algorithm, wherein processing each term in the set of terms using the at least one text-correction algorithm includes processing each term in the set of terms with a plurality of text-correction algorithms, each text-correction algorithm being utilized by at least one type of computing device; storing the autocorrected terms with the set of terms as a set of synonyms; receiving a communication including a first term that matches one of the autocorrected terms in the set of synonyms; determining a matching term from the set of terms for the first term; calculating, via at least one processor, a likelihood that the matching term was auto-corrected to the first term; and reverting, via the at least one processor, the first term to the matching term in the communication when the calculated likelihood meets at least one correction criterion. | 1. A computer-implemented method of managing automatically corrected text, comprising: determining a set of terms specific to an environment; processing each term in the set of terms using at least one text-correction algorithm to determine a subset of terms that are able to be auto-corrected by the at least one text-correction algorithm, as well as autocorrected terms for the subset of terms that were generated by the at least one text-correction algorithm, wherein an autocorrected term is generated by autocorrecting a term using the at least one text-correction algorithm, wherein processing each term in the set of terms using the at least one text-correction algorithm includes processing each term in the set of terms with a plurality of text-correction algorithms, each text-correction algorithm being utilized by at least one type of computing device; storing the autocorrected terms with the set of terms as a set of synonyms; receiving a communication including a first term that matches one of the autocorrected terms in the set of synonyms; determining a matching term from the set of terms for the first term; calculating, via at least one processor, a likelihood that the matching term was auto-corrected to the first term; and reverting, via the at least one processor, the first term to the matching term in the communication when the calculated likelihood meets at least one correction criterion. 2. The computer-implemented method of claim 1 , wherein the communication is received from a client device, and where the set of synonyms is specific to a type of the client device. | 0.5 |
7,729,655 | 21 | 30 | 21. A computer-implemented method for providing feedback an essay, the method comprising: receiving an essay prepared by a writer, wherein the essay is received in an electronic format using a computer; automatically determining with the computer a first value for each sentence in the essay that reflects the probability that each sentence in the essay is a member of a discourse element category, wherein the probability is based on the presence of each of a predetermined set of features in each sentence of the essay; utilizing the first value to determine with the computer whether each sentence in the essay should be assigned a discourse element category; and providing with the computer feedback to the writer related to any discourse elements identified in the essay. | 21. A computer-implemented method for providing feedback an essay, the method comprising: receiving an essay prepared by a writer, wherein the essay is received in an electronic format using a computer; automatically determining with the computer a first value for each sentence in the essay that reflects the probability that each sentence in the essay is a member of a discourse element category, wherein the probability is based on the presence of each of a predetermined set of features in each sentence of the essay; utilizing the first value to determine with the computer whether each sentence in the essay should be assigned a discourse element category; and providing with the computer feedback to the writer related to any discourse elements identified in the essay. 30. The computer-readable storage medium of claim 21 wherein the probability is calculated utilizing a LaPlace estimator. | 0.674731 |
9,495,458 | 15 | 17 | 15. A computer readable medium storing computer program instructions, which, when executed on a processor, cause the processor to perform operations comprising: identifying, at a server, based on keywords identified by an intercept module monitoring user input and user communications, items of interest to present to a user via a webpage, each of the keywords having a date and time stamp and ranked according to occurrence in the user input and the user communications; reducing a ranking of one of the keywords identified by the intercept module based on expiration of a user defined period of time that begins on a date and a time identified by a time stamp of the one of the keywords; and generating the webpage including information related to at least one of the items of interest. | 15. A computer readable medium storing computer program instructions, which, when executed on a processor, cause the processor to perform operations comprising: identifying, at a server, based on keywords identified by an intercept module monitoring user input and user communications, items of interest to present to a user via a webpage, each of the keywords having a date and time stamp and ranked according to occurrence in the user input and the user communications; reducing a ranking of one of the keywords identified by the intercept module based on expiration of a user defined period of time that begins on a date and a time identified by a time stamp of the one of the keywords; and generating the webpage including information related to at least one of the items of interest. 17. The computer readable medium of claim 15 , wherein the keywords are generated based on content of email communications of the user. | 0.717573 |
9,336,116 | 20 | 33 | 20. A non-transitory computer-readable medium storing instructions, the instructions comprising: one or more instructions that, when executed by one or more first processors of a device, cause the one or more first processors to: access a first response file generated during a first execution of a first recording of a base script on a system that includes at least one second processor and at least one memory and a second response file generated during a second execution of a second recording of the base script on the system, the base script defining operations to be executed in testing performance of the system; determine first dynamic value data that describes one or more first dynamic values stored in the first response file and second dynamic value data that describes one or more second dynamic values stored in the second response file; analyze the first dynamic value data and the second dynamic value data to identify candidate parameters for correlation within the base script; generate a correlated script using the identified candidate parameters and the base script; and store the correlated script. | 20. A non-transitory computer-readable medium storing instructions, the instructions comprising: one or more instructions that, when executed by one or more first processors of a device, cause the one or more first processors to: access a first response file generated during a first execution of a first recording of a base script on a system that includes at least one second processor and at least one memory and a second response file generated during a second execution of a second recording of the base script on the system, the base script defining operations to be executed in testing performance of the system; determine first dynamic value data that describes one or more first dynamic values stored in the first response file and second dynamic value data that describes one or more second dynamic values stored in the second response file; analyze the first dynamic value data and the second dynamic value data to identify candidate parameters for correlation within the base script; generate a correlated script using the identified candidate parameters and the base script; and store the correlated script. 33. The computer-readable medium of claim 20 , wherein analyzing the first dynamic value data and the second dynamic value data to identify candidate parameters for correlation within the base script comprises: generating a correlation log using the first dynamic value data and the second dynamic value data. | 0.715993 |
8,291,237 | 1 | 10 | 1. A method of privately searching for keyword criteria in an electronic document, the method comprising: a) initializing an encryption variable to a value being an encryption of an identity element under a homomorphic and probabilistic Chosen Plaintext Attack-secure (CPA-secure) encryption scheme; b) receiving an electronic document comprising a plurality of document words; c) looking up each of the plurality of document words in a dictionary of known words, each of the known words being either a keyword or an irrelevant word, each of the irrelevant words having associated therewith a correspondingly unique cipher-text having a value that is an encryption of the identity element under the encryption scheme, and each of the keywords having associated therewith a correspondingly unique cipher-text having a value that is an encryption of a non-identity element under the encryption scheme; d) for each of the plurality of document words found during the lookup step, performing an operation associated with the encryption scheme on a first operand and a second operand, the first operand being the cipher-text corresponding to the found document word and the second operand being the encryption variable, each time changing the value of the encryption variable to be the result of the operation, the encryption variable ending with a final value; e) encrypting the electronic document using the encryption variable; f) writing the encrypted electronic document to a first slot in a buffer comprising a plurality of slots, each of the plurality of slots having been initialized to contain an encryption of the identity element prior to the encrypted electronic document being written to the first slot. | 1. A method of privately searching for keyword criteria in an electronic document, the method comprising: a) initializing an encryption variable to a value being an encryption of an identity element under a homomorphic and probabilistic Chosen Plaintext Attack-secure (CPA-secure) encryption scheme; b) receiving an electronic document comprising a plurality of document words; c) looking up each of the plurality of document words in a dictionary of known words, each of the known words being either a keyword or an irrelevant word, each of the irrelevant words having associated therewith a correspondingly unique cipher-text having a value that is an encryption of the identity element under the encryption scheme, and each of the keywords having associated therewith a correspondingly unique cipher-text having a value that is an encryption of a non-identity element under the encryption scheme; d) for each of the plurality of document words found during the lookup step, performing an operation associated with the encryption scheme on a first operand and a second operand, the first operand being the cipher-text corresponding to the found document word and the second operand being the encryption variable, each time changing the value of the encryption variable to be the result of the operation, the encryption variable ending with a final value; e) encrypting the electronic document using the encryption variable; f) writing the encrypted electronic document to a first slot in a buffer comprising a plurality of slots, each of the plurality of slots having been initialized to contain an encryption of the identity element prior to the encrypted electronic document being written to the first slot. 10. The method of claim 1 , wherein encrypting the electronic document comprises bit-wise encrypting the document using the final value of the encryption variable to represent the non-identity element and using the initial value of the encryption variable to represent the identity element. | 0.589235 |
8,108,386 | 16 | 17 | 16. The method of claim 15 wherein the formulating step includes the steps of: screening information supplied by the requestor with a live screening agent; and formulating the search query based upon at least one of, information from a speaker independent voice recognition system and information from the live screening agent. | 16. The method of claim 15 wherein the formulating step includes the steps of: screening information supplied by the requestor with a live screening agent; and formulating the search query based upon at least one of, information from a speaker independent voice recognition system and information from the live screening agent. 17. The method of claim 16 wherein the information from the speaker independent voice recognition originates from a requestor utterance; and the information from the live screening agent includes an utterance processed by a speaker dependent voice recognition system associated with utterances specific to the live screening agent. | 0.5 |
6,124,864 | 1 | 24 | 1. A method for developing a scene model from a visual image sequence containing a sequence of visual image frames, the visual image sequence including a visual representation of one or more visual objects, the method comprising the steps of: (a) analyzing portions of the visual image sequence in accordance with input parameters using a machine vision process by performing the steps of: (i) defining an image-based data object in an image-based model of the visual image sequence containing a pixel representation corresponding to a portion of at least one frame of the visual image sequence; (ii) defining an abstraction-based data object in an abstraction-based model of the visual objects containing an abstract representation of at least a portion of one of the visual objects represented by the visual image sequence; and (iii) storing a link data object in a correlation mesh data structure indicating a correspondence between the image-based data object and the abstraction-based data object; (b) refining an abstraction-based model of the visual imago sequence by performing the steps of: (i) accepting input parameters to define in the abstraction-based model of the visual objects a new abstraction-based data object containing a new abstract representation of at least a portion of one of the visual objects contained in the visual image sequence that differs from the abstract representations contained in abstraction-based objects defined in the analysis step (a); and (ii) adding a link object in the correlation mesh data structure indicating a correspondence between the new abstraction-based data object and another data object defined in the scene model; and (c) iteratively improving the scene model by performing certain selected ones of steps (a) through (b) in an order as specified by user input until a desired level of refinement is obtained in the scene model such that selected link objects in the correlation mesh data structure added in iterations of the refining step (b) are used to provide additional input parameters to subsequent iterations thereby allowing the scene model to converge. | 1. A method for developing a scene model from a visual image sequence containing a sequence of visual image frames, the visual image sequence including a visual representation of one or more visual objects, the method comprising the steps of: (a) analyzing portions of the visual image sequence in accordance with input parameters using a machine vision process by performing the steps of: (i) defining an image-based data object in an image-based model of the visual image sequence containing a pixel representation corresponding to a portion of at least one frame of the visual image sequence; (ii) defining an abstraction-based data object in an abstraction-based model of the visual objects containing an abstract representation of at least a portion of one of the visual objects represented by the visual image sequence; and (iii) storing a link data object in a correlation mesh data structure indicating a correspondence between the image-based data object and the abstraction-based data object; (b) refining an abstraction-based model of the visual imago sequence by performing the steps of: (i) accepting input parameters to define in the abstraction-based model of the visual objects a new abstraction-based data object containing a new abstract representation of at least a portion of one of the visual objects contained in the visual image sequence that differs from the abstract representations contained in abstraction-based objects defined in the analysis step (a); and (ii) adding a link object in the correlation mesh data structure indicating a correspondence between the new abstraction-based data object and another data object defined in the scene model; and (c) iteratively improving the scene model by performing certain selected ones of steps (a) through (b) in an order as specified by user input until a desired level of refinement is obtained in the scene model such that selected link objects in the correlation mesh data structure added in iterations of the refining step (b) are used to provide additional input parameters to subsequent iterations thereby allowing the scene model to converge. 24. A method as in claim 1 wherein the step of refining an image-based model additionally comprises the steps of: creating multiple pixel representation versions of a given visual image frame, the multiple pixel representation versions being at different levels of image resolution; and using different ones of the pixel representation versions in given iterations. | 0.5 |
8,631,036 | 21 | 22 | 21. The method of claim 19 , comprising customizing, based on the user profile data, the first search result for receipt by the user. | 21. The method of claim 19 , comprising customizing, based on the user profile data, the first search result for receipt by the user. 22. The method of claim 21 , wherein customizing, based on the user profile data, the first search result comprises at least one of aurally enhancing the first search result, visually enhancing the first search result, textually enhancing the first search result, anecdotally enhancing the first search result, and logically enhancing the first search result. | 0.5 |
9,881,330 | 2 | 3 | 2. The method of claim 1 , wherein the computer server is configured to populate the recommendation region with one or more recommended stationery/card designs based on an upcoming event and an identity of the contact associated with the highest priority event in the reminder list. | 2. The method of claim 1 , wherein the computer server is configured to populate the recommendation region with one or more recommended stationery/card designs based on an upcoming event and an identity of the contact associated with the highest priority event in the reminder list. 3. The method of claim 2 , wherein the computer server is configured to populate the recommendation region with one or more recommended stationery/card designs based on a preference, interest, and taste of the contact and the upcoming event associated with the highest priority event in the reminder list. | 0.5 |
8,832,655 | 12 | 21 | 12. A system of calculating whether program applications have similarities, comprising: a non-transitory memory storing instructions; and a processor executing the instructions to cause the system to perform a method comprising: receiving, by a computer, source code for a plurality of applications; associating, for each application, semantic anchors found in the source code for that application with the application, wherein associating semantic anchors comprises building at least one weighted term document matrix from the semantic anchors and source code, the at least one weighted term document matrix comprising at least a first term weighted based on at least a number of the plurality of applications in which a first semantic anchor is present in the source code for those applications; comparing, based on the semantic anchors, a similarity of the first application to a second application; and assigning, based on the comparison, a number representing the similarity of the first and second applications. | 12. A system of calculating whether program applications have similarities, comprising: a non-transitory memory storing instructions; and a processor executing the instructions to cause the system to perform a method comprising: receiving, by a computer, source code for a plurality of applications; associating, for each application, semantic anchors found in the source code for that application with the application, wherein associating semantic anchors comprises building at least one weighted term document matrix from the semantic anchors and source code, the at least one weighted term document matrix comprising at least a first term weighted based on at least a number of the plurality of applications in which a first semantic anchor is present in the source code for those applications; comparing, based on the semantic anchors, a similarity of the first application to a second application; and assigning, based on the comparison, a number representing the similarity of the first and second applications. 21. The system of claim 12 , wherein the associating comprises: associating each semantic anchor with rows of the weighted term document matrix and each application with columns of the weighted term document matrix; and calculating a normalized metric of each semantic anchor in each application's source code, by, for each application, dividing a number of times a particular semantic anchor appears in the application by a number of semantic anchors that appear in the application, and multiplying that quotient by a logarithm of a quotient resulting from dividing a total number of applications by a number of applications where the particular semantic anchor appears. | 0.5 |
9,880,801 | 1 | 6 | 1. A method comprising: at a computer system with a display and an input device: displaying a user interface on the display; while displaying the user interface on the display, detecting an input on the input device, wherein the input includes a motion component and a pressure component; and in response to detecting the input: determining whether the pressure component of the input is above a pressure threshold; in accordance with a determination that the pressure component of the input is above the pressure threshold, performing a first operation in the user interface displayed on the display in accordance with the motion component of the input, wherein the first operation is scrolling content in the user interface at a variable scroll rate that increases and then decays over time, and performing the first operation in the user interface includes causing the user interface to rapidly scroll through content for an initial predetermined time interval and subsequently reduce a scroll rate over a second subsequent predetermined time interval gradually decaying the scroll rate to zero; and in accordance with a determination that the pressure component of the input is below the pressure threshold, performing a second operation in the user interface displayed on the display in accordance with the motion component of the input, wherein the second operation is different from the first operation and is scrolling content in the user interface at a predetermined scroll rate. | 1. A method comprising: at a computer system with a display and an input device: displaying a user interface on the display; while displaying the user interface on the display, detecting an input on the input device, wherein the input includes a motion component and a pressure component; and in response to detecting the input: determining whether the pressure component of the input is above a pressure threshold; in accordance with a determination that the pressure component of the input is above the pressure threshold, performing a first operation in the user interface displayed on the display in accordance with the motion component of the input, wherein the first operation is scrolling content in the user interface at a variable scroll rate that increases and then decays over time, and performing the first operation in the user interface includes causing the user interface to rapidly scroll through content for an initial predetermined time interval and subsequently reduce a scroll rate over a second subsequent predetermined time interval gradually decaying the scroll rate to zero; and in accordance with a determination that the pressure component of the input is below the pressure threshold, performing a second operation in the user interface displayed on the display in accordance with the motion component of the input, wherein the second operation is different from the first operation and is scrolling content in the user interface at a predetermined scroll rate. 6. The method of claim 1 , wherein the user interface includes a playlist of songs, and wherein performing the first operation in the user interface includes scrolling through the playlist of songs rapidly and subsequently reducing a scroll rate to enable a song selection. | 0.5 |
7,673,249 | 7 | 12 | 7. A system of customizing a GUI display, said GUI display including one or more types of selection elements, each of said of selection elements types having customizable attributes, said system comprising: means for selecting a selection element in a menu of the GUI, wherein the selection element is one of a plurality of selection elements in the GUI; and means for modifying at least one customizable attribute of said selected selection element, wherein said means for modification of said at least one customizable attribute modifies the at least one customizable attribute only of said selected selection element and does not affect customizable attributes of other selection elements in the plurality of selection elements in the GUI, and wherein the at least one customizable attribute of the selected selection element comprises a character set for text displayed as part of the selected selection element, and wherein at least one other selection element in the plurality of selection elements in the GUI utilizes a different character set from that of the selected selection element. | 7. A system of customizing a GUI display, said GUI display including one or more types of selection elements, each of said of selection elements types having customizable attributes, said system comprising: means for selecting a selection element in a menu of the GUI, wherein the selection element is one of a plurality of selection elements in the GUI; and means for modifying at least one customizable attribute of said selected selection element, wherein said means for modification of said at least one customizable attribute modifies the at least one customizable attribute only of said selected selection element and does not affect customizable attributes of other selection elements in the plurality of selection elements in the GUI, and wherein the at least one customizable attribute of the selected selection element comprises a character set for text displayed as part of the selected selection element, and wherein at least one other selection element in the plurality of selection elements in the GUI utilizes a different character set from that of the selected selection element. 12. The system of claim 7 , wherein the at least one customizable attribute further comprises a duration of a modification to another customizable attribute of the selected selection element. | 0.507732 |
8,495,490 | 1 | 9 | 1. A method comprising: providing scanned document analysis data including classification of at least one of a term, a subject, and a theme used in a plurality of scanned documents; generating a summary output from said analyzed scanned document data; rendering a visualization of said summary output; saving said summary output as metadata; mining said metadata for comparison with other summary output for archiving and retrieving said plurality of scanned documents according to said summary output; providing scanned document analysis data including frequency of usage of each one of a plurality of terms per page of said document(s); generating a searchable histogram electronic file for each respective term(s) of said plurality of terms; selecting at least one term from said plurality of terms for viewing as a histogram; selecting a particular searchable histogram for said selected term(s) from said generated plurality of searchable histograms; said selected searchable histogram having a first axis representing frequency of usage of said selected term(s) and a second axis representing page number of said document(s); rendering said searchable histogram on a graphical user interface, receiving a first clicking or scrolling signal representing selection of a first page of said document(s) by a user clicking or scrolling a visual icon/indicator on said graphical user interface; rendering concurrently or sequentially on said graphical user interface said searchable histogram and content of said selected first page in response to receiving said first clicking or scrolling signal; receiving a second clicking or scrolling signal representing selection of a second page of said document(s) by a user clicking or scrolling a visual icon/indicator on said graphical user interface; and rendering concurrently or sequentially on said graphical user interface said searchable histogram and content of said second page in response to receiving said second clicking or scrolling signal. | 1. A method comprising: providing scanned document analysis data including classification of at least one of a term, a subject, and a theme used in a plurality of scanned documents; generating a summary output from said analyzed scanned document data; rendering a visualization of said summary output; saving said summary output as metadata; mining said metadata for comparison with other summary output for archiving and retrieving said plurality of scanned documents according to said summary output; providing scanned document analysis data including frequency of usage of each one of a plurality of terms per page of said document(s); generating a searchable histogram electronic file for each respective term(s) of said plurality of terms; selecting at least one term from said plurality of terms for viewing as a histogram; selecting a particular searchable histogram for said selected term(s) from said generated plurality of searchable histograms; said selected searchable histogram having a first axis representing frequency of usage of said selected term(s) and a second axis representing page number of said document(s); rendering said searchable histogram on a graphical user interface, receiving a first clicking or scrolling signal representing selection of a first page of said document(s) by a user clicking or scrolling a visual icon/indicator on said graphical user interface; rendering concurrently or sequentially on said graphical user interface said searchable histogram and content of said selected first page in response to receiving said first clicking or scrolling signal; receiving a second clicking or scrolling signal representing selection of a second page of said document(s) by a user clicking or scrolling a visual icon/indicator on said graphical user interface; and rendering concurrently or sequentially on said graphical user interface said searchable histogram and content of said second page in response to receiving said second clicking or scrolling signal. 9. The method of claim 1 , further comprising providing scanned document analysis data for each of a plurality of document stacks; each of said stacks of scanned documents comprising at least one document; wherein said scanned analysis data, for each document stack, includes frequency of usage of at least one term per page; and further comprising generating histogram output summaries for each document stack from said scanned document analysis data, each of said histograms output summaries representing frequency of usage of at least one term per page; selecting at least one particular term, determining from among said generated histogram output summaries document stacks that make frequent reference to said selected term(s); rendering histograms for said determined respective document stacks; and comparing said rendered histograms against each other for identifying trends in the selected term usage in each document stack. | 0.5 |
8,060,455 | 5 | 6 | 5. The method of claim 4 , further comprising indexing the array of news terms with a term frequency score for each term of the array of news terms. | 5. The method of claim 4 , further comprising indexing the array of news terms with a term frequency score for each term of the array of news terms. 6. The method of claim 5 , further comprising calculating vector space scores between the array of query terms and the indexed array of news terms. | 0.5 |
8,103,510 | 1 | 5 | 1. A device control device comprising: speech recognition means which acquires speech data representing a speech and specifies words candidates included in the speech by performing speech recognition on the speech data and calculates a likelihood of each of the specified words candidates; specifying means which specifies words included in the speech based on the likelihoods calculated by the speech recognition means and specifies a content of the speech uttered by an utterer based on the words specified; a database which stores preceding controls, subsequent controls, and weighting factors, each of which is associated with one another; and process execution means which specifies content of a subsequent control to be performed on an external device to be a control target based on a currently executed control, a weighting factor stored in association with the currently executed control and the content of the uttered speech specified by the specifying means, and performs the subsequent control, wherein the process execution means obtains the weighting factor by calculating a product of transition constants defined on routes from the currently executed control to the subsequent control associated with the currently executed control, writes the obtained weighting factor into the database, and, among the subsequent controls stored in the database associated with the currently executed control, identifies a control in which a product is a largest product of the weighting factor and the calculated likelihood. | 1. A device control device comprising: speech recognition means which acquires speech data representing a speech and specifies words candidates included in the speech by performing speech recognition on the speech data and calculates a likelihood of each of the specified words candidates; specifying means which specifies words included in the speech based on the likelihoods calculated by the speech recognition means and specifies a content of the speech uttered by an utterer based on the words specified; a database which stores preceding controls, subsequent controls, and weighting factors, each of which is associated with one another; and process execution means which specifies content of a subsequent control to be performed on an external device to be a control target based on a currently executed control, a weighting factor stored in association with the currently executed control and the content of the uttered speech specified by the specifying means, and performs the subsequent control, wherein the process execution means obtains the weighting factor by calculating a product of transition constants defined on routes from the currently executed control to the subsequent control associated with the currently executed control, writes the obtained weighting factor into the database, and, among the subsequent controls stored in the database associated with the currently executed control, identifies a control in which a product is a largest product of the weighting factor and the calculated likelihood. 5. The device control device according to claim 1 , wherein the specifying means holds correlation information which associates words of different meanings or different categories with each process of the process execution means, and specifies a content of the speech uttered by the utterer based on a combination of those words or categories which are specified by the speech recognition means, and the correlation information. | 0.565041 |
8,756,207 | 40 | 41 | 40. The computer-program product of claim 30 , wherein the operations further include: within each of the first matchcode equivalence clusters, further grouping at least one of the records with at least one of the other records; evaluating each of the groupings of the records within first matchcode equivalence clusters, wherein evaluating includes: providing a first score component to each of the groupings that includes exclusively first records; providing a second score component to each of the groupings that includes one of the first records and one of the second records, wherein the second score component is lower than the first score component; and providing a third score component to each of the groupings that includes exclusively second records, wherein the third score component is lower than the second score component. | 40. The computer-program product of claim 30 , wherein the operations further include: within each of the first matchcode equivalence clusters, further grouping at least one of the records with at least one of the other records; evaluating each of the groupings of the records within first matchcode equivalence clusters, wherein evaluating includes: providing a first score component to each of the groupings that includes exclusively first records; providing a second score component to each of the groupings that includes one of the first records and one of the second records, wherein the second score component is lower than the first score component; and providing a third score component to each of the groupings that includes exclusively second records, wherein the third score component is lower than the second score component. 41. The computer-program product of claim 40 , wherein identifying matches amongst the records is such that: each of the identified matches is between records in one of the evaluated groupings; and for each of the records, being grouped with any other of the records is a condition precedent for being identified as matching that record. | 0.5 |
9,495,956 | 8 | 10 | 8. The method of claim 1 , wherein the voice command is user selectable. | 8. The method of claim 1 , wherein the voice command is user selectable. 10. The method of claim 8 , wherein the user inputs the voice command by typing. | 0.52381 |
9,292,495 | 1 | 10 | 1. A method for updating an existing document using natural language processing (NLP), the method comprising: receiving information about a subject-matter domain; identifying a portion of the existing document, wherein the portion corresponds to the subject-matter domain by including at least a threshold number of references to a category identified in the subject matter domain; lemmatizing, using a processor and a memory, a group of words from the portion to use in a search query, wherein the search query returns a result set, the result set including current information corresponding to the subject-matter domain, the current information being recent as compared to an age of the portion; forming, using the processor and the memory, natural language (NL) update content by processing the current information through an NLP application; associating with the NL update content a confidence rating, the confidence rating being indicative of a provenance of a data source that supplied the current information; and updating, by changing the portion of the existing document in a document repository, the existing document using the NL update content and the confidence rating. | 1. A method for updating an existing document using natural language processing (NLP), the method comprising: receiving information about a subject-matter domain; identifying a portion of the existing document, wherein the portion corresponds to the subject-matter domain by including at least a threshold number of references to a category identified in the subject matter domain; lemmatizing, using a processor and a memory, a group of words from the portion to use in a search query, wherein the search query returns a result set, the result set including current information corresponding to the subject-matter domain, the current information being recent as compared to an age of the portion; forming, using the processor and the memory, natural language (NL) update content by processing the current information through an NLP application; associating with the NL update content a confidence rating, the confidence rating being indicative of a provenance of a data source that supplied the current information; and updating, by changing the portion of the existing document in a document repository, the existing document using the NL update content and the confidence rating. 10. The method of claim 1 , further comprises: associating with the portion, an indication, wherein the indication identifies the portion as being a candidate for the updating. | 0.761518 |
9,596,349 | 10 | 11 | 10. The computer-implemented method of claim 9 , further comprising, via the one or more processors, decreasing the third score for every occurrence when the response time is less than a lower threshold or greater than an upper threshold. | 10. The computer-implemented method of claim 9 , further comprising, via the one or more processors, decreasing the third score for every occurrence when the response time is less than a lower threshold or greater than an upper threshold. 11. The computer-implemented method of claim 10 , further comprising, via the one or more processors, displaying on the display the third score and a list of occurrences when the response time is less than a lower threshold or greater than an upper threshold. | 0.5 |
8,640,104 | 9 | 13 | 9. A computer debugging system comprising: a diagramming member configured to form a diagram illustrating class relationships between objects at runtime by generating a class relationship diagram of a subject object from a dynamic computer language, the diagramming member effectively diagramming class relationships of the subject objects, the subject object in the dynamic computer language having an anonymous nature limiting visualization of objects' runtime relationships, wherein the dynamic computer language makes no distinction between function definition and method definition except for during function calling, and wherein the subject object in the dynamic computer language is integrated and loosely typed such that the subject object is able to be hosted in various environments, wherein the generated class relationship diagram shows class relationships between the subject object and objects that the subject object contains or inherits from, said class relationships being distinct from parent-child dependency; wherein the diagramming member forms the diagram by: traversing an inheritance chain of the subject object; for each level in the inheritance chain, reading methods of inherited objects; adding the read methods to the subject object as dynamically generated; and a display unit displaying on output to a user the generated class relationship diagram as the formed diagram. | 9. A computer debugging system comprising: a diagramming member configured to form a diagram illustrating class relationships between objects at runtime by generating a class relationship diagram of a subject object from a dynamic computer language, the diagramming member effectively diagramming class relationships of the subject objects, the subject object in the dynamic computer language having an anonymous nature limiting visualization of objects' runtime relationships, wherein the dynamic computer language makes no distinction between function definition and method definition except for during function calling, and wherein the subject object in the dynamic computer language is integrated and loosely typed such that the subject object is able to be hosted in various environments, wherein the generated class relationship diagram shows class relationships between the subject object and objects that the subject object contains or inherits from, said class relationships being distinct from parent-child dependency; wherein the diagramming member forms the diagram by: traversing an inheritance chain of the subject object; for each level in the inheritance chain, reading methods of inherited objects; adding the read methods to the subject object as dynamically generated; and a display unit displaying on output to a user the generated class relationship diagram as the formed diagram. 13. A computer system as claimed in claim 9 wherein the dynamic computer language is Java Script. | 0.734973 |
6,073,095 | 14 | 16 | 14. A fast vocabulary independent method for spotting words in speech for use in voice mail retrieval systems and browsing and searching audio/video content, the method comprising the steps of: receiving a search query from a user; determining a phonetic baseform for each word of the search query and converting each baseform to phone-ngrams; identifying the locations of the search query words in an audio/video database by comparing phone-ngrams of the search query words and phone-ngrams of at least one audio waveform in the audio/video database; and retrieving segments of the at least one audio waveform and corresponding video segments that are relevant to the received query. | 14. A fast vocabulary independent method for spotting words in speech for use in voice mail retrieval systems and browsing and searching audio/video content, the method comprising the steps of: receiving a search query from a user; determining a phonetic baseform for each word of the search query and converting each baseform to phone-ngrams; identifying the locations of the search query words in an audio/video database by comparing phone-ngrams of the search query words and phone-ngrams of at least one audio waveform in the audio/video database; and retrieving segments of the at least one audio waveform and corresponding video segments that are relevant to the received query. 16. The method according to claim 14, wherein said step of identifying the locations further comprises the steps of: converting the at least one audio waveform into a table of phone-ngrams; performing a coarse match by implementing the table to identify time intervals of the at least one audio waveform having phone-ngrams associated therewith that correspond to the phone-ngrams of the query words; and performing a detailed acoustic match at each of the identified time intervals of the at least one audio waveform to determine whether the search query words were actually uttered in the identified time intervals. | 0.5 |
8,140,556 | 12 | 16 | 12. A non-transitory computer-readable storage medium storing a plurality of instructions for controlling a processor to generate a query for querying an ontology, the plurality of instructions comprising: instructions that cause the processor to receive a first query in a first language, wherein the first language is a natural language; instructions that cause the processor to check the first query to determine if the first query complies with a predefined grammar and to determine if the first query comprises one or more terms from a vocabulary used in the ontology; in response to determining that at least one aspect of the first query does not comply with the predetermined grammar and does not comprise one or more terms from the vocabulary used in the ontology, instructions that cause the processor to provide guiding formulation of the first query by providing one or more constraints, wherein the one or more constraints are based upon the predefined grammar and the vocabulary comprising terms used in the ontology, and the predefined grammar is based upon a set of one or more rules; based on the one or more constraints, instructions that cause the processor to constrain the first query to comply with the predetermined grammar and the vocabulary used in the ontology; and instructions that cause the processor to generate, based upon the constrained first query, a second query in a second language, wherein the second query complies with the predetermined grammar and the vocabulary used in the ontology, and wherein the second language is different from the first language and the ontology is capable of being queried using the second query in the second language. | 12. A non-transitory computer-readable storage medium storing a plurality of instructions for controlling a processor to generate a query for querying an ontology, the plurality of instructions comprising: instructions that cause the processor to receive a first query in a first language, wherein the first language is a natural language; instructions that cause the processor to check the first query to determine if the first query complies with a predefined grammar and to determine if the first query comprises one or more terms from a vocabulary used in the ontology; in response to determining that at least one aspect of the first query does not comply with the predetermined grammar and does not comprise one or more terms from the vocabulary used in the ontology, instructions that cause the processor to provide guiding formulation of the first query by providing one or more constraints, wherein the one or more constraints are based upon the predefined grammar and the vocabulary comprising terms used in the ontology, and the predefined grammar is based upon a set of one or more rules; based on the one or more constraints, instructions that cause the processor to constrain the first query to comply with the predetermined grammar and the vocabulary used in the ontology; and instructions that cause the processor to generate, based upon the constrained first query, a second query in a second language, wherein the second query complies with the predetermined grammar and the vocabulary used in the ontology, and wherein the second language is different from the first language and the ontology is capable of being queried using the second query in the second language. 16. The non-transitory computer-readable storage medium of claim 12 wherein the plurality of instructions further comprises: instructions that cause the processor to execute the one or more queries in the second language against the ontology; and instructions that cause the processor to output results obtained from executing the one or more queries in the second language against the ontology. | 0.716236 |
9,836,456 | 19 | 22 | 19. A computing system having one or more processors configured to perform operations comprising: receiving, from a camera in electronic communication with the computing system, a first image of an object comprising a text in a source language, the first image having been captured by the camera; performing optical character recognition (OCR) on the first image to obtain an OCR text that is a machine-encoded text representation of the text; in response to obtaining the OCR text, automatically obtaining, from a machine translation system, a first translated OCR text and a translation score indicative of a degree of likelihood that the first translated OCR text is an appropriate translation of the OCR text to a target language; and when the translation score is less than a translation score threshold indicative of an acceptable degree of likelihood: outputting a user instruction to capture a set of second images of at least a portion of the object using the camera; receiving, from the camera, the set of second images; performing OCR on at least one of the set of second images to obtain a modified OCR text corresponding to the text; in response to obtaining the modified OCR text, obtaining, from the machine translation system, a second translated OCR text representing a translation of the modified OCR text from the source language to the target language; and outputting the second translated OCR text. | 19. A computing system having one or more processors configured to perform operations comprising: receiving, from a camera in electronic communication with the computing system, a first image of an object comprising a text in a source language, the first image having been captured by the camera; performing optical character recognition (OCR) on the first image to obtain an OCR text that is a machine-encoded text representation of the text; in response to obtaining the OCR text, automatically obtaining, from a machine translation system, a first translated OCR text and a translation score indicative of a degree of likelihood that the first translated OCR text is an appropriate translation of the OCR text to a target language; and when the translation score is less than a translation score threshold indicative of an acceptable degree of likelihood: outputting a user instruction to capture a set of second images of at least a portion of the object using the camera; receiving, from the camera, the set of second images; performing OCR on at least one of the set of second images to obtain a modified OCR text corresponding to the text; in response to obtaining the modified OCR text, obtaining, from the machine translation system, a second translated OCR text representing a translation of the modified OCR text from the source language to the target language; and outputting the second translated OCR text. 22. The computing system of claim 19 , wherein the operations further comprise: when the translation score is less than the translation score threshold, identifying a portion of the first image causing the OCR text and the corresponding first translated OCR text to have the translation score less than the translation score threshold, wherein the user instruction is to capture the set of second images with respect to the identified portion of the first image. | 0.522727 |
7,499,910 | 18 | 19 | 18. The computer-readable program code embedded in the memory of claim 1 , wherein obtaining the set of cached queries for the select list using aggregate rewrite further comprises: obtaining a set of aggregates for that select list in the new query without an exact match in the select list of the at least one cached query; and performing a union on the set of cached queries for all aggregates in the set of aggregates. | 18. The computer-readable program code embedded in the memory of claim 1 , wherein obtaining the set of cached queries for the select list using aggregate rewrite further comprises: obtaining a set of aggregates for that select list in the new query without an exact match in the select list of the at least one cached query; and performing a union on the set of cached queries for all aggregates in the set of aggregates. 19. The computer-readable program code embedded in the memory of claim 18 , wherein the union is the set of cached queries that supports the select list in the new query without an exact match in the select list of the at least one cached query. | 0.5 |
8,370,126 | 17 | 26 | 17. A computer program product for incorporating variable values into textual content, the computer program product comprising a non-transitory computer-readable storage medium containing computer program code for: translating an abstract phrase from a first language to a second language; receiving the translated abstract phrase comprising a text phrase and a variable at a particular position in the text phrase; receiving a text value for the variable; combining the text phrase of the abstract phrase and the text value according to the particular position of the variable; applying an integration rule at a boundary of the text phrase of the abstract phrase and the text value to produce an integrated phrase, the integration rule based on a language rule for the second language, wherein the integration rule modifies a portion of the text phrase of the abstract phrase or a portion of the text value. | 17. A computer program product for incorporating variable values into textual content, the computer program product comprising a non-transitory computer-readable storage medium containing computer program code for: translating an abstract phrase from a first language to a second language; receiving the translated abstract phrase comprising a text phrase and a variable at a particular position in the text phrase; receiving a text value for the variable; combining the text phrase of the abstract phrase and the text value according to the particular position of the variable; applying an integration rule at a boundary of the text phrase of the abstract phrase and the text value to produce an integrated phrase, the integration rule based on a language rule for the second language, wherein the integration rule modifies a portion of the text phrase of the abstract phrase or a portion of the text value. 26. The computer program product of claim 17 , wherein the integration rule is based on an orthographic rule of a language. | 0.932269 |
8,281,149 | 1 | 2 | 1. A computer-implemented method of allowing user-selected anonymous and pseudonymous access for a user to a relying party (RP), mediated by an identity provider (IdP), comprising: registering with an IdP to establish a first pseudonym; upon successful proof of possession of the first pseudonym to the IdP, receiving a first representation of an access token from the IdP for accessing the RP; transforming, by a processor, the first representation of the access token to obtain a second representation of the access token, the second representation of the access token being a valid access token and is unlinkable to the first representation of the access token by the IdP; receiving a request from the user to access the RP; determining whether the request is for accessing the RP anonymously or pseudonymously; if the request is for anonymous access, providing the second representation of the access token to the RP anonymously; and gaining access to the RP upon verification of the second representation of the access token, the anonymous access being unlinkable to any previous and any future access at the RP, and unlinkable to the IdP's interaction with any particular user; if the request is for pseudonymous access, providing to the RP the second representation of the access token and proof of possession of a second pseudonym that is previously registered with the RP; and gaining access to the RP upon successful verification of the second representation of the access token and proof of possession of the second pseudonym, wherein the pseudonymous access is linkable to the second pseudonym, unlinkable to the IdP's interaction with any particular user, and unlinkable to any past and future access to the RP that does not employ the second pseudonym. | 1. A computer-implemented method of allowing user-selected anonymous and pseudonymous access for a user to a relying party (RP), mediated by an identity provider (IdP), comprising: registering with an IdP to establish a first pseudonym; upon successful proof of possession of the first pseudonym to the IdP, receiving a first representation of an access token from the IdP for accessing the RP; transforming, by a processor, the first representation of the access token to obtain a second representation of the access token, the second representation of the access token being a valid access token and is unlinkable to the first representation of the access token by the IdP; receiving a request from the user to access the RP; determining whether the request is for accessing the RP anonymously or pseudonymously; if the request is for anonymous access, providing the second representation of the access token to the RP anonymously; and gaining access to the RP upon verification of the second representation of the access token, the anonymous access being unlinkable to any previous and any future access at the RP, and unlinkable to the IdP's interaction with any particular user; if the request is for pseudonymous access, providing to the RP the second representation of the access token and proof of possession of a second pseudonym that is previously registered with the RP; and gaining access to the RP upon successful verification of the second representation of the access token and proof of possession of the second pseudonym, wherein the pseudonymous access is linkable to the second pseudonym, unlinkable to the IdP's interaction with any particular user, and unlinkable to any past and future access to the RP that does not employ the second pseudonym. 2. The method of claim 1 , wherein receiving the first representation of the access token from the IdP further comprises: generating an original token; modifying the original token to obtain a modified token; and providing the modified token to the IdP to obtain an access token for accessing the RP. | 0.64455 |
9,367,625 | 6 | 7 | 6. The method of claim 1 , wherein identifying the first set of nodes matching the inner query constraint and at least in part matching the outer query constraint comprises: identifying a first number of nodes matching at least the inner query constraint; and identifying a second number of nodes matching both the inner query constraint and the outer query constraint. | 6. The method of claim 1 , wherein identifying the first set of nodes matching the inner query constraint and at least in part matching the outer query constraint comprises: identifying a first number of nodes matching at least the inner query constraint; and identifying a second number of nodes matching both the inner query constraint and the outer query constraint. 7. The method of claim 6 , wherein the first number is a first percentage and the second number is a second percentage. | 0.5 |
7,818,171 | 11 | 12 | 11. The speech recognition apparatus of claim 8 , comprising: a selection unit for selecting the word string from a sequence of word strings that starts from a leaf node word toward a root and ends with a node other than the root of the tree structure, wherein the forward speech comparison unit compares, in order of the sequence, the input speech with a forward acoustic model corresponding to a speech resulting from chronologically reproducing the intermediate word string selected by the selection unit. | 11. The speech recognition apparatus of claim 8 , comprising: a selection unit for selecting the word string from a sequence of word strings that starts from a leaf node word toward a root and ends with a node other than the root of the tree structure, wherein the forward speech comparison unit compares, in order of the sequence, the input speech with a forward acoustic model corresponding to a speech resulting from chronologically reproducing the intermediate word string selected by the selection unit. 12. The speech recognition apparatus of claim 11 , comprising: a place name specification unit for specifying an address name corresponding to a current position of the speech recognition apparatus, wherein the tree structured dictionary data contains a sequence of words from a root to a leaf node of a tree structure, and each word represents one address name, and wherein the selection unit selects the intermediate word string based on the address name specified by the place name specification unit. | 0.5 |
4,829,577 | 1 | 4 | 1. A speech recognition method based on recognizing words, comprising the steps of: defining, for each word, a probabilistic model including (i) a plurality of states, (ii) at least one transition, each transition extending from a state to a state, (iii) a plurality of generated labels indicative of time between states, and (iv) probabilities of outputting each label in each of said transitions; generating a first label string of said labels for each of said words from initial data thereof; for each of said words, iteratively updating the probabilities of the corresponding probabilistic model, comprising the steps of: (a) inputting a first label string into a corresponding probabilistic model; (b) obtaining a first frequency of each of said labels being output at each of said transitions over the time in which the corresponding first label string is input into the corresponding probabilistic model; (c) obtaining a second frequency of each of said states occurring over the time in which the corresponding first label string is inputted into the corresponding probabilistic model; and (d) obtaining each of a plurality of new probabilities of said corresponding probabilistic model by dividing the corresponding first frequency by the corresponding second frequency; storing the first and second frequencies obtained in the last step of said iterative updating; determining which of said words require adaptation to recognize different speakers or the same speaker at different times; generating, for each of said words requiring adaptation, a second label string from adaptation data comprising the probabilistic model of the word to be adapted; obtaining, for each of said words requiring adaptation, a third frequency of each of said labels being outputted at each of said transitions over the time in which the corresponding second label string is inputted into the corresponding probabilistic model; obtaining, for each of said words requiring adaptation, a fourth frequency of each of said states occurring over the time in which the corresponding second label string is outputted into the corresponding probabilistic model; obtaining fifth frequencies by interpolation of the corresponding first and third frequencies; obtaining sixth frequencies by interpolation of the corresponding second and third frequencies; and obtaining adapted probabilities for said adaptation data by dividing the corresponding fifth frequency by the corresponding sixth frequency. | 1. A speech recognition method based on recognizing words, comprising the steps of: defining, for each word, a probabilistic model including (i) a plurality of states, (ii) at least one transition, each transition extending from a state to a state, (iii) a plurality of generated labels indicative of time between states, and (iv) probabilities of outputting each label in each of said transitions; generating a first label string of said labels for each of said words from initial data thereof; for each of said words, iteratively updating the probabilities of the corresponding probabilistic model, comprising the steps of: (a) inputting a first label string into a corresponding probabilistic model; (b) obtaining a first frequency of each of said labels being output at each of said transitions over the time in which the corresponding first label string is input into the corresponding probabilistic model; (c) obtaining a second frequency of each of said states occurring over the time in which the corresponding first label string is inputted into the corresponding probabilistic model; and (d) obtaining each of a plurality of new probabilities of said corresponding probabilistic model by dividing the corresponding first frequency by the corresponding second frequency; storing the first and second frequencies obtained in the last step of said iterative updating; determining which of said words require adaptation to recognize different speakers or the same speaker at different times; generating, for each of said words requiring adaptation, a second label string from adaptation data comprising the probabilistic model of the word to be adapted; obtaining, for each of said words requiring adaptation, a third frequency of each of said labels being outputted at each of said transitions over the time in which the corresponding second label string is inputted into the corresponding probabilistic model; obtaining, for each of said words requiring adaptation, a fourth frequency of each of said states occurring over the time in which the corresponding second label string is outputted into the corresponding probabilistic model; obtaining fifth frequencies by interpolation of the corresponding first and third frequencies; obtaining sixth frequencies by interpolation of the corresponding second and third frequencies; and obtaining adapted probabilities for said adaptation data by dividing the corresponding fifth frequency by the corresponding sixth frequency. 4. The method in accordance with claim 1 wherein each of probabilities of the said probabilistic model into which adaptation data is to be inputted have been subjected to a smoothing operation. | 0.524631 |
8,847,884 | 3 | 4 | 3. The method as described in claim 1 , further comprising: determining whether an audio file is being played; and determining the type of the facial expression of the user and the audio service which corresponds to the feature of the facial expression of the user in the images stored in the service database, when an audio file is being played. | 3. The method as described in claim 1 , further comprising: determining whether an audio file is being played; and determining the type of the facial expression of the user and the audio service which corresponds to the feature of the facial expression of the user in the images stored in the service database, when an audio file is being played. 4. The method as described in claim 3 , wherein the audio service is to add sound effects corresponding to a user expression to a currently played audio file. | 0.5 |
8,775,365 | 64 | 69 | 64. A system for providing interactive knowledge discovery service to a client or user comprising: a receiving module configured to receive an input from a client or user over a data network; an access module, comprised of at least one non-transitory computer-readable storage medium having computer executable instructions thereon and/or one or more processing apparatuses and/or one or more data communication devices, providing access to at least one processing device and/or at least one non-transitory computer-readable storage medium over a first network; a facilitating module facilitating access to at least one content corresponding to the client's or user's input, said at least one content is an output of at least one software module executed using one or more processing devices and/or one or more computer-readable storage medium over a second network to perform: accessing or building a first one or more data structures corresponding to at least one participation matrix representing participation of ontological subjects of a first predefined order into partitions or ontological subjects of a second predefined order of a body of knowledge; accessing, or building in real time, a second one or more data structures corresponding to association strengths between a plurality of ontological subjects of a predefined order; wherein said association strength is a function of: i. probability of occurrences of some of the ontological subjects of the first order in partitions or ontological subjects of a predefined order of the body of knowledge, and ii. co-occurrences of some ontological subjects of the first order in some of partitions or ontological subjects of a predefined order; accessing evaluated, or evaluating in real time, value significances for one or more partitions or one or more ontological subjects of the body of knowledge, based on data of one or more of said first and second one or more data structures and in respect to at least one significance aspect of the one or more partitions or one or more ontological subjects of the body of knowledge; and providing, using one or more data processing or computing devices, a content according to the client's or user's input using one or more partitions of the body of knowledge based on the evaluated value significances of the one or more partitions and/or one or more ontological subjects of the body of knowledge. | 64. A system for providing interactive knowledge discovery service to a client or user comprising: a receiving module configured to receive an input from a client or user over a data network; an access module, comprised of at least one non-transitory computer-readable storage medium having computer executable instructions thereon and/or one or more processing apparatuses and/or one or more data communication devices, providing access to at least one processing device and/or at least one non-transitory computer-readable storage medium over a first network; a facilitating module facilitating access to at least one content corresponding to the client's or user's input, said at least one content is an output of at least one software module executed using one or more processing devices and/or one or more computer-readable storage medium over a second network to perform: accessing or building a first one or more data structures corresponding to at least one participation matrix representing participation of ontological subjects of a first predefined order into partitions or ontological subjects of a second predefined order of a body of knowledge; accessing, or building in real time, a second one or more data structures corresponding to association strengths between a plurality of ontological subjects of a predefined order; wherein said association strength is a function of: i. probability of occurrences of some of the ontological subjects of the first order in partitions or ontological subjects of a predefined order of the body of knowledge, and ii. co-occurrences of some ontological subjects of the first order in some of partitions or ontological subjects of a predefined order; accessing evaluated, or evaluating in real time, value significances for one or more partitions or one or more ontological subjects of the body of knowledge, based on data of one or more of said first and second one or more data structures and in respect to at least one significance aspect of the one or more partitions or one or more ontological subjects of the body of knowledge; and providing, using one or more data processing or computing devices, a content according to the client's or user's input using one or more partitions of the body of knowledge based on the evaluated value significances of the one or more partitions and/or one or more ontological subjects of the body of knowledge. 69. The system of claim 64 , wherein further includes computer-readable storage media, over the first and/or over the second network, to store one or more of the following: i. at least one composition as a body of knowledge, ii. at least some of the partitions of the at least one composition, iii. at least some ontological subjects, iv. at least one set of data respective of a value significances of the partitions and/or the ontological subjects of the body of knowledge, v. one or more index list of the partitions and the ontological subjects of the body of knowledge, vi. at least one pre-made content composition from the body of knowledge, vii. at least some of the user's input. | 0.5 |
10,089,557 | 7 | 8 | 7. An electronic device for recognizing characters, the electronic device comprising: a camera; a display unit; and a controller configured to: control to activate the camera based on receiving a user input, control to obtain a preview image using the camera, the preview image comprising a plurality of images being sequentially displayed on the display, control to display, on the display, the preview image obtained using the camera, while the preview image is displayed on the display, control to perform an auto focus function of the camera to obtain at least one image having a clarity value greater than or equal to a reference value, while the preview image is displayed on the display, control to process the at least one image having the clarity value greater than or equal to the reference value to recognize characters within the at least one image, and control to display, on the display, the recognized characters along with the preview image. | 7. An electronic device for recognizing characters, the electronic device comprising: a camera; a display unit; and a controller configured to: control to activate the camera based on receiving a user input, control to obtain a preview image using the camera, the preview image comprising a plurality of images being sequentially displayed on the display, control to display, on the display, the preview image obtained using the camera, while the preview image is displayed on the display, control to perform an auto focus function of the camera to obtain at least one image having a clarity value greater than or equal to a reference value, while the preview image is displayed on the display, control to process the at least one image having the clarity value greater than or equal to the reference value to recognize characters within the at least one image, and control to display, on the display, the recognized characters along with the preview image. 8. The electronic device of claim 7 , wherein the controller is further configured to: control to compare two or more images obtained, while the preview image is displayed, to determine whether movement exists, and control to perform the auto focus function of the camera based on determining that no movement exists. | 0.631395 |
8,516,012 | 8 | 12 | 8. A non-transitory computer readable storage device comprising a resource management software module that is operative, when executed by a processor, to perform a method, the method comprising: defining a plurality of translating references for an object; generating a common information model (CIM), the CIM comprising one or more functional object attributes of the object; generating a first instantiation of a user information model (UIM), the first instantiation of the UIM comprising one or more user-associated attributes of the object; interfacing with the CIM using the first instantiation of the UIM; translating one or more user-associated attributes of the first instantiation of the UIM to the one or more functional object attributes of the CIM using the plurality of translating references; generating a second instantiation of a user information model (UIM); interfacing with the CIM using the second instantiation of the UIM; translating one or more user-associated attributes of the second instantiation of the UIM to the one or more functional object attributes of the CIM using the plurality of translating references; and providing at least a portion of the CIM. | 8. A non-transitory computer readable storage device comprising a resource management software module that is operative, when executed by a processor, to perform a method, the method comprising: defining a plurality of translating references for an object; generating a common information model (CIM), the CIM comprising one or more functional object attributes of the object; generating a first instantiation of a user information model (UIM), the first instantiation of the UIM comprising one or more user-associated attributes of the object; interfacing with the CIM using the first instantiation of the UIM; translating one or more user-associated attributes of the first instantiation of the UIM to the one or more functional object attributes of the CIM using the plurality of translating references; generating a second instantiation of a user information model (UIM); interfacing with the CIM using the second instantiation of the UIM; translating one or more user-associated attributes of the second instantiation of the UIM to the one or more functional object attributes of the CIM using the plurality of translating references; and providing at least a portion of the CIM. 12. The computer readable storage device of claim 8 , wherein the first instantiation of the UIM is one of a service management model, a transport resources management model, and a communication protocol management model. | 0.5 |
8,689,060 | 6 | 8 | 6. A system for providing corrections for semantic errors in a process model, the system comprising: a memory for storing instructions; and at least one hardware processor configured to execute instructions, the instructions comprising: identifying a change in the process model, the process model including one or more process model elements; identifying one or more constraint violations associated with at least one process model element in response to identifying the change in the process model; identifying one or more correction proposals for each constraint violation identified; creating a first bit string representative of the at least one process model element; receiving a user selection of a correction proposal from the one or more identified correction proposals; creating a second bit string of a current version of the at least one process model element; applying the selected correction proposal in response to determining that the first bit string is the same as the second bit string; and discarding the selected correction proposal in response to determining that the second bit string differs from the first bit string. | 6. A system for providing corrections for semantic errors in a process model, the system comprising: a memory for storing instructions; and at least one hardware processor configured to execute instructions, the instructions comprising: identifying a change in the process model, the process model including one or more process model elements; identifying one or more constraint violations associated with at least one process model element in response to identifying the change in the process model; identifying one or more correction proposals for each constraint violation identified; creating a first bit string representative of the at least one process model element; receiving a user selection of a correction proposal from the one or more identified correction proposals; creating a second bit string of a current version of the at least one process model element; applying the selected correction proposal in response to determining that the first bit string is the same as the second bit string; and discarding the selected correction proposal in response to determining that the second bit string differs from the first bit string. 8. The system of claim 6 , wherein the instructions further comprise: identifying a severity of the constraint violation; and requesting approval of at least one correction proposal for constraint violation severities that indicate that a run-time error is possible. | 0.5 |
7,664,849 | 1 | 3 | 1. A method for graphically defining an alert condition for a signal waveform in a policy-based automation system, wherein the signal waveform corresponds to a metric for an object monitored by the policy-based automation system, the method comprising: pictorially displaying on a display device a portion of the signal waveform including one or more impulses for which the alert condition is to be defined, wherein the portion of the signal waveform is displayed on a graph in an alert definition graphical user interface (GUI) on the display device; displaying a plurality of alert parameter user interface elements with the portion of the signal waveform displayed on the graph in the alert definition GUI, wherein each alert parameter user interface element represents a different one of a plurality of alert parameters for the signal waveform, wherein a position of each alert parameter user interface element relative to the displayed portion of the signal waveform in the alert definition GUI corresponds to a particular value for the respective alert parameter, and wherein the position of at least one of the plurality of alert parameter user interface elements relative to the displayed portion of the signal waveform in the alert definition GUI specifies a state change of the signal waveform at which the alert condition is raised to alert the policy-based automation system that the metric for the object monitored by the policy-based automation system indicates a condition of the object to which the policy-based automation system is to respond; displaying a plurality of alert parameter control user interface elements in the alert definition GUI, wherein each of the plurality of alert parameter control user interface elements corresponds to a different one of the plurality of alert parameter user interface elements, wherein each of the plurality of alert parameter control user interface elements is configured to receive user input to manipulate the position of a corresponding one of the plurality of alert parameter user interface elements relative to the displayed portion of the signal waveform in the alert definition GUI; receiving user input to at least one of the plurality of alert parameter control user interface elements to manipulate the positions of corresponding ones of the plurality of alert parameter user interface elements relative to the displayed portion of the signal waveform in the alert definition GUI, wherein manipulating the position of an alert parameter user interface element causes a corresponding change in the value of the associated alert parameter; and generating an alert definition for the policy-based automation system that specifies the alert condition from the values of the plurality of alert parameters associated with the plurality of alert parameter user interface elements; wherein the policy-based automation system is a computing environment management system. | 1. A method for graphically defining an alert condition for a signal waveform in a policy-based automation system, wherein the signal waveform corresponds to a metric for an object monitored by the policy-based automation system, the method comprising: pictorially displaying on a display device a portion of the signal waveform including one or more impulses for which the alert condition is to be defined, wherein the portion of the signal waveform is displayed on a graph in an alert definition graphical user interface (GUI) on the display device; displaying a plurality of alert parameter user interface elements with the portion of the signal waveform displayed on the graph in the alert definition GUI, wherein each alert parameter user interface element represents a different one of a plurality of alert parameters for the signal waveform, wherein a position of each alert parameter user interface element relative to the displayed portion of the signal waveform in the alert definition GUI corresponds to a particular value for the respective alert parameter, and wherein the position of at least one of the plurality of alert parameter user interface elements relative to the displayed portion of the signal waveform in the alert definition GUI specifies a state change of the signal waveform at which the alert condition is raised to alert the policy-based automation system that the metric for the object monitored by the policy-based automation system indicates a condition of the object to which the policy-based automation system is to respond; displaying a plurality of alert parameter control user interface elements in the alert definition GUI, wherein each of the plurality of alert parameter control user interface elements corresponds to a different one of the plurality of alert parameter user interface elements, wherein each of the plurality of alert parameter control user interface elements is configured to receive user input to manipulate the position of a corresponding one of the plurality of alert parameter user interface elements relative to the displayed portion of the signal waveform in the alert definition GUI; receiving user input to at least one of the plurality of alert parameter control user interface elements to manipulate the positions of corresponding ones of the plurality of alert parameter user interface elements relative to the displayed portion of the signal waveform in the alert definition GUI, wherein manipulating the position of an alert parameter user interface element causes a corresponding change in the value of the associated alert parameter; and generating an alert definition for the policy-based automation system that specifies the alert condition from the values of the plurality of alert parameters associated with the plurality of alert parameter user interface elements; wherein the policy-based automation system is a computing environment management system. 3. The method as recited in claim 1 , wherein the policy-based automation system is a Storage Area Network (SAN) management system. | 0.909904 |
9,405,841 | 15 | 19 | 15. A system for providing dynamic and category-specific search suggestions to a user, comprising: a processor; and a memory device including instructions that, when executed by the processor, cause the processor to: in response to receiving one or more characters associated with a partial search query to be executed against a set of data, determine a plurality of search queries relevant to the one or more characters; associate at least one search category with each of the plurality of relevant search query suggestions; select a subset of the at least one associated search category based at least in part a relevance value for each category meeting a threshold relevance value, the relevance value indicating a strength of an association of each category in the subset of the at least one associated search category with the plurality of relevant search query suggestions; provide for display at least the subset of the at least one associated search category and the plurality of relevant search query suggestions, the plurality of relevant search query suggestions including the one or more characters of the partial search query; determine an ordered set of some of the plurality of relevant search query suggestions and the subset of the at least one associated search category based at least in part on the relevance value of each category; and provide for display, within an allowable deviation from being simultaneous to receiving the one or more characters, a search suggestion window including the ordered set, wherein the some of the plurality of relevant search query suggestions and the subset of the at least one associated search category in the ordered set are displayed concurrently in the search suggestion window, the some of the plurality of relevant search query suggestions selectable to be executed against the set of data in the at least one associated search category. | 15. A system for providing dynamic and category-specific search suggestions to a user, comprising: a processor; and a memory device including instructions that, when executed by the processor, cause the processor to: in response to receiving one or more characters associated with a partial search query to be executed against a set of data, determine a plurality of search queries relevant to the one or more characters; associate at least one search category with each of the plurality of relevant search query suggestions; select a subset of the at least one associated search category based at least in part a relevance value for each category meeting a threshold relevance value, the relevance value indicating a strength of an association of each category in the subset of the at least one associated search category with the plurality of relevant search query suggestions; provide for display at least the subset of the at least one associated search category and the plurality of relevant search query suggestions, the plurality of relevant search query suggestions including the one or more characters of the partial search query; determine an ordered set of some of the plurality of relevant search query suggestions and the subset of the at least one associated search category based at least in part on the relevance value of each category; and provide for display, within an allowable deviation from being simultaneous to receiving the one or more characters, a search suggestion window including the ordered set, wherein the some of the plurality of relevant search query suggestions and the subset of the at least one associated search category in the ordered set are displayed concurrently in the search suggestion window, the some of the plurality of relevant search query suggestions selectable to be executed against the set of data in the at least one associated search category. 19. The system of claim 15 , wherein the memory device further includes instructions that, when executed by the processor, cause the processor to: provide a selection element adjacent to the display of each search query in the ordered set of search queries so as to enable selection of the associated search categories. | 0.792857 |
7,509,259 | 1 | 7 | 1. A method of operating a pattern recognition system for refining a plurality of statistical pattern recognition models that are used for statistical pattern recognition, the method including: reading in initial values of a set of parameters for said plurality of statistical pattern recognition models; reading a training data set that includes a plurality of training data items including training data items for each of said plurality of statistical pattern recognition models, along with a transcribed identity for each of said plurality of training data items; obtaining feature vectors from each of the plurality of training data items; using a processor to perform an optimization routine for optimizing an objective function in order to find refined values of said set of parameters for said plurality of said statistical pattern recognition models corresponding to an extremum of said objective function, wherein said objective function is dynamically defined for each of a succession of iterations of said optimization routine to include a subexpression for each k th item of training data in, at least, a subset of said plurality of training data items that is defined by, at least, a first criterion that requires that said transcribed identity does not match a recognized identity for said k th item of training data, and a second criterion that requires that there is not a gross discrepancy between said transcribed identity and said recognized identity, wherein each subexpression depends on a relative magnitude of a first probability score compared to a second probability score, wherein said first probability score is based on a value of a first statistical pattern recognition model corresponding to said recognized identity of said k th item of training data evaluated with said one or more feature vectors obtained from said k th item of training data and said second probability score is based on a value of a second statistical pattern recognition model corresponding to said transcribed identity of said k th item of training data evaluated with said one or more feature vectors obtained from said k th item of training data; and using the refined statistical pattern recognition models to recognize a pattern. | 1. A method of operating a pattern recognition system for refining a plurality of statistical pattern recognition models that are used for statistical pattern recognition, the method including: reading in initial values of a set of parameters for said plurality of statistical pattern recognition models; reading a training data set that includes a plurality of training data items including training data items for each of said plurality of statistical pattern recognition models, along with a transcribed identity for each of said plurality of training data items; obtaining feature vectors from each of the plurality of training data items; using a processor to perform an optimization routine for optimizing an objective function in order to find refined values of said set of parameters for said plurality of said statistical pattern recognition models corresponding to an extremum of said objective function, wherein said objective function is dynamically defined for each of a succession of iterations of said optimization routine to include a subexpression for each k th item of training data in, at least, a subset of said plurality of training data items that is defined by, at least, a first criterion that requires that said transcribed identity does not match a recognized identity for said k th item of training data, and a second criterion that requires that there is not a gross discrepancy between said transcribed identity and said recognized identity, wherein each subexpression depends on a relative magnitude of a first probability score compared to a second probability score, wherein said first probability score is based on a value of a first statistical pattern recognition model corresponding to said recognized identity of said k th item of training data evaluated with said one or more feature vectors obtained from said k th item of training data and said second probability score is based on a value of a second statistical pattern recognition model corresponding to said transcribed identity of said k th item of training data evaluated with said one or more feature vectors obtained from said k th item of training data; and using the refined statistical pattern recognition models to recognize a pattern. 7. The method according to claim 1 wherein reading said training data set that includes said plurality of training data items comprises: reading a set of speech samples wherein said transcribed identity identifies words spoken in said set of speech samples; and recognizing each particular item of training data using a search algorithm to find a highly likely path of a Hidden Markov Model. | 0.842593 |
9,633,657 | 9 | 10 | 9. The system of claim 1 , wherein the hearing assistance processor is configured to derive metadata or parameters for the audio data and determine, for the audio data, a recognition index as an estimation of accuracy using the metadata or the parameters. | 9. The system of claim 1 , wherein the hearing assistance processor is configured to derive metadata or parameters for the audio data and determine, for the audio data, a recognition index as an estimation of accuracy using the metadata or the parameters. 10. The system of claim 9 , wherein the hearing assistance processor is configured to compare the recognition index to a threshold and transmit a feedback notification to the hearing assistance application for display on the display screen of the mobile device. | 0.5 |
5,473,326 | 21 | 22 | 21. The system of claim 20, wherein the first comparison window includes a plurality of first comparison window byte positions arranged in a linear array from an oldest received first comparison window byte position to a newest received first comparison window byte position, and wherein said comparison window updating means includes means for deleting a first comparison window byte in the oldest position and shifting the other first comparison window bytes toward the oldest first comparison window byte position to open the newest first comparison window byte position, and transferring a deleted buffer byte into said open newest first comparison window byte position. | 21. The system of claim 20, wherein the first comparison window includes a plurality of first comparison window byte positions arranged in a linear array from an oldest received first comparison window byte position to a newest received first comparison window byte position, and wherein said comparison window updating means includes means for deleting a first comparison window byte in the oldest position and shifting the other first comparison window bytes toward the oldest first comparison window byte position to open the newest first comparison window byte position, and transferring a deleted buffer byte into said open newest first comparison window byte position. 22. The system of claim 21, further comprising means for generating a length of match symbol if a plurality of consecutive buffer bytes match first comparison window bytes, whereby the length of match symbol indicates the number of bytes in said plurality of consecutive buffer bytes. | 0.5 |
9,892,112 | 13 | 16 | 13. A method employing a knowledge engine for processing natural language input, comprising: parsing a phrase into subcomponents using the knowledge engine; identifying a category for each parsed subcomponent and a syntactic structure of the phrase; generating a list of definitions for each parsed subcomponent, the list corresponding to the identified category; ranking the definitions in the list according to relevance; identifying an outcome based on ranked relevancy, the outcome being a definition with the highest relevance in the list; searching a corpus for evidence of a pattern associated with the list; scoring each definition in the list according to a weighted calculation based on congruence of corpus evidence with the pattern; and generating an outcome, wherein the outcome is a definition with a strongest congruence to the pattern. | 13. A method employing a knowledge engine for processing natural language input, comprising: parsing a phrase into subcomponents using the knowledge engine; identifying a category for each parsed subcomponent and a syntactic structure of the phrase; generating a list of definitions for each parsed subcomponent, the list corresponding to the identified category; ranking the definitions in the list according to relevance; identifying an outcome based on ranked relevancy, the outcome being a definition with the highest relevance in the list; searching a corpus for evidence of a pattern associated with the list; scoring each definition in the list according to a weighted calculation based on congruence of corpus evidence with the pattern; and generating an outcome, wherein the outcome is a definition with a strongest congruence to the pattern. 16. The method of claim 13 , wherein each definition is an explanation of each parsed subcomponent. | 0.713873 |
9,442,976 | 15 | 17 | 15. The related-word registration device according to claim 14 , wherein the second search query specifying code causes at least one of said at least one processor to specify, as a second search query, a search query whose acquisition time is earlier than that of the first search query, having continuity based on the acquisition time, and whose number of search results is equal to or less than a predetermined value. | 15. The related-word registration device according to claim 14 , wherein the second search query specifying code causes at least one of said at least one processor to specify, as a second search query, a search query whose acquisition time is earlier than that of the first search query, having continuity based on the acquisition time, and whose number of search results is equal to or less than a predetermined value. 17. The related-word registration device according to claim 15 , wherein the first search query specifying code causes at least one of said at least one processor to specify, as a first search query, a search query whose acquisition time is latest among search queries extracted. | 0.533445 |
9,563,665 | 2 | 8 | 2. The method as described in claim 1 , further comprising: establishing the list of candidate product words comprises: for at least one product information entry contained in a database: performing a coarse granularity segmentation by the largest semantic units; and extracting a third core product word contained in segmented results; determining whether the third core product word has been extracted from the segmented results; in the event that the third core product word has been extracted from the segmented results, performing a fine granularity segmentation by the smallest semantic units: determining whether at least two of the words obtained are product words; in the event that at least two of the words obtained are product words; using the first product word as a key product word; and using the last product word as a candidate product word of the key product word; computing correlations of at least one key product word and at least one candidate product word; determining whether the correlation of the at least one key product word and the at least one candidate product word meets a threshold value; selecting a candidate product word having a correlation that meets the threshold value; and for the same key product word, generating the list of candidate product words based on the selected candidate product word. | 2. The method as described in claim 1 , further comprising: establishing the list of candidate product words comprises: for at least one product information entry contained in a database: performing a coarse granularity segmentation by the largest semantic units; and extracting a third core product word contained in segmented results; determining whether the third core product word has been extracted from the segmented results; in the event that the third core product word has been extracted from the segmented results, performing a fine granularity segmentation by the smallest semantic units: determining whether at least two of the words obtained are product words; in the event that at least two of the words obtained are product words; using the first product word as a key product word; and using the last product word as a candidate product word of the key product word; computing correlations of at least one key product word and at least one candidate product word; determining whether the correlation of the at least one key product word and the at least one candidate product word meets a threshold value; selecting a candidate product word having a correlation that meets the threshold value; and for the same key product word, generating the list of candidate product words based on the selected candidate product word. 8. The method as described in claim 2 , wherein: the computing correlations of the at least one key product word and the at least one candidate product word comprises: vectorizing the at least one key product word based on a category click through rate, an attribute click through rate, and a product word click through rate of the each key product word; vectorizing the at least one candidate product word based on the category click through rate, the attribute click through rate, and the product word click through rate of the at least one candidate product word; and computing an angle value between the vector corresponding to the at least one key product word and the vector corresponding to the at least one candidate product word; and the selecting of the candidate product word having the correlation that meets the threshold value comprises: determining whether the at least one candidate product word has the correlation meeting the threshold value based on the obtained angle value; and selecting the candidate product word having the correlation meeting the threshold value. | 0.567621 |
7,657,433 | 14 | 15 | 14. The method of claim 10 , wherein the feature is a continuous feature. | 14. The method of claim 10 , wherein the feature is a continuous feature. 15. The method of claim 14 , wherein the continuous feature is one or more of an utterance audio duration, a latency of producing speech recognition results, or a time of day. | 0.5 |
7,996,763 | 7 | 11 | 7. A method for assessing complexity levels of data representations, comprising: a computer processor obtaining a first document having information associated with a first data representation being used to model a concept and a second document having information associated with a second data representation being used to model the same concept; the computer processor prompting a user to provide individual element values for individual element objects contained in the data representation, individual attribute values for individual attribute objects contained in the data representation, and nesting values for nesting levels contained in the data representation; the computer processor inputting the element values, the attribute values, and the nesting values received from the user in a table of values; the computer processor analyzing structural components of the first document and the second document to assess a complexity score for the first data representation associated with the first document and a complexity score for the second data representation associated with the second document, wherein nesting levels of structural components in a respective document being analyzed and individual values assigned to different types of structural components in the respective document being analyzed are factored into computing the complexity score for the respective document, a customizable nesting value assigned to a nesting level being multiplied against all of the individual values of the structural components residing at the nesting level, the complexity score of the respective document being impacted by an attribute structural component of the respective document; and the computer processor determining which of the first data representation of the first document and the second data representation of the second document has a smaller complexity score, wherein the structural components comprise element objects and attribute objects, the element objects comprising a first element object having a first individual element value and a second element object having a second individual element value that is different than the first individual element value, wherein, to determine the complexity score for the respective document, the customizable nesting value of a nesting level for each element object within a respective data representation is multiplied against an element value for the element object and the customizable nesting value of the nesting level for each attribute object within the respective data representation is multiplied against an attribute value for the attribute object to determine complexity levels of the structural components of the respective data representation, the complexity levels of the structural components being aggregated. | 7. A method for assessing complexity levels of data representations, comprising: a computer processor obtaining a first document having information associated with a first data representation being used to model a concept and a second document having information associated with a second data representation being used to model the same concept; the computer processor prompting a user to provide individual element values for individual element objects contained in the data representation, individual attribute values for individual attribute objects contained in the data representation, and nesting values for nesting levels contained in the data representation; the computer processor inputting the element values, the attribute values, and the nesting values received from the user in a table of values; the computer processor analyzing structural components of the first document and the second document to assess a complexity score for the first data representation associated with the first document and a complexity score for the second data representation associated with the second document, wherein nesting levels of structural components in a respective document being analyzed and individual values assigned to different types of structural components in the respective document being analyzed are factored into computing the complexity score for the respective document, a customizable nesting value assigned to a nesting level being multiplied against all of the individual values of the structural components residing at the nesting level, the complexity score of the respective document being impacted by an attribute structural component of the respective document; and the computer processor determining which of the first data representation of the first document and the second data representation of the second document has a smaller complexity score, wherein the structural components comprise element objects and attribute objects, the element objects comprising a first element object having a first individual element value and a second element object having a second individual element value that is different than the first individual element value, wherein, to determine the complexity score for the respective document, the customizable nesting value of a nesting level for each element object within a respective data representation is multiplied against an element value for the element object and the customizable nesting value of the nesting level for each attribute object within the respective data representation is multiplied against an attribute value for the attribute object to determine complexity levels of the structural components of the respective data representation, the complexity levels of the structural components being aggregated. 11. The method of claim 7 , further comprising: outputting a result indicating which of the first data representation of the first document and the second data representation of the second document has the smaller complexity score. | 0.5 |
9,275,026 | 1 | 8 | 1. A method of modifying a digital text reader to constrain text copying, the digital text reader being a system comprising a processor running software for displaying digital text to a user, the method comprising incorporating additional software in the digital text reader to: (a) receive a set of rules limiting the amount of text in a document that may be copied, the amount of text being specified by a function of one or more of: (i) a maximum total number of words; (ii) a maximum percentage of words in a sentence for sentences having at least a specified length; (iii) a maximum percentage of words in a paragraph; and (iv) a maximum percentage of words in the document; (b) receive requests from the user to select portions of the text that is being displayed, the selected text comprising a plurality of noncontiguous blocks; (c) if the selected text conforms to the rules, then in response to the selection of the selected text automatically highlight the selected text in a first manner; (d) if the selected text does not conform to the rules, then in response to the selection of the selected text, identify a sub-portion of the selected text that contains an amount of text that conforms to the rules, and automatically highlight the sub-portion of the selected text in the first manner and provide feedback to the user indicating that the selected text contains an amount of text that violates the rules; and (e) if the user enters a copy request, concatenate the noncontiguous blocks in the portion of the text highlighted in the first manner, automatically adding separation markers between the noncontiguous blocks, and if the user further enters a paste command, paste the concatenated blocks separated by the added separation markers to a computer-readable memory as instructed by the user. | 1. A method of modifying a digital text reader to constrain text copying, the digital text reader being a system comprising a processor running software for displaying digital text to a user, the method comprising incorporating additional software in the digital text reader to: (a) receive a set of rules limiting the amount of text in a document that may be copied, the amount of text being specified by a function of one or more of: (i) a maximum total number of words; (ii) a maximum percentage of words in a sentence for sentences having at least a specified length; (iii) a maximum percentage of words in a paragraph; and (iv) a maximum percentage of words in the document; (b) receive requests from the user to select portions of the text that is being displayed, the selected text comprising a plurality of noncontiguous blocks; (c) if the selected text conforms to the rules, then in response to the selection of the selected text automatically highlight the selected text in a first manner; (d) if the selected text does not conform to the rules, then in response to the selection of the selected text, identify a sub-portion of the selected text that contains an amount of text that conforms to the rules, and automatically highlight the sub-portion of the selected text in the first manner and provide feedback to the user indicating that the selected text contains an amount of text that violates the rules; and (e) if the user enters a copy request, concatenate the noncontiguous blocks in the portion of the text highlighted in the first manner, automatically adding separation markers between the noncontiguous blocks, and if the user further enters a paste command, paste the concatenated blocks separated by the added separation markers to a computer-readable memory as instructed by the user. 8. The method of claim 1 wherein one rule specifies a maximum percentage of selected words per paragraph. | 0.847826 |
9,235,626 | 1 | 10 | 1. A system comprising: at least one processor; and a memory that stores instructions that, when executed by the at least one processor, cause the system to perform operations of: obtaining a document that is responsive to a user query, determining an interest of the user based on stored data associated with the user, wherein the interest of the user is determined based on a search history associated with the user, and wherein the search history is limited to within a predetermined period of time from the user query, determining that a portion of the document relates to the interest of the user, generating a first snippet for the document based on the portion of the document that relates to the interest of the user, and providing the first snippet for the document as part of a result list. | 1. A system comprising: at least one processor; and a memory that stores instructions that, when executed by the at least one processor, cause the system to perform operations of: obtaining a document that is responsive to a user query, determining an interest of the user based on stored data associated with the user, wherein the interest of the user is determined based on a search history associated with the user, and wherein the search history is limited to within a predetermined period of time from the user query, determining that a portion of the document relates to the interest of the user, generating a first snippet for the document based on the portion of the document that relates to the interest of the user, and providing the first snippet for the document as part of a result list. 10. The system of claim 1 , the instructions causing the system to further perform the operations of: scoring the generated first snippet by weighting an amount of overlap between the interest and the snippet. | 0.700573 |
8,015,543 | 20 | 34 | 20. A computer-readable medium comprising instructions, which when executed by a computer system causes the computer system to perform operations for a generating code based on a graphical model, the computer-readable medium comprising: instructions for translating the graphical model into a graphical model code, the graphical model code being compilable into an executable program and including a first graphical model code function, the first graphical model code function being a member of a group of graphical model code functions; instructions for receiving a selection of a first hardware specific library from a plurality of hardware specific libraries, the hardware specific libraries corresponding to one of at least a first target environment and a second target environment, the first hardware specific library corresponding to the first target environment; the hardware specific libraries comprising a plurality of relationships between the group of graphical model code functions and hardware specific functions, the hardware specific functions being compilable into object code for execution in the first target environment, and instructions for performing a lookup of the first graphical model code function in the first hardware specific library; instructions for obtaining a matched hardware specific function based on the lookup, the matched hardware specific function matching at least one property of the graphical model code function and being one of the hardware specific functions from the first hardware specific library; and instructions for modifying the graphical model code based on the matched hardware specific function. | 20. A computer-readable medium comprising instructions, which when executed by a computer system causes the computer system to perform operations for a generating code based on a graphical model, the computer-readable medium comprising: instructions for translating the graphical model into a graphical model code, the graphical model code being compilable into an executable program and including a first graphical model code function, the first graphical model code function being a member of a group of graphical model code functions; instructions for receiving a selection of a first hardware specific library from a plurality of hardware specific libraries, the hardware specific libraries corresponding to one of at least a first target environment and a second target environment, the first hardware specific library corresponding to the first target environment; the hardware specific libraries comprising a plurality of relationships between the group of graphical model code functions and hardware specific functions, the hardware specific functions being compilable into object code for execution in the first target environment, and instructions for performing a lookup of the first graphical model code function in the first hardware specific library; instructions for obtaining a matched hardware specific function based on the lookup, the matched hardware specific function matching at least one property of the graphical model code function and being one of the hardware specific functions from the first hardware specific library; and instructions for modifying the graphical model code based on the matched hardware specific function. 34. The computer-readable medium of claim 20 , wherein the hardware specific library is selected from a plurality of hardware specific libraries, where each hardware specific library comprises a plurality of relationships between graphical model code functions and hardware specific functions. | 0.770736 |
6,073,135 | 16 | 17 | 16. A computer program product for representing the connectivity of Web pages, the web pages including links between the Web pages, the links and Web pages being identified by names, the computer program product for use in conjunction with a computer system, the computer program product comprising a computer readable storage medium and a computer program mechanism embedded therein, the computer program mechanism comprising instructions that: sort the names of the Web pages in a memory; delta encode the sorted names while periodically storing full names as checkpoints in the memory, each delta encoded name and checkpoint having an assigned unique identification; twice sort a list of pairs of identifications that represent the links between the Web page, each pair of identifications including a first identification and a second identification, first according to the first identification of each pair to produce an inlist, and second according to the second identification of each pair to produce an outlist; store an array of elements in the memory, there being one array element for each Web page, each element including a first pointer to one of the checkpoints, a second pointer to an associated inlist of the Web page, and a third pointer to an associated outlist of the Web page; and index the array by a particular identification to locate connected Web pages. | 16. A computer program product for representing the connectivity of Web pages, the web pages including links between the Web pages, the links and Web pages being identified by names, the computer program product for use in conjunction with a computer system, the computer program product comprising a computer readable storage medium and a computer program mechanism embedded therein, the computer program mechanism comprising instructions that: sort the names of the Web pages in a memory; delta encode the sorted names while periodically storing full names as checkpoints in the memory, each delta encoded name and checkpoint having an assigned unique identification; twice sort a list of pairs of identifications that represent the links between the Web page, each pair of identifications including a first identification and a second identification, first according to the first identification of each pair to produce an inlist, and second according to the second identification of each pair to produce an outlist; store an array of elements in the memory, there being one array element for each Web page, each element including a first pointer to one of the checkpoints, a second pointer to an associated inlist of the Web page, and a third pointer to an associated outlist of the Web page; and index the array by a particular identification to locate connected Web pages. 17. The computer program product of claim 16 wherein the names are uniform resource locators of the Web pages, and the uniform resource locators are sorted lexicographically. | 0.812095 |
9,501,264 | 1 | 3 | 1. A system, including a mobile device, including a processor and memory maintaining instructions, the instructions being interpretable by the processor to present display elements having a first human-language meaning to a user of the mobile device; a development device, including a development environment suitable to create an app by a designer or programmer, the app including at least some of the instructions, and suitable to distribute the app to the mobile device, the app including at least some of the instructions interpretable by the processor to present the display elements in a form having a second human-language meaning to the user; the mobile device including a communication link responsive to one or more signals delivering the app to the mobile device, the communication link being responsive to the user and suitable to present messages regarding the second human-language meaning from the user to the designer or programmer, the messages including a relation between the second human-language meaning and the first human-language meaning; the app including communication link being responsive to the user to send information to the developer or programmer; the app including a first mode in which it performs a first designated function; instructions executable or interpretable by the processor to receive a signal from the user when performing the first designated function, the signal directing the app to enter a state in which it performs a second designated function; the second designated function including a user interface in which the app receives input from the user and communicates that input to the developer or programmer. | 1. A system, including a mobile device, including a processor and memory maintaining instructions, the instructions being interpretable by the processor to present display elements having a first human-language meaning to a user of the mobile device; a development device, including a development environment suitable to create an app by a designer or programmer, the app including at least some of the instructions, and suitable to distribute the app to the mobile device, the app including at least some of the instructions interpretable by the processor to present the display elements in a form having a second human-language meaning to the user; the mobile device including a communication link responsive to one or more signals delivering the app to the mobile device, the communication link being responsive to the user and suitable to present messages regarding the second human-language meaning from the user to the designer or programmer, the messages including a relation between the second human-language meaning and the first human-language meaning; the app including communication link being responsive to the user to send information to the developer or programmer; the app including a first mode in which it performs a first designated function; instructions executable or interpretable by the processor to receive a signal from the user when performing the first designated function, the signal directing the app to enter a state in which it performs a second designated function; the second designated function including a user interface in which the app receives input from the user and communicates that input to the developer or programmer. 3. A system as in claim 1 , wherein the development device, in response to the messages, packages at least some information from the messages in a pre-determined initialization data structure. | 0.813953 |
7,817,630 | 13 | 18 | 13. A communications node comprising: a first dictionary containing string expressions for use in message compression and decompression; a dynamic dictionary that stores expressions for use in the message compression and decompression for exclusive use in the communications node; and a compression/decompression module that receives a message containing individual expressions, the compression/decompression module further determining all matches between the individual expressions of the message and corresponding expressions in the first dictionary, and yet further, among the determined matches of individual expressions, determining different sequences of matches of individual expressions which are contiguous to each other in the message, each of the different matching sequences including one or more contiguous individual expressions and the different sequences which are determined to be contiguous to each other forming at least one new combined expression, and that creates in the dynamic dictionary the at least one new combined expression. | 13. A communications node comprising: a first dictionary containing string expressions for use in message compression and decompression; a dynamic dictionary that stores expressions for use in the message compression and decompression for exclusive use in the communications node; and a compression/decompression module that receives a message containing individual expressions, the compression/decompression module further determining all matches between the individual expressions of the message and corresponding expressions in the first dictionary, and yet further, among the determined matches of individual expressions, determining different sequences of matches of individual expressions which are contiguous to each other in the message, each of the different matching sequences including one or more contiguous individual expressions and the different sequences which are determined to be contiguous to each other forming at least one new combined expression, and that creates in the dynamic dictionary the at least one new combined expression. 18. The communications node claimed in claim 13 , wherein the communications node is a sending node and the compression/decompression module is a compression module. | 0.756637 |
8,151,186 | 13 | 14 | 13. A non-transitory computer-readable storage medium storing executable computer program instructions for generating a signature for a page of text, wherein the signature serves as an identifier of the text page, the instructions performing steps comprising: determining positions of a plurality of words in the text page; for a first word of the plurality of words having a first position in the text page, determining positions of a plurality of second words in the text page relative to the first word position; generating a signature value that describes the second word positions relative to the first word position; generating additional signature values for the text page, each signature value describing positions of other words in the text page relative to a word in the text page for which the signature value is being generated; comparing a first set of signature values for the text page to a second set of signature values for a second text page, wherein the first set of signature values comprises the signature value that describes the second word positions relative to the first word position and the additional signature values; and generating a measure of similarity that describes a result of the comparison. | 13. A non-transitory computer-readable storage medium storing executable computer program instructions for generating a signature for a page of text, wherein the signature serves as an identifier of the text page, the instructions performing steps comprising: determining positions of a plurality of words in the text page; for a first word of the plurality of words having a first position in the text page, determining positions of a plurality of second words in the text page relative to the first word position; generating a signature value that describes the second word positions relative to the first word position; generating additional signature values for the text page, each signature value describing positions of other words in the text page relative to a word in the text page for which the signature value is being generated; comparing a first set of signature values for the text page to a second set of signature values for a second text page, wherein the first set of signature values comprises the signature value that describes the second word positions relative to the first word position and the additional signature values; and generating a measure of similarity that describes a result of the comparison. 14. The computer-readable storage medium of claim 13 , wherein generating the signature value that describes the second word positions relative to the first word position comprises calculating, for second word positions within the plurality of second word positions, a distance between the first word position and the second word position. | 0.5 |
9,213,771 | 1 | 10 | 1. A computer-implemented question-answering method, comprising: receiving an input question; determining multiple types of knowledge databases available for searching, wherein the multiple types of knowledge databases provide data in different formats and include a question-answer paired knowledge database, a plain text knowledge database, a resource description framework (RDF) knowledge database or a combination thereof; when the question-answer paired knowledge database is available, searching question-answer paired data from the question-answer paired knowledge database to determine a first candidate answer to the input question; when the plain text knowledge database is available, searching plain text data from the plain text knowledge database to determine a second candidate answer to the input question; when the resource description framework (RDF) knowledge database is available, searching RDF data from the RDF knowledge database to determine a third candidate answer to the input question; and evaluating the first, second or third candidate answer to generate a final answer to the input question. | 1. A computer-implemented question-answering method, comprising: receiving an input question; determining multiple types of knowledge databases available for searching, wherein the multiple types of knowledge databases provide data in different formats and include a question-answer paired knowledge database, a plain text knowledge database, a resource description framework (RDF) knowledge database or a combination thereof; when the question-answer paired knowledge database is available, searching question-answer paired data from the question-answer paired knowledge database to determine a first candidate answer to the input question; when the plain text knowledge database is available, searching plain text data from the plain text knowledge database to determine a second candidate answer to the input question; when the resource description framework (RDF) knowledge database is available, searching RDF data from the RDF knowledge database to determine a third candidate answer to the input question; and evaluating the first, second or third candidate answer to generate a final answer to the input question. 10. The method of claim 1 wherein searching the plain text data from the one plain text knowledge database comprises: generating a query statement based on a mining set of search terms associated with the input question, wherein the query statement is executable to perform a search of the plain text data to generate search results. | 0.547554 |
5,559,898 | 23 | 24 | 23. The method of claim 19, wherein the number of element calculation cycles in each interval of calculation cycles between the exclusion check points is variable. | 23. The method of claim 19, wherein the number of element calculation cycles in each interval of calculation cycles between the exclusion check points is variable. 24. The method of claim 23, wherein the number of element calculation cycles in each interval is adaptive as the element by element correlation calculation (s,t.sub.i) proceeds. | 0.5 |
8,838,605 | 16 | 17 | 16. A non-transitory program storage device readable by a machine, embodying a program of instructions executable by the machine to perform a method, the method comprising: parsing patent data to generate a set of nodes; selecting at least one node of the set of nodes; determining initial links from meta data associated with the patent data for the at least one node; creating links among the set of nodes based on the metadata; identifying a set of seed nodes; determining a community structure for the set of seed nodes, the community structure including a plurality of communities; and assigning concepts to the plurality of communities, wherein determining the community structure comprises: initiating a percolation message from a source node of a linked network, the linked network comprising a plurality of nodes and a plurality of edges, each edge connecting at least two of the plurality of nodes, wherein a node is a neighbor if the node is connected to another node in the plurality of nodes by an edge, wherein the percolation message comprises a percolation probability and an identifier of the source node, and wherein initiating a percolation message from the source node comprises transmitting the percolation message to each neighbor of the source node with the percolation probability; propagating the percolation message through the linked network, wherein propagating the percolation message through the linked network comprises: transmitting the percolation message from each node that receives the percolation message to each neighbor of each node that receives the percolation message; and transmitting a response to the source node from each node that receives the percolation message; collecting each response to the percolation message at the source node; and storing a list of nodes that transmitted the response at the source node. | 16. A non-transitory program storage device readable by a machine, embodying a program of instructions executable by the machine to perform a method, the method comprising: parsing patent data to generate a set of nodes; selecting at least one node of the set of nodes; determining initial links from meta data associated with the patent data for the at least one node; creating links among the set of nodes based on the metadata; identifying a set of seed nodes; determining a community structure for the set of seed nodes, the community structure including a plurality of communities; and assigning concepts to the plurality of communities, wherein determining the community structure comprises: initiating a percolation message from a source node of a linked network, the linked network comprising a plurality of nodes and a plurality of edges, each edge connecting at least two of the plurality of nodes, wherein a node is a neighbor if the node is connected to another node in the plurality of nodes by an edge, wherein the percolation message comprises a percolation probability and an identifier of the source node, and wherein initiating a percolation message from the source node comprises transmitting the percolation message to each neighbor of the source node with the percolation probability; propagating the percolation message through the linked network, wherein propagating the percolation message through the linked network comprises: transmitting the percolation message from each node that receives the percolation message to each neighbor of each node that receives the percolation message; and transmitting a response to the source node from each node that receives the percolation message; collecting each response to the percolation message at the source node; and storing a list of nodes that transmitted the response at the source node. 17. The program storage device of claim 16 , further comprising refining weights of the plurality of communities and modifying the community structure. | 0.77924 |
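The first row above (patent 6,073,135) describes delta-encoding a sorted list of page names while periodically storing full names as checkpoints, with each entry's position serving as its identification. A minimal sketch of that idea follows, for illustration only: the function names, the checkpoint interval, and the sample URLs are assumptions, not taken from the patent or from this dataset.

```python
# Minimal sketch of "delta encode sorted names with periodic full-name checkpoints".
# Entry index doubles as the unique identification; names/interval are illustrative.

def delta_encode_with_checkpoints(names, checkpoint_interval=4):
    """Return one entry per sorted name: a full-name checkpoint every
    checkpoint_interval entries, otherwise (shared_prefix_len, suffix)
    relative to the previous name."""
    encoded = []
    prev = ""
    for i, name in enumerate(sorted(names)):
        if i % checkpoint_interval == 0:
            encoded.append(("checkpoint", name))  # store the full name
        else:
            shared = 0
            while shared < min(len(prev), len(name)) and prev[shared] == name[shared]:
                shared += 1
            encoded.append(("delta", shared, name[shared:]))
        prev = name
    return encoded


def decode(encoded, ident):
    """Recover the full name for a given identification (list index)
    by walking back to the nearest checkpoint and replaying deltas."""
    start = ident
    while encoded[start][0] != "checkpoint":
        start -= 1
    name = encoded[start][1]
    for _, shared, suffix in encoded[start + 1:ident + 1]:
        name = name[:shared] + suffix
    return name


if __name__ == "__main__":
    urls = ["http://a.com/x", "http://a.com/y", "http://b.com/",
            "http://a.com/xy", "http://b.com/z"]
    enc = delta_encode_with_checkpoints(urls, checkpoint_interval=3)
    assert all(decode(enc, i) == u for i, u in enumerate(sorted(urls)))
    print(enc)
```

The checkpoint interval trades space for lookup cost: a smaller interval stores more full names but shortens the delta replay needed to decode any given identification.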