Dataset columns: output (strings, 7 to 3.46k characters), input (1 distinct value), instruction (strings, 129 to 114k characters).
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: An aspect of the present disclosure aims to reduce problems associated with data acquisition of a rule set. Systems and methods enabling a semantic reasoner to stage acquisition of data objects necessary to bring each of the rules stored in the knowledge base to a conclusion are disclosed. To that end, a dependency chain is constructed, identifying whether and how each rule depends on other rules. Based on the dependency chain, the rules are assigned to different epochs and the reasoning engine is configured to perform machine reasoning over rules of each epoch sequentially. Moreover, when processing rules of each epoch, data objects referenced by the rules assigned to a currently processed epoch are acquired according to a certain order established based on criteria such as the cost of acquisition of data objects. Such an approach provides automatic determination and just-in-time acquisition of data objects required for semantic reasoning.
G06N5025
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: An aspect of the present disclosure aims to reduce problems associated with data acquisition of a rule set. Systems and methods enabling a semantic reasoner to stage acquisition of data objects necessary to bring each of the rules stored in the knowledge base to a conclusion are disclosed. To that end, a dependency chain is constructed, identifying whether and how each rule depends on other rules. Based on the dependency chain, the rules are assigned to different epochs and the reasoning engine is configured to perform machine reasoning over rules of each epoch sequentially. Moreover, when processing rules of each epoch, data objects referenced by the rules assigned to a currently processed epoch are acquired according to a certain order established based on criteria such as the cost of acquisition of data objects. Such an approach provides automatic determination and just-in-time acquisition of data objects required for semantic reasoning.
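For context on the staging this abstract describes, here is a minimal Python sketch of how rules might be grouped into epochs from a dependency chain and how one epoch's data objects might be ordered by acquisition cost. The dict-based rule representation, the function names, and the assumption that the dependency chain is acyclic are all illustrative; the patent does not specify an implementation.

```python
from collections import defaultdict

def assign_epochs(dependencies):
    """Assign each rule to an epoch so that a rule lands in a later epoch
    than every rule it depends on. `dependencies` maps rule -> set of rules
    it depends on, and is assumed to be acyclic."""
    memo = {}

    def depth(rule):
        # A rule's epoch is one past the deepest epoch among its dependencies.
        if rule not in memo:
            deps = dependencies.get(rule, set())
            memo[rule] = 0 if not deps else 1 + max(depth(d) for d in deps)
        return memo[rule]

    epochs = defaultdict(list)
    for rule in dependencies:
        epochs[depth(rule)].append(rule)
    return dict(epochs)

def order_acquisitions(data_objects, cost_of):
    """Order the data objects referenced by one epoch's rules by an
    acquisition-cost criterion, cheapest first."""
    return sorted(data_objects, key=cost_of)

# Toy example: r3 depends on r1 and r2, r2 depends on r1.
deps = {"r1": set(), "r2": {"r1"}, "r3": {"r1", "r2"}}
print(assign_epochs(deps))                      # {0: ['r1'], 1: ['r2'], 2: ['r3']}
print(order_acquisitions(["objA", "objB"], {"objA": 5, "objB": 2}.get))
```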
Systems and methods are provided for building a user model. The system includes a processor and a non-transitory storage medium accessible to the processor. The processor is configured to obtain user data from a database, where the user data include user behavior for a plurality of apps installed on one or more user terminals. The processor selects at least one rating parameter using the user data, where the at least one rating parameter indicates a rating of relevant app usage. The system builds the user model based on a rating matrix comprising the at least one rating parameter.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system for building a user model, comprising: a processor and a non-transitory storage medium accessible to the processor, the processor configured to: obtain user data from a database, wherein the user data comprise user behavior for a plurality of apps installed on one or more user terminals; select at least one rating parameters using the user data, wherein the at least one rating parameters indicates a rating of relevant app; and build the user model based on a rating matrix comprising the at least one rating parameters. 2. The system of claim 1, wherein at least one rating parameters comprises at least one of the following: an explicit rating by a user; and an implicit rating, the implicit rating comprising usage time that represents time spent on an app in a preset time period and interaction frequency indicating a frequency of accessing the app. 3. The system of claim 1, wherein at least one rating parameters comprises: normalized usage time that represents a ratio of the usage time compared with aggregate app statistics for a group of users with a preset common character. 4. The system of claim 1, wherein at least one rating parameters comprises: log usage time that represents a log transformation to account for marginal utility. 5. The system of claim 1, wherein the processor is configured to build the user model based on the rating matrix using matrix factorization. 6. The system of claim 1, wherein the processor is configured to build the user model using collaborative filtering. 7. The system of claim 5, wherein the processor is configured to project the rating matrix into a product of a first factor matrix U and a second factor matrix P. 8. The system of claim 7, wherein the processor is configured to: introduce a weight matrix W to calculate a cost function of weighted least square errors; build the user model by minimizing the cost function via alternatively estimating the first factor matrix U using the second factor matrix P and estimating the second factor matrix P using the first factor matrix U; estimate app usage using the user model; and recommend at least one app candidates based on the estimated app usage. 9. A method for building a user model, comprising: obtaining, by one or more devices having a processor, user data from a database, wherein the user data comprise user behavior for a plurality of apps installed on one or more user terminals; selecting, by the one or more devices, at least one rating parameters using the user data, wherein the at least one rating parameters indicates a rating of relevant app; building, by the one or more devices, the user model based on a rating matrix comprising the at least one rating parameters; estimating, by the one or more devices, app usage using the user model; and recommending, by the one or more devices, at least one app candidates based on the app usage. 10. The method of claim 9, wherein at least one rating parameters comprises at least one of the following: an explicit rating by a user; and an implicit rating, the implicit rating comprising usage time that represents time spent on an app in a preset time period and interaction frequency indicating a frequency of accessing the app. 11. The method of claim 9, wherein at least one rating parameters comprises: normalized usage time that represents a ratio of the usage time compared with aggregate app statistics for a group of users with a preset common character. 12. 
The method of claim 9, wherein at least one rating parameters comprises: log usage time that represents a log transformation to account for marginal utility. 13. The method of claim 9, further comprising: building the user model based on the rating matrix using matrix factorization. 14. The method of claim 13, wherein the matrix factorization comprises collaborative filtering. 15. The method of claim 13, further comprising: projecting the rating matrix into a product of a first factor matrix U and a second factor matrix P. 16. The method of claim 15, further comprising: introducing a weight matrix W to calculate a cost function of weighted least square errors; and building the user model by minimizing the cost function via alternatively estimating the first factor matrix U using the second factor matrix P and estimating the second factor matrix P using the first factor matrix U. 17. A non-transitory storage medium, comprising: instructions executable to obtain user data from a database, wherein the user data comprise user behavior for a plurality of apps installed on one or more user terminals; instructions executable to select at least one rating parameters using the user data, wherein the at least one rating parameters indicates a rating of relevant app; and instructions executable to build a user model based on a rating matrix comprising the at least one rating parameters. 18. The non-transitory storage medium of claim 17, further comprising: instructions executable to build the user model based on the rating matrix using collaborative filtering; wherein at least one rating parameters comprises at least one of the following: usage time that represents time spent on an app in a preset time period; normalized usage time that represents a ratio of the usage time compared with aggregate app statistics for a group of users with a preset common character; and log usage time that represents a log transformation to account for marginal utility. 19. The non-transitory storage medium of claim 17, further comprising: instructions executable to project the rating matrix into a product of a first factor matrix U and a second factor matrix P. 20. The non-transitory storage medium of claim 19, further comprising: instructions executable to introduce a weight matrix W to calculate a cost function of weighted least square errors; instructions executable to build the user model by minimizing the cost function via alternatively estimating the first factor matrix U using the second factor matrix P and estimating the second factor matrix P using the first factor matrix U; instructions executable to estimate app usage using the user model; and instructions executable to recommend at least one app candidates based on the estimated app usage.
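As a rough illustration of the factorization described in claims 7, 8, 16 and 20 above, the following Python sketch factors a rating matrix into factor matrices U and P under a weight matrix W by alternating weighted least-squares updates. The toy data, the rank k, and the small ridge term added for numerical stability are assumptions of this sketch, not part of the claims.

```python
import numpy as np

def weighted_als(R, W, k=2, iters=20, ridge=1e-6):
    """Weighted alternating least squares: factor a rating matrix R (users x apps)
    into U (users x k) and P (apps x k) so that R ~ U @ P.T, minimising the
    weighted squared error sum(W * (R - U @ P.T)**2)."""
    n_users, n_items = R.shape
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(n_users, k))
    P = rng.normal(scale=0.1, size=(n_items, k))
    for _ in range(iters):
        # Fix P, solve a weighted least-squares problem per user row.
        for u in range(n_users):
            Wu = np.diag(W[u])
            A = P.T @ Wu @ P + ridge * np.eye(k)
            b = P.T @ Wu @ R[u]
            U[u] = np.linalg.solve(A, b)
        # Fix U, solve a weighted least-squares problem per app column.
        for i in range(n_items):
            Wi = np.diag(W[:, i])
            A = U.T @ Wi @ U + ridge * np.eye(k)
            b = U.T @ Wi @ R[:, i]
            P[i] = np.linalg.solve(A, b)
    return U, P

# Toy example: 3 users x 4 apps, zero weight where usage is unobserved.
R = np.array([[5.0, 3.0, 0.0, 1.0],
              [4.0, 0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 5.0]])
W = (R > 0).astype(float)
U, P = weighted_als(R, W, k=2)
estimated_usage = U @ P.T    # estimated ratings, usable for app recommendation
```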
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Systems and methods are provided for building a user model. The system includes a processor and a non-transitory storage medium accessible to the processor. The processor is configured to obtain user data from a database, where the user data include user behavior for a plurality of apps installed on one or more user terminals. The processor selects at least one rating parameter using the user data, where the at least one rating parameter indicates a rating of relevant app usage. The system builds the user model based on a rating matrix comprising the at least one rating parameter.
G06N5022
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Systems and methods are provided for building a user model. The system includes a processor and a non-transitory storage medium accessible to the processor. The processor is configured to obtain user data from a database, where the user data include user behavior for a plurality of apps installed on one or more user terminals. The processor selects at least one rating parameter using the user data, where the at least one rating parameter indicates a rating of relevant app usage. The system builds the user model based on a rating matrix comprising the at least one rating parameter.
Analyzing a set of policies. A goal comprising a particular outcome is received. An analysis object comprising a data structure maintaining information needed to perform an analysis of the goal is defined. The analysis object is configured to limit a number of calculations needed to achieve the goal. Each member of a set of expressions found in the set of policies has an output. The output is the same for each expression. One of the set of expressions is solved. The solved output is cached in the analysis object such that the solved output is associated with each member of the set of expressions. The analysis object is processed to create a set of values that achieves the goal. Processing includes referencing the cache to retrieve the solved output each time a member of the set of expressions is to be solved during processing of the analysis object.
Please help me write a proper abstract based on the patent claims. CLAIM: 1-20. (canceled) 21. A method implemented in a computer processor, the method comprising: receiving, at the processor, a plurality of policies from a non-transitory computer readable storage medium in communication with the processor, the plurality of policies comprising a corresponding plurality of attributes; calculating, by the processor, a plurality of configurations for the corresponding plurality of attributes; defining, by the processor, a vector space, wherein components of the vector space comprise individual attribute settings and rule outputs, as well as overall policy outputs, for each configuration of the corresponding plurality of attributes; and searching, by the processor, the vector space based on an input received at the processor. 22. The method of claim 21 further comprising: prior to searching, pre-computing and indexing, by the processor, a result space for the plurality of policies and the corresponding plurality of attributes. 23. The method of claim 22 further comprising: outputting, by the processor, values of clauses and rules in the plurality of policies in response to a query received at the processor. 24. The method of claim 22 further comprising: receiving, at the processor, an input comprising a particular policy; and outputting, by the processor, a similar rule that has a similar result relative to the particular policy, the similar rule being in the plurality of policies. 25. The method of claim 21, wherein searching comprises searching for a particular configuration of the corresponding plurality of attributes. 26. The method of claim 21, wherein searching comprises searching for conditions under which a desired result can be achieved from the plurality of policies. 27. The method of claim 21 further comprising: outputting, by the processor, a suggestion regarding which attributes of the corresponding plurality of attributes make a most significant contribution to a complexity of the plurality of policies. 28. The method of claim 21 further comprising: determining, by the processor based on the searching, a side effect of a set of policies within the plurality of policies, wherein the side effect comprises an unintended and unexpected result of applying a particular set of inputs for particular ones of the plurality of policies. 29. The method of claim 21 further comprising: outputting, by the processor based on the searching, rules for which a sufficient number of necessary conditions are satisfied. 30. The method of claim 21 further comprising: outputting, by the processor based on the searching, a reason why conditions of a rule were not satisfied by listing a disjunctive normal form minterm expression that was satisfied or not satisfied in the plurality of polices. 31. The method of claim 21 further comprising: outputting, by the processor based on the searching, an attribute expression that is a sole negative component of a minterm or expression that otherwise would have been satisfied. 32. The method of claim 21 further comprising: outputting, by the processor based on the searching, a configuration in the vector space that is nearby another configuration in the vector space. 33. The method of claim 32 further comprising: determining, by the processor based on the outputting, whether an overall result of applying the plurality of policies could be changed by changing only one attribute value in the corresponding plurality of attribute values. 34. 
The method of claim 21 further comprising: outputting, by the processor based on the searching, a rule or rules that allow or deny a particular result of implementing the plurality of policies. 35. The method of claim 21 further comprising: outputting, by the processor based on the searching, a rule or rules that could have allowed or denied a particular result of implementing the plurality of policies, but were not satisfied by a selection of attribute settings of the plurality of attributes. 36. The method of claim 21 further comprising: outputting, by the processor based on the searching, a reason why a particular rule in the plurality of rules fires when a particular set of attributes is received for the plurality of attributes. 37. The method of claim 36, wherein outputting is performed by transforming a logic of the particular rule into a disjunctive normal form and then displaying particular disjunctive normal form clauses that were active in firing the particular rule. 38. The method of claim 21 further comprising: outputting, by the processor based on the searching, a sensitivity of a particular attribute in the plurality of attributes by outputting the particular attribute if the particular attribute may be changed without changing a result of applying the plurality of policies. 39. A computer comprising: a bus; a processor connected to the bus; a non-transitory computer readable storage medium connected to the bus, the non-transitory computer readable storage medium storing program code, which when executed by the processor, performs a method, the program code comprising: program code for receiving, at the processor, a plurality of policies from a non-transitory computer readable storage medium in communication with the processor, the plurality of policies comprising a corresponding plurality of attributes; program code for calculating, by the processor, a plurality of configurations for the corresponding plurality of attributes; program code for defining, by the processor, a vector space, wherein components of the vector space comprise individual attribute settings and rule outputs, as well as overall policy outputs, for each configuration of the corresponding plurality of attributes; and program code for searching, by the processor, the vector space based on an input received at the processor. 40. A non-transitory computer readable storage medium storing program code, which when executed by the processor, performs a method, the program code comprising: program code for receiving, at the processor, a plurality of policies from a non-transitory computer readable storage medium in communication with the processor, the plurality of policies comprising a corresponding plurality of attributes; program code for calculating, by the processor, a plurality of configurations for the corresponding plurality of attributes; program code for defining, by the processor, a vector space, wherein components of the vector space comprise individual attribute settings and rule outputs, as well as overall policy outputs, for each configuration of the corresponding plurality of attributes; and program code for searching, by the processor, the vector space based on an input received at the processor.
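A minimal Python sketch of the kind of pre-computation these claims describe: enumerate every configuration of the attributes, record each rule's output and the overall policy output as components of a "vector space", cache shared clause evaluations so an expression is solved only once per configuration, and search the result space for configurations that achieve a goal. The clause names, rule structure, and attribute domains below are hypothetical examples, not taken from the patent.

```python
from itertools import product

# Hypothetical policy model: rules are conjunctions of named clauses, and the
# same clause may be shared by several rules, which is what makes caching useful.
CLAUSES = {
    "is_admin": lambda c: c["role"] == "admin",
    "has_mfa":  lambda c: c["mfa"],
}
RULES = {
    "allow_admin":    ["is_admin", "has_mfa"],
    "allow_mfa_user": ["has_mfa"],
}
ATTRIBUTES = {"role": ["admin", "user"], "mfa": [True, False]}

def build_result_space(attributes, clauses, rules):
    """Pre-compute the 'vector space': every attribute configuration together
    with each rule's output and the overall policy output, caching each shared
    clause so it is solved only once per configuration."""
    names, domains = zip(*attributes.items())
    space = []
    for values in product(*domains):
        config = dict(zip(names, values))
        cache = {}                                   # solved clause outputs

        def solve(clause):
            if clause not in cache:
                cache[clause] = clauses[clause](config)
            return cache[clause]

        rule_outputs = {r: all(solve(cl) for cl in cls) for r, cls in rules.items()}
        space.append({"config": config,
                      "rules": rule_outputs,
                      "policy": any(rule_outputs.values())})
    return space

def search(space, goal):
    """Search the pre-computed space for configurations achieving a goal."""
    return [p["config"] for p in space if p["policy"] == goal]

space = build_result_space(ATTRIBUTES, CLAUSES, RULES)
print(search(space, goal=True))   # configurations under which access is allowed
```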
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Analyzing a set of policies. A goal comprising a particular outcome is received. An analysis object comprising a data structure maintaining information needed to perform an analysis of the goal is defined. The analysis object is configured to limit a number of calculations needed to achieve the goal. Each member of a set of expressions found in the set of policies has an output. The output is the same for each expression. One of the set of expressions is solved. The solved output is cached in the analysis object such that the solved output is associated with each member of the set of expressions. The analysis object is processed to create a set of values that achieves the goal. Processing includes referencing the cache to retrieve the solved output each time a member of the set of expressions is to be solved during processing of the analysis object.
G06N5025
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Analyzing a set of policies. A goal comprising a particular outcome is received. An analysis object comprising a data structure maintaining information needed to perform an analysis of the goal is defined. The analysis object is configured to limit a number of calculations needed to achieve the goal. Each member of a set of expressions found in the set of policies has an output. The output is the same for each expression. One of the set of expressions is solved. The solved output is cached in the analysis object such that the solved output is associated with each member of the set of expressions. The analysis object is processed to create a set of values that achieves the goal. Processing includes referencing the cache to retrieve the solved output each time a member of the set of expressions is to be solved during processing of the analysis object.
Determining the value of an item or event at a point in time or over a targeted period of time based on the values of the item over a past era. During the past era, data is obtained to represent the changes to the value during the era. This data is iteratively processed by: smoothing the curve, subtracting the smoothed curve from the original curve to obtain a next-level curve, and repeating the process. After several iterations, a set of smoothed-out curves is obtained and then projected time-wise to encompass the targeted time or period of time. Once projected, the curves are combined and the value at the targeted time or period of time can be ascertained.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method, under the control of a processing unit, for identifying the value of characteristics of an item at a point in time based on characteristic information representative of the item during a past era of time, the method comprising the actions of the processing unit: receiving chart data representative of one or more characteristics of an item during a past era of time; iteratively extracting varying degrees of characteristic information from the chart data for n levels; for each degree of characteristic information, project the data to a different point in time; and iteratively combining the projected data to create new chart data representative of one or more characteristics of the item at a different point in time; and further comprising the action of invoking an action based at least in part on the value of the one or more characteristics at the different point in time. 2. The method of claim 1, wherein the action of iteratively extracting varying degrees of characteristic information from the chart data for n levels comprises the actions of: (a) identifying the trend of the data represented in the current level chart data; (b) extracting the characteristic information from the current level chart data as a function of the identified trend of the data and the current level curve data; (c) setting a next level chart data to be the characteristic information for the current level; (d) using the next level chart data and repeating actions (a)-(c); and (e) after n iterations, set the margin of error as a function of the characteristic information and the chart data for the final level. 3. The method of claim 2, wherein the action of projecting the data to a different point in time for each degree of characteristic information comprises the actions of: identifying a target point in time; and projecting a curve represented by the identified trend of data for each level of the chart data at least to that target point in time by continuing the trend and pattern of the data. 4. The method of claim 3, wherein the action of iteratively combining the projected data to create new chart data representative of one or more characteristics of the item at a different point in time comprises the action of combining each level of the projected data at least for the data corresponding with the different point in time. 5. The method of claim 2, wherein the action of identifying the trend of the data represented in the current level chart data comprises the action of smoothing the curve represented by the chart data. 6. The method of claim 5, wherein the action of smoothing the curve comprises calculating a moving average. 7. The method of claim 2, wherein the action of extracting the characteristic information from the current level chart data as a function of the identified trend of the data and the current level curve data comprises subtracting the identified trend of the data from the curve represented by the chart data. 8. The method of claim 2, wherein the action of generating a next level chart data as a function of the characteristic information for the current level and the chart data for the current level comprises the action of subtracting the characteristic information from the current level from the chart data for the current level. 9. 
The method of claim 1, wherein the action of projecting the data to a different point in time for each degree of characteristic information comprises the actions of: identifying a target point in time; and projecting a curve represented by the characteristic information for each level at least to that target point in time by continuing the trend and pattern of the data. 10. The method of claim 1, wherein the action of iteratively combining the projected data to create new chart data representative of one or more characteristics of the item at a different point in time comprises the actions of: combining each level of the projected data at least for the data corresponding with the different point in time. 11. The method of claim 1, further comprising the action of attributing any remaining characteristic information for the nth iteration as the margin of error. 12. The method of claim 11, wherein the iterative process is repeated until the margin of error is substantially negligible. 13. The method of claim 1, wherein the action of iteratively extracting varying degrees of characteristic information from the chart data for n levels comprises the actions of: (a) smoothing the curve of the data represented in the current level chart data; (b) setting a next level chart data to be the difference between the smoothed curve and the original chart data; and (c) using the next level chart data and repeating actions (a)-(b). 14. The method of claim 13, wherein the action of smoothing the curve comprises calculating a moving average. 15. The method of claim 13, wherein the action of projecting the smoothed data to include the target point or period of time comprises: determining if there is a repeatable pattern within the smoothed data over time period t and repeating the pattern over additional time periods t until the target point or period is included; and if no repeatable pattern is identified, determining if there is a trend within the smoothed data over time period t and continuing the trend over additional time periods t until the target point or period is included. 16. A method, implemented within a computing environment, configured to approximate characteristics of an item for a target point or period time based on characteristic information representative of the item during a past era of time, the method comprising the actions of: receiving data representative of one or more characteristics of an item during a past era of time; extracting varying degrees of characteristic information from the characteristic data by: applying a smoothing algorithm to the characteristic data over at least a portion of the past era; subtract the smoothed data from the characteristic data to obtain remainder data; repeat the applying action and subtracting action two or more times with the remainder data; for each set of smoothed data, projecting the smoothed data to include the target point or period of time; combining the projected smoothed data to create projected characteristic data representative of the one or more characteristics of the item; and invoking an action based at least in part on the value of the one or more characteristics at the different point in time. 17. The method of claim 16, wherein the action of applying a smoothing algorithm comprises calculating a moving average. 18. 
The method of claim 16, wherein the action of projecting the smoothed data to include the target point or period of time comprises: identifying a pattern within the smoothed data over time period t; repeating the pattern over additional time periods t until the target point or period is included. 19. The method of claim 16, wherein the action of projecting the smoothed data to include the target point or period of time comprises: identifying a trend within the smoothed data over time period t; continuing the trend over additional time periods t until the target point or period is included. 20. The method of claim 16, wherein the action of projecting the smoothed data to include the target point or period of time comprises: determining if there is a repeatable pattern within the smoothed data over time period t and repeating the pattern over additional time periods t until the target point or period is included; and if no repeatable pattern is identified, determining if there is a trend within the smoothed data over time period t and continuing the trend over additional time periods t until the target point or period is included.
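The decomposition in claims 13 through 16 can be sketched in Python as follows: repeatedly smooth the series with a moving average, carry the residual to the next level, project each smoothed level forward, and recombine. The three-point window, the fixed number of levels, and the straight-line trend projection (rather than pattern repetition) are simplifying assumptions of this sketch.

```python
import numpy as np

def moving_average(x, window=3):
    """Simple smoothing step; the claims only require 'a moving average'."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def decompose(values, levels=3, window=3):
    """Iteratively smooth the series and carry the residual to the next level.
    Returns each level's smoothed curve plus the final residual, which the
    claims treat as the margin of error."""
    smoothed_levels, current = [], np.asarray(values, dtype=float)
    for _ in range(levels):
        smooth = moving_average(current, window)
        smoothed_levels.append(smooth)
        current = current - smooth           # next-level chart data
    return smoothed_levels, current

def project_linear(curve, horizon):
    """Continue a level's trend to the target period with a straight-line fit;
    the claims allow repeating a pattern or continuing a trend, and this sketch
    only implements the trend case."""
    t = np.arange(len(curve))
    slope, intercept = np.polyfit(t, curve, 1)
    future_t = np.arange(len(curve) + horizon)
    return slope * future_t + intercept

def forecast(values, horizon, levels=3, window=3):
    smoothed_levels, _ = decompose(values, levels, window)
    projections = [project_linear(s, horizon) for s in smoothed_levels]
    return np.sum(projections, axis=0)        # recombine the projected levels

history = [10, 12, 13, 15, 14, 16, 18, 19, 21, 22]
print(forecast(history, horizon=4)[-4:])      # values over the target period
```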
REJECTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: Determining the value of an item or event at a point in time or over a targeted period of time based on the values of the item over a past era. During the past era, data is obtained to represent the changes to the value during the era. This data is iteratively processed by: smoothing the curve, subtracting the smoothed curve from the original curve to obtain a next-level curve, and repeating the process. After several iterations, a set of smoothed-out curves is obtained and then projected time-wise to encompass the targeted time or period of time. Once projected, the curves are combined and the value at the targeted time or period of time can be ascertained.
G06N5048
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Determining the value of an item or event at a point in time or over a targeted period of time based on the values of the item over a past era. During the past era, data is obtained to represent the changes to the value during the era. This data is iteratively processed by: smoothing the curve, subtracting the smoothed curve from the original curve to obtain a next-level curve, and repeating the process. After several iterations, a set of smoothed-out curves is obtained and then projected time-wise to encompass the targeted time or period of time. Once projected, the curves are combined and the value at the targeted time or period of time can be ascertained.
Method, system, and programs for estimating interests of a plurality of users with respect to a new piece of information are disclosed. In one example, historical interests of the plurality of users are obtained with respect to one or more existing pieces of information. One or more users are selected from the plurality of users. Historical interests of the one or more users can minimize an objective function over the plurality of users. Interests of the one or more users are obtained with respect to the new piece of information. Estimated interests of the plurality of users are generated with respect to the new piece of information based on the obtained interests of the one or more users.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method, implemented on a machine having at least one processor, storage, and a communication platform connected to a network for estimating interests of a plurality of users with respect to a new piece of information, comprising: obtaining historical interests of the plurality of users with respect to one or more existing pieces of information; selecting one or more users from the plurality of users, wherein historical interests of the one or more users minimize an objective function over the plurality of users; obtaining interests of the one or more users with respect to the new piece of information; and generating estimated interests of the plurality of users with respect to the new piece of information based on the obtained interests of the one or more users. 2. The method of claim 1, wherein quantity of the selected one or more users is a predetermined number. 3. The method of claim 1, wherein the objective function represents an expected mean square error between the estimated interests and real interests of the plurality of users with respect to the new piece of information. 4. The method of claim 1, wherein selecting one or more users from the plurality of users comprises: generating a vector for each of the plurality of users, wherein the vector represents historical interests of the user; determining a plurality of user sets, wherein each user set comprises a same predetermined number of users in the plurality of users; generating a plurality of function values by calculating the objective function based on vectors for each of the plurality of user sets; and selecting one of the plurality of user sets so that the function value generated based on the selected user set is least among the plurality of function values. 5. The method of claim 1, wherein selecting one or more users from the plurality of users comprises: generating a vector for each of the plurality of users, wherein the vector represents historical interests of the user; determining a user set which initially comprises the plurality of users; generating a matrix which initially comprises a plurality of columns, wherein each column of the matrix corresponds to a generated vector for one of the users in the user set; generating a plurality of candidate matrices each of which corresponding to a user in the user set and generated by removing the column corresponding to the user from the matrix; generating a plurality of function values by calculating the objective function based on each of the plurality of candidate matrices; selecting one of the plurality of candidate matrices so that the function value generated based on the selected candidate matrix is least among the plurality of function values; updating the matrix with the selected candidate matrix; and updating the user set by removing the user corresponding to the selected candidate matrix from the user set. 6. The method of claim 1, wherein the estimated interests of the plurality of users are generated based on a least squares model. 7. The method of claim 1, further comprising: identifying a user other than the plurality of users; and estimating interest of the user based on the obtained interests of the one or more users. 8. 
A system, having at least one processor, storage, and a communication platform connected to a network for estimating interests of a plurality of users with respect to a new piece of information, comprising: a user interest retriever configured to obtain historical interests of the plurality of users with respect to one or more existing pieces of information; a reviewer selection unit configured to select one or more users from the plurality of users, wherein historical interests of the one or more users minimize an objective function over the plurality of users; a review receiver configured to obtain interests of the one or more users with respect to the new piece of information; and a user interest estimation unit configured to generate estimated interests of the plurality of users with respect to the new piece of information based on the obtained interests of the one or more users. 9. The system of claim 8, wherein quantity of the selected one or more users is a predetermined number. 10. The system of claim 8, wherein the objective function represents an expected mean square error between the estimated interests and real interests of the plurality of users with respect to the new piece of information. 11. The system of claim 8, wherein the reviewer selection unit comprises: a reviewer selection controller configured to generate a vector for each of the plurality of users, wherein the vector represents historical interests of the user; a reviewer determiner configured to determine a plurality of user sets, wherein each user set comprises a same predetermined number of users in the plurality of users; and an objective function calculation unit configured to generate a plurality of function values by calculating the objective function based on vectors for each of the plurality of user sets, wherein the reviewer determiner is further configured to select one of the plurality of user sets so that the function value generated based on the selected user set is least among the plurality of function values. 12. The system of claim 8, wherein the reviewer selection unit comprises: a reviewer selection controller configured to generate a vector for each of the plurality of users, wherein the vector represents historical interests of the user; a reviewer determiner configured to determine a user set which initially comprises the plurality of users; and an objective function calculation unit configured to generate a matrix which initially comprises a plurality of columns, wherein each column of the matrix corresponds to a generated vector for one of the users in the user set, wherein the reviewer determiner and the objective function calculation unit are further configured to generate a plurality of candidate matrices each of which corresponding to a user in the user set and generated by removing the column corresponding to the user from the matrix, generate a plurality of function values by calculating the objective function based on each of the plurality of candidate matrices, select one of the plurality of candidate matrices so that the function value generated based on the selected candidate matrix is least among the plurality of function values, update the matrix with the selected candidate matrix, and update the user set by removing the user corresponding to the selected candidate matrix from the user set. 13. The system of claim 8, wherein the estimated interests of the plurality of users are generated based on a least squares model. 14. 
The system of claim 8, wherein the user interest estimation unit is further configured to: identify a user other than the plurality of users; and estimate interest of the user based on the obtained interests of the one or more users. 15. A machine-readable tangible and non-transitory medium having information recorded thereon for estimating interests of a plurality of users with respect to a new piece of information, wherein the information, when read by the machine, causes the machine to perform the following: obtaining historical interests of the plurality of users with respect to one or more existing pieces of information; selecting one or more users from the plurality of users, wherein historical interests of the one or more users minimize an objective function over the plurality of users; obtaining interests of the one or more users with respect to the new piece of information; and generating estimated interests of the plurality of users with respect to the new piece of information based on the obtained interests of the one or more users. 16. The medium of claim 15, wherein quantity of the selected one or more users is a predetermined number. 17. The medium of claim 15, wherein the objective function represents an expected mean square error between the estimated interests and real interests of the plurality of users with respect to the new piece of information. 18. The medium of claim 15, wherein selecting one or more users from the plurality of users comprises: generating a vector for each of the plurality of users, wherein the vector represents historical interests of the user; determining a plurality of user sets, wherein each user set comprises a same predetermined number of users in the plurality of users; generating a plurality of function values by calculating the objective function based on vectors for each of the plurality of user sets; and selecting one of the plurality of user sets so that the function value generated based on the selected user set is least among the plurality of function values. 19. The medium of claim 15, wherein selecting one or more users from the plurality of users comprises: generating a vector for each of the plurality of users, wherein the vector represents historical interests of the user; determining a user set which initially comprises the plurality of users; generating a matrix which initially comprises a plurality of columns, wherein each column of the matrix corresponds to a generated vector for one of the users in the user set; generating a plurality of candidate matrices each of which corresponding to a user in the user set and generated by removing the column corresponding to the user from the matrix; generating a plurality of function values by calculating the objective function based on each of the plurality of candidate matrices; selecting one of the plurality of candidate matrices so that the function value generated based on the selected candidate matrix is least among the plurality of function values; updating the matrix with the selected candidate matrix; and updating the user set by removing the user corresponding to the selected candidate matrix from the user set. 20. The medium of claim 15, wherein the estimated interests of the plurality of users are generated based on a least squares model. 21. 
The medium of claim 15, the information, when read by the machine, further causes the machine to perform the following: identifying a user other than the plurality of users; and estimating interest of the user based on the obtained interests of the one or more users.
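Claims 5, 12 and 19 above describe a greedy backward elimination over a matrix whose columns are users' historical-interest vectors. The Python sketch below follows that loop structure; the reconstruction-error objective used here is only a stand-in, since the claims do not give a closed-form expected-MSE objective.

```python
import numpy as np

def reconstruction_mse(M, keep_idx):
    """Illustrative objective (an assumption, not the patent's formula):
    reconstruct every user's historical-interest vector from the selected
    users' vectors by least squares and return the mean squared error."""
    S = M[:, keep_idx]                        # columns of the selected users
    coeffs, *_ = np.linalg.lstsq(S, M, rcond=None)
    return float(np.mean((M - S @ coeffs) ** 2))

def select_reviewers(M, n_select, objective=reconstruction_mse):
    """Greedy backward elimination over the user columns of M: repeatedly drop
    the column whose removal yields the smallest objective value, until
    n_select users remain."""
    keep = list(range(M.shape[1]))
    while len(keep) > n_select:
        candidates = [(objective(M, [u for u in keep if u != drop]), drop)
                      for drop in keep]
        _, worst = min(candidates)            # removal giving the least objective
        keep.remove(worst)
    return keep

# Toy data: rows are existing items, columns are users' historical interests.
rng = np.random.default_rng(1)
M = rng.random((6, 5))
reviewers = select_reviewers(M, n_select=2)
print("ask these users to rate the new item first:", reviewers)
```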
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Method, system, and programs for estimating interests of a plurality of users with respect to a new piece of information are disclosed. In one example, historical interests of the plurality of users are obtained with respect to one or more existing pieces of information. One or more users are selected from the plurality of users. Historical interests of the one or more users can minimize an objective function over the plurality of users. Interests of the one or more users are obtained with respect to the new piece of information. Estimated interests of the plurality of users are generated with respect to the new piece of information based on the obtained interests of the one or more users.
G06N504
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Method, system, and programs for estimating interests of a plurality of users with respect to a new piece of information are disclosed. In one example, historical interests of the plurality of users are obtained with respect to one or more existing pieces of information. One or more users are selected from the plurality of users. Historical interests of the one or more users can minimize an objective function over the plurality of users. Interests of the one or more users are obtained with respect to the new piece of information. Estimated interests of the plurality of users are generated with respect to the new piece of information based on the obtained interests of the one or more users.
A mechanism is provided in a data processing system for performing regression testing on a question answering system instance. The mechanism trains a machine learning model for a question answering system using a ground truth virtual checksum as part of a ground truth including domain-specific ground truth. The ground truth virtual checksum comprises a set of test questions, an answer to each test question, and a confidence level range for each answer to a corresponding test question. The mechanism runs regression test buckets across system nodes with domain-specific corpora and receives results from the system nodes. Each system node implements a question answering system instance of the question answering system by executing in accordance with the machine learning model and by accessing domain-specific corpora. Each test bucket includes a set of questions matching a subset of questions in the ground truth virtual checksum. The mechanism identifies regressions, inconsistencies, or destabilizations in code behavior in the system nodes based on a comparison of the results to the ground truth virtual checksum and generates a report presenting the identified regressions, inconsistencies, or destabilizations and the affected system nodes.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method, in a data processing system, for performing regression testing on a question answering system instance, the method comprising: training a machine learning model for a question answering system using a ground truth virtual checksum as part of a ground truth including domain-specific ground truth, wherein the ground truth virtual checksum comprises a set of test questions, an answer to each test question, and a confidence level range for each answer to a corresponding test question; running regression test buckets across system nodes with domain-specific corpora and receiving results from the system nodes, wherein each system node implements a question answering system instance of the question answering system by executing in accordance with the machine learning model and by accessing domain-specific corpora and wherein each test bucket includes a set of questions matching a subset of questions in the ground truth virtual checksum; and identifying regressions, inconsistencies, or destabilizations in code behavior in the system nodes based on results of comparing the results to the ground truth virtual checksum; and generating a report presenting the identified regressions, inconsistencies, or destabilizations and the affected system nodes. 2. The method of claim 1, wherein running test buckets across system nodes comprises calling a test script in a given system node to retrieve a designated question from a designated test bucket, submit the designated question to the question answering system instance at the given system node, and receive a result answer and a result confidence value from the question answering system instance. 3. The method of claim 2, wherein the ground truth virtual checksum further comprises a performance metric value range for each test question. 4. The method of claim 3, wherein the performance metric value range comprises a response time range. 5. The method of claim 3, wherein the question answering system instance returns a result performance metric value and wherein identifying regressions, inconsistencies, or destabilizations comprises comparing the result answer, the result confidence value, and the result performance metric value to the ground truth virtual checksum. 6. The method of claim 2, wherein the ground truth virtual checksum is a multi-dimensional ground truth virtual checksum comprising a question identifier and a plurality of three-value n-tuples, each comprising an answer identifier, a confidence level range, and a response time range. 7. The method of claim 6, wherein the question answering system instance returns a plurality of candidate answers, each having an associated answer identifier, confidence value, and performance metric value, and wherein identifying regressions, inconsistencies, or destabilizations comprises comparing the answer identifier, confidence value, and performance metric value of the plurality of candidate answers to the ground truth virtual checksum. 8. The method of claim 1, wherein each test question is selected to exercise predefined components of a question answering pipeline of the question answering system instance. 9. 
A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing device, causes the computing device to: train a machine learning model for a question answering system using a ground truth virtual checksum as part of a ground truth including domain-specific ground truth, wherein the ground truth virtual checksum comprises a set of test questions, an answer to each test question, and a confidence level range for each answer to a corresponding test question; run regression test buckets across system nodes with domain-specific corpora and receiving results from the system nodes, wherein each system node implements a question answering system instance of the question answering system by executing in accordance with the machine learning model and by accessing domain-specific corpora and wherein each test bucket includes a set of questions matching a subset of questions in the ground truth virtual checksum; and identify regressions, inconsistencies, or destabilizations in code behavior in the system nodes based on results of comparing the results to the ground truth virtual checksum; and generate a report presenting the identified regressions, inconsistencies, or destabilizations and the affected system nodes. 10. The computer program product of claim 9, wherein running test buckets across system nodes comprises calling a test script in a given system node to retrieve a designated question from a designated test bucket, submit the designated question to the question answering system instance at the given system node, and receive a result answer and a result confidence value from the question answering system instance. 11. The computer program product of claim 10, wherein the ground truth virtual checksum further comprises a performance metric value range for each test question. 12. The computer program product of claim 11, wherein the performance metric value range comprises a response time range. 13. The computer program product of claim 11, wherein the question answering system instance returns a result performance metric value and wherein identifying regressions, inconsistencies, or destabilizations comprises comparing the result answer, the result confidence value, and the result performance metric value to the ground truth virtual checksum. 14. The computer program product of claim 10, wherein the ground truth virtual checksum is a multi-dimensional ground truth virtual checksum comprising a question identifier and a plurality of three-value n-tuples, each comprising an answer identifier, a confidence level range, and a response time range. 15. The computer program product of claim 14, wherein the question answering system instance returns a plurality of candidate answers, each having an associated answer identifier, confidence value, and performance metric value, and wherein identifying regressions, inconsistencies, or destabilizations comprises comparing the answer identifier, confidence value, and performance metric value of the plurality of candidate answers to the ground truth virtual checksum. 16. The computer program product of claim 9, wherein each test question is selected to exercise predefined components of a question answering pipeline of the question answering system instance. 17. 
An apparatus comprising: a processor; and a memory coupled to the processor, wherein the memory comprises instructions which, when executed by the processor, cause the processor to: train a machine learning model for a question answering system using a ground truth virtual checksum as part of a ground truth including domain-specific ground truth, wherein the ground truth virtual checksum comprises a set of test questions, an answer to each test question, and a confidence level range for each answer to a corresponding test question; run regression test buckets across system nodes with domain-specific corpora and receiving results from the system nodes, wherein each system node implements a question answering system instance of the question answering system by executing in accordance with the machine learning model and by accessing domain-specific corpora and wherein each test bucket includes a set of questions matching a subset of questions in the ground truth virtual checksum; and identify regressions, inconsistencies, or destabilizations in code behavior in the system nodes based on results of comparing the results to the ground truth virtual checksum; and generate a report presenting the identified regressions, inconsistencies, or destabilizations and the affected system nodes. 18. The apparatus of claim 17, wherein running test buckets across system nodes comprises calling a test script in a given system node to retrieve a designated question from a designated test bucket, submit the designated question to the question answering system instance at the given system node, and receive a result answer and a result confidence value from the question answering system instance. 19. The apparatus of claim 18, wherein the ground truth virtual checksum further comprises a performance metric value range for each test question. 20. The apparatus of claim 18, wherein the ground truth virtual checksum is a multi-dimensional ground truth virtual checksum comprising a question identifier and a plurality of three-value n-tuples, each comprising an answer identifier, a confidence level range, and a response time range.
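A small Python sketch of how a test bucket might be run against one question answering instance and compared against a ground truth virtual checksum holding (answer id, confidence range, response-time range) tuples per question, as in claims 6, 14 and 20 above. The checksum contents, the `ask` callable, and the stubbed QA instance are illustrative assumptions, not structures defined by the patent.

```python
import time

# Hypothetical ground truth virtual checksum entry:
# question id -> list of (answer id, (min_conf, max_conf), (min_secs, max_secs)).
CHECKSUM = {
    "q1": [("a42", (0.80, 0.95), (0.0, 2.0))],
    "q2": [("a07", (0.60, 0.85), (0.0, 2.0))],
}

def check_node(node_id, ask, test_bucket, checksum):
    """Run one test bucket against one QA instance (`ask` is assumed to return
    (answer_id, confidence)) and report entries whose answer, confidence, or
    response time falls outside the checksum's expected ranges."""
    findings = []
    for qid in test_bucket:
        start = time.monotonic()
        answer_id, confidence = ask(qid)
        elapsed = time.monotonic() - start
        expected = checksum.get(qid, [])
        ok = any(answer_id == a
                 and lo_c <= confidence <= hi_c
                 and lo_t <= elapsed <= hi_t
                 for a, (lo_c, hi_c), (lo_t, hi_t) in expected)
        if not ok:
            findings.append({"node": node_id, "question": qid,
                             "answer": answer_id, "confidence": confidence,
                             "seconds": round(elapsed, 3)})
    return findings      # non-empty list feeds the regression report

# Usage with a stubbed QA instance:
def stub_ask(qid):
    return ("a42", 0.90) if qid == "q1" else ("a99", 0.40)

report = check_node("node-1", stub_ask, test_bucket=["q1", "q2"], checksum=CHECKSUM)
print(report)   # q2 is flagged: wrong answer id and out-of-range confidence
```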
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A mechanism is provided in a data processing system for performing regression testing on a question answering system instance. The mechanism trains a machine learning model for a question answering system using a ground truth virtual checksum as part of a ground truth including domain-specific ground truth. The ground truth virtual checksum comprises a set of test questions, an answer to each test question, and a confidence level range for each answer to a corresponding test question. The mechanism runs regression test buckets across system nodes with domain-specific corpora and receives results from the system nodes. Each system node implements a question answering system instance of the question answering system by executing in accordance with the machine learning model and by accessing domain-specific corpora. Each test bucket includes a set of questions matching a subset of questions in the ground truth virtual checksum. The mechanism identifies regressions, inconsistencies, or destabilizations in code behavior in the system nodes based on a comparison of the results to the ground truth virtual checksum and generates a report presenting the identified regressions, inconsistencies, or destabilizations and the affected system nodes.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A mechanism is provided in a data processing system for performing regression testing on a question answering system instance. The mechanism trains a machine learning model for a question answering system using a ground truth virtual checksum as part of a ground truth including domain-specific ground truth. The ground truth virtual checksum comprises a set of test questions, an answer to each test question, and a confidence level range for each answer to a corresponding test question. The mechanism runs regression test buckets across system nodes with domain-specific corpora and receives results from the system nodes. Each system node implements a question answering system instance of the question answering system by executing in accordance with the machine learning model and by accessing domain-specific corpora. Each test bucket includes a set of questions matching a subset of questions in the ground truth virtual checksum. The mechanism identifies regressions, inconsistencies, or destabilizations in code behavior in the system nodes based on a comparison of the results to the ground truth virtual checksum and generates a report presenting the identified regressions, inconsistencies, or destabilizations and the affected system nodes.
In a host device, a method for performing an anomaly analysis of a computer environment includes applying a learned behavior function to a data training set and to a set of data elements received from at least one computer environment resource to define at least one learned behavior boundary relative to at least one cluster of data elements of the data training set; applying a sensitivity function to the at least one cluster to define a sensitivity boundary relative to at least one learned behavior boundary, the sensitivity boundary related to a variance associated with the at least one cluster and to a mean value of the at least one cluster; and identifying a data element of the set of data elements as an anomalous data element when the data element of the set of data falls outside of the sensitivity boundary.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. In a host device, a method for performing an anomaly analysis of a computer environment, comprising: applying, by host device, a learned behavior function to a data training set and to a set of data elements received from at least one computer environment resource to define at least one learned behavior boundary relative to at least one cluster of data elements of the data training set, the at least one learned behavior boundary related to a variance associated with the at least one cluster; applying, by host device, a sensitivity function to the at least one cluster to define a sensitivity boundary relative to at least one learned behavior boundary, the sensitivity boundary related to the variance associated with the at least one cluster and to a mean value of the at least one cluster; and identifying, by host device, a data element of the set of data elements as an anomalous data element associated with an attribute of the at least one computer environment resource when the data element of the set of data falls outside of the sensitivity boundary. 2. The method of claim 1, wherein the learned behavior function defines the learned behavior boundary as being three standard deviations from a centroid of the at least one cluster. 3. The method of claim 1, wherein the sensitivity boundary further relates to a ratio of the mean value of the at least one cluster and the variance of the at least one cluster. 4. The method of claim 1, wherein applying the sensitivity function to the at least one cluster further comprises adjusting, by the host device, a value of the sensitivity boundary for a relatively small mean value of the at least one cluster. 5. The method of claim 1, further comprising: receiving, by the host device, a user-selected global sensitivity parameter; and adjusting, by the host device, a sensitivity adjustment value of the sensitivity boundary based upon the global sensitivity parameter. 6. The method of claim 5, wherein receiving the global sensitivity parameter based upon the user selected input value comprises: displaying, by the host device and via a graphical user interface, a sensitivity selection screen; and receiving, by host device, the global sensitivity parameter based upon a user-selected input value provided from the sensitivity selection screen. 7. The method of claim 1, further comprising: receiving, by the host device, the set of data elements from the at least one computer environment resource of the computer infrastructure, each data element of the set of data elements relating to an attribute of the at least one computer environment resource; and applying, by host device, a clustering function to the set of data elements to define the data training set. 8. The method of claim 1, wherein the sensitivity function satisfies the following relation: τ_i* = τ ± δ(γ·μ·(μ/τ) + β(1 − μ/α)) wherein τ* relates to a sensitivity boundary value, τ relates to the variance of the at least one cluster 82, δ relates to a user-selected global sensitivity parameter, γ relates to an internal sensitivity parameter, μ relates to the mean value of the at least one cluster, α relates to a slope parameter configured to define a shape of the sensitivity boundary for a relatively small mean value, and β relates to an intercept parameter configured to define a value of the sensitivity boundary for a zero mean value. 9. 
A host device, comprising: a controller comprising a memory and a processor, the controller configured to: apply a learned behavior function to a data training set and to a set of data elements received from at least one computer environment resource to define at least one learned behavior boundary relative to at least one cluster of data elements of the data training set, the at least one learned behavior boundary related to a variance associated with the at least one cluster; apply a sensitivity function to the at least one cluster to define a sensitivity boundary relative to at least one learned behavior boundary, the sensitivity boundary related to the variance associated with the at least one cluster and to a mean value of the at least one cluster; and identify a data element of the set of data elements as an anomalous data element associated with an attribute of the at least one computer environment resource when the data element of the set of data falls outside of the sensitivity boundary. 10. The host device of claim 9, wherein the learned behavior function defines the learned behavior boundary as being three standard deviations from a centroid of the at least one cluster. 11. The host device of claim 9, wherein the sensitivity boundary further relates to a ratio of the mean value of the at least one cluster and the variance of the at least one cluster. 12. The host device of claim 9, wherein when applying the sensitivity function to the at least one cluster, the host device is further configured to adjust a value of the sensitivity boundary for a relatively small mean value of the at least one cluster. 13. The host device of claim 9, wherein the host device is further configured to: receive a user-selected global sensitivity parameter; and adjust a sensitivity adjustment value of the sensitivity boundary based upon the global sensitivity parameter. 14. The host device of claim 13, wherein when receiving the global sensitivity parameter based upon the user selected input value, the host device is configured to: display, via a graphical user interface, a sensitivity selection screen; and receive the global sensitivity parameter based upon a user-selected input value provided from the sensitivity selection screen. 15. The host device of claim 9, wherein the host device is further configured to: receive the set of data elements from the at least one computer environment resource of the computer infrastructure, each data element of the set of data elements relating to an attribute of the at least one computer environment resource; and apply a clustering function to the set of data elements to define the data training set. 16. The host device of claim 9, wherein the sensitivity function satisfies the following relation: τi* = τ ± δ(γ·μ·(μ/τ) + β(1 - μ/α)), wherein τ* relates to a sensitivity boundary value, τ relates to the variance of the at least one cluster 82, δ relates to a user-selected global sensitivity parameter, γ relates to an internal sensitivity parameter, μ relates to the mean value of the at least one cluster, α relates to a slope parameter configured to define a shape of the sensitivity boundary for a relatively small mean value, and β relates to an intercept parameter configured to define a value of the sensitivity boundary for a zero mean value. 17. 
A computer program product encoded with instructions that, when executed by a controller of a host device, causes the controller to: apply a learned behavior function to a data training set and to a set of data elements received from at least one computer environment resource to define at least one learned behavior boundary relative to at least one cluster of data elements of the data training set, the at least one learned behavior boundary related to a variance associated with the at least one cluster; apply a sensitivity function to the at least one cluster to define a sensitivity boundary relative to at least one learned behavior boundary, the sensitivity boundary related to the variance associated with the at least one cluster and to a mean value of the at least one cluster; and identify a data element of the set of data elements as an anomalous data element associated with an attribute of the at least one computer environment resource when the data element of the set of data falls outside of the sensitivity boundary.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: In a host device, a method for performing an anomaly analysis of a computer environment includes applying a learned behavior function to a data training set and to a set of data elements received from at least one computer environment resource to define at least one learned behavior boundary relative to at least one cluster of data elements of the data training set; applying a sensitivity function to the at least one cluster to define a sensitivity boundary relative to at least one learned behavior boundary, the sensitivity boundary related to a variance associated with the at least one cluster and to a mean value of the at least one cluster; and identifying a data element of the set of data elements as an anomalous data element when the data element of the set of data falls outside of the sensitivity boundary.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: In a host device, a method for performing an anomaly analysis of a computer environment includes applying a learned behavior function to a data training set and to a set of data elements received from at least one computer environment resource to define at least one learned behavior boundary relative to at least one cluster of data elements of the data training set; applying a sensitivity function to the at least one cluster to define a sensitivity boundary relative to at least one learned behavior boundary, the sensitivity boundary related to a variance associated with the at least one cluster and to a mean value of the at least one cluster; and identifying a data element of the set of data elements as an anomalous data element when the data element of the set of data falls outside of the sensitivity boundary.
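Claims 8 and 16 above give the sensitivity relation in closed form, so a small numerical sketch can make the anomaly test concrete. The Python below is only one possible reading of that relation, not the patented implementation: the default values for δ, γ, α and β, the band centred on the cluster mean, and the synthetic cluster are all assumptions introduced here for illustration.

import numpy as np

def sensitivity_threshold(mu, tau, delta=1.0, gamma=0.5, alpha=10.0, beta=1.0):
    # One reading of the claimed relation tau* = tau + delta*(gamma*mu*(mu/tau) + beta*(1 - mu/alpha)),
    # where mu and tau are the mean and variance of a learned cluster, delta is the
    # user-selected global sensitivity and gamma/alpha/beta are internal shape parameters.
    return tau + delta * (gamma * mu * (mu / tau) + beta * (1.0 - mu / alpha))

def is_anomalous(value, cluster):
    mu, tau = cluster.mean(), cluster.var()
    tau_star = sensitivity_threshold(mu, tau)
    # Treat the element as anomalous when it falls outside a sensitivity band
    # centred on the cluster mean (a simplifying assumption about "falls outside").
    return abs(value - mu) > tau_star

rng = np.random.default_rng(0)
cluster = rng.normal(loc=5.0, scale=1.0, size=500)               # stand-in learned-behaviour cluster
print(is_anomalous(5.2, cluster), is_anomalous(25.0, cluster))   # -> False True

With these toy numbers a point near the cluster mean is kept while the far outlier is flagged, which mirrors the claimed widening or narrowing of the boundary with the cluster's mean-to-variance ratio.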
A machine learning apparatus determines an order in which numerical values in an input dataset are to be entered to a neural network for data classification, based on a reference pattern that includes an array of reference values to provide a criterion for ordering the numerical values. The machine learning apparatus then calculates an output value of the neural network whose input-layer neural units respectively receive the numerical values arranged in the determined order. The machine learning apparatus further calculates an input error at the input-layer neural units, based on a difference between the calculated output value and a correct classification result indicated by a training label. The machine learning apparatus updates the reference values in the reference pattern, based on the input error at the input-layer neural units.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A non-transitory computer-readable storage medium storing a learning program that causes a computer to perform a procedure comprising: obtaining an input dataset including a set of numerical values and a training label indicating a correct classification result corresponding to the input dataset; determining an input order in which the numerical values in the input dataset are to be entered to a neural network for data classification, based on a reference pattern that includes an array of reference values to provide a criterion for ordering the numerical values; calculating an output value of the neural network whose input-layer neural units respectively receive the numerical values arranged in the input order; calculating an input error at the input-layer neural units of the neural network, based on a difference between the calculated output value and the correct classification result indicated by the training label; and updating the reference values in the reference pattern, based on the input error at the input-layer neural units. 2. The non-transitory computer-readable storage medium according to claim 1, wherein the determining of an input order includes: forming a first vector whose elements are the numerical values arranged in a specific order; forming a second vector whose elements are the reference values in the reference pattern; and seeking an input order of the numerical values that maximizes an inner product of the first vector and the second vector, by varying the specific order. 3. The non-transitory computer-readable storage medium according to claim 1, wherein the updating of the reference values includes: selecting one of the reference values in the reference pattern; determining a tentative input order of the numerical values, based on a temporary reference pattern generated by temporarily varying the selected reference value by a specified amount; calculating difference values between the numerical values arranged in the input order determined by using the reference pattern and corresponding numerical values arranged in the tentative input order determined by using the temporary reference pattern; determining whether to increase or decrease the selected reference value in the reference pattern, based on the input error and the difference values; and modifying the selected reference value in the reference pattern according to a result of the determining of whether to increase or decrease. 4. The non-transitory computer-readable storage medium according to claim 3, wherein the determining of whether to increase or decrease the selected reference value includes: forming a third vector representing the input error at the input-layer neural units; forming a fourth vector from the difference values arranged according to the tentative input order of the numerical values; and determining whether to increase or decrease the selected reference value, based on an inner product of the third vector and the fourth vector. 5. 
A machine learning method comprising: obtaining an input dataset including a set of numerical values and a training label indicating a correct classification result corresponding to the input dataset; determining, by a processor, an input order in which the numerical values in the input dataset are to be entered to a neural network for data classification, based on a reference pattern that includes an array of reference values to provide a criterion for ordering the numerical values; calculating, by the processor, an output value of the neural network whose input-layer neural units respectively receive the numerical values arranged in the input order; calculating, by the processor, an input error at the input-layer neural units of the neural network, based on a difference between the calculated output value and the correct classification result indicated by the training label; and updating, by the processor, the reference values in the reference pattern, based on the input error at the input-layer neural units. 6. A machine learning apparatus comprising: a memory that stores therein a reference pattern including an array of reference values to provide a criterion for ordering numerical values to be entered to a neural network for data classification; and a processor configured to perform a procedure including: obtaining an input dataset including a set of numerical values and a training label indicating a correct classification result corresponding to the input dataset; determining an input order in which the numerical values in the input dataset are to be entered to the neural network, based on the reference pattern in the memory; calculating an output value of the neural network whose input-layer neural units respectively receive the numerical values arranged in the input order; calculating an input error at the input-layer neural units of the neural network, based on a difference between the calculated output value and the correct classification result indicated by the training label; and updating the reference values in the reference pattern, based on the input error at the input-layer neural units.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A machine learning apparatus determines an order in which numerical values in an input dataset are to be entered to a neural network for data classification, based on a reference pattern that includes an array of reference values to provide a criterion for ordering the numerical values. The machine learning apparatus then calculates an output value of the neural network whose input-layer neural units respectively receive the numerical values arranged in the determined order. The machine learning apparatus further calculates an input error at the input-layer neural units, based on a difference between the calculated output value and a correct classification result indicated by a training label. The machine learning apparatus updates the reference values in the reference pattern, based on the input error at the input-layer neural units.
G06N308
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A machine learning apparatus determines an order in which numerical values in an input dataset are to be entered to a neural network for data classification, based on a reference pattern that includes an array of reference values to provide a criterion for ordering the numerical values. The machine learning apparatus then calculates an output value of the neural network whose input-layer neural units respectively receive the numerical values arranged in the determined order. The machine learning apparatus further calculates an input error at the input-layer neural units, based on a difference between the calculated output value and a correct classification result indicated by a training label. The machine learning apparatus updates the reference values in the reference pattern, based on the input error at the input-layer neural units.
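Claim 2 of the machine-learning entry above reduces the input-ordering step to maximising an inner product between the reordered input vector and the reference pattern. A minimal sketch of that step follows; the rank-matching shortcut (rearrangement inequality) and the toy vectors are assumptions on my part, and the reference-pattern update of claims 3 and 4 is not reproduced.

import numpy as np

def order_inputs(values, reference):
    # Claim 2: seek the ordering of the input values that maximises the inner
    # product with the reference pattern. By the rearrangement inequality this
    # is obtained by matching ranks, so no explicit permutation search is needed.
    values = np.asarray(values, dtype=float)
    reference = np.asarray(reference, dtype=float)
    ordered = np.empty_like(values)
    ordered[np.argsort(reference)] = np.sort(values)   # largest value where the reference is largest
    return ordered

reference = np.array([0.1, 0.9, 0.4])    # learned reference pattern (toy values)
x = np.array([7.0, 2.0, 5.0])            # numerical values from one input dataset
print(order_inputs(x, reference))        # -> [2. 7. 5.], fed to the input-layer units in this order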
Provided are a method and device for recognizing spam short messages. In the method, a first feature word set is obtained in a spam short message sample set, and a first conditional probability of each feature word in the first feature word set is obtained; a second feature word set is obtained in a non-spam short message sample set and a second conditional probability of each feature word in the second feature word set is obtained; and a spam short message set is recognized from a short message set according to the number of words contained in each short message in the short message set to be processed, the number of repetition times of each short message in the short message set, the first feature word set, the second feature word set, the first conditional probability and the second conditional probability.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for recognizing spam short messages, comprising: obtaining, from a spam short message sample set, a first feature word set and a first conditional probability of each feature word in the first feature word set; obtaining, from a non-spam short message sample set, a second feature word set and a second conditional probability of each feature word in the second feature word set; and recognizing a spam short message set from a short message set to be processed, according to the number of words contained in each short message in the short message set, the number of repetition times of each short message in the short message set, the first feature word set, the second feature word set, the first conditional probability and the second conditional probability. 2. The method as claimed in claim 1, wherein recognizing the spam short message set from the short message set to be processed comprises: calculating a typeweight of each short message according to the following formula: typeweight = [P(C0)·(∏t=1..n P(Wt|C0))^N] / [P(C1)·(∏t=1..n P(Wt|C1))^N], where P(C0) is a total amount of short message samples in the spam short message sample set, P(C1) is a total amount of short message samples in the non-spam short message sample set, P(Wt|C0) is the first conditional probability, P(Wt|C1) is the second conditional probability, n is the number of words contained in each short message, N is the number of repetition times of each short message in the short message set, and Wt belongs to the first feature word set or the second feature word set; and recognizing the spam short message set according to a comparison result of the typeweight and a preset threshold, wherein the typeweight of each spam short message in the spam short message set is larger than the preset threshold and the preset threshold is a ratio of P(C0) to P(C1). 3. The method as claimed in claim 1, wherein obtaining the first feature word set and the first conditional probability comprises: preprocessing the spam short message sample set; performing word segmentation on each short message sample in the spam short message sample set and obtaining content of each word contained in each short message sample and the number of appearance times of each word; statistically calculating the number of appearance times of each word in the spam short message sample set according to the number of appearance times of each word in each short message sample; calculating the first conditional probability according to a ratio of the number obtained through the statistical calculation to a total amount of short message samples in the spam short message sample set; and calculating a weight of each word in the spam short message sample set by using the number obtained through the statistical calculation and the first conditional probability, sorting all words by weights in a decreasing order, and selecting top N words to form the first feature word set, wherein N is a positive integer. 4. 
The method as claimed in claim 1, wherein obtaining the second feature word set in the non-spam short message sample set and the second conditional probability comprises: preprocessing the non-spam short message sample set; performing word segmentation on each short message sample in the non-spam short message sample set and obtaining content of each word contained in each short message sample, and the number of appearance times of each word; statistically calculating the number of appearance times of each word in the non-spam short message sample set according to the number of appearance times of each word in each short message sample; calculating the second conditional probability according to a ratio of the number obtained through the statistical calculation to a total amount of short message samples in the non-spam short message sample set; and calculating a weight of each word in the non-spam short message sample set by using the number obtained through the statistical calculation and the second conditional probability, sorting all words by weights in a decreasing order, and selecting the top N words to form the second feature word set, wherein N is a positive integer. 5. The method as claimed in claim 1, wherein after recognizing the spam short message set from the short message set to be processed, the method further comprises: obtaining a calling number sending one or more spam short messages in the spam short message set and a called number receiving one or more spam short messages in the spam short message set; and monitoring the obtained calling number and called number. 6. The method as claimed in claim 1, wherein the method is applied to a Hadoop platform, and each short message in the short message set is processed in parallel on the Hadoop platform. 7. A device for recognizing spam short messages, comprising: a first obtaining module, configured to obtain a first feature word set in a spam short message sample set, and a first conditional probability of each feature word in the first feature word set; a second obtaining module, configured to obtain a second feature word set in a non-spam short message sample set and a second conditional probability of each feature word in the second feature word set; and a recognizing module, configured to recognize a spam short message set from a short message set to be processed, according to the number of words contained in each short message in the short message set, the number of repetition times of each short message in the short message set, the first feature word set, the second feature word set, the first conditional probability and the second conditional probability. 8. 
The device as claimed in claim 7, wherein the recognizing module comprises: a first calculating unit, configured to calculate a typeweight of each short message according to the following formula: typeweight = [P(C0)·(∏t=1..n P(Wt|C0))^N] / [P(C1)·(∏t=1..n P(Wt|C1))^N], where P(C0) is a total amount of short message samples in the spam short message sample set, P(C1) is a total amount of short message samples in the non-spam short message sample set, P(Wt|C0) is the first conditional probability, P(Wt|C1) is the second conditional probability, n is the number of words contained in each short message, N is the number of repetition times of each short message in the short message set, and Wt belongs to the first feature word set or the second feature word set; and a recognizing unit, configured to recognize the spam short message set according to a comparison result of the typeweight and a preset threshold, wherein the typeweight of each spam short message in the spam short message set is larger than the preset threshold and the preset threshold is a ratio of P(C0) to P(C1). 9. The device as claimed in claim 7, wherein the first obtaining module comprises: a first preprocessing unit, configured to preprocess the spam short message sample set; a first word segmentation unit, configured to perform word segmentation on each short message sample in the spam short message sample set and obtain content of each word contained in each short message sample and the number of appearance times of each word; a first statistical unit, configured to statistically calculate the number of appearance times of each word in the spam short message sample set according to the number of appearance times of each word in each short message sample; a second calculating unit, configured to calculate the first conditional probability according to a ratio of the number obtained through the statistical calculation to a total amount of short message samples in the spam short message sample set; and a first selecting unit, configured to calculate a weight of each word in the spam short message sample set by using the number obtained through the statistical calculation and the first conditional probability, sort all words by weights in a decreasing order, and select top N words to form the first feature word set, wherein N is a positive integer. 10. 
The device as claimed in claim 7, wherein the second obtaining module comprises: a second preprocessing unit, configured to preprocess the non-spam short message sample set; a second word segmentation unit, configured to perform word segmentation on each short message sample in the non-spam short message sample set and obtain content of each word contained in each short message sample, and the number of appearance times of each word; a second statistical unit, configured to statistically calculate the number of appearance times of each word in the non-spam short message sample set according to the number of appearance times of each word in each short message sample; a third calculating unit, configured to calculate the second conditional probability according to a ratio of the number obtained through the statistical calculation to a total amount of short message samples in the non-spam short message sample set; and a second selecting unit, configured to calculate a weight of each word in the non-spam short message sample set by using the number obtained through the statistical calculation and the second conditional probability, sort all words by weights in a decreasing order, and select top N words to form the second feature word set, wherein N is a positive integer. 11. The device as claimed in claim 7, further comprising: a third obtaining module, configured to obtain a calling number sending one or more spam short messages in the spam short message set and a called number receiving one or more spam short messages in the spam short message set; and a monitoring module, configured to monitor the obtained calling number and called number. 12. The device as claimed in claim 7, wherein the device is applied to a Hadoop platform, and each short message in the short message set is processed in parallel on the Hadoop platform. 13. The method as claimed in claim 2, wherein the method is applied to a Hadoop platform, and each short message in the short message set is processed in parallel on the Hadoop platform. 14. The method as claimed in claim 3, wherein the method is applied to a Hadoop platform, and each short message in the short message set is processed in parallel on the Hadoop platform. 15. The method as claimed in claim 4, wherein the method is applied to a Hadoop platform, and each short message in the short message set is processed in parallel on the Hadoop platform. 16. The method as claimed in claim 5, wherein the method is applied to a Hadoop platform, and each short message in the short message set is processed in parallel on the Hadoop platform. 17. The device as claimed in claim 8, wherein the device is applied to a Hadoop platform, and each short message in the short message set is processed in parallel on the Hadoop platform. 18. The device as claimed in claim 9, wherein the device is applied to a Hadoop platform, and each short message in the short message set is processed in parallel on the Hadoop platform. 19. The device as claimed in claim 10, wherein the device is applied to a Hadoop platform, and each short message in the short message set is processed in parallel on the Hadoop platform. 20. The device as claimed in claim 11, wherein the device is applied to a Hadoop platform, and each short message in the short message set is processed in parallel on the Hadoop platform.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Provided are a method and device for recognizing spam short messages. In the method, a first feature word set is obtained in a spam short message sample set, and a first conditional probability of each feature word in the first feature word set is obtained; a second feature word set is obtained in a non-spam short message sample set and a second conditional probability of each feature word in the second feature word set is obtained; and a spam short message set is recognized from a short message set according to the number of words contained in each short message in the short message set to be processed, the number of repetition times of each short message in the short message set, the first feature word set, the second feature word set, the first conditional probability and the second conditional probability.
G06N7005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Provided are a method and device for recognizing spam short messages. In the method, a first feature word set is obtained in a spam short message sample set, and a first conditional probability of each feature word in the first feature word set is obtained; a second feature word set is obtained in a non-spam short message sample set and a second conditional probability of each feature word in the second feature word set is obtained; and a spam short message set is recognized from a short message set according to the number of words contained in each short message in the short message set to be processed, the number of repetition times of each short message in the short message set, the first feature word set, the second feature word set, the first conditional probability and the second conditional probability.
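The typeweight ratio in claim 2 of the spam-recognition entry is essentially a repetition-weighted naive Bayes score, and a short sketch shows how it would be evaluated for one message. The toy word probabilities, sample counts and the small fallback probability for unseen words are illustrative assumptions; only the ratio itself and the P(C0)/P(C1) threshold come from the claim.

from math import prod

def typeweight(words, repeat_count, p_spam_word, p_ham_word, n_spam, n_ham):
    # Claim 2: typeweight = P(C0)*(prod_t P(Wt|C0))**N / (P(C1)*(prod_t P(Wt|C1))**N),
    # with P(C0)/P(C1) taken as the spam / non-spam sample counts.
    eps = 1e-6   # fallback for words never seen in a sample set (an assumption, not in the claim)
    num = n_spam * prod(p_spam_word.get(w, eps) for w in words) ** repeat_count
    den = n_ham * prod(p_ham_word.get(w, eps) for w in words) ** repeat_count
    return num / den

p_spam_word = {"prize": 0.4, "win": 0.3, "meeting": 0.01}   # toy first conditional probabilities
p_ham_word = {"prize": 0.02, "win": 0.05, "meeting": 0.3}   # toy second conditional probabilities
n_spam, n_ham = 1000, 9000                                  # sample-set sizes

score = typeweight(["win", "prize"], repeat_count=3,
                   p_spam_word=p_spam_word, p_ham_word=p_ham_word,
                   n_spam=n_spam, n_ham=n_ham)
print(score > n_spam / n_ham)   # claim 2 threshold P(C0)/P(C1) -> True, message treated as spam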
A neuromorphic network includes a first node configured to transmit a first optical signal and a second node configured to transmit a second optical signal. A waveguide optically connects the first node to the second node. An integrated optical synapse is located on the waveguide between the first node and the second node, the optical synapse configured to change an optical property based on the first optical signal and the second optical signal such that if a correlation between the first optical signal and the second optical signal is strong, the optical connection between the first node and the second node is increased and if the correlation between the first optical signal and the second optical signal is weak, the optical connection between the first node and the second node is decreased.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A neuromorphic network, comprising: a first node configured to transmit a first optical signal; a second node configured to transmit a second optical signal; a waveguide optically connecting the first node to the second node; and an integrated optical synapse located on the waveguide between the first node and the second node, the optical synapse comprising an optical property that is configured to change a strength of an optical connection between the first node and the second node based on a correlation between the first optical signal and the second optical signal, wherein based on a determination that the correlation is strong, the strength of the optical connection between the first node and the second node is increased and based on a determination that the correlation is weak, the strength of the optical connection between the first node and the second node is decreased. 2. The neuromorphic network of claim 1, further comprising a first detector configured to receive a portion of the first optical signal and a second detector configured to receive a portion of the second optical signal, wherein the first detector and the second detector are configured to transmit signals to the optical synapse such that the optical property of the optical synapse changes based on the signals from the first detector and the second detector. 3. The neuromorphic network of claim 1, further comprising: a first optical cavity configured to receive a portion of the first optical signal and a portion of the second optical signal, wherein the first optical cavity is configured to change the optical property of the optical synapse based on the presence of a portion of the first optical signal in the first optical cavity; and a second optical cavity configured to receive a portion of the first optical signal and a portion of the second optical signal when the first optical cavity contains a portion of the first optical signal, wherein the second optical cavity is configured to change the optical property of the optical synapse based on the presence of a portion of the second optical signal in the second optical cavity. 4. The neuromorphic network of claim 1, further comprising an electrical circuit in electrical communication with the optical synapse, the electrical circuit configured to provide an electrical signal to the optical synapse based on the first optical signal and the second optical signal. 5. The neuromorphic network of claim 4, wherein the electrical circuit comprises a capacitor, a constant bias, and a voltage divider. 6. The neuromorphic network of claim 1, wherein the optical synapse is formed from at least one of a photorefractive material, an optical nonlinear material, a phase change material, or a magneto-optical material. 7. The neuromorphic network of claim 1, further comprising a third node, the third node configured in optical communication with the second node along a second waveguide. 8. 
The neuromorphic network of claim 7, further comprising a second optical synapse configured along the second waveguide, the second optical synapse configured to change an optical property based on a signal from the third node and the second optical signal such that if a second correlation between the signal from the third node and the second optical signal is strong, the strength of the optical connection between the third node and the second node is increased and if the correlation between the signal from the third node and the second optical signal is weak, the strength of the optical connection between the third node and the second node is decreased. 9. The neuromorphic network of claim 1, wherein the correlation between the first optical signal and the second optical signal is based on at least one of the time order of when each signal is fired and the duration between one of the optical signals firing and the other of the optical signals firing. 10. The neuromorphic network of claim 1, wherein the optical synapse is configured to change optical properties when a signal that incorporates the first optical signal and the second optical signal exceeds a predetermined threshold. 11. The neuromorphic network of claim 1, wherein the optical synapse changes properties based on one of an electrical stimulus or an optical stimulus. 12. An optical synapse for a neuromorphic network, the optical synapse comprising: a substrate; an optical weighting layer disposed on the substrate; and two electrodes configured in electrical communication with the optical weighting layer, wherein the electrodes are configured to receive electrical pulses from a first node and a second node of the neuromorphic network, wherein the optical weighting layer is configured to change optical properties based on the electrical pulses of the two electrodes. 13. The optical synapse of claim 12, further comprising a first detector configured to receive a portion of a first signal from the first node and a second detector configured to receive a portion of a second signal from the second node, wherein the first detector and the second detector are configured to transmit the electrical pulses to the two electrodes. 14. The optical synapse of claim 12, wherein the optical weighting layer comprises a first layer and a second layer. 15. The optical synapse of claim 14, wherein the first layer is a silicon layer and the second layer is a ferroelectric layer, wherein the electrodes are configured on the ferroelectric layer. 16. The optical synapse of claim 12, wherein the optical weighting layer is formed from at least one of a photorefractive material, a nonlinear material, a phase change material, or a magneto-optical material. 17. The optical synapse of claim 12, further comprising an electrical circuit in electrical communication with electrodes, the electrical circuit configured to provide an electrical signal to the electrodes based on the pulses from the first node and the second node. 18. The optical synapse of claim 16, wherein the electrical circuit comprises a capacitor, a constant bias, and a voltage divider. 19. The optical synapse of claim 16, further comprising a waveguide extending from the optical weighting layer to the first node and the second node. 20. 
An optical synapse for a neuromorphic network, the optical synapse comprising: a first optical cavity configured to receive a portion of a first optical signal from a first node of the neuromorphic network and a portion of a second optical signal from a second node of the neuromorphic network; and a second optical cavity configured to receive a portion of the first optical signal and a portion of the second optical signal, wherein the second optical cavity is configured to change an optical property of the optical synapse based on the presence of a portion of the second optical signal in the second optical cavity.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A neuromorphic network includes a first node configured to transmit a first optical signal and a second node configured to transmit a second optical signal. A waveguide optically connects the first node to the second node. An integrated optical synapse is located on the waveguide between the first node and the second node, the optical synapse configured to change an optical property based on the first optical signal and the second optical signal such that if a correlation between the first optical signal and the second optical signal is strong, the optical connection between the first node and the second node is increased and if the correlation between the first optical signal and the second optical signal is weak, the optical connection between the first node and the second node is decreased.
G06N30635
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A neuromorphic network includes a first node configured to transmit a first optical signal and a second node configured to transmit a second optical signal. A waveguide optically connects the first node to the second node. An integrated optical synapse is located on the waveguide between the first node and the second node, the optical synapse configured to change an optical property based on the first optical signal and the second optical signal such that if a correlation between the first optical signal and the second optical signal is strong, the optical connection between the first node and the second node is increased and if the correlation between the first optical signal and the second optical signal is weak, the optical connection between the first node and the second node is decreased.
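Claims 1 and 9 of the neuromorphic-network entry describe strengthening or weakening the optical connection according to the firing order and timing of the two nodes. The sketch below models that behaviour with a simple spike-timing rule; the exponential time window, the learning rate and the weight clipping are assumptions and are not recited in the claims, which leave the exact correlation measure open.

import math

def update_transmission(weight, t_first, t_second, lr=0.05, tau=20.0, w_min=0.0, w_max=1.0):
    # Strengthen the optical connection when the two nodes fire close together
    # and in causal order (first node before second node), weaken it otherwise.
    dt = t_second - t_first
    if dt >= 0:
        weight += lr * math.exp(-dt / tau)    # strong correlation
    else:
        weight -= lr * math.exp(dt / tau)     # weak / reversed-order correlation
    return min(max(weight, w_min), w_max)

w = 0.5
w = update_transmission(w, t_first=10.0, t_second=12.0)   # causal pair -> connection strengthened
w = update_transmission(w, t_first=40.0, t_second=35.0)   # reversed order -> connection weakened
print(round(w, 3))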
A method, medium, and system to receive actual operational flight data for an engine of a particular type and configuration; train a neural network to generate an indicator of the health of the engine based on multiple different inputs to the neural network at a time the flight data was acquired; determine a deterioration factor for the engine, based at least in part, on an operational climate for the engine; and provide a record of the determined deterioration factor.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method, the method comprising: receiving actual operation flight data for an engine of a particular type and configuration; training a neural network to generate an indicator of the health of the engine based on multiple different inputs to the neural network at a time the flight data was acquired; determining a deterioration factor for the engine, based at least in part, on an operational climate for the engine; and providing a record of the determined deterioration factor. 2. The method of claim 1, wherein the engine comprises a high-bypass turbofan engine. 3. The method of claim 1, wherein the health of the engine is indicated by an exhaust gas temperature parameter for the engine. 4. The method of claim 1, wherein the multiple different inputs to the neural network include at least a bleed ratio, a Mach number, a percentage of maximum fan speed of the engine, ambient temperature, and altitude. 5. The method of claim 1, wherein the climate is determined to be at least one of the following different climates: tropical/equatorial, dry, mild temperate, continental/microthermal, and polar. 6. The method of claim 1, wherein the deterioration factor is determined on a basis of an airline operator. 7. A non-transitory medium storing processor-executable program instructions, the medium comprising program instructions executable by a computer to: receive actual operation flight data for an engine of a particular type and configuration; train a neural network to generate an indicator of the health of the engine based on multiple different inputs to the neural network at a time the flight data was acquired; determine a deterioration factor for the engine based, at least in part, on an operational climate for the engine; and provide a record of the determined deterioration factor. 8. The medium of claim 7, wherein the engine comprises a high-bypass turbofan engine. 9. The medium of claim 7, wherein the health of the engine is indicated by an exhaust gas temperature parameter for the engine. 10. The medium of claim 7, wherein the multiple different inputs to the neural network include at least a bleed ratio, a Mach number, a percentage of maximum fan speed of the engine, ambient temperature, and altitude. 11. The medium of claim 7, wherein the climate is determined to be at least one of the following different climates: tropical/equatorial, dry, mild temperate, continental/microthermal, and polar. 12. The medium of claim 7, wherein the deterioration factor is determined on a basis of an airline operator. 13. A system comprising: a computing device comprising: a memory storing processor-executable program instructions; and a processor to execute the processor-executable program instructions to cause the computing device to: receive actual operational flight data for an engine of a particular type and configuration; train a neural network to generate an indicator of the health of the engine based on multiple different inputs to the neural network at a time the flight data was acquired; determine a deterioration factor for the engine, based at least in part, on an operational climate for the engine; and provide a record of the determined deterioration factor. 14. The system of claim 13, wherein the engine comprises a high-bypass turbofan engine. 15. The system of claim 13, wherein the health of the engine is indicated by an exhaust gas temperature parameter for the engine. 16. 
The system of claim 13, wherein the multiple different inputs to the neural network include at least a bleed ratio, a Mach number, a percentage of maximum fan speed of the engine, ambient temperature, and altitude. 17. The system of claim 13, wherein the climate is determined to be at least one of the following different climates: tropical/equatorial, dry, mild temperate, continental/microthermal, and polar. 18. The system of claim 13, wherein the deterioration factor is determined on a basis of an airline operator.
REJECTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: A method, medium, and system to receive actual operational flight data for an engine of a particular type and configuration; train a neural network to generate an indicator of the health of the engine based on multiple different inputs to the neural network at a time the flight data was acquired; determine a deterioration factor for the engine, based at least in part, on an operational climate for the engine; and provide a record of the determined deterioration factor.
G06N308
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A method, medium, and system to receive actual operational flight data for an engine of a particular type and configuration; train a neural network to generate an indicator of the health of the engine based on multiple different inputs to the neural network at a time the flight data was acquired; determine a deterioration factor for the engine, based at least in part, on an operational climate for the engine; and provide a record of the determined deterioration factor.
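Claim 4 of the engine-health entry lists the network inputs (bleed ratio, Mach number, percent of maximum fan speed, ambient temperature, altitude) and the abstract names exhaust gas temperature as the health indicator, which is enough to sketch a training loop on stand-in data. Everything below other than those input names is assumed: the synthetic flight data, the EGT-style target, the scikit-learn pipeline and the network size are illustrative only, and the climate-based deterioration factor of claim 1 is not modelled.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 2000

# Synthetic stand-in for operational flight data; the columns are the inputs named in claim 4.
X = np.column_stack([
    rng.uniform(0.0, 0.1, n),     # bleed ratio
    rng.uniform(0.2, 0.9, n),     # Mach number
    rng.uniform(60, 100, n),      # percent of maximum fan speed
    rng.uniform(-40, 45, n),      # ambient temperature (deg C)
    rng.uniform(0, 40000, n),     # altitude (ft)
])
# Fabricated exhaust-gas-temperature target, used only to make the example runnable.
y = 900 + 2.5 * X[:, 2] + 0.8 * X[:, 3] - 0.004 * X[:, 4] + rng.normal(0, 5, n)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0))
model.fit(X, y)
print(model.predict(X[:1]))   # predicted health indicator (EGT) for one flight point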
A processor-implemented method, system, and/or computer program product generate an intelligent persona agent for use in designing a product. One or more processors input a persona specification into an intelligent persona agent generator. The persona specification describes attributes of a set of model users of a particular type of product, and the intelligent personal agent generator creates an intelligent persona agent that is a software-based version of the set of model users. The intelligent persona agent monitors intermediate design choices taken during a design of a product of the particular type of product by a design team. In response to the intelligent persona agent identifying an intermediate design choice that will lead to a feature that is in conflict with the persona specification of the intelligent persona agent, designers modify the intermediate design choice, which modifies the design of the product in order to create an improved product design.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A processor-implemented method of generating an intelligent persona agent for use in designing a product, the processor-implemented method comprising: inputting, by one or more processors, a persona specification into an intelligent persona agent generator, wherein the persona specification describes attributes of a set of model users of a particular type of product, and wherein the intelligent personal agent generator creates an intelligent persona agent that is a software-based version of the set of model users; monitoring, by the intelligent persona agent, intermediate design choices taken during a design of a product by a design team, wherein the product is of the particular type of product; in response to the intelligent persona agent identifying an intermediate design choice that will lead to a feature that is in conflict with the persona specification of the intelligent persona agent, modifying the intermediate design choice to create a modified intermediate design choice; and using the modified intermediate design choice to modify the design of the product to create an improved product design. 2. The processor-implemented method of claim 1, further comprising: controlling, by one or more processors, operation of a manufacturing device using the improved product design. 3. The processor-implemented method of claim 1, wherein the design team is a processor-based design logic. 4. The processor-implemented method of claim 1, further comprising: acquiring, by one or more processors, the persona specification from a social media database. 5. The processor-implemented method of claim 1, further comprising: acquiring, by one or more processors, the persona specification from a set of sensors associated with a set of physical devices. 6. The processor-implemented method of claim 1, further comprising: acquiring, by one or more processors, the persona specification from a set of biometric sensors associated with a set of persons. 7. The processor-implemented method of claim 1, wherein the product is a good. 8. A computer program product for generating an intelligent persona agent for use in designing a product, the computer program product comprising a non-transitory computer readable storage medium having program code embodied therewith, the program code readable and executable by a processor to perform a method comprising: inputting a persona specification into an intelligent persona agent generator, wherein the persona specification describes attributes of a set of model users of a particular type of product, and wherein the intelligent personal agent generator creates an intelligent persona agent that is a software-based version of the set of model users; monitoring, by the intelligent persona agent, intermediate design choices taken during a design of a product by a design team, wherein the product is of the particular type of product; in response to the intelligent persona agent identifying an intermediate design choice that will lead to a feature that is in conflict with the persona specification of the intelligent persona agent, modifying the intermediate design choice to create a modified intermediate design choice; and using the modified intermediate design choice to modify the design of the product to create an improved product design. 9. The computer program product of claim 8, wherein the method further comprises: controlling operation of a manufacturing device using the improved product design. 10. 
The computer program product of claim 8, wherein the design team is a processor-based design logic. 11. The computer program product of claim 8, wherein the method further comprises: acquiring the persona specification from a social media database. 12. The computer program product of claim 8, wherein the method further comprises: acquiring the persona specification from a set of sensors associated with a set of entities. 13. The computer program product of claim 8, wherein the product is a good. 14. The computer program product of claim 8, wherein the product is an advertising campaign. 15. A computer system comprising: a processor, a computer readable memory, and a non-transitory computer readable storage medium; first program instructions to input a persona specification into an intelligent persona agent generator, wherein the persona specification describes attributes of a set of model users of a particular type of product, and wherein the intelligent personal agent generator creates an intelligent persona agent that is a software-based version of the set of model users; second program instructions to monitor, by the intelligent persona agent, intermediate design choices taken during a design of a product by a design team, wherein the product is of the particular type of product; third program instructions to, in response to the intelligent persona agent identifying an intermediate design choice that will lead to a feature that is in conflict with the persona specification of the intelligent persona agent, modify the intermediate design choice to create a modified intermediate design choice; and fourth program instructions to use the modified intermediate design choice to modify the design of the product to create an improved product design; and wherein the first, second, third, and fourth program instructions are stored on the non-transitory computer readable storage medium for execution by one or more processors via the computer readable memory. 16. The computer system of claim 15, further comprising: fifth program instructions to control operation of a manufacturing device using the improved product design; and wherein the fifth program instructions are stored on the non-transitory computer readable storage medium for execution by one or more processors via the computer readable memory. 17. The computer system of claim 15, wherein the design team is a processor-based design logic. 18. The computer system of claim 15, further comprising: fifth program instructions to acquire the persona specification from a social media database; and wherein the fifth program instructions are stored on the non-transitory computer readable storage medium for execution by one or more processors via the computer readable memory. 19. The computer system of claim 15, further comprising: fifth program instructions to acquire the persona specification from a set of biometric sensors associated with a set of persons; and wherein the fifth program instructions are stored on the non-transitory computer readable storage medium for execution by one or more processors via the computer readable memory. 20. The computer system of claim 15, wherein the product is a good.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A processor-implemented method, system, and/or computer program product generate an intelligent persona agent for use in designing a product. One or more processors input a persona specification into an intelligent persona agent generator. The persona specification describes attributes of a set of model users of a particular type of product, and the intelligent personal agent generator creates an intelligent persona agent that is a software-based version of the set of model users. The intelligent persona agent monitors intermediate design choices taken during a design of a product of the particular type of product by a design team. In response to the intelligent persona agent identifying an intermediate design choice that will lead to a feature that is in conflict with the persona specification of the intelligent persona agent, designers modify the intermediate design choice, which modifies the design of the product in order to create an improved product design.
G06N3006
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A processor-implemented method, system, and/or computer program product generate an intelligent persona agent for use in designing a product. One or more processors input a persona specification into an intelligent persona agent generator. The persona specification describes attributes of a set of model users of a particular type of product, and the intelligent personal agent generator creates an intelligent persona agent that is a software-based version of the set of model users. The intelligent persona agent monitors intermediate design choices taken during a design of a product of the particular type of product by a design team. In response to the intelligent persona agent identifying an intermediate design choice that will lead to a feature that is in conflict with the persona specification of the intelligent persona agent, designers modify the intermediate design choice, which modifies the design of the product in order to create an improved product design.
A robot system includes a robot including a robot arm, and a first hand and a second hand which are connected to the robot arm and which are provided to independently rotate about an axis on the robot arm; and a controller configured to control an operation of the robot. When the robot arm and the first hand are operated so that the first hand reaches a predetermined target position, teaching values for the first hand in the target position are generated. When the first hand and the second hand are rotated based on the teaching values for the first hand, a relative error in rotation amount around the axis between the first hand and the second hand is acquired and stored in a memory. Teaching values for the second hand are generated from the teaching values for the first hand based on the acquired relative error.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A robot system, comprising: a robot including a robot arm, and a first hand and a second hand which are connected to the robot arm and which are provided to independently rotate about an axis on the robot arm; and a controller configured to control an operation of the robot, wherein the controller includes: a memory unit; a first generation unit configured to, when the robot arm and the first hand are operated so that the first hand reaches a predetermined target position, generate teaching values for the first hand in the target position; an error acquiring unit configured to, when the first hand and the second hand are rotated based on the teaching values for the first hand, acquire a relative error in rotation amount around the axis between the first hand and the second hand and store the acquired relative error in the memory unit; and a second generation unit configured to generate teaching values for the second hand from the teaching values for the first hand based on the acquired relative error. 2. The robot system of claim 1, wherein the first hand includes a sensor, the second hand includes an indicator portion which is detectable by the sensor of the first hand, and the controller is configured to allow the error acquiring unit to acquire the relative error, by rotating the second hand and the first hand based on a rotation amount value around the axis, which is included in the teaching values for the first hand, and then rotating the first hand with respect to the second hand until the indicator portion is detected by the sensor. 3. The robot system of claim 2, wherein the controller is configured to allow the error acquiring unit to acquire the relative error, by operating the robot arm so as to take a first posture differing from a second posture of the robot arm, the second posture being a posture taken by the robot arm when the first hand reaches the target position, and then rotating the second hand and the first hand based on the teaching values for the first hand. 4. The robot system of claim 3, wherein the robot arm includes a pivot joint, and a first arm and a second arm which are serially connected to each other through the pivot joint, and the first posture is a posture in which the first arm and the second arm are folded by the pivot joint so as to overlap with each other. 5. The robot system of claim 4, wherein the first hand and the second hand are provided at a free end side of the second arm which is a tip side of the robot arm, and the rotation amount around the axis is defined by rotation angles of the first hand and the second hand with respect to an extended direction of the second arm. 6. The robot system of claim 4, wherein the controller is configured to allow the error acquiring unit to acquire the relative error, by rotating the first hand and the second hand clockwise or counterclockwise in accordance with a rotation direction of the first hand when the first hand has reached the target position. 7. 
The robot system of claim 1, wherein the target position includes a plurality of target positions, the first generation unit is configured to individually generate teaching values for the first hand with respect to the respective target positions, the error acquiring unit is configured to individually acquire relative errors corresponding to the respective teaching values for the first hand and to store the relative errors in the memory unit, and the second generation unit is configured to individually generate teaching values for the second hand based on the respective relative errors. 8. The robot system of claim 2, further comprising: a detected jig provided to be mounted to the second hand and provided with the indicator portion. 9. The robot system of claim 2, further comprising: a detecting jig provided to be mounted to the first hand and provided with the sensor. 10. The robot system of claim 8, further comprising: a detecting jig provided to be mounted to the first hand and provided with the sensor. 11. The robot system of claim 1, wherein the controller further includes a state monitoring unit configured to allow the error acquiring unit to acquire the relative error at a predetermined timing. 12. The robot system of claim 2, wherein the controller further includes a state monitoring unit configured to allow the error acquiring unit to acquire the relative error at a predetermined timing. 13. The robot system of claim 3, wherein the controller further includes a state monitoring unit configured to allow the error acquiring unit to acquire the relative error at a predetermined timing. 14. The robot system of claim 4, wherein the controller further includes a state monitoring unit configured to allow the error acquiring unit to acquire the relative error at a predetermined timing. 15. The robot system of claim 5, wherein the controller further includes a state monitoring unit configured to allow the error acquiring unit to acquire the relative error at a predetermined timing. 16. The robot system of claim 1, wherein the controller further includes an operation control unit configured to control operations of the robot arm, the first hand and the second hand, based on prior teaching information previously stored in the memory unit, the teaching values for the first hand and the teaching values for the second hand. 17. A robot teaching method for teaching a robot including a robot arm, and a first hand and a second hand which are connected to the robot arm and which are provided to independently rotate about an axis on the robot arm, the method comprising: generating teaching values for the first hand in a predetermined target position by operating the robot arm and the first hand so that the first hand reaches the predetermined target position, rotating, based on the teaching values for the first hand, the first hand and the second hand, acquiring a relative error in rotation amount around the axis between the first hand and the second hand, and storing the acquired relative error, and generating teaching values for the second hand from the teaching values for the first hand based on the acquired relative error. 18. The method of claim 17, wherein the second hand includes an indicator portion which is detectable by a sensor provided in the first hand, and said acquiring the relative error includes rotating the first hand with respect to the second hand until the indicator portion is detected by the sensor. 19. 
The method of claim 17, wherein before said rotating, the robot arm is operated so as to take a first posture differing from a second posture of the robot arm, the second posture being a posture taken by the robot arm when the first hand has reached the target position and then the second hand and the first hand are rotated based on the teaching values for the first hand. 20. A control device for controlling an operation of a robot including a robot arm, and a first hand and a second hand which are connected to the robot arm and which are provided to independently rotate about an axis on the robot arm, the control device comprising: a storage means; and a control means, wherein the control means includes: a first generation unit configured to, when the robot arm and the first hand are operated so that the first hand reaches a predetermined target position, generate teaching values for the first hand in the target position; an error acquiring unit configured to, when the first hand and the second hand are rotated based on the teaching values for the first hand, acquire a relative error in rotation amount around the axis between the first hand and the second hand and store the acquired relative error in the storage means; and a second generation unit configured to generate teaching values for the second hand from the teaching values for the first hand based on the acquired relative error.
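Claims 2 to 6 above describe how the relative error is actually measured: both hands are rotated to the taught value, then the first hand is rotated relative to the second until its sensor detects the indicator on the second hand. A hedged sketch of that search loop follows; the sensor model, step size, and the 0.35-degree "true" offset are invented for illustration and are not part of the claims.

```python
def detect_indicator(first_angle_deg: float, second_angle_deg: float,
                     true_offset_deg: float = 0.35, tolerance_deg: float = 0.005) -> bool:
    """Hypothetical sensor: fires when the first hand lines up with the indicator
    on the second hand (i.e. the angular gap equals the true relative offset)."""
    return abs((first_angle_deg - second_angle_deg) - true_offset_deg) <= tolerance_deg

def measure_relative_error(taught_angle_deg: float, step_deg: float = 0.01,
                           max_sweep_deg: float = 5.0) -> float:
    """Rotate both hands to the taught angle, then sweep the first hand relative to
    the second until the sensor detects the indicator; the swept amount is the error."""
    first = second = taught_angle_deg
    swept = 0.0
    while swept <= max_sweep_deg:
        if detect_indicator(first, second):
            return swept
        first += step_deg
        swept += step_deg
    raise RuntimeError("indicator not detected within sweep range")

print(round(measure_relative_error(90.0), 2))  # ~0.35
```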
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A robot system includes a robot including a robot arm, and a first hand and a second hand which are connected to the robot arm and which are provided to independently rotate about an axis on the robot arm; and a controller configured to control an operation of the robot. When the robot arm and the first hand are operated so that the first hand reaches a predetermined target position, teaching values for the first hand in the target position are generated. When the first hand and the second hand are rotated based on the teaching values for the first hand, a relative error in rotation amount around the axis between the first hand and the second hand is acquired and stored in a memory. Teaching values for the second hand are generated from the teaching values for the first hand based on the acquired relative error.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A robot system includes a robot including a robot arm, and a first hand and a second hand which are connected to the robot arm and which are provided to independently rotate about an axis on the robot arm; and a controller configured to control an operation of the robot. When the robot arm and the first hand are operated so that the first hand reaches a predetermined target position, teaching values for the first hand in the target position are generated. When the first hand and the second hand are rotated based on the teaching values for the first hand, a relative error in rotation amount around the axis between the first hand and the second hand is acquired and stored in a memory. Teaching values for the second hand are generated from the teaching values for the first hand based on the acquired relative error.
A method is known for identifying a cause and effect relationship between events of equipment by using a Bayesian network, but it has been difficult to comprehensively gather events relating to people. Additionally, there is no known method for estimating the time at which other events occur between two or more events. Provided is an information processing apparatus including a target information obtaining section that obtains known event information relating to at least one known event that has occurred for a target; an information processing apparatus that obtains statistical data relating to events; and an event information generating section that generates unknown event information relating to an unknown event of the target, based on the statistical data and the known event information.
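The key data structure in this abstract is a conditional probability table indexed by time interval: given that a first event occurred, it gives the probability of a second event occurring within each interval, and the most probable interval is taken as the estimate of when the unknown event happened (claims 7 and 8 below). A minimal sketch follows; the interval labels and probabilities are fabricated for illustration.

```python
# Hypothetical CPT: P(second event occurs | first event occurred), per time interval.
# Interval labels are in months; the numbers are invented for illustration only.
cpt_with_intervals = {
    "0-3 months": 0.10,
    "3-6 months": 0.25,
    "6-12 months": 0.45,
    "12-24 months": 0.20,
}

def estimate_occurrence_interval(cpt: dict) -> str:
    """Estimate the unknown event's timing as the interval with the highest
    conditional occurrence probability."""
    return max(cpt, key=cpt.get)

print(estimate_occurrence_interval(cpt_with_intervals))  # "6-12 months"
```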
Please help me write a proper abstract based on the patent claims. CLAIM: 1. An information processing apparatus comprising: a target information obtaining section that obtains known event information relating to at least one known event that has occurred for a target; an information processing apparatus that obtains statistical data relating to events; and an event information generating section that generates unknown event information relating to an unknown event of the target, based on the statistical data and the known event information. 2. The information processing apparatus according to claim 1, comprising: a probability calculating section that calculates an event occurrence probability based on the statistical data. 3. The information processing apparatus according to claim 2, wherein the probability calculating section calculates an occurrence probability of a second event occurring if a first event occurs. 4. The information processing apparatus according to claim 2, wherein the probability calculating section calculates an occurrence probability of a second event occurring if a first event occurs, for each time interval of the first event and the second event. 5. The information processing apparatus according to claim 1, further comprising: a BN generating section that generates a Bayesian network (BN) including a plurality of events as nodes, based on the event occurrence probability, wherein the event information generating section generates the unknown event information based on the Bayesian network. 6. The information processing apparatus according to claim 5, wherein the BN generating section generates a conditional probability table having time intervals, for each node in the Bayesian network. 7. The information processing apparatus according to claim 6, wherein the event information generating section estimates a time at which the unknown event of the target occurred, based on the conditional probability table having time intervals. 8. The information processing apparatus according to claim 7, wherein the occurrence time estimating section estimates the time interval of the unknown event of the target to be a time interval during which the occurrence probability is highest in the conditional probability table having time intervals. 9. The information processing apparatus according to claim 7, wherein the occurrence time estimating section identifies a time interval of three or more events, based on a sum or product of the occurrence probabilities of two or more time intervals included between the three or more events in the conditional probability table having time intervals, and the information processing apparatus estimates the time at which the unknown event of the target occurred based on the identified time interval. 10. The information processing apparatus according to claim 9, wherein the occurrence time estimating section identifies the time interval of the three or more events to be a time interval in which the sum or product of the occurrence probabilities of the two or more time intervals included between the three or more events in the conditional probability table having time intervals is highest, where the sum of the two or more time intervals does not exceed a time interval of a first event and a time interval of a last event among the three or more events, and the information processing apparatus estimates the time at which the unknown event of the target occurred based on the identified time interval. 11. 
The information processing apparatus according to claim 5, wherein the event information generating section includes an occurrence estimating section that estimates whether the unknown event of the target has occurred for the target from the known event information, based on the Bayesian network. 12. The information processing apparatus according to claim 11, further comprising: a recommend generating section that generates information to be recommended to the target in relation to the unknown event, based on the unknown event information. 13. The information processing apparatus according to claim 12, wherein the event information generating section generates unknown event information indicating that an unknown event whose occurrence probability of having occurred for the target is greater than or equal to a threshold value has occurred for the target, and the recommend generating section generates the information to be recommended for the unknown event whose unknown event information indicates that the unknown event has occurred for the target. 14. The information processing apparatus according to claim 1, wherein the information processing apparatus: obtains first-order statistical information from an external server; generates statistical data by gathering the first-order statistical information; obtains synonym data including a plurality of expressions relating to events; and obtains statistical data relating to two events, by matching an expression of the statistical data that matches expressions relating to the two events, based on the synonym data. 15. The information processing apparatus according to claim 5, wherein the information processing apparatus obtains the statistical data for each category of the events; the BN generating section identifies a category with which the target is associated, based on an association of the target; and the event information generating section generates the unknown event information based on the statistical data of the category with which the target is associated. 16. The information processing apparatus according to claim 1, wherein the event is a life event relating to the target. 17. An information processing method performed by a computer, comprising: obtaining known event information relating to at least one known event that has occurred for a target; obtaining statistical data relating to events; and generating unknown event information relating to an unknown event of the target, based on the statistical data and the known event information. 18. The information processing method according to claim 17, comprising: calculating an event occurrence probability based on the statistical data. 19. The information processing method according to claim 18, wherein the calculating includes calculating an occurrence probability of a second event occurring if a first event occurs. 20. The information processing method according to claim 18, wherein the calculating includes calculating an occurrence probability of a second event occurring if a first event occurs, for each time interval of the first event and the second event. 21. 
A program that, when executed by a computer, causes the computer to function as: a target information obtaining section that obtains known event information relating to at least one known event that has occurred for a target; an information processing apparatus that obtains statistical data relating to events; and an event information generating section that generates unknown event information relating to an unknown event of the target, based on the statistical data and the known event information. 22. The program according to claim 21, further causing the computer to function as: a probability calculating section that calculates an event occurrence probability based on the statistical data. 23. The program according to claim 22, wherein the probability calculating section calculates an occurrence probability of a second event occurring if a first event occurs. 24. The program according to claim 22, wherein the probability calculating section calculates an occurrence probability of a second event occurring if a first event occurs, for each time interval of the first event and the second event. 25. A storage medium storing thereon the program according to claim 21.
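Claims 9 and 10 above extend this to an unknown event sandwiched between two known events: the pair of sub-intervals with the highest combined probability (sum or product) is chosen, subject to the sub-intervals fitting within the known span. The sketch below shows one plausible reading of that rule; the interval bins, probabilities, and the 12-month span are assumptions.

```python
from itertools import product

# Hypothetical per-interval probabilities for known event A -> unknown event B,
# and for unknown event B -> known event C. Interval keys are (lo, hi) in months.
p_a_to_b = {(0, 3): 0.2, (3, 6): 0.5, (6, 12): 0.3}
p_b_to_c = {(0, 3): 0.4, (3, 6): 0.4, (6, 12): 0.2}

def best_intermediate_timing(span_months: float):
    """Pick the pair of sub-intervals maximizing the product of probabilities,
    discarding pairs whose summed upper bounds exceed the known A..C span."""
    best, best_score = None, -1.0
    for (i1, p1), (i2, p2) in product(p_a_to_b.items(), p_b_to_c.items()):
        if i1[1] + i2[1] > span_months:      # constraint analogous to claim 10
            continue
        score = p1 * p2
        if score > best_score:
            best, best_score = (i1, i2), score
    return best, best_score

print(best_intermediate_timing(span_months=12))
# (((3, 6), (0, 3)), 0.2) -> B estimated 3-6 months after A, C 0-3 months after B
```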
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A method is known for identifying a cause and effect relationship between events of equipment by using a Bayesian network, but it has been difficult to comprehensively gather events relating to people. Additionally, there is no known method for estimating the time at which other events occur between two or more events. Provided is an information processing apparatus including a target information obtaining section that obtains known event information relating to at least one known event that has occurred for a target; an information processing apparatus that obtains statistical data relating to events; and an event information generating section that generates unknown event information relating to an unknown event of the target, based on the statistical data and the known event information.
G06N7005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A method is known for identifying a cause and effect relationship between events of equipment by using a Bayesian network, but it has been difficult to comprehensively gather events relating to people. Additionally, there is no known method for estimating the time at which other events occur between two or more events. Provided is an information processing apparatus including a target information obtaining section that obtains known event information relating to at least one known event that has occurred for a target; an information processing apparatus that obtains statistical data relating to events; and an event information generating section that generates unknown event information relating to an unknown event of the target, based on the statistical data and the known event information.
Data points, calendar entries, trends, and behavioral patterns may be used to predict and pre-emptively build digital and printable products with selected collections of images without the user's active participation. The collections are selected from files on the user's device, cloud-based photo library, or other libraries shared among other individuals and grouped into thematic products. Based on analysis of the user's collections and on-line behaviors, the system may estimate types and volumes of potential media-centric products, and the resources needed for producing and distributing such media-centric products for a projected period of time. A user interface may take the form of a “virtual curator”, which is a graphical or animated persona for augmenting and managing interactions between the user and the system managing the user's stored media assets. The virtual curator can assume one of many personas, as appropriate, with each user. For example, the virtual curator can be presented as an avatar-animated character in an icon, or an icon that floats around the screen. The virtual curator can also interact with the user via text messaging or audio messaging.
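One technical step in this abstract is estimating the volume of media-centric products (and hence the resources) for a projected period. A deliberately simple, hypothetical sketch of that estimate is given below: one candidate product per detected life-event, weighted by a per-persona propensity. The personas, propensities, and event list are invented and do not come from the patent.

```python
# Hypothetical per-persona propensity to order a product when an event is detected.
persona_propensity = {"frequent_gifter": 0.6, "occasional_sharer": 0.2, "archivist": 0.4}

def estimate_product_volume(persona: str, upcoming_events: list) -> float:
    """Expected number of products for the projected period: one candidate product
    per detected relationship or life-event, weighted by the persona's propensity."""
    return persona_propensity.get(persona, 0.1) * len(upcoming_events)

events = ["mother_birthday", "wedding_anniversary", "graduation", "summer_vacation"]
print(estimate_product_volume("frequent_gifter", events))  # 2.4 expected products
```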
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for producing and distributing one or more customized media centric products, the customized media-centric products being provided based on one or more users' collections of media assets stored in one or more storage devices accessible by one or more processors, the method comprising: monitoring the collections of media assets, and the users' media recordings and media usage behaviors; identifying the users' relationship and life-events based on the users' media recordings, the user's media usage behaviors, and information collected in the users' accounts, contacts, calendars, texts, posts, time and date information, location information; and based on the monitored media recordings and media usage behaviors, selecting for each user a user persona from a set of predetermined or dynamically created user personas, each user persona assisting in estimating a potential volume of the customized media centric products intended to recognize the identified relationships and life-events of that user and one or more recipients of the customized media-centric products associated with that user for a projected period of time. 2. The method of claim 1, wherein the recipients of the customized media-centric products are predicted taking into account the content derived from the collections of media assets appropriate for the media-centric products. 3. The method of claim 1, further comprising calculating, so as to fulfill the estimated potential volume resulting from customer orders for the projected period of time, one or more of: a required planning and inventory of product materials, packaging materials, physical storage, memory, computational capacity, network capacity and bandwidth, printing capacity, finishing capacity, delivery arrangements, and personnel required. 4. The method of claim 1, further comprising identifying a plurality of types of customized media-centric products in the estimated potential volume of customized media-centric products. 5. The method of claim 4, further comprising adapting the potential volume and types of media-centric products to accommodate locations, timeframes, and events that are identified by monitoring the social network interactions of the users. 6. The method of claim 1, wherein identifying users' relationships and life-events further comprises analyzing content, metadata, comments, shares, and tags associated with the users' collections of assets. 7. The method of claim 1, further comprising analyzing each user's collections of media assets to determine important media assets of that user. 8. The method of claim 7, wherein the important media assets include assets that have been used or shared with or by the user or by others in the user's social network. 9. The method of claim 7, further comprising creating and presenting to each user virtual versions of one or more of the custom media-centric products incorporating the important media assets of that user, at least one suggested recipient, and an event to be recognized as appropriate for that user. 10. The method of claim 9, further comprising providing each user with an option to select, modify, or request alternatives to the presented virtual versions of the customized media-centric products. 11. The method of claim 10, wherein the alternatives are provided based upon one or more personas associated with or dynamically determined from analyzing activities of individuals associated with the user's social network. 12. 
The method of claim 9, further comprising presenting the user an option to purchase the customized media-centric products represented by the presented virtual versions of the media-centric products. 13. The method of claim 1, wherein the user persona guides customization of the media-centric products for specific recipients and events to be recognized from the user's identified relationships and events. 14. The method of claim 1, wherein the selected user persona is further based on analyzing the user's media accounts, collections of media assets, media usage behaviors, media product purchase histories, and media product type and format preferences. 15. The method of claim 14, wherein the selected persona is further based on a cultural or ethnic background of the user or one or more of the recipients. 16. The method of claim 1, the collections of media assets include still images and video sequences recorded by or accessible to the users. 17. The method of claim 1, wherein the customized media-centric products include one or more of hardcopy prints, photo albums, photo calendars, photo greeting cards, posters, collages, various physical products with a tangible surface incorporating printed versions of the user's images such as mugs and T-shirts, soft display digital multimedia slide shows, digital movies, highlight videos, video key frames, and virtual reality objects and presentations. 18. The method of claim 1, further comprising creating a hardcopy of each of the customized media-centric products. 19. The method of claim 18, further comprising transmitting a softcopy of each of the customized media-centric products to the recipients. 20. The method of claim 18, wherein the customized media-centric products include one or more still images of an extracted person, or one or more sequences extracted from video recordings of the extracted person. 21. The method of claim 1, wherein each user is associated with one or more of the customized media-centric products, such products being customized by one or more of the processors, each such processor residing on a device accessible by that user. 22. The method of claim 1, wherein one or more of the customized media-centric products are customized with assistance from a remote operator, by a crowd sourced operator, by a recipient or by the user. 23. The method of claim 22 wherein, prior to the customized media-centric products being customized with the assistance by a recipient, the user authorizes a range of options that may be presented to the recipient for customization. 24. The method of claim 1, wherein monitoring media recordings and media usage behaviors comprises monitoring frequencies and durations of: recording, viewing, sharing, editing, manipulation, rating, printing, or making of photo-centric products using or incorporating the collections of media assets. 25. The method of claim 1, wherein the life-events identified comprises one or more of: birthdays, weddings, anniversaries, births, illnesses, deaths, new jobs, recurring events, spontaneous events, new relationships, new homes, awards, accolades, aggregations of events, important events, objects of interest, locations of interest, temporal events, academic events, hobbies, recreational activities, sporting events, vacations, parties, national holidays, religious holidays, secular holidays, and combinations thereof. 26. 
The method of claim 1, wherein some of the customized media-centric products incorporate one or more of: professionally produced or crowd-sourced graphics, clip art, music, audio, narration, text, transitions, special effects, 3-D rendered objects and environments, still images, and videos sequences. 27. The method of claim 1, wherein some of the customized media-centric products are based on a planned or spontaneous event. 28. The method of claim 1, wherein one of the recipients is a user. 29. The method of claim 1, wherein one of the storage devices is locally resident to one of the processors. 30. The method of claim 1, wherein one of the storage devices in remotely accessible to one of the processors via a communication network. 31. The method of claim 1, further comprising interacting with each user using a graphical user interface having a curator persona selected based on analyzing users preferences from the monitoring of the collections of media assets, and the users' media recordings and media usage behaviors; 32. The method of claim 28, wherein the curator persona further interacts with the user via voice and audio interactions. 33. The method of claim 1, further comprising dynamically adapting the types of media-centric products based on one or more of: the user's current activities, location, network bandwidth, device capabilities, and data plan limitations. 34. The method of claim 1, wherein the user persona includes one or more of: the user's location, the user's data plan, the user's smart phone capabilities, the user's cost sensitivities, and other attributes associated with whether or not delivering predetermined products to a particular user is suitable. 35. The method of claim 1, wherein monitoring media recordings and media usage behaviors includes monitoring physiological responses.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Data points, calendar entries, trends, and behavioral patterns may be used to predict and pre-emptively build digital and printable products with selected collections of images without the user's active participation. The collections are selected from files on the user's device, cloud-based photo library, or other libraries shared among other individuals and grouped into thematic products. Based on analysis of the user's collections and on-line behaviors, the system may estimate types and volumes of potential media-centric products, and the resources needed for producing and distributing such media-centric products for a projected period of time. A user interface may take the form of a “virtual curator”, which is a graphical or animated persona for augmenting and managing interactions between the user and the system managing the user's stored media assets. The virtual curator can assume one of many personas, as appropriate, with each user. For example, the virtual curator can be presented as an avatar-animated character in an icon, or an icon that floats around the screen. The virtual curator can also interact with the user via text messaging or audio messaging.
G06N3006
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Data points, calendar entries, trends, and behavioral patterns may be used to predict and pre-emptively build digital and printable products with selected collections of images without the user's active participation. The collections are selected from files on the user's device, cloud-based photo library, or other libraries shared among other individuals and grouped into thematic products. Based on analysis of the user's collections and on-line behaviors, the system may estimate types and volumes of potential media-centric products, and the resources needed for producing and distributing such media-centric products for a projected period of time. A user interface may take the form of a “virtual curator”, which is a graphical or animated persona for augmenting and managing interactions between the user and the system managing the user's stored media assets. The virtual curator can assume one of many personas, as appropriate, with each user. For example, the virtual curator can be presented as an avatar-animated character in an icon, or an icon that floats around the screen. The virtual curator can also interact with the user via text messaging or audio messaging.
In a method and a computer for determining a training function in order to generate annotated training images, a training image and training-image information are provided to a computer, together with an isolated item of image information that is independent of the training image. A first calculation is made in the computer by applying an image-information-processing first function to the isolated item of image information, and a second calculation is made by applying an image-processing second function to the training image. Adjustments to the first and second functions are made based on these calculation results, from which a determination of a training function is then made in the computer.
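Taken with the claims that follow, this abstract describes an adversarial-style arrangement: the first function acts as a generator (image information in, image out) and the second function as a classifier applied to both the real training image and the generated image, with parameters adjusted from the two results. The toy Python below is only a stand-in for that training loop, using one-parameter "networks" and numerical gradients; it is not the patented procedure, and the scalar data, learning rate, and iteration count are assumptions.

```python
import math

# Toy 1-D stand-in: "image information" and "images" are scalars, each network has one weight.
def generator(info, w_g):
    """Image-information-processing first function: image information -> generated image."""
    return w_g * info

def classifier(image, w_d):
    """Image-processing second function: probability that the image is a real training image."""
    return 1.0 / (1.0 + math.exp(-w_d * image))

def cost(w_g, w_d, training_image, isolated_info):
    """Classifier-side cost: score the real training image high (second result)
    and the image generated from the isolated image information low (first result)."""
    real_score = classifier(training_image, w_d)
    fake_score = classifier(generator(isolated_info, w_g), w_d)
    return -(math.log(real_score) + math.log(1.0 - fake_score))

def numeric_grad(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

w_g, w_d, lr = 0.5, 0.1, 0.02
training_image, isolated_info = 1.0, 0.8
for _ in range(50):
    # the classifier descends its cost; the generator ascends it so its output scores as real
    w_d -= lr * numeric_grad(lambda w: cost(w_g, w, training_image, isolated_info), w_d)
    w_g += lr * numeric_grad(lambda w: cost(w, w_d, training_image, isolated_info), w_g)
print(round(w_g, 3), round(w_d, 3))
```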
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for determining a training function comprising: in a computer, receiving a first reception of a training image and an item of training-image information via an interface of the computer, wherein the training-image information is image information for the training image; in said computer, receiving a second reception of an isolated item of image information via the interface, wherein the isolated item of image information is independent of the training image; in said computer, making a first calculation of a first result of an image-information-processing first function when applied to the isolated item of image information; in said computer, making a second calculation of a second result of an image-processing second function when applied to the training image; in said computer, making an adjustment of a parameter of at least one of the image-information-processing first function or the image-processing second function, based on at least the first result and the second result; and in said computer, making a determination of a training function based on the image-information-processing first function wherein, when applied to an item of image information, the training function generates an associated image as an output of the computer. 2. The method as claimed in claim 1, wherein at least one of the first function or the second function is provided by an artificial neural network, and wherein the parameters of the artificial neural network comprise edge weights of the artificial neural network. 3. The method as claimed in claim 2, wherein the adjustment comprises adjusting the edge weights by minimizing a cost function by execution of backpropagation. 4. The method as claimed in claim 1, wherein said item of image information of an image comprises segmentation of the image into at least one image region. 5. The method as claimed in claim 1, wherein said item of image information of an image comprises a variable that assesses whether a defined object or a plurality of defined objects are depicted in the image. 6. The method as claimed in claim 1, wherein the image-information-processing first function is a generator function that, when applied to said item of image information, generates an associated image as said output; wherein the image-processing second function is a classification function that, when applied to said image, generates an associated item of image information as said output; wherein the first result is a calculated image; wherein the second result is a first item of calculated image information; wherein the training function is the image-information-processing first function; and wherein the method furthermore comprises making a third calculation in the computer of a second item of calculated image information by applying the image-processing function to the calculated image. 7. The method as claimed in claim 6, wherein the first item of calculated image information comprises an estimation of a first probability of the training image being contained in a set of training images; and wherein the second item of calculated image information comprises an estimation of a second probability of the calculated image being contained in a set of training images. 8. 
The method as claimed in claim 7, wherein at least one of the first function or the second function is provided by an artificial neural network, and wherein the parameters of the artificial neural network comprise edge weights of the artificial neural network; wherein the adjustment comprises adjusting the edge weights by minimizing a cost function by execution of backpropagation; and wherein the cost function is based on at least a first difference of the first item of calculated image information from the training-image information. 9. The method as claimed in claim 8, wherein the cost function is furthermore based on at least a second difference of the second item of calculated image information from the isolated item of image information. 10. The method as claimed in claim 1, wherein at least one of the first function or the second function is provided by an artificial neural network, and wherein the parameters of the artificial neural network comprise edge weights of the artificial neural network; wherein the image-information-processing first function is an information autoencoder that, when applied to said first item of image information, generates a second item of image information as said output; wherein the image-processing second function is an image autoencoder that, when applied to a first image, generates a second image as said output; wherein a central layer of the information autoencoder and a central layer of the image autoencoder have a same number of central nodes; wherein the first result corresponds to first node values; wherein the first node values are values of the nodes of the central layer of the information autoencoder when the isolated item of image information is the input value of the information autoencoder; wherein the second result corresponds to second node values; wherein the second node values are values of the nodes of the central layer of the image autoencoder when the training image is the input value of the image autoencoder; wherein the method furthermore comprises making a further calculation of further node values that are values of the nodes of the central layer of the information autoencoder when the training-image information is the input value of the information autoencoder; and wherein the training function is composed of the first part of the information autoencoder with the second part of the image autoencoder. 11. The method as claimed in claim 10, wherein at least one of the first function or the second function is provided by an artificial neural network, and wherein the parameters of the artificial neural network comprise edge weights of the artificial neural network; wherein the adjustment comprises adjusting the edge weights by minimizing a cost function by execution of backpropagation; and wherein a distance between the first node values and the second node values makes a negative contribution to the cost function and wherein a distance between the second node values and the third node values makes a positive contribution to the cost function. 12. 
The method as claimed in claim 10, wherein: the training function generates an image as an output value from an item of image information as an input value such that an item of image information is used as an input value of the information autoencoder; the node values of the central layer of the information autoencoder are transferred to the node values of the central layer of the image autoencoder; and the output value of the training function corresponds to the resulting output value of the image autoencoder. 13. A function-determining computer comprising: a processor; an interface that receives a first reception of a training image and an item of training-image information into said processor, wherein the training-image information is image information for the training image; said interface also receiving a second reception of an isolated item of image information into said processor, wherein the isolated item of image information is independent of the training image; said processor being configured to make a first calculation of a first result of an image-information-processing first function when applied to the isolated item of image information; said processor being configured to make a second calculation of a second result of an image-processing second function when applied to the training image; said processor being configured to make an adjustment of a parameter of at least one of the image-information-processing first function or the image-processing second function, based on at least the first result and the second result; and said processor being configured to make a determination of a training function based on the image-information-processing first function wherein, when applied to an item of image information, the training function generates an associated image as an output of the computer. 14. A non-transitory, computer-readable data storage medium encoded with programming instructions, said storage medium being loaded into a computer and said programming instructions causing said computer to: receive a first reception of a training image and an item of training-image information via an interface of the computer, wherein the training-image information is image information for the training image; receive a second reception of an isolated item of image information via the interface, wherein the isolated item of image information is independent of the training image; make a first calculation of a first result of an image-information-processing first function when applied to the isolated item of image information; make a second calculation of a second result of an image-processing second function when applied to the training image; make an adjustment of a parameter of at least one of the image-information-processing first function or the image-processing second function, based on at least the first result and the second result; and make a determination of a training function based on the image-information-processing first function wherein, when applied to an item of image information, the training function generates an associated image as an output of the computer.
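Claims 10 to 12 above describe the autoencoder variant: an information autoencoder and an image autoencoder share a central layer of the same width, and the resulting training function is the encoder half of the information autoencoder chained into the decoder half of the image autoencoder. The sketch below shows only that composition (no training), with made-up dimensions and random NumPy matrices standing in for the two network halves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: image information is 4-D, images are 6-D, the central layer has 2 nodes.
W_info_enc = rng.normal(size=(2, 4))   # first part of the information autoencoder
W_img_dec = rng.normal(size=(6, 2))    # second part of the image autoencoder

def training_function(image_information: np.ndarray) -> np.ndarray:
    """Compose the information encoder with the image decoder: the central node values
    of the information autoencoder are handed to the image autoencoder's central layer,
    whose decoder then produces the generated image (as in claim 12)."""
    central = np.tanh(W_info_enc @ image_information)   # values of the shared-width central layer
    return W_img_dec @ central                          # decoded (toy) image

annotation = np.array([1.0, 0.0, 0.0, 1.0])   # hypothetical item of image information
print(training_function(annotation).shape)    # (6,) -- a generated toy "image"
```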
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: In a method and a computer for determining a training function in order to generate annotated training images, a training image and training-image information are provided to a computer, together with an isolated item of image information that is independent of the training image. A first calculation is made in the computer by applying an image-information-processing first function to the isolated item of image information, and a second calculation is made by applying an image-processing second function to the training image. Adjustments to the first and second functions are made based on these calculation results, from which a determination of a training function is then made in the computer.
G06N3082
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: In a method and a computer for determining a training function in order to generate annotated training images, a training image and training-image information are provided to a computer, together with an isolated item of image information that is independent of the training image. A first calculation is made in the computer by applying an image-information-processing first function to the isolated item of image information, and a second calculation is made by applying an image-processing second function to the training image. Adjustments to the first and second functions are made based on these calculation results, from which a determination of a training function is then made in the computer.
A data processing system generates a result of processing a natural language query. A determination is made as to whether the natural language query or the result has a temporal characteristic. In response, a reminder notification data structure is generated having an associated scheduled reminder notification time for outputting a reminder notification of the result generated for the natural language query. The reminder notification data structure is stored in a data storage device and, at a later time from a time that the reminder notification data structure was stored in the data storage device, in response to the later time being equal to or later than the scheduled reminder notification time, a reminder notification is output to a client device associated with a user. The reminder notification specifies the result generated for the natural language query.
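The mechanism here is a deferred notification keyed on a scheduled time: the result of a temporal query is stored with a reminder time, and once the clock passes that time the stored result is pushed to the user's client device. A minimal sketch follows; the in-memory list, the seven-day default delay, and the print standing in for the client push are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ReminderNotification:
    query: str
    result: str
    scheduled_time: datetime

reminder_store = []   # stands in for the data storage device

def maybe_schedule_reminder(query: str, result: str, has_temporal_characteristic: bool,
                            delay: timedelta = timedelta(days=7)) -> None:
    """If the query or its result has a temporal characteristic, store a reminder structure."""
    if has_temporal_characteristic:
        reminder_store.append(
            ReminderNotification(query, result, datetime.now() + delay))

def deliver_due_reminders(now: datetime) -> None:
    """At a later time, output reminders whose scheduled reminder time has passed."""
    for r in list(reminder_store):
        if now >= r.scheduled_time:
            print(f"Reminder for '{r.query}': {r.result}")   # stand-in for the client push
            reminder_store.remove(r)

maybe_schedule_reminder("When is the next solar eclipse visible from Madrid?",
                        "12 August 2026", has_temporal_characteristic=True)
deliver_due_reminders(datetime.now() + timedelta(days=8))
```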
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method, in a data processing system comprising a processor and a memory that operate to implement a natural language processing system, the method comprising: generating, by the natural language processing system implemented by the data processing system, a result of processing a natural language query; determining, by the data processing system, that at least one of the natural language query or the result comprises a temporal characteristic; in response to determining that at least one of the natural language query or the result comprises a temporal characteristic, generating a reminder notification data structure having an associated scheduled reminder notification time for outputting a reminder notification of the result generated for the natural language query; storing the reminder notification data structure in a data storage device; and at a later time from a time that the reminder notification data structure was stored in the data storage device, in response to the later time being equal to or later than the scheduled reminder notification time, outputting a reminder notification to a client device associated with a user, wherein the reminder notification specifies the result generated for the natural language query. 2. The method of claim 1, wherein the reminder notification further specifies a history of changes to the result occurring from a time that the result was generated for the natural language query and the scheduled reminder notification time. 3. The method of claim 1, wherein the natural language processing system is a Question and Answer (QA) system, the natural language query is a natural language question input to the QA system, and the result is an answer generated by the QA system for the natural language question. 4. The method of claim 1, wherein the natural language processing system is a search engine, the natural language query is a search query input to the search engine, and the result comprises at least one search result generated by the search engine. 5. The method of claim 1, wherein generating a reminder notification data structure comprises: in response to determining that at least one of the natural language query or the result comprises a temporal characteristic, outputting an option to the client device of the user to create the reminder notification data structure, wherein the reminder notification data structure is created in response to the user selecting the option. 6. The method of claim 1, further comprising: identifying, by the data processing system, temporal characteristics of the natural language query; identifying, by the data processing system, temporal characteristics of the result; and calculating, by the data processing system, the scheduled reminder notification time based on the temporal characteristics of the natural language query and the temporal characteristics of the result. 7. The method of claim 6, wherein at least one of the temporal characteristics of the natural language query or temporal characteristics of the result are identified by identifying at least one of time-based keywords or key phrases, concept relationships associated with time in language of the natural language query or result, a lexical answer type or focus of the natural language query or result that is associated with time, or implicit timing aspects within the natural language query or result. 8. 
The method of claim 6, wherein identifying the temporal characteristics of the natural language query comprises identifying a temporal characteristic of a domain associated with the natural language query. 9. The method of claim 1, further comprising: automatically checking, by the data processing system, for a change in the result at a time between the current time and the scheduled reminder notification time; in response to a change in the result being identified, determining, by the data processing system, whether the change in the result is significant enough to send a change notification to the user; and in response to the change in the result being significant enough to send a change notification to the user, outputting, by the data processing system, a notification of the change in the result to the client device associated with the user. 10. The method of claim 1, wherein the scheduled reminder notification time is a time calculated based on at least one of an arbitrarily selected default timeframe, a default timeframe associated with a domain of the natural language query, or a user specified default timeframe, prior to a temporal characteristic of the result. 11-21. (canceled)
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A data processing system generates a result of processing a natural language query. A determination is made as to whether the natural language query or the result has a temporal characteristic. In response, a reminder notification data structure is generated having an associated scheduled reminder notification time for outputting a reminder notification of the result generated for the natural language query. The reminder notification data structure is stored in a data storage device and, at a later time from a time that the reminder notification data structure was stored in the data storage device, in response to the later time being equal to or later than the scheduled reminder notification time, a reminder notification is output to a client device associated with a user. The reminder notification specifies the result generated for the natural language query.
G06N504
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A data processing system generates a result of processing a natural language query. A determination is made as to whether the natural language query or the result has a temporal characteristic. In response, a reminder notification data structure is generated having an associated scheduled reminder notification time for outputting a reminder notification of the result generated for the natural language query. The reminder notification data structure is stored in a data storage device and, at a later time from a time that the reminder notification data structure was stored in the data storage device, in response to the later time being equal to or later than the scheduled reminder notification time, a reminder notification is output to a client device associated with a user. The reminder notification specifies the result generated for the natural language query.
A computer-implemented method includes a processor extracting data and metadata from a process model, where the process is comprised of activities and the metadata is associated with each activity. The processor generates at least one user story for at least one activity, where the at least one user story includes an estimate attribute reflecting a predicted timeframe for completion of at least a portion of the at least one activity. The processor updates the model to reflect the at least one user story and displays the updated model as a project plan in a project management interface on a computing resource. The processor assigns a resource to the at least one user story and dynamically updates the estimate attribute of the at least one user story to reflect a new predicted timeframe, calculates impacts to the process and displays the impacts and the new predicted timeframe in the project management interface.
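The abstract above describes a concrete data flow: activities and metadata are extracted from a process model, each activity becomes a user story with an estimate attribute, and the estimate is revised when a resource is assigned. The sketch below illustrates one possible shape of that flow; the field names, the baseline_days metadata key, and the productivity-based re-estimation rule are assumptions, not the patented calculation.

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    activity: str
    estimate_days: float          # predicted timeframe for (part of) the activity
    assigned_resource: str = ""

def generate_user_stories(process_model: list) -> list:
    """Extract one user story per activity, seeding the estimate from the activity metadata."""
    return [UserStory(a["name"], a["metadata"].get("baseline_days", 1.0))
            for a in process_model]

def assign_resource(story: UserStory, resource: str, productivity: float) -> None:
    """Assigning a resource dynamically updates the estimate attribute; here the
    new predicted timeframe simply scales with an assumed productivity factor."""
    story.assigned_resource = resource
    story.estimate_days = story.estimate_days / productivity

model = [{"name": "Review requirements", "metadata": {"baseline_days": 3.0}},
         {"name": "Draft design",        "metadata": {"baseline_days": 5.0}}]
stories = generate_user_stories(model)
assign_resource(stories[0], "analyst_A", productivity=1.5)
print(stories[0])   # estimate drops from 3.0 to 2.0 days
```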
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system for simulating operation of a network, comprising: a memory; a processor in communication with the memory; and program instructions executable by the processor via the memory to perform a method, the method comprising: simulating, by the processor, the operation of the network; accessing, by the processor, utilizing an underlying hardware platform of the processor, real time; and providing, by the processor, utilizing a simulator time clock, simulation time to components of the network, wherein the providing comprises: advancing, by the processor, the simulation time at discrete moments in real time no faster than real time when the simulating of the operation of the network is faster than the real time; and advancing the simulation time at discrete moments in real time more slowly than the real time when the simulating of the operation of the network is slower than the real time. 2. The system of claim 1, the method further comprising: extracting, by the processor, time information for control of simulation time. 3. The system of claim 1, wherein the providing the simulation time further comprises: utilizing, by the processor, a hypervisor for providing the simulation time and a simulation time advance rate to simulated components in the simulating of the operation of the network. 4. The system of claim 3, wherein the hypervisor comprises a clock control module, wherein the clock control module receives the simulation time and a time slow down factor. 5. The system of claim 3, wherein the providing the simulation time further comprises: receiving, by a clock control module of the hypervisor, the simulation time and a time slow down factor; providing, by the clock control module, updated timeout values; outputting, by the clock control module, the simulation time, and the simulation time advance rate; receiving, by a periodic timer and a one shot timer of the hypervisor, for each simulated component in the simulating of the operation of the network, the updated timeout values and based on the receiving, outputting timer interrupts; and receiving, by a system time setting mechanism of the hypervisor, the simulation time and the simulation time advance rate, wherein the simulated components receive one of: a time interrupt from one of the periodic timer or the one-shot timer, or the simulation time and the simulation time advance rate, from the system time setting mechanism. 6. The system of claim 1, wherein the simulated components are virtual machines. 7. The system of claim 6, wherein the virtual machines represent nodes of the network. 8. The system of claim 7, wherein the system time is a piece-wise linear approximation of actual simulation time, sampled at discrete moments in said real time. 9. The system of claim 1, the method further comprising: advancing, by the processor, the simulation time at discrete moments in real time, wherein the discrete moments are at constant time intervals from one another. 10. (canceled) 11. 
A method for simulating operation of a network, comprising: simulating, by a processor, operation of the network; accessing, by the processor, utilizing an underlying hardware platform of the processor, real time; and providing, by the processor, a simulation time to simulated components of a virtual network, at discrete moments in real time, wherein the providing comprises: advancing, by the processor, the simulation time no faster than real time when a simulator conducts operations faster than the real time; and advancing, by the processor, the simulation time, more slowly than said real time when the simulator conducts operations more slowly than the real time. 12. The method of claim 11, wherein the simulation time is driven by a timestamp of a next event to be processed in the simulation. 13. The method of claim 11, wherein the simulation time is driven by receipt of a data packet by a node in the network. 14. The method of claim 11, wherein the simulated components are virtual machines. 15. The method of claim 14, wherein the virtual machines represent nodes of the network. 16. The method of claim 11, wherein said system time is a piece-wise linear approximation of actual simulation time in said simulator, sampled at discrete moments in said real time. 17. The method of claim 11, wherein the discrete moments are at constant time intervals from one another. 18. (canceled) 19. A computer readable non-transitory storage medium storing instructions of a computer program which when executed by a computer system results in performance of steps of a method for disseminating content, comprising: simulating, by the processor, the operation of the network; accessing, by the processor, utilizing an underlying hardware platform of the processor, real time; and providing, by the processor, utilizing a simulator time clock, simulation time to components of the network, wherein the providing comprises: advancing, by the processor, the simulation time at discrete moments in real time no faster than real time when the simulating of the operation of the network is faster than the real time; and advancing the simulation time at discrete moments in real time more slowly than the real time when the simulating of the operation of the network is slower than the real time.
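Claims 1 and 11 above turn on a single pacing rule for the simulator clock: simulation time is advanced at discrete real-time moments but is never allowed to run ahead of (scaled) real time when the simulation is fast, and simply lags when the simulation is slow. A hedged sketch of that rule follows; the slow-down factor handling and the capping formula are one plausible interpretation, not the claimed hypervisor implementation.

```python
import time

class SimulationClock:
    """Provides simulation time to simulated components, paced against real time."""
    def __init__(self, slow_down_factor: float = 1.0):
        self.slow_down_factor = slow_down_factor   # >1 slows simulation relative to real time
        self.real_start = time.monotonic()         # real time from the underlying hardware platform
        self.sim_time = 0.0

    def advance_to(self, proposed_sim_time: float) -> float:
        """Advance simulation time at a discrete moment in real time, capped so it never
        exceeds (scaled) elapsed real time; if the simulator is slow, the returned
        simulation time simply trails real time."""
        elapsed_real = time.monotonic() - self.real_start
        cap = elapsed_real / self.slow_down_factor
        self.sim_time = min(proposed_sim_time, cap)
        return self.sim_time

clock = SimulationClock(slow_down_factor=2.0)
time.sleep(0.1)
print(clock.advance_to(proposed_sim_time=1.0))   # ~0.05: capped at elapsed real time / slow-down factor
```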
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A computer-implemented method includes a processor extracting data and metadata from a process model, where the process is comprised of activities and the metadata is associated with each activity. The processor generates at least one user story for at least one activity, where the at least one user story includes an estimate attribute reflecting a predicted timeframe for completion of at least a portion of the at least one activity. The processor updates the model to reflect the at least one user story and displays the updated model as a project plan in a project management interface on a computing resource. The processor assigns a resource to the at least one user story and dynamically updates the estimate attribute of the at least one user story to reflect a new predicted timeframe, calculates impacts to the process and displays the impacts and the new predicted timeframe in the project management interface.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A computer-implemented method includes a processor extracting data and metadata from a process model, where the process is comprised of activities and the metadata is associated with each activity. The processor generates at least one user story for at least one activity, where the at least one user story includes an estimate attribute reflecting a predicted timeframe for completion of at least a portion of the at least one activity. The processor updates the model to reflect the at least one user story and displays the updated model as a project plan in a project management interface on a computing resource. The processor assigns a resource to the at least one user story and dynamically updates the estimate attribute of the at least one user story to reflect a new predicted timeframe, calculates impacts to the process and displays the impacts and the new predicted timeframe in the project management interface.
A physical environment is equipped with a plurality of sensors (e.g., motion sensors). As individuals perform various activities within the physical environment, sensor readings are received from one or more of the sensors. Based on the sensor readings, activities being performed by the individuals are recognized and the sensor data is labeled based on the recognized activities. Future activity occurrences are predicted based on the labeled sensor data. Activity prompts may be generated and/or facility automation may be performed for one or more future activity occurrences.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method comprising: receiving a set of activity recognition training data, the activity recognition training data including a plurality of sensor events, wherein each sensor event is labeled with a corresponding activity class; training an activity recognition module based on the activity recognition training data; receiving sensor events from a smart environment; using the trained activity recognition module to label each sensor event with an activity class to generate labeled sensor event data; training an activity occurrence predictor based, at least in part, on the labeled sensor event data; and using the activity occurrence predictor to predict a future occurrence time of one or more activities within the smart environment. 2. A method as recited in claim 1, wherein the future occurrence time is expressed as a date/time. 3. A method as recited in claim 1, wherein the future occurrence time is expressed as a number of time units until a next occurrence of a particular activity. 4. A method as recited in claim 3, wherein a time unit is a second. 5. A method as recited in claim 3, wherein a time unit is a minute. 6. A method as recited in claim 1, wherein training the activity occurrence predictor includes training a regressor function. 7. A method as recited in claim 6, wherein training the regressor function includes training a respective regressor function for each of a plurality of activities. 8. A method as recited in claim 1, wherein training the activity occurrence predictor includes: determining local features of the labeled sensor event data; determining context features associated with the labeled sensor event data; determining a loss function associated with the labeled sensor event data; using a multi-output regression learner to train the activity occurrence predictor based on the local features and the context features to minimize the loss function. 9. A method as recited in claim 8, wherein the local features include sensor event data labeled with recognized activity classes. 10. A method as recited in claim 8, wherein the context features include data based on previous activity occurrence predictions. 11. A method as recited in claim 8, wherein the loss function is expressed as a root mean squared error. 12. A method as recited in claim 1, further comprising, performing facility automation within the smart environment based, at least in part, on a predicted future occurrence time of a particular activity. 13. A method comprising: determining an activity of interest; querying an activity prediction server for a predicted future time associated with the activity of interest; receiving, from the server, the predicted future time associated with the activity of interest; comparing the predicted future time to a current time; and when the current time is equal to or greater than the predicted future time, presenting an activity prompt associated with the activity of interest. 14. A method as recited in claim 13, wherein querying the activity prediction server includes periodically querying the activity prediction server. 15. 
A method as recited in claim 13, wherein determining the activity of interest includes one or more of: determining an activity specified in association with user-configured settings; determining an activity for which no activity prompts are currently pending; or determining an activity for which an activity prompt is currently pending and is scheduled to be presented within a threshold time period. 16. A personal computing device configured to perform the method as recited in claim 13. 17. A mobile phone configured to perform the method as recited in claim 13. 18. A personal computing device comprising: a processor; a memory communicatively coupled to the processor; an activity prompting application stored in the memory and executed on the processor, to configure the personal computing device to perform the method as recited in claim 13. 19. An activity prediction server system comprising: a processor; a memory communicatively connected to the processor; a sensor event data store stored in the memory; an activity recognition module stored in the memory and executed on the processor, the activity recognition module configured to: receive sensor event data from sensors within a smart environment; label a received sensor event as being triggered during a recognized activity; and store the labeled sensor event data in the sensor event data store; and an activity prediction module stored in the memory and executed on the processor, the activity prediction module configured to analyze the labeled sensor event data in the sensor event data store to predict a future time of occurrence of a particular activity within the smart environment.
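As a rough illustration of the activity occurrence prediction in these claims, the sketch below fits one least-squares regressor per activity class that maps labeled sensor-event features to minutes until the next occurrence; the feature choices and the pure-NumPy regressor are assumptions for illustration, not the patent's multi-output regression learner:

```python
import numpy as np

def train_activity_regressors(features, minutes_until_next, activity_ids):
    """Fit one linear regressor per activity: features -> minutes until next occurrence."""
    regressors = {}
    for act in np.unique(activity_ids):
        mask = activity_ids == act
        X = np.hstack([features[mask], np.ones((mask.sum(), 1))])  # add bias column
        y = minutes_until_next[mask]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        regressors[act] = coef
    return regressors

def predict_minutes(regressors, activity, feature_row):
    x = np.append(feature_row, 1.0)
    return float(x @ regressors[activity])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))           # e.g. hour of day, last sensor id, recent event count
    acts = rng.integers(0, 2, size=200)     # two activity classes
    y = 30 + X @ np.array([5.0, -2.0, 1.0]) + rng.normal(scale=2.0, size=200)
    models = train_activity_regressors(X, y, acts)
    print("predicted minutes until next occurrence:",
          round(predict_minutes(models, 0, X[0]), 1))
```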
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A physical environment is equipped with a plurality of sensors (e.g., motion sensors). As individuals perform various activities within the physical environment, sensor readings are received from one or more of the sensors. Based on the sensor readings, activities being performed by the individuals are recognized and the sensor data is labeled based on the recognized activities. Future activity occurrences are predicted based on the labeled sensor data. Activity prompts may be generated and/or facility automation may be performed for one or more future activity occurrences.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A physical environment is equipped with a plurality of sensors (e.g., motion sensors). As individuals perform various activities within the physical environment, sensor readings are received from one or more of the sensors. Based on the sensor readings, activities being performed by the individuals are recognized and the sensor data is labeled based on the recognized activities. Future activity occurrences are predicted based on the labeled sensor data. Activity prompts may be generated and/or facility automation may be performed for one or more future activity occurrences.
A computer-implemented method of evaluating information confidence based on a cognitive trait of a user, is provided, the method including receiving information from a user as an answer to a question in an active learning question and answer system; monitoring a user for a cognitive behavior indicator when the user is providing the information; determining a cognitive trait based on the cognitive behavior indicator; determining a quantified level of information confidence for the information based on the cognitive trait; and annotating the information with the cognitive trait and the quantified level of information confidence. A system and a computer program product for evaluating information confidence based on a cognitive behavior of a user are also provided.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-implemented method of evaluating information confidence based on cognitive behavior indicators of a user, the method comprising: receiving, by a processor, information from a user as an answer to a question in an active learning question and answer system; monitoring, by the processor, the user for one or more cognitive behavior indicators when the user is providing the information, wherein the one or more cognitive behavior indicators comprise one or more biometric measurements of the user at a time the information is provided by the user and measures of an influence of an external stimuli on the user at the time the information is provided by the user; determining, by the processor, a quantified level of information confidence for the information received from the user based on the one or more biometric measurements of the user and the measures of the influence of an external stimuli on the user; and annotating, by the processor, the information with the cognitive behavior indicators, a user profile associated with the user, and the quantified level of information confidence. 2. (canceled) 3. The computer-implemented method of claim 1, wherein the cognitive behavior indicator comprises a physiological measurement. 4. The computer-implemented method of claim 1, wherein the cognitive behavior indicator comprises one or more of user eye tracking, user typing speed, user heart rate, user breathing rate, user blink frequency, user skin conductance, electroencephalogram information, user body temperature, user perspiration quantity, user blood pressure and user amount of time elapsed during information input. 5. The computer-implemented method of claim 1, further comprising annotating the information, the cognitive trait and the quantified level of information confidence for the information received from the user with information received from and cognitive behavior traits of other users. 6. The computer-implemented method of claim 1, further comprising excluding information from the user if the quantified level of the information confidence is below a designated threshold. 7. The computer-implemented method of claim 1, further comprising ranking results of the information and cognitive traits of multiple users based on the quantified level of information confidence. 8. The computer-implemented method of claim 1, wherein the cognitive behavior indicator is measured by a user-wearable device. 9. The computer-implemented method of claim 1, further comprising annotating the information with user characteristics stored in a user profile. 10. 
A system for evaluating information confidence based on cognitive behavior indicators of a user, the system comprising: a sensor to detect a cognitive behavior indicator of a user; a memory; a processor communicatively coupled to the memory and the sensor, where the processor is configured to perform: receiving information from a user as an answer to a question in an active learning question and answer system; monitoring the user for one or more cognitive behavior indicator detected by the sensor when the user is providing the information, wherein the one or more cognitive behavior indicators comprise one or more biometric measurements of the user at a time the information is provided by the user and measures of an influence of an external stimuli on the user at the time the information is provided by the user; determining a quantified level of information confidence for the information received from the user based on the one or more biometric measurements of the user and the measures of the influence of an external stimuli on the user; and annotating the information with the cognitive behavior indicators, a user profile associated with the user, and the quantified level of information confidence. 11. (canceled) 12. The system of claim 10, wherein the cognitive behavior indicator comprises a physiological measurement. 13. The system of claim 10, wherein the cognitive behavior indicator comprises one or more of user eye tracking, user typing speed, user heart rate, user breathing rate, user blink frequency, user skin conductance, electroencephalogram information, user body temperature, user perspiration quantity, user blood pressure and user amount of time elapsed during information input. 14. The system of claim 10, wherein the processor is further configured to perform annotating the information, the cognitive trait and the quantified level of information confidence for the information received from the user with information received from and cognitive behavior traits of other users. 15. A computer program product for evaluating information confidence based on cognitive behavior indicators of a user, the computer program product comprising: a computer readable storage medium having program instructions embodied therewith, wherein the computer readable storage medium is not a transitory signal per se, the program instructions executable by a processor to cause the processor to perform a method comprising: receiving information from a user as an answer to a question in an active learning question and answer system; monitoring the user for one or more cognitive behavior indicators detected by a sensor when the user is providing the information, wherein the one or more cognitive behavior indicators comprise one or more biometric measurements of the user at a time the information is provided by the user and measures of an influence of an external stimuli on the user at the time the information is provided by the user; determining a quantified level of information confidence for the information received from the user based on the one or more biometric measurements of the user and the measures of external stimuli on the user; and annotating the information with the cognitive behavior indicators, a user profile associated with the user, and the quantified level of information confidence. 16. (canceled) 17. The computer program product of claim 15, wherein the cognitive behavior indicator comprises a physiological measurement. 18. 
The computer program product of claim 15, wherein the cognitive behavior indicator comprises one or more of user eye tracking, user typing speed, user heart rate, user breathing rate, user blink frequency, user skin conductance, electroencephalogram information, user body temperature, user perspiration quantity, user blood pressure and user amount of time elapsed during information. 19. The computer program product of claim 15, wherein the computer readable program code is further configured to perform annotating the information, the cognitive trait and the quantified level of information confidence for the information received from the user with information received from and cognitive behavior traits of other users. 20. The computer program product of claim 15, wherein the computer readable program code is further configured to perform excluding information from the user if the quantified level of the information confidence is below a designated threshold.
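A small sketch of one way the quantified level of information confidence could be derived from biometric indicators and attached to an answer, as these claims describe; the indicator names, normal ranges, and linear decay rule are illustrative assumptions rather than anything specified in the patent:

```python
from typing import Dict

# Illustrative weights and normal ranges; these are assumptions, not values from the patent.
NORMAL_RANGES = {"heart_rate": (60, 100), "typing_speed_wpm": (30, 80), "blink_per_min": (10, 20)}

def confidence_from_indicators(indicators: Dict[str, float]) -> float:
    """Map biometric indicators to a 0..1 confidence score.

    Each indicator contributes 1.0 when inside its normal range and decays
    linearly with its relative distance outside the range.
    """
    scores = []
    for name, value in indicators.items():
        lo, hi = NORMAL_RANGES[name]
        if lo <= value <= hi:
            scores.append(1.0)
        else:
            overshoot = (lo - value) / lo if value < lo else (value - hi) / hi
            scores.append(max(0.0, 1.0 - overshoot))
    return sum(scores) / len(scores)

def annotate_answer(answer: str, indicators: Dict[str, float], user_profile: Dict) -> Dict:
    """Annotate the answer with the indicators, the user profile, and the confidence level."""
    conf = confidence_from_indicators(indicators)
    return {"answer": answer, "indicators": indicators,
            "user_profile": user_profile, "confidence": round(conf, 3)}

if __name__ == "__main__":
    record = annotate_answer("Paris",
                             {"heart_rate": 110, "typing_speed_wpm": 55, "blink_per_min": 14},
                             {"user_id": "u42"})
    print(record)
```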
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: A computer-implemented method of evaluating information confidence based on a cognitive trait of a user, is provided, the method including receiving information from a user as an answer to a question in an active learning question and answer system; monitoring a user for a cognitive behavior indicator when the user is providing the information; determining a cognitive trait based on the cognitive behavior indicator; determining a quantified level of information confidence for the information based on the cognitive trait; and annotating the information with the cognitive trait and the quantified level of information confidence. A system and a computer program product for evaluating information confidence based on a cognitive behavior of a user are also provided.
G06N7005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A computer-implemented method of evaluating information confidence based on a cognitive trait of a user, is provided, the method including receiving information from a user as an answer to a question in an active learning question and answer system; monitoring a user for a cognitive behavior indicator when the user is providing the information; determining a cognitive trait based on the cognitive behavior indicator; determining a quantified level of information confidence for the information based on the cognitive trait; and annotating the information with the cognitive trait and the quantified level of information confidence. A system and a computer program product for evaluating information confidence based on a cognitive behavior of a user are also provided.
Training instances from a target domain are represented by feature vectors storing values for a set of features, and are labeled by labels from a set of labels. Both a noise marginalizing transform and a weighting of one or more source domain classifiers are simultaneously learned by minimizing the expectation of a loss function that is dependent on the feature vectors corrupted with noise represented by a noise probability density function, the labels, and the one or more source domain classifiers operating on the feature vectors corrupted with the noise. An input instance from the target domain is labeled with a label from the set of labels by operations including applying the learned noise marginalizing transform to an input feature vector representing the input instance and applying the one or more source domain classifiers weighted by the learned weighting to the input feature vector representing the input instance.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A device comprising: a computer programmed to perform a machine learning method operating on training instances from a target domain, the training instances represented by feature vectors storing values for a set of features and labeled by labels from a set of labels, the machine learning method including the operations of: optimizing a loss function dependent on all of: the feature vectors representing the training instances from the target domain corrupted with noise, the labels of the training instances from the target domain, and one or more source domain classifiers operating on the feature vectors representing the training instances from the target domain corrupted with the noise, to simultaneously learn both a noise marginalizing transform and a weighting of the one or more source domain classifiers; and generating a label prediction for an unlabeled input instance from the target domain that is represented by an input feature vector storing values for the set of features by operations including applying the learned noise marginalizing transform to the input feature vector and applying the one or more source domain classifiers weighted by the learned weighting to the input feature vector. 2. The device of claim 1 wherein the loss function is not dependent on any training instance from any domain other than the target domain. 3. The device of claim 1 wherein the loss function is a quadratic loss function, the one or more source domain classifiers are linear classifiers, and the optimizing of the quadratic loss function comprises evaluating a closed form solution of the loss function for a vector representing parameters of the noise marginalizing transform and the weighting of the one or more source domain classifiers. 4. The device of claim 3 wherein the closed form solution is dependent upon the statistical expectation and variance values of the training instances from the target domain corrupted with the noise represented by a noise probability density function (noise pdf). 5. The device of claim 1 wherein the loss function is an exponential loss function, the one or more source domain classifiers are linear classifiers, and the optimizing of the exponential loss function is performed analytically using statistical values of the training instances from the target domain corrupted with the noise represented by a noise probability density function (noise pdf). 6. The device of claim 1 wherein the loss function L is optimized by optimizing: $\mathcal{L}(w,z) = \sum_{n=1}^{N} \mathbb{E}_{p(\tilde{x}_n \mid x_n)}\left[ L(\tilde{x}_n, f, y_n; w, z) \right]$ where $x_n$, n=1, . . . , N are the feature vectors representing the training instances from the target domain, $\tilde{x}_n$, n=1, . . . , N are the feature vectors representing the training instances from the target domain corrupted with the noise, $p(\tilde{x}_n \mid x_n)$ is a noise probability density function (noise pdf) representing the noise, $f$ represents the one or more source domain classifiers, $w$ represents parameters of the noise marginalizing transform, $z$ represents the weighting of the one or more source domain classifiers, and $\mathbb{E}$ is the statistical expectation. 7.
The device of claim 6 wherein generating the label prediction for the unlabeled input instance from the target domain comprises computing the label prediction $\hat{y}_{in}$ according to: $\hat{y}_{in} = (w^*)^T x_{in} + (z^*)^T f(x_{in})$ where $x_{in}$ is the input feature vector representing the unlabeled input instance from the target domain, $w^*$ represents the learned parameters of the noise marginalizing transform, and $z^*$ represents the learned weighting of the one or more source domain classifiers. 8. The device of claim 1 wherein the loss function L is a quadratic loss function and the optimizing of the quadratic loss function L comprises minimizing: $\mathcal{L}(w,z) = \frac{1}{N} \sum_{n=1}^{N} \mathbb{E}_{p(\tilde{x}_n \mid x_n)}\left[ (w^T \tilde{x}_n + z^T f(\tilde{x}_n) - y_n)^2 \right]$ where $x_n$, n=1, . . . , N are the feature vectors representing the training instances from the target domain, $\tilde{x}_n$, n=1, . . . , N are the feature vectors representing the training instances from the target domain corrupted with the noise, $p(\tilde{x}_n \mid x_n)$ is a noise probability density function (noise pdf) representing the noise, $f$ represents the one or more source domain classifiers, $w$ represents parameters of the noise marginalizing transform, $z$ represents the weighting of the one or more source domain classifiers, and $\mathbb{E}$ is the statistical expectation. 9. The device of claim 8 wherein the one or more source domain classifiers $f$ are linear classifiers, and the minimizing comprises evaluating a closed form solution of $\mathcal{L}(w,z)$ for a vector $\begin{bmatrix} w^* \\ z^* \end{bmatrix}$ where $w^*$ represents the learned parameters of the noise marginalizing transform and $z^*$ represents the learned weighting of the one or more source domain classifiers. 10. The device of claim 1 wherein the loss function L is an exponential loss function and the optimizing of the exponential loss function L comprises minimizing: $\mathcal{L}(w,z) = \sum_{n=1}^{N} \mathbb{E}_{p(\tilde{x}_n \mid x_n)}\left[ e^{-y_n (w^T \tilde{x}_n + z^T f(\tilde{x}_n))} \right]$ where $x_n$, n=1, . . . , N are the feature vectors representing the training instances from the target domain, $\tilde{x}_n$, n=1, . . . , N are the feature vectors representing the training instances from the target domain corrupted with the noise, $p(\tilde{x}_n \mid x_n)$ is a noise probability density function (noise pdf) representing the noise, $f$ represents the one or more source domain classifiers, $w$ represents parameters of the noise marginalizing transform, $z$ represents the weighting of the one or more source domain classifiers, and $\mathbb{E}$ is the statistical expectation. 11. The device of claim 1 wherein one of: each training instance from the target domain represents a corresponding image, the set of features is a set of image features, the one or more source domain classifiers are one or more source domain image classifiers, and the machine learning method includes the further operation of generating each training instance from the target domain by extracting values for the set of image features from the corresponding image; and each training instance from the target domain represents a corresponding text-based document, the set of features is a set of text features, the one or more source domain classifiers are one or more source domain document classifiers, and the machine learning method includes the further operation of generating each training instance from the target domain by extracting values for the set of text features from the corresponding text-based document. 12.
A non-transitory storage medium storing instructions executable by a computer to perform a machine learning method operating on N training instances from a target domain, the training instances represented by feature vectors $x_n$, n=1, . . . , N storing values for a set of features and labeled by labels $y_n$, n=1, . . . , N from a set of labels, the machine learning method including the operations of: optimizing the function $\mathcal{L}(w,z)$ given by: $\mathcal{L}(w,z) = \sum_{n=1}^{N} \mathbb{E}_{p(\tilde{x}_n \mid x_n)}\left[ L(\tilde{x}_n, f, y_n; w, z) \right]$ with respect to $w$ and $z$ where $\tilde{x}_n$, n=1, . . . , N are the feature vectors representing the training instances from the target domain corrupted with noise, $p(\tilde{x}_n \mid x_n)$ is a noise probability density function (noise pdf) representing the noise, $f$ represents one or more source domain classifiers, L is a loss function, $w$ represents parameters of a noise marginalizing transform, $z$ represents a weighting of the one or more source domain classifiers, and $\mathbb{E}$ is the statistical expectation, to generate learned parameters $w^*$ of the noise marginalizing transform and a learned weighting $z^*$ of the one or more source domain classifiers; and generating a label prediction $\hat{y}_{in}$ for an unlabeled input instance from the target domain represented by input feature vector $x_{in}$ by operations including applying the noise marginalizing transform with the learned parameters $w^*$ to the input feature vector $x_{in}$ and applying the one or more source domain classifiers weighted by the learned weighting $z^*$ to the input feature vector $x_{in}$. 13. The non-transitory storage medium of claim 12 wherein the loss function L is the quadratic loss function $(w^T \tilde{x}_n + z^T f(\tilde{x}_n) - y_n)^2$. 14. The non-transitory storage medium of claim 12 wherein the loss function L is a quadratic loss function, the one or more source domain classifiers $f$ are linear classifiers, and the optimizing comprises evaluating a closed form solution of $\mathcal{L}(w,z)$ for a vector $\begin{bmatrix} w^* \\ z^* \end{bmatrix}$ where $w^*$ represents the learned parameters of the noise marginalizing transform and $z^*$ represents the learned weighting of the one or more source domain classifiers. 15. The non-transitory storage medium of claim 12 wherein the loss function L is the exponential loss function $e^{-y_n (w^T \tilde{x}_n + z^T f(\tilde{x}_n))}$. 16. The non-transitory storage medium of claim 12 wherein each training instance from the target domain represents a corresponding image, the set of features is a set of image features, the one or more source domain classifiers are one or more source domain image classifiers, and the machine learning method includes the further operation of: generating the feature vector $x_n$ representing each training instance by extracting values for the set of image features from the corresponding image. 17. The non-transitory storage medium of claim 12 wherein each training instance from the target domain represents a corresponding text-based document, the set of features is a set of text features, the one or more source domain classifiers are one or more source domain document classifiers, and the machine learning method includes the further operation of: generating the feature vector $x_n$ representing each training instance by extracting values for the set of text features from the corresponding text-based document. 18.
A machine learning method operating on training instances from a target domain, the training instances represented by feature vectors storing values for a set of features and labeled by labels from a set of labels, the machine learning method comprising: simultaneously learning both a noise marginalizing transform and a weighting of one or more source domain classifiers by minimizing the expectation of a loss function dependent on the feature vectors corrupted with noise represented by a noise probability density function, the labels, and the one or more source domain classifiers operating on the feature vectors corrupted with the noise; and labeling an unlabeled input instance from the target domain with a label from the set of labels by operations including applying the learned noise marginalizing transform to an input feature vector representing the unlabeled input instance and applying the one or more source domain classifiers weighted by the learned weighting to the input feature vector representing the unlabeled input instance; wherein the simultaneous learning and the labeling are performed by a computer. 19. The method of claim 18 wherein the loss function is not dependent on any feature vector representing a training instance from any domain other than the target domain. 20. The method of claim 18 wherein the loss function is a quadratic loss function and the simultaneous learning comprises evaluating a closed form solution of the loss function for a vector representing parameters of the noise marginalizing transform and the weighting of the one or more source domain classifiers.
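The claims above call for minimizing the expectation of a loss over noise-corrupted features, with an analytic or closed-form solution in the linear, quadratic-loss case. The sketch below instead approximates that expectation by Monte-Carlo sampling of blank-out (dropout) corruptions and solves a single least-squares problem for the noise marginalizing transform w and the classifier weighting z; the dropout noise model, sample count, and toy source classifiers are assumptions for illustration, not the patent's closed form:

```python
import numpy as np

def fit_noise_marginalized(X, y, source_classifiers, dropout_p=0.3, n_samples=20, seed=0):
    """Approximately minimize E[(w^T x_tilde + z^T f(x_tilde) - y)^2] over dropout noise.

    The expectation over the noise pdf is approximated by averaging over
    `n_samples` corrupted copies of the target-domain training features.
    Returns (w, z): the noise marginalizing transform and classifier weighting.
    """
    rng = np.random.default_rng(seed)
    rows, targets = [], []
    for _ in range(n_samples):
        X_tilde = X * rng.binomial(1, 1.0 - dropout_p, size=X.shape)   # blank-out noise
        F = np.column_stack([clf(X_tilde) for clf in source_classifiers])
        rows.append(np.hstack([X_tilde, F]))
        targets.append(y)
    A, b = np.vstack(rows), np.concatenate(targets)
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    d = X.shape[1]
    return theta[:d], theta[d:]

def predict(x, w, z, source_classifiers):
    f = np.array([clf(x[None, :])[0] for clf in source_classifiers])
    return float(w @ x + z @ f)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 5))
    y = np.sign(X[:, 0] + 0.5 * X[:, 1])
    sources = [lambda M: M @ np.array([1.0, 0, 0, 0, 0]),   # crude linear source-domain classifiers
               lambda M: M @ np.array([0, 1.0, 0, 0, 0])]
    w, z = fit_noise_marginalized(X, y, sources)
    print("label prediction:", np.sign(predict(X[0], w, z, sources)))
```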
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Training instances from a target domain are represented by feature vectors storing values for a set of features, and are labeled by labels from a set of labels. Both a noise marginalizing transform and a weighting of one or more source domain classifiers are simultaneously learned by minimizing the expectation of a loss function that is dependent on the feature vectors corrupted with noise represented by a noise probability density function, the labels, and the one or more source domain classifiers operating on the feature vectors corrupted with the noise. An input instance from the target domain is labeled with a label from the set of labels by operations including applying the learned noise marginalizing transform to an input feature vector representing the input instance and applying the one or more source domain classifiers weighted by the learned weighting to the input feature vector representing the input instance.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Training instances from a target domain are represented by feature vectors storing values for a set of features, and are labeled by labels from a set of labels. Both a noise marginalizing transform and a weighting of one or more source domain classifiers are simultaneously learned by minimizing the expectation of a loss function that is dependent on the feature vectors corrupted with noise represented by a noise probability density function, the labels, and the one or more source domain classifiers operating on the feature vectors corrupted with the noise. An input instance from the target domain is labeled with a label from the set of labels by operations including applying the learned noise marginalizing transform to an input feature vector representing the input instance and applying the one or more source domain classifiers weighted by the learned weighting to the input feature vector representing the input instance.
A method predicts the fall risk of a user based on a machine learning model. The model is trained using data about the user, which may be from wearable sensors and depth sensors, manually input by the user, and received from other types of sources. Data about a population of users and data from structured tests completed by the user can also be used to train the model. The model uses features and motifs discovered based on the data that correlate to fall risk events to update fall risk scores and predictions. The user is provided a recommendation describing how the user can reduce a predicted fall risk for the user.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-implemented method comprising: receiving, from one or more sensors worn by a user or placed near a user, sensor data tracked for determining fall risk for the user; receiving, from the user or from records associated with the user, one or more user inputs providing data about factors for the user that indicate the fall risk of the user; collecting, from one or more external sources, data about fall risk of a population of users; providing the received sensor data, the received one or more user inputs, and the collected data from the one or more external sources to a machine learning model configured for determining fall risk of users, the machine learning model having been trained based on data about a plurality of users that was received from sensors worn by or placed near a user of the plurality of users, from user inputs by a user of the plurality of users, and from external sources. determining a fall risk score for the user from the machine learning model based on the data about the user, the fall risk score used in determining a prediction as to the likelihood of the user experiencing a fall; and notifying, in response to the determined fall risk score being greater than a predetermined threshold, the user with information about the fall risk score and the determined prediction as to the likelihood of the user experiencing a fall. 2. The method of claim 1, further comprising conducting a structured test with the user regularly over time, the structured test including asking the user to perform certain actions to test a current state of the user and current fall risk for the user, wherein results of the structured test are used in training the machine learning model. 3. The method of claim 2, wherein the structured test comprises a test for measuring a mean radius of trace of the user while standing, the test comprising: providing instructions to the user to stand up for the structured test; measuring, with sensor data collected from the one or more sensors, a starting position of the user and a mean radius of trace of the user while the user is standing over a period of time, the mean radius of trace representing the average distance that the user deviated from the starting position over the period of time; and updating the prediction of the fall risk of the user based on the measured sway radius. 4. The method of claim 2, further comprising periodically retraining the machine learning model based on data collected during the structured test completed by the user. 5. The method of claim 1, further comprising: generating a recommendation describing how the user can reduce the determined fall risk for the user based on data obtained from the one or more sensors, the one or more user inputs, and the one or more external sources; and providing the recommendation to the user or a person associated with the user. 6. The method of claim 5, wherein the recommendation comprises a description of conditions in an environment of the user that the user should change to reduce the determined fall risk or a description of a change to the user's posture or gait to reduce the determined fall risk. 7. 
The method of claim 1, further comprising performing activity classification on the user based on the received sensor data, the activity classification used to determine a daily activity type and corresponding duration of the user and to determine based on body position of the user whether an assistive device is used by the user. 8. The method of claim 1, wherein the one or more sensors comprise at least one depth sensor placed in a room in which fall risk of the user is tracked, the at least one depth sensor taking measurements of a posture of the user while standing, the posture mapped to one or more axes to determine a center of gravity of the user, the center of gravity used in training the machine learning model. 9. The method of claim 1, wherein the user inputs comprise information associated with one or more of the following: the user's age; the user's history of previous falls; the user's use of assistive devices; the user's vision; the user's history of neurological diseases; and lighting conditions. 10. The method of claim 1, wherein the predetermined threshold is based on the information associated with the mean and standard deviation of the number of falls experienced by users per a predetermined period of time within the population of users. 11. The method of claim 1, wherein providing the received sensor data, the received one or more user inputs, and the collected data from the one or more external sources to the machine learning model further comprises: generating a multi-dimensional matrix including the data about the user organized by dimension; and providing features identified using the multi-dimensional matrix to the machine learning model, the features used to train the machine learning model. 12. A computer program product stored on a non-transitory computer-readable medium that includes instructions that, when loaded into memory, cause a processor to perform a method, the method comprising: receiving, from one or more sensors worn by a user or placed near a user, sensor data tracked for determining fall risk for the user; receiving, from the user or from records associated with the user, one or more user inputs providing data about factors for the user that indicate the fall risk of the user; collecting, from one or more external sources, data about fall risk of a population of users; providing the received sensor data, the received one or more user inputs, and the collected data from the one or more external sources to a machine learning model configured for determining fall risk of users, the machine learning model having been trained based on data about a plurality of users that was received from sensors worn by or placed near a user of the plurality of users, from user inputs by a user of the plurality of users, and from external sources. determining a fall risk score for the user from the machine learning model based on the data about the user, the fall risk score used in determining a prediction as to the likelihood of the user experiencing a fall; and notifying, in response to the determined fall risk score being greater than a predetermined threshold, the user with information about the fall risk score and the determined prediction as to the likelihood of the user experiencing a fall. 13. 
The computer program product of claim 12, further comprising conducting a structured test with the user regularly over time, the structured test including asking the user to perform certain actions to test a current state of the user and current fall risk for the user, wherein results of the structured test are used in training the machine learning model. 14. The computer program product of claim 12, further comprising: generating a recommendation describing how the user can reduce the determined fall risk for the user based on data obtained from the one or more sensors, the one or more user inputs, and the one or more external sources; and providing the recommendation to the user or a person associated with the user. 15. The computer program product of claim 12, further comprising performing activity classification on the user based on the received sensor data, the activity classification used to determine a daily activity type and corresponding duration of the user and to determine based on body position of the user whether an assistive device is used by the user. 16. The computer program product of claim 12, wherein providing the received sensor data, the received one or more user inputs, and the collected data from the one or more external sources to the machine learning model further comprises: generating a multi-dimensional matrix including the data about the user organized by dimension; and providing features identified using the multi-dimensional matrix to the machine learning model, the features used to train the machine learning model. 17. A system comprising: a processor configured to execute instructions; a non-transitory computer-readable memory storing instructions executable by the processor and causing the processor to carry out the steps of: receiving, from one or more sensors worn by a user or placed near a user, sensor data tracked for determining fall risk for the user; receiving, from the user or from records associated with the user, one or more user inputs providing data about factors for the user that indicate the fall risk of the user; collecting, from one or more external sources, data about fall risk of a population of users; providing the received sensor data, the received one or more user inputs, and the collected data from the one or more external sources to a machine learning model configured for determining fall risk of users, the machine learning model having been trained based on data about a plurality of users that was received from sensors worn by or placed near a user of the plurality of users, from user inputs by a user of the plurality of users, and from external sources. determining a fall risk score for the user from the machine learning model based on the data about the user, the fall risk score used in determining a prediction as to the likelihood of the user experiencing a fall; and notifying, in response to the determined fall risk score being greater than a predetermined threshold, the user with information about the fall risk score and the determined prediction as to the likelihood of the user experiencing a fall. 18. The system of claim 17, further comprising conducting a structured test with the user regularly over time, the structured test including asking the user to perform certain actions to test a current state of the user and current fall risk for the user, wherein results of the structured test are used in training the machine learning model. 19. 
The system of claim 17, further comprising: generating a recommendation describing how the user can reduce the determined fall risk for the user based on data obtained from the one or more sensors, the one or more user inputs, and the one or more external sources; and providing the recommendation to the user or a person associated with the user. 20. The system of claim 17, further comprising performing activity classification on the user based on the received sensor data, the activity classification used to determine a daily activity type and corresponding duration of the user and to determine based on body position of the user whether an assistive device is used by the user. 21. The system of claim 17, wherein providing the received sensor data, the received one or more user inputs, and the collected data from the one or more external sources to the machine learning model further comprises: generating a multi-dimensional matrix including the data about the user organized by dimension; and providing features identified using the multi-dimensional matrix to the machine learning model, the features used to train the machine learning model.
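As a small worked example of the structured test in these claims, the sketch below computes the mean radius of trace (the average distance of each sampled standing position from the starting position) from depth-sensor samples and flags elevated sway; the 5 cm threshold and the simulated trace are illustrative assumptions, not values from the patent:

```python
import numpy as np

def mean_radius_of_trace(positions):
    """Average distance of each sampled position from the starting position.

    `positions` is an (N, 2) or (N, 3) array of positions captured while the
    user stands; the first row is taken as the starting position.
    """
    positions = np.asarray(positions, dtype=float)
    start = positions[0]
    return float(np.mean(np.linalg.norm(positions - start, axis=1)))

def fall_risk_flag(radius_m, threshold_m=0.05):
    """Flag elevated sway; the 5 cm threshold is an illustrative assumption."""
    return radius_m > threshold_m

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    trace = np.cumsum(rng.normal(scale=0.004, size=(300, 2)), axis=0)  # simulated 30 s of sway
    r = mean_radius_of_trace(trace)
    print(f"mean radius of trace: {r:.3f} m, elevated: {fall_risk_flag(r)}")
```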
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A method predicts the fall risk of a user based on a machine learning model. The model is trained using data about the user, which may be from wearable sensors and depth sensors, manually input by the user, and received from other types of sources. Data about a population of users and data from structured tests completed by the user can also be used to train the model. The model uses features and motifs discovered based on the data that correlate to fall risk events to update fall risk scores and predictions. The user is provided a recommendation describing how the user can reduce a predicted fall risk for the user.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A method predicts the fall risk of a user based on a machine learning model. The model is trained using data about the user, which may be from wearable sensors and depth sensors, manually input by the user, and received from other types of sources. Data about a population of users and data from structured tests completed by the user can also be used to train the model. The model uses features and motifs discovered based on the data that correlate to fall risk events to update fall risk scores and predictions. The user is provided a recommendation describing how the user can reduce a predicted fall risk for the user.
Examples relate to generating negative classifier data based on positive classifier data. In one example, a computing device may: obtain positive classifier data for a first class, the positive classifier data including at least one correlated feature set and, for each correlated feature set, a measure of likelihood that data matching the correlated feature set belongs to the first class; determine, for each feature included in the at least one correlated feature set, a de-correlated measure of likelihood that data including the feature belongs to the first class; and generate, based on each de-correlated measure of likelihood, negative classifier data for classifying data as belonging to a second class.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A non-transitory machine-readable storage medium encoded with instructions executable by a hardware processor of a computing device for generating negative classifier data based on positive classifier data, the machine-readable storage medium comprising instructions to cause the hardware processor to: obtain positive classifier data for a first class, the positive classifier data including at least one correlated feature set and, for each correlated feature set, a measure of likelihood that data matching the correlated feature set belongs to the first class; determine, for each feature included in the at least one correlated feature set, a de-correlated measure of likelihood that data including the feature belongs to the first class; and generate, based on each de-correlated measure of likelihood, negative classifier data for classifying data as belonging to a second class. 2. The storage medium of claim 1, wherein each de-correlated measure of likelihood is determined, for each feature included in the at least one correlated feature set, by calculating a sum of each likelihood that the feature would be randomly selected from each of its corresponding feature sets. 3. The storage medium of claim 1, wherein the instructions further cause the hardware processor to: train a classifier based on the positive classifier data and the negative classifier data. 4. The storage medium of claim 3, wherein the classifier receives, as input, test data including at least one feature value and produces, as output, an output class for the test data. 5. The storage medium of claim 1, wherein each correlated feature set is correlated with respect to an order of feature values. 6. A computing device for generating negative classifier data based on positive classifier data, the computing device comprising: a hardware processor; and a data storage device storing instructions that, when executed by the hardware processor, cause the hardware processor to: obtain positive classifier data for a first class, the positive classifier data including at least one correlated feature set and, for each feature set, a measure of likelihood that data matching the feature set belongs to the first class; determine, for each feature included in the at least one correlated feature set, a de-correlated measure of likelihood that data including the feature belongs to the first class; and generate, based on each de-correlated measure of likelihood, negative classifier data for classifying data as belonging to a second class. 7. The computing device of claim 6, wherein each de-correlated measure of likelihood is determined, for each feature included in the at least one correlated feature set, by calculating a sum of each likelihood that the feature would be randomly selected from each of its corresponding feature sets. 8. The computing device of claim 6, wherein the instructions further cause the hardware processor to: train a classifier based on the positive classifier data and the negative classifier data. 9. The computing device of claim 8, wherein the classifier receives, as input, test data including at least one feature value and produces, as output, an output class for the test data. 10. The computing device of claim 6, wherein each correlated feature set is correlated with respect to an order of feature values. 11. 
A method for generating negative classifier data based on positive classifier data, implemented by a hardware processor, the method comprising: obtaining positive classifier data for a first class, the positive classifier data including at least one correlated feature set and, for each feature set, a measure of likelihood that data matching the feature set belongs to the first class; determining, for each feature included in the at least one correlated feature set, a de-correlated measure of likelihood that data including the feature belongs to the first class; and generating, based on each de-correlated measure of likelihood, negative classifier data for classifying data as belonging to a second class. 12. The method of claim 11, wherein each de-correlated measure of likelihood is determined, for each feature included in the at least one correlated feature set, by calculating a sum of each likelihood that the feature would be randomly selected from each of its corresponding feature sets. 13. The method of claim 11, further comprising: training a classifier based on the positive classifier data and the negative classifier data. 14. The method of claim 13, wherein the classifier receives, as input, test data including at least one feature value and produces, as output, an output class for the test data. 15. The method of claim 11, wherein each correlated feature set is correlated with respect to an order of feature values.
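A minimal sketch of one plausible reading of the de-correlation step in these claims: each feature accumulates, from every correlated feature set containing it, that set's class likelihood scaled by the probability of drawing the feature at random from the set, and the result is then inverted to form negative classifier data; the data layout and the 1 − p inversion are assumptions for illustration, not the patent's definition:

```python
from collections import defaultdict

def decorrelate(positive_classifier_data):
    """Compute a per-feature, de-correlated likelihood from correlated feature sets.

    `positive_classifier_data` maps a correlated feature set (a tuple of features)
    to the likelihood that data matching the set belongs to the positive class.
    Each feature receives, from every set that contains it, the set's likelihood
    scaled by 1/len(set), summed over all such sets.
    """
    per_feature = defaultdict(float)
    for feature_set, likelihood in positive_classifier_data.items():
        for feature in feature_set:
            per_feature[feature] += likelihood / len(feature_set)
    return dict(per_feature)

def negative_classifier_data(per_feature_likelihood):
    """Turn de-correlated positive likelihoods into negative-class scores."""
    return {feat: 1.0 - p for feat, p in per_feature_likelihood.items()}

if __name__ == "__main__":
    positive = {("login_fail", "new_device"): 0.9, ("login_fail",): 0.4}
    de_corr = decorrelate(positive)
    print(de_corr)                          # {'login_fail': 0.85, 'new_device': 0.45}
    print(negative_classifier_data(de_corr))
```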
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Examples relate to generating negative classifier data based on positive classifier data. In one example, a computing device may: obtain positive classifier data for a first class, the positive classifier data including at least one correlated feature set and, for each correlated feature set, a measure of likelihood that data matching the correlated feature set belongs to the first class; determine, for each feature included in the at least one correlated feature set, a de-correlated measure of likelihood that data including the feature belongs to the first class; and generate, based on each de-correlated measure of likelihood, negative classifier data for classifying data as belonging to a second class.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Examples relate to generating negative classifier data based on positive classifier data. In one example, a computing device may: obtain positive classifier data for a first class, the positive classifier data including at least one correlated feature set and, for each correlated feature set, a measure of likelihood that data matching the correlated feature set belongs to the first class; determine, for each feature included in the at least one correlated feature set, a de-correlated measure of likelihood that data including the feature belongs to the first class; and generate, based on each de-correlated measure of likelihood, negative classifier data for classifying data as belonging to a second class.
A method and system for detecting and learning human errors in the operation of business software applications and responding by alerting the users should there be a possible data fault. By intercepting the user's data input to the business software application, extracting from the data input an input value and checking any value discrepancy against a pre-defined default value and a zero-tolerable range of default value, human errors are detected, or the system is adjusted to accommodate the input value. Therefore, through the use over a period of time and by a plurality of users, the system learns and adapts to the changing data input that reflect the changing business needs of the users of the business software application.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system for detecting, learning, and correcting human errors in operation of a business software application, comprising: a supportive intelligence layer comprising one or more computer system modules, wherein the supportive intelligence layer being configured to: intercept user's data input to the business software application; extract from the data input an input value; determine whether the data input is associated with an existing Attribute; if the data input is not associated with an existing Attribute, then create a new Attribute, Element Item, or Element for the data input, wherein the new Attribute is defined with an initial default value equal to the input value of the data input and a zero-tolerable range of default value; else if the data input is associated with an existing Attribute, then determine whether the input value is within the associated Attribute's tolerable range of default value or among its list of acceptable values; if the input value is within the associated Attribute's tolerable range of default value or among its list of acceptable values, then pass the data input to the business software application; else if the input value is outside the associated Attribute's tolerable range of default value and not among its list of acceptable values, then alert the user of a value discrepancy and provide four options: (i) changes the input value to that of the associated Attribute's current default value; (ii) accepts the input value and widens the associated Attribute's current tolerable range of default value; (iii) accepts the input value and adds the input value to the associated Attribute's list of acceptable values; and (iv) accepts the input value and replaces the associated Attribute's current default value with that of the input value. 2. The system of claim 1, wherein supportive intelligence layer being further configured to provide a suggestive input value for the data input based on the associated Attribute's default value. 3. The system of claim 1, wherein supportive intelligence layer being further configured to directly set an input value for the data input based on the associated Attribute's default value. 4. The system of claim 1, wherein supportive intelligence layer being implemented as one or more specially configured computing processors. 5. The system of claim 1, wherein supportive intelligence layer being implemented as extension to the business software application. 6. 
A computer-implemented method for detecting, learning, and correcting human errors in operation of a business software application, comprising: intercepting user's data input to the business software application; extracting from the data input an input value; determining whether the data input is associated with an existing Attribute; if the data input is not associated with an existing Attribute, then creating a new Attribute, Element Item, or Element for the data input, wherein the new Attribute is defined with an initial default value equal to the input value of the data input and a zero-tolerable range of default value; else if the data input is associated with an existing Attribute, then determining whether the input value is within the associated Attribute's tolerable range of default value or among its list of acceptable values; if the input value is within the associated Attribute's tolerable range of default value or among its list of acceptable values, then passing the data input to the business software application; else if the input value is outside the associated Attribute's tolerable range of default value and not among its list of acceptable values, then alerting the user of a value discrepancy and providing four options: (i) changing the input value to that of the associated Attribute's current default value; (ii) accepting the input value and widening the associated Attribute's current tolerable range of default value; (iii) accepting the input value and adding the input value to the associated Attribute's list of acceptable values; and (iv) accepting the input value and replacing the associated Attribute's current default value with that of the input value. 7. The method of claim 6, further comprising providing a suggestive input value for the data input based on the associated Attribute's default value. 8. The method of claim 6, further comprising directly setting an input value for the data input based on the associated Attribute's default value.
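A compact sketch of the intercept-and-check flow in these claims: each input field is backed by an Attribute holding a default value, a tolerable range (zero on creation), and a set of acceptable values, and a discrepancy triggers the four user options; all class and function names here are illustrative, not patent terminology:

```python
class Attribute:
    """Learned expectations for one input field of the business application."""

    def __init__(self, default_value):
        self.default = default_value
        self.tolerance = 0.0                 # zero-tolerable range on creation
        self.acceptable = set()              # list of explicitly accepted values

    def within_range(self, value):
        return (abs(value - self.default) <= self.tolerance) or (value in self.acceptable)

def handle_input(attributes, field, value):
    """Intercept one data input; return (pass_through, attribute, discrepancy)."""
    attr = attributes.get(field)
    if attr is None:
        attributes[field] = Attribute(value)          # new Attribute with zero tolerance
        return True, attributes[field], False
    if attr.within_range(value):
        return True, attr, False
    return False, attr, True                          # alert the user: value discrepancy

# The four user options after an alert (illustrative helpers, not patent terminology):
def use_default(attr):                 return attr.default
def widen_tolerance(attr, value):      attr.tolerance = max(attr.tolerance, abs(value - attr.default))
def add_acceptable(attr, value):       attr.acceptable.add(value)
def replace_default(attr, value):      attr.default = value

if __name__ == "__main__":
    attrs = {}
    handle_input(attrs, "unit_price", 100.0)          # learned as default, zero tolerance
    ok, attr, discrepancy = handle_input(attrs, "unit_price", 250.0)
    if discrepancy:
        widen_tolerance(attr, 250.0)                  # user chooses option (ii)
    print(attrs["unit_price"].tolerance)              # 150.0
```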
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A method and system for detecting and learning human errors in the operation of business software applications and responding by alerting the users should there be a possible data fault. By intercepting the user's data input to the business software application, extracting from the data input an input value and checking any value discrepancy against a pre-defined default value and a zero-tolerable range of default value, human errors are detected, or the system is adjusted to accommodate the input value. Therefore, through the use over a period of time and by a plurality of users, the system learns and adapts to the changing data input that reflect the changing business needs of the users of the business software application.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A method and system for detecting and learning human errors in the operation of business software applications and responding by alerting the users should there be a possible data fault. By intercepting the user's data input to the business software application, extracting from the data input an input value and checking any value discrepancy against a pre-defined default value and a zero-tolerable range of default value, human errors are detected, or the system is adjusted to accommodate the input value. Therefore, through the use over a period of time and by a plurality of users, the system learns and adapts to the changing data input that reflect the changing business needs of the users of the business software application.
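An illustrative Python sketch of the attribute-based check described in the claims and abstract above, under simplifying assumptions: the names (Attribute, check_input), the numeric field, and the single "alert-user" outcome standing in for the four user-facing correction options are all hypothetical and not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Attribute:
    """Learned profile for one input field (hypothetical structure)."""
    default_value: float
    tolerance: float = 0.0              # zero-tolerable range on creation
    acceptable_values: set = field(default_factory=set)

def check_input(attributes: dict, field_name: str, input_value: float) -> str:
    """Return an action for the intercepted input, following the flow in the claims."""
    attr = attributes.get(field_name)
    if attr is None:
        # No existing Attribute: create one seeded with this value and zero tolerance.
        attributes[field_name] = Attribute(default_value=input_value)
        return "created-new-attribute"
    within_range = abs(input_value - attr.default_value) <= attr.tolerance
    if within_range or input_value in attr.acceptable_values:
        return "pass-through"
    # Value discrepancy: in a real UI the user would now pick one of the four options.
    return "alert-user"

# Example: the third, discrepant entry triggers an alert.
attrs = {}
print(check_input(attrs, "unit_price", 10.0))   # created-new-attribute
print(check_input(attrs, "unit_price", 10.0))   # pass-through
print(check_input(attrs, "unit_price", 95.0))   # alert-user
```

In a fuller implementation the alert path would apply whichever of the four options the user picks: change the value to the default, widen the tolerable range, extend the acceptable-value list, or replace the default.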
With respect to the model selection issue of a mixture model, the present invention performs high-speed model selection under an appropriate standard regarding the number of model candidates which exponentially increases as the number and the types to be mixed increase. A mixture model estimation device comprises: a data input unit to which data of a mixture model to be estimated, candidate values of the number of mixtures which are required for estimating the mixture model of the data, and types of components configuring the mixture model and parameters thereof, are input; a processing unit which sets the number of mixtures from the candidate values, calculates, with respect to the set number of mixtures, a variation probability of a hidden variable for a random variable which becomes a target for mixture model estimation of the data, and estimates the optimal mixture model by optimizing the types of the components and the parameters therefor using the calculated variation probability of the hidden variable so that the lower bound of the posterior probabilities of the model separated for each component of the mixture model can be maximized; and a model estimation result output unit which outputs the model estimation result obtained by the processing unit.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A mixture model estimation device comprising: a data input unit that inputs data of a mixture model to be estimated, and candidate values for a mixture number, and types and parameters of components constituting the mixture model that are necessary for estimating the mixture model of the data; a processing unit comprising a computer hardware processor that sets the mixture number from the candidate values, calculates a variation probability of a hidden variable for a random variable which is a target for estimating the mixture model of the data with respect to the set mixture number, and optimally estimates the mixture model by optimizing the types and parameters of the components using the calculated variation probability of the hidden variable so that a lower bound of a model posterior probability separated for each of the components of the mixture model is maximized; and a model estimation result output unit that outputs a model estimation result obtained by the processing unit. 2. The mixture model estimation device according to claim 1, wherein the processing unit obtains the mixture number of the mixture model optimally by calculating the lower bound of the model posterior probability and the types and parameters of the components for all the candidate values for the mixture number. 3. The mixture model estimation device according to claim 1, wherein: the mixture number is denoted by C, the random variable is denoted by X, the types of the components are denoted by S_1, . . . , S_C, and the parameters of the components are denoted by θ = (π_1, . . . , π_C, φ_1^{S_1}, . . . , φ_C^{S_C}) (π_1, . . . , π_C are mixture ratios when the mixture number is 1 to C, and φ_1^{S_1}, . . . , φ_C^{S_C} are parameters of distributions of components S_1 to S_C when the mixture number is 1 to C), the mixture model is expressed by equation 1: [Math. 1] P(X | θ) = Σ_{c=1}^{C} π_c P_c(X; φ_c^{S_c}) (1) when the hidden variable for the random variable X is denoted by Z = (Z_1, . . . , Z_C), a joint distribution of a complete variable that is a pair of the random variable X and the hidden variable Z is defined by equation 2: [Math. 2] P(X, Z | θ) = Σ_{c=1}^{C} (π_c P_c(X; φ_c^{S_c}))^{Z_c} (2) when N data values of the random variable X are denoted by x_n (n = 1, . . . , N), and N values of the hidden variable Z for the values x_n are denoted by z_n (n = 1, . . . , N), a posterior probability of the hidden variable Z is expressed by equation 3: [Math. 3] P(z_n | x_n, θ) ∝ π_c P_c(x_n; φ_c^{S_c}) (3) 4. The mixture model estimation device according to claim 1, wherein the mixture model comprises a plurality of mixture distributions having different independent characteristics. 5. The mixture model estimation device according to claim 1, wherein the mixture model comprises a plurality of various mixture distributions. 6. The mixture model estimation device according to claim 1, wherein the mixture model comprises mixture distributions of different stochastic regression functions. 7. The mixture model estimation device according to claim 1, wherein the mixture model comprises mixture distributions of different stochastic discriminant functions. 8. The mixture model estimation device according to claim 1, wherein the mixture model comprises mixture distributions of a hidden Markov model having different output probabilities. 9. 
A mixture model estimation method comprising: by using an input unit, inputting data of a mixture model to be estimated, and candidate values for a mixture number, and types and parameters of components constituting the mixture model that are necessary for estimating the mixture model of the data; causing a processing unit to set the mixture number from the candidate values, calculate a variation probability of a hidden variable for a random variable which is a target for estimating the mixture model of the data, and optimally estimate the mixture model by optimizing the types and parameters of the components using the calculated variation probability of the hidden variable so that a lower bound of a model posterior probability separated for each of the components of the mixture model is maximized; and causing a model estimation result output unit to output a model estimation result obtained by the processing unit. 10. In the mixture model estimation method of claim 9, the processing unit obtains the mixture number of the mixture model optimally by calculating the lower bound of the model posterior probability and the types and parameters of the components for all the candidate values for the mixture number. 11. In the mixture model estimation method of claim 9: the mixture number is denoted by C, the random variable is denoted by X, the types of the components are denoted by S_1, . . . , S_C, and the parameters of the components are denoted by θ = (π_1, . . . , π_C, φ_1^{S_1}, . . . , φ_C^{S_C}) (π_1, . . . , π_C are mixture ratios when the mixture number is 1 to C, and φ_1^{S_1}, . . . , φ_C^{S_C} are parameters of distributions of components S_1 to S_C when the mixture number is 1 to C), the mixture model is expressed by equation 1: [Math. 1] P(X | θ) = Σ_{c=1}^{C} π_c P_c(X; φ_c^{S_c}) (1) the hidden variable for the random variable X is denoted by Z = (Z_1, . . . , Z_C), a joint distribution of a complete variable that is a pair of the random variable X and the hidden variable Z is defined by equation 2: [Math. 2] P(X, Z | θ) = Σ_{c=1}^{C} (π_c P_c(X; φ_c^{S_c}))^{Z_c} (2) if N data values of the random variable X are denoted by x_n (n = 1, . . . , N), and N values of the hidden variable Z for the values x_n are denoted by z_n (n = 1, . . . , N), a posterior probability of the hidden variable Z is expressed by equation 3: [Math. 3] P(z_n | x_n, θ) ∝ π_c P_c(x_n; φ_c^{S_c}) (3) 12. In the mixture model estimation method of claim 9, the mixture model includes a plurality of mixture distributions having different independence characteristics. 13. In the mixture model estimation method of claim 9, the mixture model includes a plurality of various mixture distributions. 14. In the mixture model estimation method of claim 9, the mixture model includes mixture distributions of different stochastic regression functions. 15. In the mixture model estimation method of claim 9, the mixture model includes mixture distributions of different stochastic discriminant functions. 16. In the mixture model estimation method of claim 9, the mixture model includes mixture distributions of a hidden Markov model having different output probabilities. 17. 
A non-transitory computer-readable medium storing a computer-readable mixture model estimation program for operating a computer as a mixture model estimation device comprising: an input unit that inputs data of a mixture model to be estimated, and candidate values for a mixture number, and types and parameters of components constituting the mixture model that are necessary for estimating the mixture model of the data; a processing unit comprising a processor that sets the mixture number from the candidate values, calculates a variation probability of a hidden variable for a random variable which is a target for estimating the mixture model of the data with respect to the set mixture number, and optimally estimates the mixture model by optimizing the types and parameters of the components using the calculated variation probability of the hidden variable so that a lower bound of a model posterior probability separated for each of the components of the mixture model is maximized; and a model estimation result output unit that outputs a model estimation result obtained by the processing unit. 18. In the mixture model estimation program of claim 17, the mixture number of the mixture model is optimally obtained by calculating the lower bound of the model posterior probability and the types and parameters of the components for all the candidate values for the mixture number. 19. In the mixture model estimation program of claim 17: the mixture number is denoted by C, the random variable is denoted by X, the types of the components are denoted by S_1, . . . , S_C, and the parameters of the components are denoted by θ = (π_1, . . . , π_C, φ_1^{S_1}, . . . , φ_C^{S_C}) (π_1, . . . , π_C are mixture ratios when the mixture number is 1 to C, and φ_1^{S_1}, . . . , φ_C^{S_C} are parameters of distributions of components S_1 to S_C when the mixture number is 1 to C), the mixture model is expressed by equation 1: [Math. 1] P(X | θ) = Σ_{c=1}^{C} π_c P_c(X; φ_c^{S_c}) (1) when the hidden variable for the random variable X is denoted by Z = (Z_1, . . . , Z_C), a joint distribution of a complete variable that is a pair of the random variable X and the hidden variable Z is defined by equation 2: [Math. 2] P(X, Z | θ) = Σ_{c=1}^{C} (π_c P_c(X; φ_c^{S_c}))^{Z_c} (2) when N data values of the random variable X are denoted by x_n (n = 1, . . . , N), and N values of the hidden variable Z for the values x_n are denoted by z_n (n = 1, . . . , N), a posterior probability of the hidden variable Z is expressed by equation 3: [Math. 3] P(z_n | x_n, θ) ∝ π_c P_c(x_n; φ_c^{S_c}) (3) 20. 
The mixture model estimation device according to claim 1, wherein the processing unit calculates the variation probability of the hidden variable by solving an optimization problem expressed by a first equation, calculates the lower bound of the model posterior probability by a second equation, calculates an optimal mixture model H(t) and parameters θ(t) of components of the optimal mixture model after t iterations by using the variation probability of the hidden variable by a third equation, and determines whether the lower bound of the model posterior probability converges by using a fourth equation, wherein when the processing unit determines that the lower bound of the model posterior probability does not converge, the processing unit repeats processes of first to fourth equation, and if the processing unit determines that the lower bound converges, the processing unit compares a lower bound of a model posterior probability of a currently-set optimal mixture model with the lower bound of the model posterior probability obtained through calculations, and sets the larger value as the optimal mixture model, and the processing unit repeats the processes of first to fourth equation for all the candidate values for the mixture number so as to estimate the mixture model optimally.
REJECTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: With respect to the model selection issue of a mixture model, the present invention performs high-speed model selection under an appropriate standard regarding the number of model candidates which exponentially increases as the number and the types to be mixed increase. A mixture model estimation device comprises: a data input unit to which data of a mixture model to be estimated, candidate values of the number of mixtures which are required for estimating the mixture model of the data, and types of components configuring the mixture model and parameters thereof, are input; a processing unit which sets the number of mixtures from the candidate values, calculates, with respect to the set number of mixtures, a variation probability of a hidden variable for a random variable which becomes a target for mixture model estimation of the data, and estimates the optimal mixture model by optimizing the types of the components and the parameters therefor using the calculated variation probability of the hidden variable so that the lower bound of the posterior probabilities of the model separated for each component of the mixture model can be maximized; and a model estimation result output unit which outputs the model estimation result obtained by the processing unit.
G06N700
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: With respect to the model selection issue of a mixture model, the present invention performs high-speed model selection under an appropriate standard regarding the number of model candidates which exponentially increases as the number and the types to be mixed increase. A mixture model estimation device comprises: a data input unit to which data of a mixture model to be estimated, candidate values of the number of mixtures which are required for estimating the mixture model of the data, and types of components configuring the mixture model and parameters thereof, are input; a processing unit which sets the number of mixtures from the candidate values, calculates, with respect to the set number of mixtures, a variation probability of a hidden variable for a random variable which becomes a target for mixture model estimation of the data, and estimates the optimal mixture model by optimizing the types of the components and the parameters therefor using the calculated variation probability of the hidden variable so that the lower bound of the posterior probabilities of the model separated for each component of the mixture model can be maximized; and a model estimation result output unit which outputs the model estimation result obtained by the processing unit.
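The device, method, and program claims above all revolve around choosing a mixture number from candidate values by maximizing a lower bound of the model posterior probability. The sketch below is only a rough analogue: it fits a 1-D Gaussian mixture by standard EM for each candidate mixture number and scores each fit with a BIC-style penalty as a stand-in for the claimed component-wise lower bound; the synthetic data, the penalty, and the Gaussian component family are assumptions, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D data drawn from two Gaussian components.
data = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(3.0, 1.0, 200)])

def component_densities(x, pi, mu, var):
    """Weighted Gaussian densities, one column per mixture component."""
    return pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def fit_gmm_loglik(x, C, iters=100):
    """Standard EM for a 1-D Gaussian mixture; returns the final log-likelihood."""
    n = len(x)
    pi, mu, var = np.full(C, 1.0 / C), rng.choice(x, C, replace=False), np.full(C, x.var())
    for _ in range(iters):
        resp = component_densities(x, pi, mu, var)        # E-step ("variation probability")
        resp /= resp.sum(axis=1, keepdims=True)
        nk = resp.sum(axis=0)                             # M-step
        pi, mu = nk / n, (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return np.log(component_densities(x, pi, mu, var).sum(axis=1)).sum()

def penalized_score(x, C):
    """BIC-style penalty used here as a stand-in for the claimed lower bound."""
    k = 3 * C - 1                 # free parameters: C-1 ratios, C means, C variances
    return fit_gmm_loglik(x, C) - 0.5 * k * np.log(len(x))

scores = {C: penalized_score(data, C) for C in (1, 2, 3, 4)}
print(scores, "-> selected mixture number:", max(scores, key=scores.get))
```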
Detecting patterns and sequences associated with an anomaly in predictions made by a predictive system. The predictive system makes predictions by learning spatial patterns and temporal sequences in input data that change over time. As the input data is received, the predictive system generates a series of predictions based on the input data. Each prediction is compared with the corresponding actual value or state. If the prediction does not match or deviates significantly from the actual value or state, an anomaly is identified for further analysis. A corresponding state or a series of states of the predictive system before or at the time of prediction is associated with the anomaly and stored. The anomaly can be detected by monitoring whether the predictive system is placed in a state or states that are the same as or similar to the stored state or states.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of detecting anomaly, comprising: updating a predictive device responsive to receiving input data at a first time; generating a prediction output based on the updated predictive device, the prediction output representing a predicted value or a state of a system at a second time subsequent to the first time; and detecting an anomaly by monitoring the prediction output. 2. The method of claim 1, wherein updating of the predictive device comprises activating or deactivating cells in the predictive device. 3. The method of claim 2, wherein generating the prediction output comprises detecting cells that are active in the updated predictive device. 4. The method of claim 2, wherein the activating or deactivating of cells in the predictive device causes the predictive device to perform online learning. 5. The method of claim 2, further comprising storing activating or deactivating states of at least a subset of the cells corresponding to the anomaly, and wherein detecting the anomaly comprises detecting activation or deactivation of at least the subset of cells. 6. The method of claim 2, wherein detecting the anomaly comprises detecting whether the predicted output deviates from the actual value or state above at the second time beyond a threshold difference. 7. The method of claim 1, wherein the predictive system is configured to learn temporal sequences of spatial patterns in the input data by establishing connections between cells in the predictive device based on the received input data. 8. The method of claim 7, wherein the predictive output is generated by at least detecting cells that are predictively activated based on activation of other cells connected to the predictively activated cells. 9. The method of claim 8, wherein a cell is predictively activated responsive to at least a subset of cells connected to the cell being activated. 10. The method of claim 1, generating the prediction output comprises: generating a series of spatial pooler outputs in sparse distributed representation by processing the input data over time, each of the spatial pooler outputs indicating a spatial pattern detected in the input data at a time; and generating the prediction output based on stored temporal relationships between the series of spatial pooler outputs. 11. A non-transitory computer-readable storage medium storing instructions thereon, the instructions when executed by a processor causing the processor to: update a predictive device responsive to receiving input data at a first time; generate a prediction output based on the updated predictive device, the prediction output representing a predicted value or a state of a system at a second time subsequent to the first time; and detect an anomaly by monitoring the prediction output. 12. The computer-readable storage medium of claim 11, wherein instructions to update the predictive device comprises instructions to activate or deactivate cells in the predictive device. 13. The computer-readable storage medium of claim 12, wherein instructions to generate the prediction output comprises instructions to detect cells that are active in the updated predictive device. 14. The computer-readable storage medium of claim 12, wherein the activating or deactivating of cells in the predictive device causes the predictive device to perform online learning. 15. 
The computer-readable storage medium of claim 12, wherein further comprising instructions to store activating or deactivating states of at least a subset of the cells corresponding to the anomaly, and wherein instructions to detect the anomaly comprises instructions to detect activation or deactivation of at least the subset of cells. 16. The computer-readable storage medium of claim 12, wherein instructions to detect the anomaly comprises instructions to detect whether the predicted output deviates from the actual value or state above at the second time beyond a threshold difference. 17. The computer-readable storage medium of claim 12, wherein the predictive system is configured to learn temporal sequences of spatial patterns in the input data by establishing connections between cells in the predictive device based on the received input data. 18. The computer-readable storage medium of claim 17, wherein the predictive output is generated by detecting at least cells that are predictively activated by activation of other cells connected to the predictively activated cells. 19. The computer-readable storage medium of claim 19, wherein a cell is predictively activated responsive to at least a subset of cells connected to the cell being activated. 20. The computer-readable storage medium of claim 11, wherein instructions to generate the prediction output comprises instructions to: generate a series of spatial pooler outputs in sparse distributed representation by processing the input data over time, each of the spatial pooler outputs indicating a spatial pattern detected in the input data at a time; and generate the prediction output based on stored temporal relationships between the series of spatial pooler outputs. 21. A computing system, comprising: a processor; a predicting system configured to generate a prediction output at a first time responsive to receiving an input data, the prediction output representing a predicted value or state at a second time subsequent to the first time; and an anomaly detector configured to detect an anomaly by monitoring the prediction output.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Detecting patterns and sequences associated with an anomaly in predictions made by a predictive system. The predictive system makes predictions by learning spatial patterns and temporal sequences in input data that change over time. As the input data is received, the predictive system generates a series of predictions based on the input data. Each prediction is compared with the corresponding actual value or state. If the prediction does not match or deviates significantly from the actual value or state, an anomaly is identified for further analysis. A corresponding state or a series of states of the predictive system before or at the time of prediction is associated with the anomaly and stored. The anomaly can be detected by monitoring whether the predictive system is placed in a state or states that are the same as or similar to the stored state or states.
G06N7005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Detecting patterns and sequences associated with an anomaly in predictions made by a predictive system. The predictive system makes predictions by learning spatial patterns and temporal sequences in input data that change over time. As the input data is received, the predictive system generates a series of predictions based on the input data. Each prediction is compared with the corresponding actual value or state. If the prediction does not match or deviates significantly from the actual value or state, an anomaly is identified for further analysis. A corresponding state or a series of states of the predictive system before or at the time of prediction is associated with the anomaly and stored. The anomaly can be detected by monitoring whether the predictive system is placed in a state or states that are the same as or similar to the stored state or states.
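A minimal sketch of the prediction-versus-actual comparison described above, assuming a scalar prediction and a fixed deviation threshold; the "state" strings are hypothetical stand-ins for the predictor's internal (e.g., cell-activation) state that the abstract says is stored alongside each anomaly.

```python
from collections import deque

class ThresholdAnomalyDetector:
    """Flag an anomaly when a prediction deviates from the actual value beyond a
    threshold, and remember the predictor states seen just before the anomaly."""

    def __init__(self, threshold: float, history: int = 3):
        self.threshold = threshold
        self.recent_states = deque(maxlen=history)
        self.anomalous_states = []

    def observe(self, state, prediction: float, actual: float) -> bool:
        self.recent_states.append(state)
        is_anomaly = abs(prediction - actual) > self.threshold
        if is_anomaly:
            # Associate the states leading up to the bad prediction with the anomaly.
            self.anomalous_states.append(list(self.recent_states))
        return is_anomaly

detector = ThresholdAnomalyDetector(threshold=2.0)
stream = [("s1", 10.0, 10.3), ("s2", 11.0, 10.9), ("s3", 12.0, 18.5)]
for state, predicted, actual in stream:
    print(state, "anomaly" if detector.observe(state, predicted, actual) else "ok")
print("stored anomalous state sequences:", detector.anomalous_states)
```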
In an approach to determining a classification of an error in a computing system, a computer receives a notification of an error during a test within a computing system. The computer then retrieves a plurality of log files created during the test from within the computing system and determines data containing one or more error categorizations. The computer determines a classification of the error, based, at least in part, on the plurality of log files and the data containing one or more error categorizations.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for determining a classification of an error in a computing system, the method comprising: receiving, by one or more computer processors, a notification of an error during a test within a computing system; retrieving, by one or more computer processors, a plurality of log files created during the test from within the computing system; determining, by one or more computer processors, data containing one or more error categorizations; and determining, by one or more computer processors, a classification of the error, based, at least in part, on the plurality of log files and the data containing one or more error categorizations. 2. The method of claim 1, further comprising: determining, by one or more computer processors, a confidence score associated with the classification of the error. 3. The method of claim 2, further comprising: determining, by one or more computer processors, whether the confidence score meets a threshold value; and responsive to determining the confidence score meets the threshold value, reporting, by one or more computer processors, the classification of the error. 4. The method of claim 3, further comprising: responsive to determining the confidence score does not meet the threshold value, determining, by one or more computer processors, whether additional log files created during the test exist; responsive to determining additional log files created during the test exist, retrieving, by one or more computer processors, the additional log files; and determining, by one or more computer processors, a second classification of the error, based, at least in part, on the plurality of log files, the data containing one or more error categorizations, and the additional log files. 5. The method of claim 4, further comprising: responsive to determining additional log files created during the test do not exist, reporting, by one or more computer processors, the classification of the error and the confidence score associated with the classification of the error. 6. The method of claim 1, wherein determining, by one or more computer processors, data containing one or more error categorizations further comprises: retrieving, by one or more computer processors, a plurality of test log files from a test within the computing system; parsing, by one or more computer processors, the plurality of test log files to obtain a timestamp of each log file; merging, by one or more computer processors, the plurality of test log files based, at least in part, on the timestamp; and categorizing, by one or more computer processors, one or more errors contained in each of the merged plurality of test log files. 7. The method of claim 6, wherein the categorizing, by one or more computer processors, one or more errors contained in each of the merged plurality of test log files further comprises performing, by one or more computer processors, a machine learning algorithm operation on each of the merged plurality of test log files. 8. 
The method of claim 2, wherein determining, by one or more computer processors, the confidence score associated with the classification of the error further comprises: determining, by one or more computer processors, a plurality of test log files used to determine the data containing one or more error categorizations; comparing, by one or more computer processors, the plurality of log files created during the test to the plurality of test log files used to determine the data containing one or more error categorizations; determining, by one or more computer processors, based, at least in part, on the comparing, a similarity value between the plurality of log files created during the test and the plurality of test log files; and responsive to determining the similarity value between the plurality of log files created during the test and the plurality of test log files, setting, by one or more computer processors, the confidence score, based, at least in part, on the similarity value.
REJECTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: In an approach to determining a classification of an error in a computing system, a computer receives a notification of an error during a test within a computing system. The computer then retrieves a plurality of log files created during the test from within the computing system and determines data containing one or more error categorizations. The computer determines a classification of the error, based, at least in part, on the plurality of log files and the data containing one or more error categorizations.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: In an approach to determining a classification of an error in a computing system, a computer receives a notification of an error during a test within a computing system. The computer then retrieves a plurality of log files created during the test from within the computing system and determines data containing one or more error categorizations. The computer determines a classification of the error, based, at least in part, on the plurality of log files and the data containing one or more error categorizations.
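A toy sketch of the flow in the claims above: classify an error from merged log text against pre-built error categorizations, then derive a confidence score from how similar the current log files are to the test log files behind the matched category. The category names, keywords, and the Jaccard-based similarity are illustrative assumptions rather than anything specified in the patent.

```python
def jaccard(a: set, b: set) -> float:
    """Similarity between two sets of log file names."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical pre-built categorizations: category -> (keywords, test log files used to learn it).
categories = {
    "network": ({"timeout", "unreachable"}, {"net.log", "conn.log"}),
    "storage": ({"disk", "io error"},       {"disk.log", "io.log"}),
}

def classify(log_text: str, log_files: set, threshold: float = 0.5):
    """Pick the category whose keywords best match the merged logs; score confidence
    by how similar the current log files are to those used to build the category."""
    text = log_text.lower()
    best, best_hits = None, 0
    for name, (keywords, _) in categories.items():
        hits = sum(kw in text for kw in keywords)
        if hits > best_hits:
            best, best_hits = name, hits
    if best is None:
        return "unknown", 0.0
    confidence = jaccard(log_files, categories[best][1])
    label = best if confidence >= threshold else best + " (low confidence)"
    return label, confidence

print(classify("connection timeout: host unreachable", {"net.log", "app.log"}))
```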
A management system for guiding an agent in a media-specific dialogue has a conversion engine for instantiating ongoing dialogue as machine-readable text, if the dialogue is in voice media, a context analysis engine for determining facts from the text, a rules engine for asserting rules based on fact input, and a presentation engine for presenting information to the agent to guide the agent in the dialogue. The context analysis engine passes determined facts to the rules engine, which selects and asserts to the presentation engine rules based on the facts, and the presentation engine provides periodically updated guidance to the agent based on the rules asserted.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A management system for guiding an agent in a media-specific dialogue, comprising: a conversion engine for instantiating ongoing dialogue as machine-readable text, if the dialogue is in voice media; a context analysis engine for determining facts from the text; a rules engine for asserting rules based on fact input; and a presentation engine for presenting information to the agent to guide the agent in the dialogue; wherein the context analysis engine passes determined facts to the rules engine, which selects and asserts to the presentation engine rules based on the facts, and the presentation engine provides periodically updated guidance to the agent based on the rules asserted. 2. The system of claim 1 wherein a first script is provided to the agent as the dialogue begins, the guidance influences the dialogue, and the system continues to analyze and provide updated guidance during the dialogue. 3. The system of claim 1 wherein the agent is enabled to configure ways in which the presentation engine presents information. 4. The system of claim 1 comprising a back-end process wherein facts determined from the dialogue are used to elicit further information, processing based on the further information is conducted while dialogue continues, and after processing new information is presented to the agent. 5. The system of claim 1 wherein characteristics known for a specific agent are used in preparing presentation to the agent. 6. The system of claim 1 wherein prompts presented to the agent are annotated with indicia indicating importance or other characteristic of the prompts, 7. The system of claim 1 wherein the agent is enabled to tag prompts with feedback to the system, which feedback alters further presentation to the agent. 8. The system of claim 1 wherein analysis includes one or more indications of volume, inflection, sentence structure, rate of dialogue, other than text, and these indications are interpreted by the rules engine to select and assert rules. 9. The system of claim 1 wherein prestored knowledge of one or both of the agent and a party to the dialogue other than the agent is used to influence the content and nature of information provided to the agent. 10. The system of claim 1 wherein the agent is an interactive voice response system, and guidance to the agent is selection of voice responses by the IVR. 11. A method for guiding an agent in a media-specific dialogue, comprising steps of: (a) if the dialogue is voice, converting the voice to machine-readable text; (b) analyzing the text by context analysis engine and determining facts from the text; (c) passing the determined facts to a rules engine and asserting rules based on the fact input; and (d) presenting new guidance information to the agent to guide the agent in the dialogue. 12. The method of claim 11 wherein a first script is provided to the agent as the dialogue begins, the guidance influences the dialogue, and the system continues to analyze and provide updated guidance during the dialogue. 13. The method of claim 11 wherein the agent is enabled to configure ways in which the presentation engine presents information. 14. The method of claim 11 comprising a back-end process wherein facts determined from the dialogue are used to elicit further information, processing based on the further information is conducted while dialogue continues, and after processing new information is presented to the agent. 15. 
The method of claim 11 wherein characteristics known for a specific agent are used in preparing presentation to the agent. 16. The method of claim 11 wherein prompts presented to the agent are annotated with indicia indicating importance or other characteristic of the prompts, 17. The method of claim 11 wherein the agent is enabled to tag prompts with feedback to the system, which feedback alters further presentation to the agent. 18. The method of claim 11 wherein analysis includes one or more indications of volume, inflection, sentence structure, rate of dialogue, other than text, and these indications are interpreted by the rules engine to select and assert rules. 19. The method of claim 11 wherein prestored knowledge of one or both of the agent and a party to the dialogue other than the agent is used to influence the content and nature of information provided to the agent. 20. The method of claim 11 wherein the agent is an interactive voice response system, and guidance to the agent is selection of voice responses by the IVR.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A management system for guiding an agent in a media-specific dialogue has a conversion engine for instantiating ongoing dialogue as machine-readable text, if the dialogue is in voice media, a context analysis engine for determining facts from the text, a rules engine for asserting rules based on fact input, and a presentation engine for presenting information to the agent to guide the agent in the dialogue. The context analysis engine passes determined facts to the rules engine, which selects and asserts to the presentation engine rules based on the facts, and the presentation engine provides periodically updated guidance to the agent based on the rules asserted.
G06N5025
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A management system for guiding an agent in a media-specific dialogue has a conversion engine for instantiating ongoing dialogue as machine-readable text, if the dialogue is in voice media, a context analysis engine for determining facts from the text, a rules engine for asserting rules based on fact input, and a presentation engine for presenting information to the agent to guide the agent in the dialogue. The context analysis engine passes determined facts to the rules engine, which selects and asserts to the presentation engine rules based on the facts, and the presentation engine provides periodically updated guidance to the agent based on the rules asserted.
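A small sketch of the fact-extraction and rule-assertion loop described above, assuming the dialogue has already been converted to text; the facts, rules, and guidance strings are hypothetical stand-ins for the context analysis engine, rules engine, and presentation engine.

```python
import re

# Hypothetical rules: a predicate over the facts and the guidance to show the agent.
RULES = [
    (lambda f: f.get("intent") == "cancel", "Offer a retention discount before processing the cancellation."),
    (lambda f: f.get("sentiment") == "angry", "Acknowledge the frustration and summarize the next steps."),
]

def extract_facts(utterance: str) -> dict:
    """Toy context analysis: derive facts from a (already transcribed) dialogue turn."""
    facts = {}
    if re.search(r"\bcancel\b", utterance, re.I):
        facts["intent"] = "cancel"
    if any(word in utterance.lower() for word in ("ridiculous", "angry", "fed up")):
        facts["sentiment"] = "angry"
    return facts

def guidance_for(utterance: str) -> list:
    """Assert the rules whose predicates match the extracted facts."""
    facts = extract_facts(utterance)
    return [tip for predicate, tip in RULES if predicate(facts)]

# As each turn of dialogue arrives, the prompts shown to the agent are refreshed.
for turn in ["I am fed up and want to cancel my plan", "Thanks, that helps"]:
    print(turn, "->", guidance_for(turn))
```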
A method of generating neuron spiking pulses in a neuromorphic system is provided which includes floating one or more selected bit lines connected to target cells, having a first state, from among a plurality of memory cells arranged at intersections of a plurality of word lines and a plurality of bit lines; and stepwisely increasing voltages applied to unselected word lines connected to unselected cells, having a second state, from among memory cells connected to the one or more selected bit lines other than the target cells having the first state.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of generating neuron spiking pulses in a neuromorphic system, comprising: floating one or more selected bit lines connected to target cells, the target cells having a first state, from among a plurality of memory cells arranged at intersections of a plurality of word lines and a plurality of bit lines; and stepwisely increasing voltages applied to unselected word lines connected to unselected cells, the unselected cells having a second state, from among memory cells connected to the one or more selected bit lines other than the target cells having the first state. 2. The method of claim 1, further comprising: setting cells connected to unselected bit lines other than the one or more selected bit lines to the first state before the increasing. 3. The method of claim 1, further comprising: setting all cells connected to the one or more selected bit lines to the second state; and setting the target cells to the first state. 4. The method of claim 1, further comprising: detecting whether the neuron spiking pulses are output from selected word lines connected to the target cells. 5. The method of claim 4, wherein the increasing and the detecting are iterated until the neuron spiking pulses are generated. 6. The method of claim 4, wherein the detecting comprises determining that one of the voltages applied to the unselected word lines is greater than a sum of the voltage applied to the one or more selected bit lines and voltage drops in unselected memory cells. 7. The method of claim 1, wherein the floating and the increasing are performed contemporaneously. 8. The method of claim 3, wherein the first state is different from the second state. 9. The method of claim 8, wherein the plurality of memory cells are phase change memory cells. 10. The method of claim 9, wherein the first state is an amorphous state of a phase change material included in each memory cell and the second state is a crystal state of the phase change material. 11. The method of claim 10, wherein the plurality of memory cells transitions from an amorphous state to a crystalline state when the neuron spiking pulse has been generated. 12. The method of claim 8, wherein the first state is a high resistance state and the second state is a low resistance state. 13. The method of claim 1, wherein at least one of the plurality of memory cells comprises a diode for reducing a current leakage. 14. A method of implementing an STDP (Spike-Timing Dependent Plasticity) algorithm of a neuromophic system including a synaptic circuit having a first memory cell and a second memory cell, the method comprising: providing a first signal to a first bit line connected to a first memory cell; and providing a second signal to a second bit line connected to a second memory cell with a time interval with respect to the first signal. 15. The method of claim 14, wherein the first and second memory cells are phase change memory cells. 16. The method of claim 15, wherein a level of the second signal is increased or decreased according to the time interval such that a resistance of the second memory cell is larger than a resistance of the first memory cell. 17. 
A neuromorphic system implementing method comprising: floating one or more selected bit lines connected to target cells, the target cells having a first state, from among a plurality of memory cells arranged at intersections of a plurality of word lines and a plurality of bit lines; stepwisely increasing voltages applied to unselected word lines connected to unselected cells, the unselected cells having a second state, from among cells connected to the one or more selected bit lines other than the target cells having the first state, to generate neuron spiking pulses; and providing first and second neuron spiking pulses selected from the neuron spiking pulses to a synaptic circuit including first and second memory cells with a time interval, to implement an STDP (Spike-Timing Dependent Plasticity) algorithm. 18. The method of claim 17, wherein the memory cells are phase change memory cells. 19. The method of claim 18, wherein the first state is an amorphous state of a phase change material included in each memory cell, and the second state is a crystal state of the phase change material.
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: A method of generating neuron spiking pulses in a neuromorphic system is provided which includes floating one or more selected bit lines connected to target cells, having a first state, from among a plurality of memory cells arranged at intersections of a plurality of word lines and a plurality of bit lines; and stepwisely increasing voltages applied to unselected word lines connected to unselected cells, having a second state, from among memory cells connected to the one or more selected bit lines other than the target cells having the first state.
G06N304
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A method of generating neuron spiking pulses in a neuromorphic system is provided which includes floating one or more selected bit lines connected to target cells, having a first state, from among a plurality of memory cells arranged at intersections of a plurality of word lines and a plurality of bit lines; and stepwisely increasing voltages applied to unselected word lines connected to unselected cells, having a second state, from among memory cells connected to the one or more selected bit lines other than the target cells having the first state.
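The STDP claims above adjust a synaptic memory cell according to the time interval between two neuron spiking pulses. The snippet below is a purely software illustration of that spike-timing dependence using the classic exponential STDP window; the constants and the scalar weight are illustrative assumptions and say nothing about the phase-change-cell implementation in the claims.

```python
import math

def stdp_delta_w(dt_ms: float, a_plus=0.1, a_minus=0.12, tau_ms=20.0) -> float:
    """Classic exponential STDP window (software stand-in for the synaptic circuit):
    potentiate when the pre-synaptic spike precedes the post-synaptic spike (dt > 0),
    depress otherwise. Constants here are illustrative, not from the patent."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

weight = 0.5
for dt in (5.0, 15.0, -5.0):   # time intervals between the two spiking pulses
    weight += stdp_delta_w(dt)
    print(f"dt={dt:+.0f} ms -> weight={weight:.3f}")
```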
Methods, systems, and products adapt recommender systems with pairwise feedback. A pairwise question is posed to a user. A response is received that selects a preference for a pair of items in the pairwise question. A latent factor model is adapted to incorporate the response, and an item is recommended to the user based on the response.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. (canceled) 2. A method, comprising: sending a sequence of questions from a server to a client device, each question in the sequence of questions soliciting one content item from a pair of different content items; receiving at the server successive responses from the client device to the sequence of questions, each successive response selecting a preference for the one content item in the pair of different content items in each question; sending the successive responses as feedback to a latent factor model for recommending content to the client device; and receiving, after each successive response, a probability of a preference for another content item in another pair of different content items. 3. The method of claim 2, comprising incorporating adaptive pairwise preference feedback into the latent factor model, such that previous feedback on pairs of different content items affects future pairs of different content items. 4. The method of claim 3, wherein the latent factor model operates in a Bayesian framework. 5. The method of claim 2, comprising generating a recommendation for content based on the successive responses. 6. The method of claim 2, comprising querying for one question in the sequence of questions. 7. The method of claim 2, comprising predicting a difference in the preference for the pair of different content items. 8. The method of claim 2, comprising determining a change in entropy after each successive response. 9. A system, comprising: a processor; and memory storing code that when executed causes the processor to perform operations, the operations comprising: sending a sequence of questions from a server to a client device, each question in the sequence of questions soliciting one content item from a pair of different content items; receiving at the server successive responses from the client device to the sequence of questions, each successive response selecting a preference for the one content item in the pair of different content items in each question; sending the successive responses as feedback to a latent factor model for recommending content to the client device; and receiving, after each successive response, a probability of a preference for another content item in another pair of different content items. 10. The system of claim 9, comprising incorporating adaptive pairwise preference feedback into the latent factor model, such that previous feedback on the pairs of different content items affects future pairs of different content items. 11. The system of claim 10, wherein the latent factor model operates in a Bayesian framework. 12. The system of claim 9, wherein the operations comprise generating a recommendation for content based on the successive responses. 13. The system of claim 9, wherein the operations comprise querying for one question in the sequence of questions. 14. The system of claim 9, wherein the operations comprise predicting a difference in the preference for the pair of different content items. 15. The system of claim 9, wherein the operations comprise determining a change in entropy after each successive response. 16. 
A memory storing instructions that when executed cause a processor to perform operations, the operations comprising: sending a sequence of questions from a server to a client device, each question in the sequence of questions soliciting one content item from a pair of different content items; receiving at the server successive responses from the client device to the sequence of questions, each successive response selecting a preference for the one content item in the pair of different content items in each question; sending the successive responses as feedback to a latent factor model for recommending content to a user of the client device; and receiving, after each successive response, a probability of a preference for another content item in another pair of different content items. 17. The memory of claim 16, comprising incorporating adaptive pairwise preference feedback into the latent factor model, such that previous feedback on the pairs of different content items affects future pairs of different content items. 18. The memory of claim 17, wherein the latent factor model operates in a Bayesian framework. 19. The memory of claim 16, wherein the operations comprise generating a recommendation for content based on the successive responses. 20. The memory of claim 16, wherein the operations comprise querying for one question in the sequence of questions. 21. The memory of claim 16, wherein the operations comprise predicting a difference in the preference for the pair of different content items.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Methods, systems, and products adapt recommender systems with pairwise feedback. A pairwise question is posed to a user. A response is received that selects a preference for a pair of items in the pairwise question. A latent factor model is adapted to incorporate the response, and an item is recommended to the user based on the response.
G06N5027
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Methods, systems, and products adapt recommender systems with pairwise feedback. A pairwise question is posed to a user. A response is received that selects a preference for a pair of items in the pairwise question. A latent factor model is adapted to incorporate the response, and an item is recommended to the user based on the response.
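A stochastic-gradient sketch of folding pairwise answers into a latent factor model, in the spirit of the claims above: the probability that the user prefers item i over item j is modeled as a logistic function of the factor difference, and each successive response nudges the factors. The Bayesian framework mentioned in the dependent claims is not reproduced; the learning rate, regularization, and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, k = 5, 3
user = rng.normal(0, 0.1, k)                 # latent factors for one user
items = rng.normal(0, 0.1, (n_items, k))     # latent factors per content item

def preference_prob(i: int, j: int) -> float:
    """Probability that the user prefers item i over item j."""
    return 1.0 / (1.0 + np.exp(-user @ (items[i] - items[j])))

def update(i: int, j: int, lr=0.05, reg=0.01):
    """Fold one pairwise answer ('i preferred over j') back into the factors."""
    global user
    err = 1.0 - preference_prob(i, j)         # gradient of the log-likelihood
    diff = items[i] - items[j]
    user += lr * (err * diff - reg * user)
    items[i] += lr * (err * user - reg * items[i])
    items[j] += lr * (-err * user - reg * items[j])

# Successive responses: the user keeps preferring item 0 over item 1.
for _ in range(50):
    update(0, 1)
print("P(item0 over item1) =", round(preference_prob(0, 1), 3))
```

After the updates the predicted preference probability for the observed pair rises, which is the feedback loop the claims describe: previous answers shape which pair is worth asking about next.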
Disclosed herein is an information processing apparatus including an evaluation information extraction section configured to extract evaluation information including an object targeted to be evaluated and an evaluation of the object targeted to be evaluated from a linguistic expression given as information expressed linguistically by a user of interest; an identification section configured to identify whether the evaluation information is of a first type regarding content or of a second type regarding another user; and an evaluation prediction section configured to predict the evaluation by the user of interest regarding the content, based on the evaluation information of the first type given by the user of interest and on the evaluation information given by the other user in the evaluation information of the second type given by the user of interest.
Please help me write a proper abstract based on the patent claims. CLAIM: 1-9. (canceled) 10. An information processing apparatus comprising: an evaluation information extracting section configured to extract evaluation information including an object targeted to be evaluated by a user of interest; and an identification section configured to identify whether said evaluation information is of a first type regarding content or of a second type regarding another user; and an evaluation prediction section configured to predict the evaluation by said user of interest regarding said content, based on the evaluation information of said first type given by said user of interest and on the evaluation information given by said other user in the evaluation information of said second type given by said user of interest; and a content presentation section configured to present one or more contents based on the predicted evaluation. 11. The information processing apparatus of claim 10, wherein said evaluation information of said second type includes said evaluation information in which said linguistic expression given by said other user is targeted to be evaluated; and wherein said evaluation prediction section predicts the evaluation by said user of interest regarding said content, based on said evaluation information of said first type given by said user of interest and on said evaluation information included in said linguistic expression given by said other user targeted to be evaluated in said evaluation information of said second type given by said user of interest. 12. The information processing apparatus of claim 10, wherein said evaluation information of said second type includes said evaluation information in which said other user is targeted to be evaluated; and said evaluation prediction section predicts the evaluation by said user of interest regarding said content, based on said evaluation information of said first type given by said user of interest and on said evaluation information given by said other user targeted to be evaluated in said evaluation information of said second type given by said user of interest. 13. The information processing apparatus of claim 10, wherein said evaluation prediction section comprises: an estimation section configured to estimate a parameter for use in the evaluation prediction of said content by said user of interest, based on said evaluation information of said first type given by said user of interest and on said evaluation information given by said other user in said evaluation information of said second type given by said user of interest; and a prediction section configured to predict the evaluation by said user of interest regarding said content based on said parameter of said user of interest 14. The information processing apparatus of claim 13, wherein said prediction section sets as said parameter of said user of interest the value obtained by a weighted addition of the parameter of said user of interest and the parameter of said other user evaluated positively by said user of interest. 15. The information processing apparatus of claim 13, wherein said prediction section estimates said parameter of said user of interest by sharing a prior distribution of said parameter between said user of interest and said other user evaluated positively by said user of interest. 16. 
The information processing apparatus of claim 13, wherein said prediction section predicts the evaluation by said user of interest regarding said content by use of said parameter of said other user evaluated positively by said user of interest. 17. An information processing method comprising the steps of: extracting evaluation information including an object targeted to be evaluated by a user of interest; and identifying whether said evaluation information is of a first type regarding content or of a second type regarding another user; and predicting the evaluation by said user of interest regarding said content, based on the evaluation information of said first type given by said user of interest and on the evaluation information given by said other user in the evaluation information of said second type given by said user of interest; and presenting one or more contents based on the predicted evaluation. 18. A non-transitory, computer-readable medium comprising instructions for causing a computer to execute a process comprising: extracting evaluation information including an object targeted to be evaluated by a user of interest; and identifying whether said evaluation information is of a first type regarding content or of a second type regarding another user; and predicting the evaluation by said user of interest regarding said content, based on the evaluation information of said first type given by said user of interest and on the evaluation information given by said other user in the evaluation information of said second type given by said user of interest; and presenting one or more contents based on the predicted evaluation.
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: Disclosed herein is an information processing apparatus including an evaluation information extraction section configured to extract evaluation information including an object targeted to be evaluated and an evaluation of the object targeted to be evaluated from a linguistic expression given as information expressed linguistically by a user of interest; an identification section configured to identify whether the evaluation information is of a first type regarding content or of a second type regarding another user; and an evaluation prediction section configured to predict the evaluation by the user of interest regarding the content, based on the evaluation information of the first type given by the user of interest and on the evaluation information given by the other user in the evaluation information of the second type given by the user of interest.
G06N502
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Disclosed herein is an information processing apparatus including an evaluation information extraction section configured to extract evaluation information including an object targeted to be evaluated and an evaluation of the object targeted to be evaluated from a linguistic expression given as information expressed linguistically by a user of interest; an identification section configured to identify whether the evaluation information is of a first type regarding content or of a second type regarding another user; and an evaluation prediction section configured to predict the evaluation by the user of interest regarding the content, based on the evaluation information of the first type given by the user of interest and on the evaluation information given by the other user in the evaluation information of the second type given by the user of interest.
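A minimal sketch of the weighted-addition idea in the claims above: the user of interest's parameter is blended with the parameters of other users the user evaluated positively, and the blended parameter is used to score content. The trust weights, the linear scoring function, and the feature vectors are hypothetical.

```python
import numpy as np

def blended_parameter(user_param, other_params, trust_weights):
    """Weighted addition of the user's parameter with the parameters of other users
    the user evaluated positively (weights are hypothetical trust scores)."""
    w_self = 1.0 - sum(trust_weights)
    blended = w_self * np.asarray(user_param, dtype=float)
    for param, weight in zip(other_params, trust_weights):
        blended += weight * np.asarray(param, dtype=float)
    return blended

def predict_score(param, content_features):
    """Linear predictor of the user's evaluation of a content item."""
    return float(np.dot(param, content_features))

me = [0.2, 0.1, 0.0]
trusted = [[0.8, 0.0, 0.3], [0.5, 0.4, 0.1]]       # users evaluated positively
param = blended_parameter(me, trusted, trust_weights=[0.3, 0.2])
for name, feats in {"doc-A": [1, 0, 1], "doc-B": [0, 1, 0]}.items():
    print(name, round(predict_score(param, feats), 3))
```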
A parameter-based multi-model blending method and system are described. The method includes selecting a parameter of interest among parameters estimated by each of a set of individual models, running the set of individual models with a range of inputs to obtain a range of estimates of the parameters from each of the set of individual models, and identifying, for each of the set of individual models, critical parameters among the parameters estimated, the critical parameters exhibiting a specified correlation with an error in estimation of the parameter of interest. For each subspace of combinations of the critical parameters, a parameter-based blended model is obtained by blending the set of individual models in accordance with the subspace of the critical parameters, the subspace defining a sub-range for each of the critical parameters.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A multi-expert based machine learning method to determine a blended forecasting model, the method comprising: storing historical data, the historical data including estimates and measurements of a parameter of interest and estimates of critical parameters, the critical parameters determined to be critical to an estimate of the parameter of interest; training a plurality of machine learning models with respective machine learning algorithms using training data that includes a first set of parameter values associated with a first range of time points, the first set of parameter values being obtained from the historical data; obtaining estimates of the parameter of interest with each of the machine learning models using a second set of parameter values associated with a second range of time points, the second set of parameter values being obtained from the historical data; determining, using a processor, a most accurate machine learning model among the machine learning models at each time point in the second range of time points; and determining the blended forecasting model based on the most accurate machine learning model determined for each time point in the second range of time points. 2. The method according to claim 1, wherein each of the machine learning algorithms blends a set of individual models. 3. The method according to claim 2, further comprising obtaining estimates of the input critical parameters from the set of individual models to determine the blended forecasting model. 4. The method according to claim 1, wherein the training the plurality of the machine learning models with the machine learning algorithms includes training with a linear regression, random forest regression, gradient boosting regression tree, support vector machine, or neural network. 5. The method according to claim 1, wherein the determining the most accurate machine learning model at each time point in the second range of time points includes comparing the respective estimate of the parameter of interest obtained with each of the machine learning models at the associated time point with a corresponding measurement of the parameter of interest in the historical data. 6. The method according to claim 1, wherein the determining the blended forecasting model includes selecting one of the machine learning models determined as the most accurate machine learning model as the blended forecasting model. 7. The method according to claim 1, wherein, the determining the blended forecasting model includes correlating, for each time point in the second range of time points, the most accurate machine learning model with corresponding values of the critical parameters in the historical data, wherein the blended forecasting model corresponding with input critical parameters is determined as the most accurate machine learning model correlated with the input critical parameters. 8. The method according to claim 1, wherein the correlating includes training a classification machine learning model. 9. 
A multi-expert based machine learning system to determine a blended forecasting model, the system comprising: a memory device to store historical data of parameters, the historical data including estimates and measurements of a parameter of interest and estimates of critical parameters determined to be critical to an estimate of the parameter of interest; and a processor configured to train a plurality of learning models with respective machine learning algorithms using training data that includes a first set of parameter values associated with a first range of time points obtained from the historical data, to obtain estimates of the parameter of interest with each of the machine learning models using a second set of parameter values associated with a second range of time points, the second set of parameter values being obtained from the historical data, to determine a most accurate machine learning model among the machine learning models at each time point in the second range of time points, and to determine the blended forecasting model from the most accurate machine learning models. 10. The system according to claim 9, wherein each of the machine learning algorithms blends a set of individual models. 11. The system according to claim 10, wherein the set of individual models estimates the input critical parameters. 12. The system according to claim 9, wherein the machine learning algorithms include training with a linear regression, random forest regression, gradient boosting regression tree, support vector machine, or neural network. 13. The system according to claim 9, wherein the processor compares the respective estimate of the parameter of interest obtained with each of the machine learning models at the associated time point in the second range of time points with a corresponding measurement of the parameter of interest in the historical data to determine the most accurate machine learning model at each time point in the range of time points. 14. The system according to claim 9, wherein for each time point, the processor correlates the most accurate machine learning model with corresponding values of the critical parameters in the second set of parameter values from the historical data, and the processor determines the blended forecasting model as the most accurate machine learning model correlated with input critical parameters. 15. 
A non-transitory computer program product having computer readable instructions stored thereon which, when executed by a processor, cause the processor to implement a method of determining a blended forecasting model, the method comprising: obtaining historical data, the historical data including estimates and measurements of a parameter of interest and estimates of critical parameters, the critical parameters determined to be critical to an estimate of the parameter of interest; training a plurality of machine learning models with respective machine learning algorithms using training data that includes a first set of parameter values associated with a first range of time points, the first set of parameter values being obtained from the historical data; obtaining estimates of the parameter of interest with each of the machine learning models using a second set of parameter values associated with a second range of time points, the second set of parameter values being obtained from the historical data; determining a most accurate machine learning model among the machine learning models at each time point in the second range of time points; and determining the blended forecasting model based on the most accurate machine learning model determined for each time point in the second range of time points. 16. The non-transitory computer program product according to claim 15, wherein each of the machine learning algorithms blends a set of individual models. 17. The non-transitory computer program product according to claim 16, further comprising obtaining estimates of the input critical parameters from the set of individual models to determine the blended forecasting model. 18. The non-transitory computer program product according to claim 15, wherein the training the plurality of the machine learning models with the machine learning algorithms includes training with a linear regression, random forest regression, gradient boosting regression tree, support vector machine, or neural network. 19. The non-transitory computer program product according to claim 15, wherein the determining the most accurate machine learning model at each time point in the second range of time points includes comparing the respective estimate of the parameter of interest obtained with each of the machine learning models at the associated time point with a corresponding measurement of the parameter of interest in the historical data. 20. The non-transitory computer program product according to claim 15, wherein, for each time point, the determining the blended forecasting model includes correlating the most accurate machine learning model with corresponding values of the critical parameters in the historical data, wherein the blended forecasting model corresponding with input critical parameters is determined as the most accurate machine learning model correlated with the input critical parameters.
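For orientation alongside the claims above, here is a minimal Python sketch of the multi-expert workflow they describe: train several regression models on a first range of time points, find the most accurate model at each time point in a second range, and correlate that choice with the critical parameters by training a classification model that routes new inputs to an expert. This is an illustrative reading only, not the patented implementation; the synthetic data, the scikit-learn estimators, and the blended_forecast helper are all assumptions.

```python
# Illustrative sketch only -- not the patented implementation. It assumes
# scikit-learn is available and uses synthetic data; all names are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Historical data: critical-parameter estimates (features) and measurements of
# the parameter of interest at a sequence of time points.
X = rng.normal(size=(400, 5))
y = X[:, 0] * 2.0 + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=400)

# First range of time points -> training data; second range -> evaluation data.
X_train, y_train = X[:300], y[:300]
X_eval, y_eval = X[300:], y[300:]

# Train a plurality of machine learning models with different algorithms.
experts = [
    LinearRegression(),
    RandomForestRegressor(n_estimators=50, random_state=0),
    GradientBoostingRegressor(random_state=0),
]
for model in experts:
    model.fit(X_train, y_train)

# For each time point in the second range, find the most accurate expert.
errors = np.column_stack([np.abs(m.predict(X_eval) - y_eval) for m in experts])
best_expert = errors.argmin(axis=1)   # index of the winning model per time point

# Correlate the winning expert with the critical-parameter values by training
# a classification model; this selector acts as the blended forecasting model.
selector = RandomForestClassifier(n_estimators=50, random_state=0)
selector.fit(X_eval, best_expert)

def blended_forecast(x_new):
    """Route each new set of critical parameters to the expert predicted to be best."""
    x_new = np.atleast_2d(x_new)
    chosen = selector.predict(x_new)
    return np.array([experts[k].predict(x_new[i:i + 1])[0] for i, k in enumerate(chosen)])

print(blended_forecast(X_eval[:3]))
```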
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A parameter-based multi-model blending method and system are described. The method includes selecting a parameter of interest among parameters estimated by each of a set of individual models, running the set of individual models with a range of inputs to obtain a range of estimates of the parameters from each of the set of individual models, and identifying, for each of the set of individual models, critical parameters among the parameters estimated, the critical parameters exhibiting a specified correlation with an error in estimation of the parameter of interest. For each subspace of combinations of the critical parameters, a parameter-based blended model is obtained by blending the set of individual models in accordance with the subspace of the critical parameters, the subspace defining a sub-range for each of the critical parameters.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A parameter-based multi-model blending method and system are described. The method includes selecting a parameter of interest among parameters estimated by each of a set of individual models, running the set of individual models with a range of inputs to obtain a range of estimates of the parameters from each of the set of individual models, and identifying, for each of the set of individual models, critical parameters among the parameters estimated, the critical parameters exhibiting a specified correlation with an error in estimation of the parameter of interest. For each subspace of combinations of the critical parameters, a parameter-based blended model is obtained by blending the set of individual models in accordance with the subspace of the critical parameters, the subspace defining a sub-range for each of the critical parameters.
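The abstract's subspace-based variant can also be sketched briefly: identify critical parameters by how strongly they correlate with estimation error, split their range into sub-ranges, and learn blend weights per subspace. The toy data, the two stand-in models, the equal-width bins, and the inverse-error weighting below are assumptions made for illustration; they are not taken from the patent.

```python
# Minimal sketch of the abstract's idea, not the patented method. Synthetic data,
# two toy "individual models", and simple equal-width bins stand in for the
# critical-parameter subspaces; the weighting scheme is an assumption.
import numpy as np

rng = np.random.default_rng(1)
temperature = rng.uniform(-10, 35, size=1000)      # example critical parameter
truth = 0.5 * temperature + 3.0                    # parameter of interest (e.g. demand)

# Two individual models whose error depends on where we are in parameter space.
model_a = truth + rng.normal(scale=np.abs(temperature) * 0.05 + 0.1)
model_b = truth + rng.normal(scale=(35 - temperature) * 0.05 + 0.1)

# Identify correlation between the critical parameter and each model's error.
err_a, err_b = np.abs(model_a - truth), np.abs(model_b - truth)
print("corr(temp, err_a) =", np.corrcoef(temperature, err_a)[0, 1])
print("corr(temp, err_b) =", np.corrcoef(temperature, err_b)[0, 1])

# Split the critical-parameter range into subspaces (sub-ranges) and learn
# per-subspace blend weights, here simply inverse mean absolute error.
bins = np.linspace(-10, 35, 6)                     # 5 subspaces
subspace = np.digitize(temperature, bins[1:-1])
weights = np.zeros((len(bins) - 1, 2))
for s in range(len(bins) - 1):
    mask = subspace == s
    inv = 1.0 / np.array([err_a[mask].mean(), err_b[mask].mean()])
    weights[s] = inv / inv.sum()

# The parameter-based blended estimate uses the weights of whichever subspace
# the current critical-parameter value falls into.
blended = weights[subspace, 0] * model_a + weights[subspace, 1] * model_b
print("MAE a/b/blended:", err_a.mean(), err_b.mean(), np.abs(blended - truth).mean())
```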
A system and method for providing various user interfaces is disclosed. In one embodiment, the various user interfaces include a series of user interfaces that guide a user through the machine learning process. In one embodiment, the various user interfaces are associated with a unified, project-based data scientist workspace to visually prepare, build, deploy, visualize and manage models, their results and datasets.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method comprising: generating, using one or more processors, a data import interface for presentation to a user, the data import interface including a first set of one or more graphical elements that receive user interaction defining a dataset to be imported; generating, using the one or more processors, a machine learning model creation interface for presentation to the user, the machine learning model creation interface including a second set of one or more graphical elements that receive user interaction defining a model to be generated; generating, using the one or more processors, a model testing interface for presentation to the user, the model testing interface including a third set of one or more graphical elements defining a model to be tested and a test dataset; and generating, using the one or more processors, a results interface for presentation to the user, the results interface including a fourth set of graphical elements informing the user of results obtained by testing the model to be tested with the test dataset. 2. The method of claim 1, wherein the first set of one or more graphical elements includes a first graphical element, a second graphical element and one or more of a third and a fourth graphical element, and the method further comprises: receiving, via the user interacting with the first graphical element of the data import interface a user-defined source of the dataset to be imported; receiving, via the user interacting with the second graphical element of the data import interface, a user-defined file including the dataset to be imported; dynamically updating the data import interface for the user to preview at least a sample of the dataset to be imported; receiving, via user interaction with one or more of the third graphical element and the fourth graphical element of the data import interface, a selection of one or more of a text blob and identifier columns from the user, wherein the third graphical element, when interacted with by the user, selects a text blob column and the fourth graphical element, when interacted with by the user, selects an identifier column; and importing the dataset based on the user's interaction with the first graphical element, the second graphical element and one or more of the third graphical element and the fourth graphical element. 3. 
The method of claim 1, wherein the second set of one or more graphical elements includes a first graphical element, a second graphical element, a third graphical element, a fourth graphical element and a fifth graphical element, and the method further comprises: presenting to the user, via the first graphical element, a dataset used in generating the model to be generated; dynamically modifying the second graphical element based on one or more columns of the dataset to be used in generating the model; receiving, via user interaction with the second graphical element, a user-selected objective column to be used to generate the model, the objective column associated with the dataset to be used in generating the model; dynamically modifying a third graphical element to identify a type of machine learning task based on the received, user-selected objective column; dynamically modifying a fourth graphical element to include a set of one or more machine learning methods associated with the identified machine learning task; the set of machine learning methods omitting machine learning methods not associated with the machine learning task; dynamically modifying a fifth graphical element such that the fifth graphical element is associated with a user-definable parameter set that is associated with a current selection from the set of machine learning methods of the fourth graphical element; and generating, responsive to user input, the currently selected model using the user-definable parameter set for the user-selected objective column of the dataset to be used for model generation. 4. The method of claim 3, wherein the machine learning task is one of classification and regression. 5. The method of claim 3, wherein the machine learning task is classification when the objective column is categorical and the machine learning task is regression when the objective column is continuous. 6. The method of claim 3, wherein the machine learning task is one of classification and regression and the set of machine learning methods includes a plurality of machine learning methods associated with classification when the learning task is classification and the set of machine learning methods includes a plurality of machine learning methods associated with regression when the machine learning task is regression. 7. The method of claim 1, wherein the fourth set of one or more graphical elements includes one or more of a confusion matrix, a cost/benefit weighting, a score, and an interactive visualization of the results, wherein: the confusion matrix includes information about predicted positives and negatives and actual positives and negatives obtained when testing the model to be tested using the test dataset; the cost/benefit weighting, responsive to user interaction, changes the reward or penalty associated with one or more of a true positive, a true negative, a false positive and a false negative, the confusion matrix dynamically updated based on the cost/benefit weighting; the score includes one or more scoring metrics describing performance of the model to be tested subsequent to testing; and the interactive visualization presenting a visual representation of a portion of the results obtained by the testing. 8. 
The method of claim 7, wherein the fourth set of one or more graphical elements includes one or more of a graphical element associated with downloading one or more targets or labels, a graphical element associated with downloading one or more probabilities, and a graphical element that adjusts the probability threshold, wherein adjusting the probability threshold dynamically updates the score and the interactive visualization. 9. The method of claim 1, comprising: generating a visualization for presentation to the user, including one or more of a visualization of tuning results, a visualization of a tree, a visualization of importances, and a plot visualization, wherein the plot visualization includes one or more plots associated with one or more of a dataset, a model and a result. 10. A system comprising: one or more processors; and a memory including instructions that, when executed by the one or more processors, cause the system to: generate a data import interface for presentation to a user, the data import interface including a first set of one or more graphical elements that receive user interaction defining a dataset to be imported; generate a machine learning model creation interface for presentation to the user, the machine learning model creation interface including a second set of one or more graphical elements that receive user interaction defining a model to be generated; generate a model testing interface for presentation to the user, the model testing interface including a third set of one or more graphical elements defining a model to be tested and a test dataset; and generate a results interface for presentation to the user, the results interface including a fourth set of graphical elements informing the user of results obtained by testing the model to be tested with the test dataset. 11. The system of claim 10, wherein the first set of one or more graphical elements includes a first graphical element, a second graphical element and one or more of a third and a fourth graphical element, and the instructions, when executed by the one or more processors, cause the system to: receive, via the user interacting with the first graphical element of the data import interface a user-defined source of the dataset to be imported; receive, via the user interacting with the second graphical element of the data import interface, a user-defined file including the dataset to be imported; dynamically update the data import interface for the user to preview at least a sample of the dataset to be imported; receive, via user interaction with one or more of the third graphical element and the fourth graphical element of the data import interface, a selection of one or more of a text blob and identifier columns from the user, wherein the third graphical element, when interacted with by the user, selects a text blob column and the fourth graphical element, when interacted with by the user, selects an identifier column; and import the dataset based on the user's interaction with the first graphical element, the second graphical element and one or more of the third graphical element and the fourth graphical element. 12. 
The system of claim 10, wherein the second set of one or more graphical elements includes a first graphical element, a second graphical element, a third graphical element, a fourth graphical element and a fifth graphical element, and the instructions, when executed by the one or more processors, cause the system to: present to the user, via the first graphical element, a dataset used in generating the model to be generated; dynamically modify the second graphical element based on one or more columns of the dataset to be used in generating the model; receive, via user interaction with the second graphical element, a user-selected objective column to be used to generate the model, the objective column associated with the dataset to be used in generating the model; dynamically modify a third graphical element to identify a type of machine learning task based on the received, user-selected objective column; dynamically modify a fourth graphical element to include a set of one or more machine learning methods associated with the identified machine learning task; the set of machine learning methods omitting machine learning methods not associated with the machine learning task; dynamically modify a fifth graphical element such that the fifth graphical element is associated with a user-definable parameter set that is associated with a current selection from the set of machine learning methods of the fourth graphical element; and generate, responsive to user input, the currently selected model using the user-definable parameter set for the user-selected objective column of the dataset to be used for model generation. 13. The system of claim 12, wherein the machine learning task is one of classification and regression. 14. The system of claim 12, wherein the machine learning task is classification when the objective column is categorical and the machine learning task is regression when the objective column is continuous. 15. The system of claim 12, wherein the machine learning task is one of classification and regression and the set of machine learning methods includes a plurality of machine learning methods associated with classification when the learning task is classification and the set of machine learning methods includes a plurality of machine learning methods associated with regression when the machine learning task is regression. 16. The system of claim 10, wherein the fourth set of one or more graphical elements includes one or more of a confusion matrix, a cost/benefit weighting, a score, and an interactive visualization of the results, wherein: the confusion matrix includes information about predicted positives and negatives and actual positives and negatives obtained when testing the model to be tested using the test dataset; the cost/benefit weighting, responsive to user interaction, changes the reward or penalty associated with one or more of a true positive, a true negative, a false positive and a false negative, the confusion matrix dynamically updated based on the cost/benefit weighting; the score includes one or more scoring metrics describing performance of the model to be tested; and the interactive visualization presenting a visual representation of a portion of the results obtained by the testing. 17. 
The system of claim 16, wherein the fourth set of one or more graphical elements includes one or more of a graphical element associated with downloading one or more targets or labels, a graphical element associated with downloading one or more probabilities, and a graphical element that adjusts the probability threshold, wherein adjusting the probability threshold dynamically updates the score and the interactive visualization. 18. The system of claim 10, wherein the instructions, when executed by the one or more processors, cause the system to: generate a visualization for presentation to the user, including one or more of a visualization of tuning results, a visualization of a tree, a visualization of importances, and a plot visualization, wherein the plot visualization includes one or more plots associated with one or more of a dataset, a model and a result. 19. A system comprising: one or more processors; and a memory including instructions that, when executed by the one or more processors, cause the system to: generate a user interface associated with a machine learning project for presentation to a user, the user interface including a first graphical element, a second graphical element, a third graphical element, and a fourth graphical element, a data import interface for presentation to a user, wherein the first, second, third and fourth graphical elements are user selectable and a first portion of the user interface is modified based on which graphical element the user selects, the first, second, third and fourth graphical elements presented in a second portion of the user interface and the presentation of the first, second, third and fourth graphical elements is persistent regardless of which graphical element is selected except a selected graphical element is visually differentiated as the selected graphical element, the first graphical element associated with datasets for the machine learning project, and, when selected, the first portion of the user interface is modified to present a table of any datasets associated with the machine learning project and the first portion includes a graphical element to import a dataset, the second graphical element associated with models for the machine learning project, and, when selected, the first portion of the user interface is modified to present a table of any models associated with the machine learning project and the first portion includes a graphical element to create a new model, the third graphical element associated with results for the machine learning project, and, when selected, the first portion of the user interface is modified to present a table of any result sets associated with the machine learning project and the first portion includes a graphical element to create new results, and the fourth graphical element associated with plots for the machine learning project, and, when selected, the first portion of the user interface is modified to present any plots associated with the machine learning project and the first portion includes a graphical element to create a plot. 20. 
The system of claim 19, wherein: the first portion of the user interface, when modified to present the table of any datasets associated with the machine learning project, includes one or more datasets used for one or more of training and testing a first model associated with the machine learning project and information about the one or more datasets, the first portion of the user interface, when modified to present the table of any models associated with the machine learning project and the first portion, includes the first model and information about the first model, the first portion of the user interface, when modified to present the table of any result sets associated with the machine learning project, includes a first set of results associated with a test of the first model and a test dataset and information about the first set of results, and the first portion of the user interface, when modified to present any plots associated with the machine learning project, includes a first set of one or more plots associated with one or more of a dataset, a model and a result.
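The most algorithmic part of these interface claims is the objective-column logic (claims 3-6 and 12-15): the task type is inferred from whether the objective column is categorical or continuous, and the list of machine learning methods offered is filtered accordingly. The sketch below shows one plausible reading of that behaviour; the dtype heuristic, the method lists, and the methods_for_objective name are assumptions, not the product's actual code.

```python
# Hedged sketch of the objective-column behaviour described in claims 3-6 and
# 12-15, not the product's code: infer the task from the objective column and
# return only the methods associated with that task. The dtype heuristic and
# the method lists are illustrative assumptions.
import pandas as pd
from pandas.api.types import is_float_dtype

CLASSIFICATION_METHODS = ["logistic regression", "random forest classifier",
                          "gradient boosting classifier"]
REGRESSION_METHODS = ["linear regression", "random forest regressor",
                      "gradient boosting regressor"]

def methods_for_objective(dataset, objective_column):
    """Return (task, methods): regression for a continuous objective column,
    classification for a categorical one, omitting methods of the other task."""
    if is_float_dtype(dataset[objective_column]):       # continuous objective
        return "regression", REGRESSION_METHODS
    return "classification", CLASSIFICATION_METHODS     # categorical objective

df = pd.DataFrame({"churned": ["yes", "no", "no", "yes"],
                   "revenue": [10.5, 3.2, 7.7, 9.9]})
print(methods_for_objective(df, "churned"))    # ('classification', [...])
print(methods_for_objective(df, "revenue"))    # ('regression', [...])
```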
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A system and method for providing various user interfaces is disclosed. In one embodiment, the various user interfaces include a series of user interfaces that guide a user through the machine learning process. In one embodiment, the various user interfaces are associated with a unified, project-based data scientist workspace to visually prepare, build, deploy, visualize and manage models, their results and datasets.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A system and method for providing various user interfaces is disclosed. In one embodiment, the various user interfaces include a series of user interfaces that guide a user through the machine learning process. In one embodiment, the various user interfaces are associated with a unified, project-based data scientist workspace to visually prepare, build, deploy, visualize and manage models, their results and datasets.
A method for object classification in a decision tree based adaptive boosting (AdaBoost) classifier implemented on a single-instruction multiple-data (SIMD) processor is provided that includes receiving feature vectors extracted from N consecutive window positions in an image in a memory coupled to the SIMD processor and evaluating the N consecutive window positions concurrently by the AdaBoost classifier using the feature vectors and vector instructions of the SIMD processor, in which the AdaBoost classifier concurrently traverses decision trees for the N consecutive window positions until classification is complete for the N consecutive window positions.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for object classification in a decision tree based adaptive boosting (AdaBoost) classifier implemented on a single-instruction multiple-data (SIMD) processor, the method comprising: receiving feature vectors extracted from N consecutive window positions in an image in a memory coupled to the SIMD processor, in which N is a vector width of the SIMD processor divided by a bit size of a feature, and in which a feature vector includes N feature values, one feature value for each of the N consecutive window positions; and evaluating the N consecutive window positions concurrently by the AdaBoost classifier using the feature vectors and vector instructions of the SIMD processor, in which the AdaBoost classifier concurrently traverses decision trees for the N consecutive window positions until classification is complete for the N consecutive window positions, in which a decision tree includes a plurality of nodes, a threshold value for each node, and a plurality of leaves, each leaf including a partial score. 2. The method of claim 1, in which evaluating the N consecutive window positions includes: loading a plurality of the feature vectors using a vector load instruction of the SIMD processor, in which one feature vector is loaded for each node of a single decision tree of the AdaBoost classifier; comparing each feature vector to a corresponding threshold vector using a vector compare instruction of the SIMD processor to generate a mask vector for each node, in which the corresponding threshold vector includes N copies of the threshold value for the node corresponding to the feature vector, and in which the mask vector includes N comparison results, one for each of the N features of the feature vector; generating a partial score vector based on the mask vectors and the partial score values of the leaves of the decision tree, the partial score vector including N partial score values, one for each of the N consecutive window positions; accumulating the partial scores into an accumulated score vector, the accumulated score vector including N accumulated score values, one for each of the N consecutive window positions; and comparing the accumulated score vector to an exit threshold vector using a vector compare instruction of the SIMD processor to determine whether or not object classification can be terminated for one or more of the N consecutive window positions. 3. The method of claim 2, in which generating a partial score vector includes: generating a leaf selection mask vector for each of the leaves of the decision tree based on the mask vectors, in which a leaf selection mask vector is a logical combination of mask vectors for nodes in a traversal path of the single decision tree that reaches the leaf corresponding to the leaf selection mask vector; and performing a logical and operation of each leaf selection mask vector with a corresponding leaf vector to select partial scores for each of the N window positions from the leaf vectors, in which a corresponding leaf vector includes N copies of a partial score of the leaf. 4. The method of claim 1, in which the decision trees are two-level binary decision trees. 5. The method of claim 1, in which the AdaBoost classifier is trained for pedestrian classification. 6. The method of claim 1, in which the SIMD processor is a digital signal processor. 7. 
A digital system comprising: a single-instruction multiple-data (SIMD) processor; a memory component coupled to the SIMD processor, the memory component configured to store features extracted from an image; a plurality of decision trees stored in the memory component, in which each decision tree includes a plurality of nodes, a threshold value for each node, and a plurality of leaves, each leaf including a partial score; and a decision tree based adaptive boosting (AdaBoost) classifier trained for object classification stored in the memory component, the AdaBoost classifier executable on the SIMD processor, in which the AdaBoost classifier uses the plurality of decision trees for object classification, the AdaBoost classifier configured to evaluate N consecutive window positions concurrently using the features and vector instructions of the SIMD processor, in which the AdaBoost classifier concurrently traverses decision trees for the N consecutive window positions until classification is complete for the N consecutive window positions and in which N is a vector width of the SIMD processor divided by a bit size of a feature. 8. The digital system of claim 7, including a feature extraction component coupled to the memory component and configured to extract the features from the N consecutive window positions in an image. 9. The digital system of claim 8, including a camera coupled to the feature extraction component to provide the image. 10. The digital system of claim 7, in which the AdaBoost classifier is configured to evaluate the N consecutive window positions by loading a plurality of feature vectors from the memory component using a vector load instruction of the SIMD processor, in which one feature vector is loaded for each node of a single decision tree of the plurality of decision trees and in which a feature vector includes N feature values, one feature value for each of the N consecutive window positions; comparing each feature vector to a corresponding threshold vector using a vector compare instruction of the SIMD processor to generate a mask vector for each node, in which the corresponding threshold vector includes N copies of the threshold value for the node corresponding to the feature vector, and in which the mask vector includes N comparison results, one for each of the N features of the feature vector; generating a partial score vector based on the mask vectors and the partial score values of the leaves of the decision tree, the partial score vector including N partial score values, one for each of the N consecutive window positions; accumulating the partial scores into an accumulated score vector, the accumulated score vector including N accumulated score values, one for each of the N consecutive window positions; and comparing the accumulated score vector to an exit threshold vector using a vector compare instruction of the SIMD processor to determine whether or not object classification can be terminated for one or more of the N consecutive window positions. 11. 
The digital system of claim 10, in which generating a partial score vector includes: generating a leaf selection mask vector for each of the leaves of the decision tree based on the mask vectors, in which a leaf selection mask vector is a logical combination of mask vectors for nodes in a traversal path of the single decision tree that reaches the leaf corresponding to the leaf selection mask vector; and performing a logical and operation of each leaf selection mask vector with a corresponding leaf vector to select partial scores for each of the N window positions from the leaf vectors, in which a corresponding leaf vector includes N copies of a partial score of the leaf. 12. The digital system of claim 7, in which the decision trees are two-level binary decision trees. 13. The digital system of claim 7, in which the AdaBoost classifier is trained for pedestrian classification. 14. The digital system of claim 7, in which the SIMD processor is a digital signal processor. 15. A non-transitory computer readable medium storing software instructions that, when executed on a single-instruction multiple-data (SIMD) processor, cause a method for object classification in a decision tree based adaptive boosting (AdaBoost) classifier to be executed, the method comprising: receiving feature vectors extracted from N consecutive window positions in an image in a memory coupled to the SIMD processor, in which N is a vector width of the SIMD processor divided by a bit size of a feature, and in which a feature vector includes N feature values, one feature value for each of the N consecutive window positions; and evaluating the N consecutive window positions concurrently by the AdaBoost classifier using the feature vectors and vector instructions of the SIMD processor, in which the AdaBoost classifier concurrently traverses decision trees for the N consecutive window positions until classification is complete for the N consecutive window positions, in which a decision tree includes a plurality of nodes, a threshold value for each node, and a plurality of leaves, each leaf including a partial score. 16. The computer readable medium of claim 15, in which evaluating the N consecutive window positions includes: loading a plurality of the feature vectors using a vector load instruction of the SIMD processor, in which one feature vector is loaded for each node of a single decision tree of the AdaBoost classifier; comparing each feature vector to a corresponding threshold vector using a vector compare instruction of the SIMD processor to generate a mask vector for each node, in which the corresponding threshold vector includes N copies of the threshold value for the node corresponding to the feature vector, and in which the mask vector includes N comparison results, one for each of the N features of the feature vector; generating a partial score vector based on the mask vectors and the partial score values of the leaves of the decision tree, the partial score vector including N partial score values, one for each of the N consecutive window positions; accumulating the partial scores into an accumulated score vector, the accumulated score vector including N accumulated score values, one for each of the N consecutive window positions; and comparing the accumulated score vector to an exit threshold vector using a vector compare instruction of the SIMD processor to determine whether or not object classification can be terminated for one or more of the N consecutive window positions. 17. 
The computer readable medium of claim 16, in which generating a partial score vector includes: generating a leaf selection mask vector for each of the leaves of the decision tree based on the mask vectors, in which a leaf selection mask vector is a logical combination of mask vectors for nodes in a traversal path of the single decision tree that reaches the leaf corresponding to the leaf selection mask vector; and performing a logical and operation of each leaf selection mask vector with a corresponding leaf vector to select partial scores for each of the N window positions from the leaf vectors, in which a corresponding leaf vector includes N copies of a partial score of the leaf. 18. The computer readable medium of claim 15, in which the decision trees are two-level binary decision trees. 19. The computer readable medium of claim 15, in which the AdaBoost classifier is trained for pedestrian classification. 20. The computer readable medium of claim 15, in which the SIMD processor is a digital signal processor.
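The claims above describe a fairly concrete data flow: one feature vector per tree node covering N window positions, vector compares producing mask vectors, leaf-selection masks formed by logical combinations along each path of a two-level tree, and an accumulated score checked against an exit threshold. The NumPy sketch below mimics that flow on ordinary arrays, with vectorized boolean operations standing in for the SIMD vector instructions; the random thresholds, leaf scores, and exit thresholds are placeholders rather than a trained pedestrian classifier.

```python
# Rough NumPy analogue of the vector-instruction flow in the claims above: each
# array position plays the role of one of the N consecutive window positions,
# and boolean arrays stand in for the SIMD mask vectors. Feature values, tree
# thresholds and leaf scores are random placeholders, not a trained classifier.
import numpy as np

rng = np.random.default_rng(2)
N = 16                                     # windows evaluated concurrently
num_trees = 10

# Per tree: 3 nodes (root + 2 children) of a two-level binary tree, 4 leaves.
thresholds = rng.normal(size=(num_trees, 3))
leaf_scores = rng.normal(scale=0.5, size=(num_trees, 4))
features = rng.normal(size=(num_trees, 3, N))   # one feature vector per node
exit_thresholds = np.linspace(-1.0, 1.0, num_trees)

scores = np.zeros(N)                       # accumulated score vector
active = np.ones(N, dtype=bool)            # windows not yet rejected

for t in range(num_trees):
    # "Vector compare": one mask vector per node of the current tree.
    m0 = features[t, 0] > thresholds[t, 0]
    m1 = features[t, 1] > thresholds[t, 1]
    m2 = features[t, 2] > thresholds[t, 2]

    # Leaf-selection masks = logical combinations of node masks along each path.
    leaf_masks = np.stack([~m0 & ~m1, ~m0 & m1, m0 & ~m2, m0 & m2])

    # Partial-score vector: pick the leaf score selected for each window.
    partial = (leaf_masks * leaf_scores[t][:, None]).sum(axis=0)
    scores += partial

    # Early-exit test against the stage threshold for all N windows at once.
    active &= scores >= exit_thresholds[t]
    if not active.any():
        break

print("windows classified as object:", np.flatnonzero(active))
```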
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A method for object classification in a decision tree based adaptive boosting (AdaBoost) classifier implemented on a single-instruction multiple-data (SIMD) processor is provided that includes receiving feature vectors extracted from N consecutive window positions in an image in a memory coupled to the SIMD processor and evaluating the N consecutive window positions concurrently by the AdaBoost classifier using the feature vectors and vector instructions of the SIMD processor, in which the AdaBoost classifier concurrently traverses decision trees for the N consecutive window positions until classification is complete for the N consecutive window positions.