output (stringlengths 7–3.46k) | input (stringclasses: 1 value) | instruction (stringlengths 129–114k) |
---|---|---|
G06N5048 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: An efficient fact checking system analyzes and determines the factual accuracy of information and/or characterizes the information by comparing the information with source information. The efficient fact checking system automatically monitors information, processes the information, fact checks the information efficiently and/or provides a status of the information. |
|
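The fact-checking abstract in the row above only says that monitored information is compared against source information to produce a status. Purely as an illustration (this is not the patent's method; the scoring rule, threshold, and every function name are assumed), a comparison of a statement against a small source corpus might look like this sketch:

```python
# Toy fact-check sketch: compares a monitored statement against source
# statements by token overlap and reports a coarse status. Illustrative
# only; the patent does not specify this scoring scheme.

def token_overlap(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two strings."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def fact_check(statement: str, sources: list, threshold: float = 0.5) -> str:
    """Return a status based on the best-matching source statement."""
    best = max((token_overlap(statement, s) for s in sources), default=0.0)
    if best >= threshold:
        return "supported"
    if best >= threshold / 2:
        return "unverified"
    return "unsupported"

if __name__ == "__main__":
    sources = ["The Eiffel Tower is located in Paris, France."]
    print(fact_check("The Eiffel Tower is in Paris", sources))  # supported
```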
Methods for identifying a case with a missing decision from a set of decision rules in violation of a decision requirement are provided. The set of decision rules and decision requirement are received, and a set of decisions made by the decision rules is obtained. A decision detection constraint graph is built, which represents, for each case used by the set of decision rules, whether each decision in the set of decisions is made or not by a decision rule in the set of decision rules. A decision requirement constraint graph is built from the decision requirement, which represents, for each case used by the set of decision rules, the decisions required. For each case used by the set of decision rules, the decision requirement constraint graph and the decision detection constraint graph for the case are used to identify if the case is a case with a missing decision. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-implemented method of identifying a case with a missing decision from a set of decision rules in violation of a decision requirement, wherein the decision rules determine whether a decision is made for a case, and wherein the decision requirement determines the decisions required for a case, the method comprising: receiving the set of decision rules; receiving the decision requirement; obtaining a set of decisions made by the decision rules; building a decision detection constraint graph that represents, for each case used by the set of decision rules, whether each decision in the set of decisions is made or not for the case by a decision rule in the set of decision rules; building a decision requirement constraint graph from the decision requirement, that represents for each case used by the set of decision rules the decisions required for that case; and for each case used by the set of decision rules, using the decision requirement constraint graph and the decision detection constraint graph for the case to identify if the case is a case with a missing decision. 2. A computer-implemented method as claimed in claim 1, wherein the set of decisions is generated from the decision rules. 3. A computer-implemented method as claimed in claim 1, wherein a set of cases is generated from the decision rules. 4. A computer-implemented method as claimed in claim 3, further comprising generating a new set of cases used by the set of decision rules, and using the new set of cases to determine further cases with missing decisions. 5. A computer-implemented method as claimed in claim 4, wherein the method is repeated for each possible set of cases used by the set of decision rules. 6. A computer-implemented method as claimed in claim 1, further comprising generating a ghost decision rule that determines that a decision is made for the identified case with a missing decision. 7. A computer-implemented method as claimed in claim 6, wherein the method is repeated with the ghost decision rule added to the set of decision rules. 8. A computer-implemented method as claimed in claim 7, wherein the method is repeated until no further cases with missing decisions are identified. 9. 
A computer system for identifying a case with a missing decision from a set of decision rules in violation of a decision requirement, wherein the decision rules determine whether a decision is made for a case, wherein the computer system comprises memory and a processor system and is arranged to: receive the set of decision rules and store it in the memory; receive the decision requirement and store it in the memory; obtain a set of decisions made by the decision rules and store it in the memory; use the processor system to build a decision detection constraint graph that represents, for each case used by the set of decision rules, whether each decision in the set of decisions is made or not by a decision rule in the set of decision rules; use the processor system to build a decision requirement constraint graph from the decision requirement that represents, for each case used by the set of decision rules, the decisions required; and for each case used by the set of decision rules, using the decision requirement constraint graph and the decision detection constraint graph for the case to identify if the case is a case with a missing decision. 10. A computer system as claimed in claim 9, arranged to generate the set of decisions from the decision rules. 11. A computer system as claimed in claim 9, arranged to generate a set of cases from the decision rules. 12. A computer system as claimed in claim 11, further arranged to generate a new set of cases used by the set of decision rules, and use the new set of cases to determine further cases with missing decisions. 13. A computer system as claimed in claim 12, arranged to determine further cases with missing decisions for each possible set of cases used by the set of decision rules. 14. A computer system as claimed in claim 9, further arranged to generate a ghost decision rule that determines that a decision is made for the identified case with a missing decision. 15. A computer system as claimed in claim 14, further arranged to add the ghost decision rule to the set of decision rules, and use the new set of decision rules to determine further cases with missing decisions. 16. A computer system as claimed in claim 15, arranged to generate new ghost decision rules until no further cases with missing decisions are identified. 17. 
A computer program product for identifying a case with a missing decision from a set of decision rules in violation of a decision requirement, wherein the decision rules determine whether a decision is made for a case, and wherein the decision requirement determines the decisions required for a case, the computer program product comprising a computer-readable storage medium having computer-readable program code embodied therewith, the computer-readable program code configured to perform the steps of: receiving the set of decision rules; receiving the decision requirement; obtaining a set of decisions made by the decision rules; building a decision detection constraint graph that represents, for each case used by the set of decision rules, whether each decision in the set of decisions is made or not by a decision rule in the set of decision rules; building a decision requirement constraint graph from the decision requirement, that represents, for each case used by the set of decision rules, the decisions required; and for each case used by the set of decision rules, using the decision requirement constraint graph and the decision detection constraint graph for the case to identify if the case is a case with a missing decision. |
|
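The claims in the row above compare, for each case, the decisions actually made by a rule set against the decisions a requirement demands, and flag cases with a missing decision. The following toy sketch replaces the claimed constraint graphs with plain set comparisons; every rule, case field, and function name is hypothetical:

```python
# Minimal sketch of the missing-decision check described in the claims.
# The patent builds decision detection and decision requirement constraint
# graphs; this toy version simply compares, per case, the set of decisions
# made by the rules with the set of decisions required.

def decisions_made(case: dict, rules) -> set:
    """Apply each (predicate, decision) rule and collect the decisions made."""
    return {decision for predicate, decision in rules if predicate(case)}

def missing_decisions(cases, rules, required):
    """Yield (case, missing) pairs where a required decision is never made."""
    for case in cases:
        missing = required(case) - decisions_made(case, rules)
        if missing:
            yield case, missing

if __name__ == "__main__":
    # Hypothetical rule set: credit is only offered above an income bar.
    rules = [(lambda c: c["income"] > 50_000, "offer_credit")]
    # Hypothetical requirement: every adult case must get a credit decision.
    required = lambda c: {"offer_credit"} if c["age"] >= 18 else set()
    cases = [{"age": 25, "income": 30_000}, {"age": 16, "income": 0}]
    for case, missing in missing_decisions(cases, rules, required):
        print(case, "is missing", missing)  # flags the low-income adult case
```

A "ghost" rule in the sense of claim 6 would then be a new (predicate, decision) pair constructed so that it fires for the flagged case, after which the check can be repeated.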
PENDING | Please predict whether this patent is acceptable.PATENT ABSTRACT: Methods for identifying a case with a missing decision from a set of decision rules in violation of a decision requirement are provided. The set of decision rules and decision requirement are received, and a set of decisions made by the decision rules is obtained. A decision detection constraint graph is built, which represents, for each case used by the set of decision rules, whether each decision in the set of decisions is made or not by a decision rule in the set of decision rules. A decision requirement constraint graph is built from the decision requirement, which represents, for each case used by the set of decision rules, the decisions required. For each case used by the set of decision rules, the decision requirement constraint graph and the decision detection constraint graph for the case are used to identify if the case is a case with a missing decision. |
|
G06N5047 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Methods for identifying a case with a missing decision from a set of decision rules in violation of a decision requirement are provided. The set of decision rules and decision requirement are received, and a set of decisions made by the decision rules is obtained. A decision detection constraint graph is built, which represents, for each case used by the set of decision rules, whether each decision in the set of decisions is made or not by a decision rule in the set of decision rules. A decision requirement constraint graph is built from the decision requirement, which represents, for each case used by the set of decision rules, the decisions required. For each case used by the set of decision rules, the decision requirement constraint graph and the decision detection constraint graph for the case are used to identify if the case is a case with a missing decision. |
|
In order to provide a 1H-magnitude neuro-semiconductor device, a semiconductor device that constitutes a neural network in which a plurality of sets each including a plurality of synapse bonds and a neuron section are connected with each other. The semiconductor device includes the synapse bonds that perform non-contact communications using magnetic coupling, and the neuron sections including a wired connection and a logical circuit. The semiconductor device has a connection array in which the synapse bonds and the neuron sections are arranged three-dimensionally. The semiconductor device has a function for enabling reconfiguration of at least some of groupings of the connection array or wired short-distance, intermediate-distance, or long-distance connections. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A semiconductor device that constitutes a neural network in which a plurality of sets each including a plurality of synapse bonds and a neuron section are connected with each other, the semiconductor device comprising: the synapse bonds that perform non-contact communication using magnetic coupling; and the neuron sections that include a wired connection and a logical circuit. 2. A semiconductor device that constitutes a neural network in which a plurality of sets each including a plurality of synapse bonds and a neuron section are connected with each other, the semiconductor device comprising: the synapse bonds that perform non-contact communication using magnetic coupling; and the neuron sections that include a wired connection and a logical circuit, wherein the semiconductor device has a connection array where the synapse bonds and the neuron sections are spread three-dimensionally. 3. A semiconductor device that constitutes a neural network in which a plurality of sets each including a plurality of synapse bonds and a neuron section are connected with each other, the semiconductor device comprising: the synapse bonds that perform non-contact communication using magnetic coupling; and the neuron sections that include a wired connection and a logical circuit, wherein the semiconductor device has a connection array where the synapse bonds and the neuron sections are spread three-dimensionally, and grouping of the connection array is performed. 4. A semiconductor device that constitutes a neural network in which a plurality of sets each including a plurality of synapse bonds and a neuron section are connected with each other, the semiconductor device comprising: the synapse bonds that perform non-contact communication using magnetic coupling; and the neuron sections that include a wired connection and a logical circuit, wherein the semiconductor device has a connection array where the synapse bonds and the neuron sections are spread three-dimensionally, and grouping of the connection array is performed and the semiconductor device has a function of reconfiguring a configuration including the number of groupings and the magnitude. 5. 
A semiconductor device that constitutes a neural network in which a plurality of sets each including a plurality of synapse bonds and a neuron section are connected with each other, the semiconductor device comprising: the synapse bonds that perform non-contact communication using magnetic coupling; and the neuron sections that include a wired connection and a logical circuit, wherein the semiconductor device has a connection array where the synapse bonds and the neuron sections are spread three-dimensionally, and the semiconductor device has wired connections of short-distance, intermediate-distance, or long-distance connections between a plurality of connection arrays. 6. A semiconductor device that constitutes a neural network in which a plurality of sets each including a plurality of synapse bonds and a neuron section are connected with each other, the semiconductor device comprising: the synapse bonds that perform non-contact communication using magnetic coupling; and the neuron sections that include a wired connection and a logical circuit, wherein the semiconductor device has a connection array where the synapse bonds and the neuron sections are spread three-dimensionally, and the semiconductor device has a function of reconfiguring at least some of the wired connections of short-distance, intermediate-distance, or long-distance connections between a plurality of connection arrays. |
|
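The preceding row describes physical hardware: magnetically coupled synapse bonds and wired neuron sections arranged in a three-dimensional, regroupable connection array. Purely as a software illustration of the reconfigurable-grouping idea (not a model of the device itself, and with every name assumed), a toy connection array could be regrouped like this:

```python
# Toy software model of a 3-D connection array with reconfigurable
# groupings. The patent describes hardware; this sketch only illustrates
# regrouping array positions, and every identifier is hypothetical.

from itertools import product

class ConnectionArray:
    def __init__(self, nx: int, ny: int, nz: int):
        # Each (x, y, z) position stands for one synapse-bond/neuron site.
        self.sites = list(product(range(nx), range(ny), range(nz)))
        self.group_of = {site: 0 for site in self.sites}  # start as one group

    def regroup(self, assign):
        """Reconfigure groupings: assign maps a site to a group id."""
        self.group_of = {site: assign(site) for site in self.sites}

    def groups(self) -> dict:
        out = {}
        for site, gid in self.group_of.items():
            out.setdefault(gid, []).append(site)
        return out

if __name__ == "__main__":
    array = ConnectionArray(2, 2, 2)
    # Split the array into two groups by the z coordinate; short-, medium-,
    # or long-distance wiring could then be assigned per group.
    array.regroup(lambda site: site[2])
    print({gid: len(members) for gid, members in array.groups().items()})  # {0: 4, 1: 4}
```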
PENDING | Please predict whether this patent is acceptable.PATENT ABSTRACT: In order to provide a 1H-magnitude neuro-semiconductor device, a semiconductor device that constitutes a neural network in which a plurality of sets each including a plurality of synapse bonds and a neuron section are connected with each other. The semiconductor device includes the synapse bonds that perform non-contact communications using magnetic coupling, and the neuron sections including a wired connection and a logical circuit. The semiconductor device has a connection array in which the synapse bonds and the neuron sections are arranged three-dimensionally. The semiconductor device has a function for enabling reconfiguration of at least some of groupings of the connection array or wired short-distance, intermediate-distance, or long-distance connections. |
|
G06N3063 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: In order to provide a 1H-magnitude neuro-semiconductor device, a semiconductor device that constitutes a neural network in which a plurality of sets each including a plurality of synapse bonds and a neuron section are connected with each other. The semiconductor device includes the synapse bonds that perform non-contact communications using magnetic coupling, and the neuron sections including a wired connection and a logical circuit. The semiconductor device has a connection array in which the synapse bonds and the neuron sections are arranged three-dimensionally. The semiconductor device has a function for enabling reconfiguration of at least some of groupings of the connection array or wired short-distance, intermediate-distance, or long-distance connections. |
|
A neural network training tool selects from a plurality of parallelizing techniques and selects from a plurality of forward-propagation computation techniques. The neural network training tool performs a forward-propagation phase to train a neural network using the selected parallelizing technique and the selected forward-propagation computation technique based on one or more inputs. Additionally, the neural network training tool selects from a plurality computation techniques and from a plurality of parallelizing techniques for a backward-propagation phase. The neural network training tool performs a backward-propagation phase of training the neural network using the selected backward-propagation parallelizing technique and the selected backward-propagation computation technique to generate error gradients and weight deltas and to update weights associated with one or more layers of the neural network. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method comprising: receiving one or more inputs for training a neural network; selecting a parallelizing technique from a plurality of parallelizing techniques; selecting a forward-propagation computation technique from a plurality of computation techniques; directing the neural network to process the one or more inputs using the selected parallelizing technique and the selected computation technique; and receiving from the neural network, one or more outputs resulting from the neural network processing the one or more inputs. 2. A method as recited in claim 1, wherein the plurality of parallelizing techniques include: parallel processing; and processing in parallel. 3. A method as recited in claim 1, wherein the plurality of computation techniques include: matrix multiplication; and stencil-based computation. 4. A method as recited in claim 1, wherein selecting a parallelizing technique from the plurality of parallelizing techniques is based, at least in part, on properties associated with the neural network. 5. A method as recited in claim 4, wherein the properties associated with the neural network comprise one or more of: a number of layers within the neural network; a number of feature maps associated with individual layers of the neural network; a data sparsity associated with individual layers of the neural network; a size associated with a convolution filter used to process the inputs; or a stride size. 6. A method as recited in claim 1, wherein selecting a computation technique from the plurality of computation techniques is based, at least in part, on properties associated with the neural network. 7. A method as recited in claim 6, wherein the properties associated with the neural network comprise one or more of: a size of the inputs; a number of inputs; a number of feature maps of the inputs; a stride size; or a size associated with a convolution filter that is used to process the inputs. 8. 
A method as recited in claim 1, wherein: the neural network includes at least a first layer and a second layer; selecting the parallelizing technique comprises: selecting a first parallelizing technique from the plurality of parallelizing techniques to use for the first layer; and selecting a second parallelizing technique from the plurality of parallelizing techniques to use for the second layer; and selecting the computation technique comprises: selecting a first computation technique from the plurality of computation techniques to use for the first layer; and selecting a second computation technique from the plurality of computation techniques to use for the second layer. 9. A method as recited in claim 1, further comprising: determining, based at least in part on the one or more inputs and the one or more outputs, one or more output activation errors; selecting a backward-propagation computation technique from a plurality of backward-propagation computation techniques; and processing the neural network based, at least in part, on the one or more output activation errors, using the selected backward-propagation technique. 10. A method as recited in claim 9, wherein the plurality of backward-propagation computation techniques include: matrix multiplication; and sparse-dense matrix computation. 11. A method as recited in claim 9, wherein processing the neural network based, at least in part, on the one or more output activation errors, includes updating weights associated with one or more layers of the neural network. 12. A method as recited in claim 9, further comprising: selecting a backward-propagation parallelization technique from a plurality of backward-propagation parallelization techniques, wherein processing the neural network based, at least in part, on the one or more output activation errors, using the selected backward-propagation technique, further includes processing the neural network based on the selected backward-propagation parallelization technique. 13. A device comprising: a processor; and a computer-readable medium communicatively coupled to the processor; a parallelizing decision module stored on the computer-readable medium and executable by the processor to select, based at least in part on properties of a neural network, a parallelizing technique from a plurality of parallelizing techniques; a forward propagation decision module stored on the computer-readable medium and executable by the processor to select, based at least in part on properties of the neural network, a computation technique from a plurality of computation techniques; and a forward-propagation processing module configured to: receive one or more inputs for training the neural network; cause the neural network to process, based at least in part on the selected parallelizing technique and the selected computation technique, the one or more inputs; and receive, from the neural network, one or more outputs resulting from the neural network processing the one or more inputs. 14. A device as recited in claim 13, wherein: the plurality of parallelizing techniques include: parallel processing; and processing in parallel; and the plurality of computation techniques include: matrix multiplication; and stencil-based computation. 15. 
A device as recited in claim 13, further comprising a backward-propagation decision module stored on the computer-readable media and executable by the processor to: determine, based at least in part on the one or more inputs and the one or more outputs, one or more output activation errors for the neural network; select, based at least in part on properties of the neural network, a backward-propagation technique from a plurality of backward-propagation techniques and a parallelizing technique from a plurality of parallelizing techniques; and process the neural network using the selected backward-propagation technique and the selected parallelizing technique to update weights associated with one or more layers of the neural network. 16. One or more computer-readable media storing computer-executable instructions that, when executed on one or more processors, configure a computer to train a neural network by performing acts comprising: causing the neural network to process one or more inputs; receiving from the neural network, one or more outputs resulting from the neural network processing the one or more inputs; determining, based at least in part on the one or more inputs and the one or more outputs, one or more output activation errors for the neural network; selecting, based at least in part on one or more properties associated with the neural network, a backward-propagation technique from a plurality of backward-propagation techniques; using the selected backward-propagation technique and the one or more output activation errors to calculate error gradients and weight deltas for the neural network; and updating weights associated with one or more layers of the neural network based, at least in part, on the error gradients or the weight deltas. 17. One or more computer-readable media as recited in claim 16, wherein: the selected backward-propagation technique is a sparse-dense matrix multiplication technique; and using the selected backward-propagation technique and the one or more output activation errors to generate input activation errors and weight deltas for the neural network includes: generating one or more sparse matrices using the one or more output activation errors; representing an individual sparse matrix of the one or more sparse matrices using a row index array, a column index array, and a value array; calculating the error gradients and the weight deltas based, at least in part, on the one or more sparse matrices. 18. One or more computer-readable media as recited in claim 16, wherein the one or more properties associated with the neural network comprise at least one of: a number of layers within the neural network; a number of feature maps associated with individual layers of the neural network; a data sparsity associated with individual layers of the neural network; a size associated with a kernel; and a stride size. 19. One or more computer-readable media as recited in claim 18, wherein the data sparsity is represented as a percentage of values within the individual layers of the neural network that include a zero value. 20. One or more computer-readable media as recited in claim 19, wherein selecting the backward-propagation technique includes selecting a sparse-dense matrix multiplication technique based, at least in part, on the data sparsity being greater than a threshold percentage of values that include a zero value. |
|
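The training-tool claims above select a forward computation technique from layer properties and a backward-propagation technique from data sparsity (claim 20 mentions a sparsity threshold). A sketch of that selection logic, with thresholds and property names assumed rather than taken from the patent:

```python
# Sketch of the technique-selection logic outlined in the claims: pick a
# forward computation technique per layer from layer properties, and pick
# a backward-propagation technique from error-gradient sparsity. The
# thresholds and property names are illustrative assumptions.

import numpy as np

def choose_forward_technique(layer: dict) -> str:
    """Claim 3 lists matrix multiplication and stencil-based computation."""
    # Assumption: small convolution filters with stride 1 favour a stencil sweep.
    if layer["filter_size"] <= 3 and layer["stride"] == 1:
        return "stencil"
    return "matrix_multiplication"

def choose_backward_technique(error_grad: np.ndarray, sparsity_threshold: float = 0.7) -> str:
    """Claim 20: use sparse-dense multiplication when sparsity is high."""
    sparsity = float(np.mean(error_grad == 0.0))  # fraction of zero values
    return "sparse_dense" if sparsity > sparsity_threshold else "matrix_multiplication"

if __name__ == "__main__":
    layer = {"filter_size": 3, "stride": 1, "feature_maps": 64}
    grads = np.zeros((8, 8))
    grads[0, 0] = 0.5  # mostly zero, so very sparse
    print(choose_forward_technique(layer))   # stencil
    print(choose_backward_technique(grads))  # sparse_dense
```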
PENDING | Please predict whether this patent is acceptable.PATENT ABSTRACT: A neural network training tool selects from a plurality of parallelizing techniques and selects from a plurality of forward-propagation computation techniques. The neural network training tool performs a forward-propagation phase to train a neural network using the selected parallelizing technique and the selected forward-propagation computation technique based on one or more inputs. Additionally, the neural network training tool selects from a plurality computation techniques and from a plurality of parallelizing techniques for a backward-propagation phase. The neural network training tool performs a backward-propagation phase of training the neural network using the selected backward-propagation parallelizing technique and the selected backward-propagation computation technique to generate error gradients and weight deltas and to update weights associated with one or more layers of the neural network. |
|
G06N308 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A neural network training tool selects from a plurality of parallelizing techniques and selects from a plurality of forward-propagation computation techniques. The neural network training tool performs a forward-propagation phase to train a neural network using the selected parallelizing technique and the selected forward-propagation computation technique based on one or more inputs. Additionally, the neural network training tool selects from a plurality computation techniques and from a plurality of parallelizing techniques for a backward-propagation phase. The neural network training tool performs a backward-propagation phase of training the neural network using the selected backward-propagation parallelizing technique and the selected backward-propagation computation technique to generate error gradients and weight deltas and to update weights associated with one or more layers of the neural network. |
|
Examples of the present disclosure describe systems and methods for improving the recommendations provided to a user by a recommendation system using viewed content as implicit feedback. In some aspects, attention models are created/updated to infer the user attention of a user that has viewed or is viewing content on a computing device. The attention model may be used to convert inferences of user attention into inferences of user satisfaction with the viewed content. The inferences of user satisfaction may be used to generate inferences of fatigue with the viewed content. The inferences of user satisfaction and inferences of user fatigue may then be used as implicit feedback to improve the content selection, content triggering and/or content presentation by the recommendation system. Other examples are also described. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system for modeling user satisfaction, the system comprising: at least one processor; and memory coupled to the at least one processor, the memory comprising computer executable instructions that, when executed by the at least one processor, performs a method comprising: receiving a first viewing session data; determining at least a first content item in the first viewing session data, wherein the at least a first content item has a first content type; determining a first aggregated display time for the first content type; determining a first content density for the first content type; generating a first viewing time based on the first aggregated display time for the first content type and the first content density for the first content type; determining a satisfaction value for the first content type; and updating a satisfaction model based on the satisfaction value. 2. The system of claim 1, wherein first session viewing data comprises one or more viewports, the one or more viewports comprising at least a portion of one or more content items. 3. The system of claim 1, wherein determining the first aggregated display time comprises aggregating one or more content items in the viewing session data and attributing a duration to each of the aggregated one or more content items. 4. The system of claim 3, wherein an attributed duration of the one or more content items determines a display time for one or more content items, wherein the display time is based on the visible area of the one or more content items within the one or more viewports. 5. The system of claim 4, wherein the visible area excludes occluded areas within the viewing session data. 6. The system of claim 1, wherein determining a first content density comprises determining at least one of: the number of characters within the first content item and the size in pixels of the first content item. 7. The system of claim 1, wherein the first viewing time is used to update an attention value. 8. The system of claim 1, wherein the satisfaction model is one of: a rule-based model, a machine-learned regressor, and a machine-learned classifier. 9. 
The system of claim 1, further comprising: receiving a second viewing session data; determining at least a second content item in the second viewing session data, wherein the at least a second content item has the first content type; determining a second aggregated display time for the first content type; determining a second content density for the first content type; generating a second viewing time based on the aggregated display time for the first content type and the second content density for the first content type; comparing the first viewing time to the second viewing time; and determining a fatigue value based at least on the comparison. 10. The system of claim 9, wherein the fatigue value is further based at least on determining whether the at least a first content item is different from the at least a second content item. 11. The system of claim 10, wherein the fatigue model is updated based on the fatigue value. 12. The system of claim 10, further comprising: optimizing a presentation of the first content type based upon at least one of: the satisfaction value and the fatigue value. 13. The system of claim 10, wherein optimizing a presentation of the first content type comprises prioritizing the first content type by at least one of: content type selection, content type triggering, and content type ranking. 14. A system for providing recommendations using viewable content, the system comprising: a processor; a recommendation component; and a memory coupled to the processor, the memory comprising computer executable instructions that, when executed by the processor, performs a method comprising: receiving viewing session data; creating an user attention model from the received viewing session data; using the attention model, creating a satisfaction model for the received viewing session data; selecting a content selection related to the received viewing session data; using the satisfaction model, prioritizing as prioritized content a portion of content from at least one of the viewing session data and the content selection related to the viewing session data; and integrating the prioritized content with the recommendation component. 15. The system of claim 14, further comprising: using the satisfaction model, creating a fatigue model for the received viewing session data. 16. The system of claim 14, wherein selecting a content selection comprises: determining a criteria in the received viewing session data, wherein the criteria is at least one of: a content type, a time, a location, a user, and a user group; and selecting content with the criteria. 17. The system of claim 14, wherein the prioritized content is prioritized based on at least one of: a content of the content selection and a ranking of the content selection. 18. The system of claim 14, wherein the recommendation component provides recommendations based at least upon the prioritized content. 19. The system of claim 14, wherein the recommendation component updates a profile based upon at least one of the attention model, the satisfaction model, and the prioritized content. 20. 
A method for providing recommendations using viewable content, the method comprising: receiving a first viewing session data; determining at least a first content in the first viewing session data, wherein the first content has a first content type; determining a first aggregated display time for the first content type; generating a first viewing time based on the first aggregated display time for the first content type; determining a satisfaction value for the first content type; receiving a second viewing session data; determining at least a second content in the second viewing session data, wherein the second content has the first content type; determining a second aggregated display time for the first content type; generating a second viewing time based on the second aggregated display time for the first content type; comparing the first viewing time and the second viewing time; determining a fatigue value based at least on the comparison; and providing a recommendation based at least in part on at least one of the satisfaction value and the fatigue value. |
|
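The recommendation claims above derive a viewing time from aggregated display time and content density, map it to a satisfaction value, and infer fatigue by comparing viewing times across sessions for the same content type. A toy version of those signals, with every formula and constant assumed for illustration:

```python
# Illustrative sketch of the satisfaction/fatigue signals in the claims:
# viewing time is display time normalised by content density, and fatigue
# is the relative drop in viewing time between two sessions. The exact
# formulas are assumptions, not taken from the patent.

def viewing_time(display_seconds: float, content_density: float) -> float:
    """Normalise aggregated display time by density (e.g. character count)."""
    return display_seconds / max(content_density, 1.0)

def satisfaction(view_time: float, baseline: float = 1.0) -> float:
    """Toy rule-based satisfaction value in [0, 1]."""
    return min(view_time / baseline, 1.0)

def fatigue(first_view_time: float, second_view_time: float) -> float:
    """Toy fatigue value: relative decrease in viewing time across sessions."""
    if first_view_time <= 0:
        return 0.0
    return max(0.0, (first_view_time - second_view_time) / first_view_time)

if __name__ == "__main__":
    t1 = viewing_time(display_seconds=30.0, content_density=600)  # 0.05 s per character
    t2 = viewing_time(display_seconds=12.0, content_density=600)  # 0.02 s per character
    print(round(satisfaction(t1, baseline=0.04), 2))  # 1.0
    print(round(fatigue(t1, t2), 2))                  # 0.6
```

A recommendation component could then demote the content type whenever the fatigue value exceeds some threshold, which is the kind of prioritization claims 12-13 describe.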
PENDING | Please predict whether this patent is acceptable.PATENT ABSTRACT: Examples of the present disclosure describe systems and methods for improving the recommendations provided to a user by a recommendation system using viewed content as implicit feedback. In some aspects, attention models are created/updated to infer the user attention of a user that has viewed or is viewing content on a computing device. The attention model may be used to convert inferences of user attention into inferences of user satisfaction with the viewed content. The inferences of user satisfaction may be used to generate inferences of fatigue with the viewed content. The inferences of user satisfaction and inferences of user fatigue may then be used as implicit feedback to improve the content selection, content triggering and/or content presentation by the recommendation system. Other examples are also described. |
|
G06N504 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Examples of the present disclosure describe systems and methods for improving the recommendations provided to a user by a recommendation system using viewed content as implicit feedback. In some aspects, attention models are created/updated to infer the user attention of a user that has viewed or is viewing content on a computing device. The attention model may be used to convert inferences of user attention into inferences of user satisfaction with the viewed content. The inferences of user satisfaction may be used to generate inferences of fatigue with the viewed content. The inferences of user satisfaction and inferences of user fatigue may then be used as implicit feedback to improve the content selection, content triggering and/or content presentation by the recommendation system. Other examples are also described. |
|
A method of operating a spiking neural network having neurons coupled together with a synapse includes monitoring a timing of a presynaptic spike and monitoring a timing of a postsynaptic spike. The method also includes determining a time difference between the postsynaptic spike and the presynaptic spike. The method further includes calculating a stochastic update of a delay for the synapse based on the time difference between the postsynaptic spike and the presynaptic spike. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of operating a spiking neural network having a plurality of neurons coupled together with at least one synapse, comprising: monitoring a timing of a presynaptic spike; monitoring a timing of a postsynaptic spike; determining a time difference between the postsynaptic spike and the presynaptic spike; and calculating a stochastic update of a delay for the at least one synapse based at least in part on the time difference. 2. The method of claim 1, in which the stochastic update is based at least in part on an evaluation of a probability function. 3. The method of claim 2, in which the probability function is based at least in part on an increase in the delay. 4. The method of claim 2, in which the probability function is based at least in part on a decrease in the delay. 5. The method of claim 2, in which at least one region of a probability distribution is parameterized. 6. The method of claim 2, in which the probability function is piecewise linear. 7. The method of claim 1, in which the update is based at least in part on a look up table. 8. The method of claim 1, in which the update is based at least in part on a calculation. 9. An apparatus for operating a spiking neural network having a plurality of neurons coupled together with at least one synapse, comprising: a memory; and at least one processor coupled to the memory, the at least one processor being configured: to monitor a timing of a presynaptic spike; to monitor a timing of a postsynaptic spike; to determine a time difference between the postsynaptic spike and the presynaptic spike; and to calculate a stochastic update of a delay for the at least one synapse based at least in part on the time difference. 10. The apparatus of claim 9, in which the at least one processor is configured to calculate the stochastic update based at least in part on an evaluation of a probability function. 11. The apparatus of claim 10, in which the probability function is based at least in part on an increase in the delay. 12. The apparatus of claim 10, in which the probability function is based at least in part on a decrease in the delay. 13. The apparatus of claim 10, in which at least one region of a probability distribution is parameterized. 14. The apparatus of claim 10, in which the probability function is piecewise linear. 15. The apparatus of claim 9, in which the at least one processor is configured to calculate the stochastic update based at least in part on a look up table. 16. The apparatus of claim 9, in which the at least one processor is configured to calculate the stochastic update based at least in part on a calculation. 17. 
An apparatus for operating a spiking neural network having a plurality of neurons coupled together with at least one synapse, comprising: means for monitoring a timing of a presynaptic spike; means for monitoring a timing of a postsynaptic spike; means for determining a time difference between the postsynaptic spike and the presynaptic spike; and means for calculating a stochastic update of a delay for the at least one synapse based at least in part on the time difference. 18. The apparatus of claim 17, in which the means for calculating the stochastic update calculates the stochastic update based at least in part on an evaluation of a probability function. 19. The apparatus of claim 18, in which the probability function is based at least in part on an increase in the delay. 20. The apparatus of claim 18, in which the probability function is based at least in part on a decrease in the delay. 21. The apparatus of claim 18, in which at least one region of a probability distribution is parameterized. 22. The apparatus of claim 18, in which the probability function is piecewise linear. 23. The apparatus of claim 17, in which the means for calculating the stochastic update calculates the stochastic update based at least in part on a look up table. 24. The apparatus of claim 17, in which the means for calculating the stochastic update calculates the stochastic update based at least in part on a calculation. 25. A computer program product for operating a spiking neural network having a plurality of neurons coupled together with at least one synapse, comprising: a non-transitory computer readable medium have encoded thereon program code, the program code comprising: program code to monitor a timing of a presynaptic spike; program code to monitor a timing of a postsynaptic spike; program code to determine a time difference between the postsynaptic spike and the presynaptic spike; and program code to calculate a stochastic update of a delay for the at least one synapse based at least in part on the time difference. |
|
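The spiking-network claims above update a synaptic delay stochastically, with the update probability (piecewise linear per claim 6) driven by the post-minus-pre spike time difference. A sketch under an assumed probability shape, step size, and bound:

```python
# Sketch of the stochastic delay update described in the claims: the spike
# time difference sets a piecewise-linear probability of nudging the
# synaptic delay up or down. Probability shape, step size, and the
# non-negative bound are illustrative assumptions.

import random

def update_probability(dt: float, window: float = 20.0) -> float:
    """Piecewise-linear probability of an update, largest for small |dt|."""
    return max(0.0, 1.0 - abs(dt) / window)

def stochastic_delay_update(delay: float, t_pre: float, t_post: float,
                            step: float = 0.5, rng=random) -> float:
    dt = t_post - t_pre  # time difference between post- and presynaptic spikes
    if rng.random() < update_probability(dt):
        # Assumption: shrink the delay for causal pairs (post after pre),
        # grow it otherwise.
        delay += -step if dt > 0 else step
    return max(0.0, delay)

if __name__ == "__main__":
    random.seed(0)
    delay = 5.0
    for t_pre, t_post in [(10.0, 14.0), (30.0, 28.0), (50.0, 90.0)]:
        delay = stochastic_delay_update(delay, t_pre, t_post)
    print(delay)
```

Claims 7 and 15 also allow the probability to come from a look-up table instead of a calculation, which would simply replace `update_probability` with an indexed table of the same values.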
ACCEPTED | Please predict whether this patent is acceptable.PATENT ABSTRACT: A method of operating a spiking neural network having neurons coupled together with a synapse includes monitoring a timing of a presynaptic spike and monitoring a timing of a postsynaptic spike. The method also includes determining a time difference between the postsynaptic spike and the presynaptic spike. The method further includes calculating a stochastic update of a delay for the synapse based on the time difference between the postsynaptic spike and the presynaptic spike. |
|
G06N308 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A method of operating a spiking neural network having neurons coupled together with a synapse includes monitoring a timing of a presynaptic spike and monitoring a timing of a postsynaptic spike. The method also includes determining a time difference between the postsynaptic spike and the presynaptic spike. The method further includes calculating a stochastic update of a delay for the synapse based on the time difference between the postsynaptic spike and the presynaptic spike. |
|
A machine receives a first set of global parameters from a global parameter server. The first set of global parameters includes data that weights one or more operands used in an algorithm that models an entity type. Multiple learner processors in the machine execute the algorithm using the first set of global parameters and a mini-batch of data known to describe the entity type. The machine generates a consolidated set of gradients that describes a direction for the first set of global parameters in order to improve an accuracy of the algorithm in modeling the entity type when using the first set of global parameters and the mini-batch of data. The machine transmits the consolidated set of gradients to the global parameter server. The machine then receives a second set of global parameters from the global parameter server, where the second set of global parameters is a modification of the first set of global parameters based on the consolidated set of gradients. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-implemented method for parameter data sharing, the computer-implemented method comprising: receiving, by a first machine, a first set of global parameters from a global parameter server, wherein the first set of global parameters comprises data that weights one or more operands used in an algorithm that models an entity type; executing, by multiple learner processors in the first machine, the algorithm using the first set of global parameters and a first mini-batch of data known to describe the entity type; generating, by the first machine, a first consolidated set of gradients that describes a direction for the first set of global parameters in order to improve an accuracy of the algorithm in modeling the entity type when using the first set of global parameters and the first mini-batch of data known to describe the entity type; transmitting, from the first machine, the first consolidated set of gradients to the global parameter server; and receiving, by the first machine, a second set of global parameters from the global parameter server, wherein the second set of global parameters is a modification of the first set of global parameters based on the first consolidated set of gradients. 2. The computer-implemented method of claim 1, further comprising: receiving, by a second machine, the first set of global parameters from the global parameter server; executing, by multiple learner processors in the second machine, the algorithm using the first set of global parameters and a second mini-batch of data known to describe the entity type; generating, by the second machine, a second consolidated set of gradients that describes a direction for the first set of global parameters in order to improve the accuracy of the algorithm in modeling the entity type when using the first set of global parameters; transmitting, from the second machine, the second consolidated set of gradients to the global parameter server; and receiving, by the first machine and the second machine, a third set of global parameters from the global parameter server, wherein the third set of global parameters is a modification of the first set of global parameters based on the first consolidated set of gradients and the second consolidated set of gradients. 3. 
The computer-implemented method of claim 2, further comprising: testing, by a third machine, a set of unknown data using the third set of global parameters in order to determine whether the set of unknown data matches the entity type. 4. The computer-implemented method of claim 1, further comprising: generating each of the first consolidated set of gradients by a different learner processor in the first machine; writing, by each of the multiple learner processors in the first machine, each gradient generated by each of the multiple learner processors in the first machine; and consolidating, by the first machine, gradients generated by all of the multiple learner processors in the first machine in order to create the first consolidated set of gradients. 5. The computer-implemented method of claim 1, further comprising: reading, by all of the multiple learner processors in the first machine, the first set of global parameters and the second set of global parameters from a shared memory in the first machine. 6. The computer-implemented method of claim 1, further comprising: storing, by one or more processors, global parameters currently in use by the first machine in a first memory in the first machine; and storing, by one or more processors, global parameters being downloaded from the global parameter server for future use, in a second memory in the first machine. 7. The computer-implemented method of claim 1, wherein the first set of global parameters further weight results from one or more particular operators used in the algorithm that models the entity type. 8. A computer program product for parameter data sharing, the computer program product comprising a computer readable storage device having program instructions embodied therewith, the program instructions readable and executable by a computer to perform a method comprising: receiving, by a first machine, a first set of global parameters from a global parameter server, wherein the first set of global parameters comprises data that weights one or more operands used in an algorithm that models an entity type; executing, by multiple learner processors in the first machine, the algorithm using the first set of global parameters and a first mini-batch of data known to describe the entity type; generating, by the first machine, a first consolidated set of gradients that describes a direction for the first set of global parameters in order to improve an accuracy of the algorithm in modeling the entity type when using the first set of global parameters and the first mini-batch of data known to describe the entity type; transmitting, from the first machine, the first consolidated set of gradients to the global parameter server; and receiving, by the first machine, a second set of global parameters from the global parameter server, wherein the second set of global parameters is a modification of the first set of global parameters based on the first consolidated set of gradients. 9. 
The computer program product of claim 8, wherein the method further comprises: receiving, by a second machine, the first set of global parameters from the global parameter server; executing, by multiple learner processors in the second machine, the algorithm using the first set of global parameters and a second mini-batch of data known to describe the entity type; generating, by the second machine, a second consolidated set of gradients that describe a direction for the first set of global parameters in order to improve the accuracy of the algorithm in modeling the entity type when using the first set of global parameters; transmitting, from the second machine, the second consolidated set of gradients to the global parameter server; and receiving, by the first machine and the second machine, a third set of global parameters from the global parameter server, wherein the third set of global parameters is a modification of the first set of global parameters based on the first consolidated set of gradients and the second consolidated set of gradients. 10. The computer program product of claim 9, wherein the method further comprises: testing, by a third machine, a set of unknown data using the third set of global parameters in order to determine whether the set of unknown data matches the entity type. 11. The computer program product of claim 8, wherein the method further comprises: generating each of the first consolidated set of gradients by a different learner processor in the first machine; writing, by each of the multiple learner processors in the first machine, each gradient generated by each of the multiple learner processors in the first machine; and consolidating, by the first machine, gradients generated by all of the multiple learner processors in the first machine in order to create the first consolidated set of gradients. 12. The computer program product of claim 8, wherein the method further comprises: reading, by all of the multiple learner processors in the first machine, the first set of global parameters and the second set of global parameters from a shared memory in the first machine. 13. The computer program product of claim 8, wherein the method further comprises: storing global parameters currently in use by the first machine in a first memory in the first machine; and storing global parameters being downloaded from the global parameter server for future use, in a second memory in the first machine. 14. The computer program product of claim 8, wherein the first set of global parameters further weight results from one or more particular operators used in the algorithm that models the entity type. 15. The computer program product of claim 8, wherein the program instructions are provided as a service in a cloud environment. 16. 
A computer system comprising one or more processors, one or more computer readable memories, and one or more computer readable storage mediums, and program instructions stored on at least one of the one or more storage mediums for execution by at least one of the one or more processors via at least one of the one or more memories, the stored program instructions comprising: program instructions to receive, by a first machine, a first set of global parameters from a global parameter server, wherein the first set of global parameters comprises data that weights one or more operands used in an algorithm that models an entity type; program instructions to execute, by multiple learner processors in the first machine, the algorithm using the first set of global parameters and a first mini-batch of data known to describe the entity type; program instructions to generate, by the first machine, a first consolidated set of gradients that describes a direction for the first set of global parameters in order to improve an accuracy of the algorithm in modeling the entity type when using the first set of global parameters and the first mini-batch of data known to describe the entity type; program instructions to transmit, from the first machine, the first consolidated set of gradients to the global parameter server; and program instructions to receive, by the first machine, a second set of global parameters from the global parameter server, wherein the second set of global parameters is a modification of the first set of global parameters based on the first consolidated set of gradients. 17. The computer system of claim 16, further comprising: program instructions to receive, by a second machine, the first set of global parameters from the global parameter server; program instructions to execute, by multiple learner processors in the second machine, the algorithm using the first set of global parameters and a second mini-batch of data known to describe the entity type; program instructions to generate, by the second machine, a second consolidated set of gradients that describes a direction for the first set of global parameters in order to improve the accuracy of the algorithm in modeling the entity type when using the first set of global parameters; program instructions to transmit, from the second machine, the second consolidated set of gradients to the global parameter server; and program instructions to receive, by the first machine and the second machine, a third set of global parameters from the global parameter server, wherein the third set of global parameters is a modification of the first set of global parameters based on the first consolidated set of gradients and the second consolidated set of gradients. 18. The computer system of claim 17, further comprising: program instructions to test, by a third machine, a set of unknown data using the third set of global parameters in order to determine whether the set of unknown data matches the entity type. 19. 
The computer system of claim 16, further comprising: program instructions to generate each of the first consolidated set of gradients by a different learner processor in the first machine; program instructions to write, by each of the multiple learner processors in the first machine, each gradient generated by each of the multiple learner processors in the first machine; and program instructions to consolidate, by the first machine, gradients generated by all of the multiple learner processors in the first machine in order to create the first consolidated set of gradients. 20. The computer system of claim 16, further comprising: program instructions to read, by all of the multiple learner processors in the first machine, the first set of global parameters and the second set of global parameters from a shared memory in the first machine. |
|
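The parameter-sharing claims above describe one data-parallel round: learners on a machine each compute a gradient on a slice of the mini-batch, the machine consolidates those gradients, and the global parameter server returns an updated set of global parameters. A minimal sketch of that round, using an assumed linear model and squared-error loss purely for illustration:

```python
# Sketch of one round from the claims: per-learner gradients on mini-batch
# slices, machine-level consolidation (averaging), and a parameter-server
# update. The model, loss, and learning rate are illustrative assumptions.

import numpy as np

def learner_gradient(w: np.ndarray, X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Gradient of mean squared error for a linear model y ~ X @ w."""
    residual = X @ w - y
    return 2.0 * X.T @ residual / len(y)

def consolidate(gradients: list) -> np.ndarray:
    """Machine-level consolidation: average the learners' gradients."""
    return np.mean(gradients, axis=0)

def parameter_server_update(w: np.ndarray, grad: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """Global parameter server applies the consolidated gradient."""
    return w - lr * grad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(8, 3))
    y = X @ np.array([1.0, -2.0, 0.5])                 # ground-truth weights
    w = np.zeros(3)                                     # first set of global parameters
    learner_slices = np.array_split(np.arange(8), 4)    # 4 learners on one machine
    grads = [learner_gradient(w, X[idx], y[idx]) for idx in learner_slices]
    w = parameter_server_update(w, consolidate(grads))  # second set of global parameters
    print(w)
```

Claim 2 extends the same round to a second machine: the server would combine consolidated gradients from both machines before producing the next set of global parameters.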
PENDING | Please predict whether this patent is acceptable.PATENT ABSTRACT: A machine receives a first set of global parameters from a global parameter server. The first set of global parameters includes data that weights one or more operands used in an algorithm that models an entity type. Multiple learner processors in the machine execute the algorithm using the first set of global parameters and a mini-batch of data known to describe the entity type. The machine generates a consolidated set of gradients that describes a direction for the first set of global parameters in order to improve an accuracy of the algorithm in modeling the entity type when using the first set of global parameters and the mini-batch of data. The machine transmits the consolidated set of gradients to the global parameter server. The machine then receives a second set of global parameters from the global parameter server, where the second set of global parameters is a modification of the first set of global parameters based on the consolidated set of gradients. |
|
G06N99005 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A machine receives a first set of global parameters from a global parameter server. The first set of global parameters includes data that weights one or more operands used in an algorithm that models an entity type. Multiple learner processors in the machine execute the algorithm using the first set of global parameters and a mini-batch of data known to describe the entity type. The machine generates a consolidated set of gradients that describes a direction for the first set of global parameters in order to improve an accuracy of the algorithm in modeling the entity type when using the first set of global parameters and the mini-batch of data. The machine transmits the consolidated set of gradients to the global parameter server. The machine then receives a second set of global parameters from the global parameter server, where the second set of global parameters is a modification of the first set of global parameters based on the consolidated set of gradients. |
|
A system and method improves the probability of correctly detecting an object from a collection of source data and reduces the processing load. A plurality of algorithms for a given data type are selected and ordered based on a cumulative trained probability of correctness (Pc) that each of the algorithms, which are processed in a chain and conditioned upon the result of the preceding algorithms, produce a correct result and a processing load. The algorithms cull the source data to pass forward a reduced subset of source data in which the conditional probability of detecting the object is higher than the a priori probability of the algorithm detecting that same object. The Pc and its confidence interval are suitably computed and displayed for each algorithm and the chain and the final object detection. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-implemented method for global object recognition comprising: receiving, by the one or more hardware processors, object metadata including a plurality of characteristics that define an object to be detected; receiving, by one or more hardware processors, search metadata including a plurality of context parameters that define a search for the object; retrieving, based on the object and search metadata, a plurality of source data of a given data type; selecting, from a plurality of algorithms, a subset of algorithms to be used in processing the retrieved source data based on a cumulative trained probability of correctness (Pc) that each of the algorithms, which are processed in a chain and conditioned upon the result of the preceding algorithms, produce a correct result; ordering the algorithms in the subset based on algorithm metadata including a plurality of algorithm characteristics to reduce an expected processing load of the retrieved source data; and processing the retrieved source data in order according to the chain of the selected subset of algorithms to obtain a plurality of results and to reduce the number of source data that is processed by the next algorithm in the chain, at least one result indicating whether the object was detected in corresponding source data output from the last algorithm in the chain. 2. The method of claim 1, further comprising: computing a cumulative object Pc and confidence interval representing whether the object was detected in the plurality of source data. 3. The method of claim 2, further comprising: computing a cumulative algorithm Pc and confidence interval representing whether the result of each algorithm in the chain was correct in the plurality of source data. 4. The method of claim 1, further comprising configuring the algorithms in the subset to reduce the number of retrieved source data by a nominal culling percentage. 5. The method of claim 4, further comprising: receiving, by one or more hardware processors, a mission time critical targeting (TCT) requirement; and reconfiguring one or more of the image processing algorithms to adjust the nominal culling percentage based on the mission TCT requirement. 6. 
The method of claim 1, wherein the steps of selecting and ordering the subset of algorithms further comprises: for a plurality of objects, non-real-time evaluation of a plurality of candidate subsets of different chained algorithms configured based on object and algorithm metadata, selection of a subset based on its cumulative trained Pc and expected processing load for each said object and storing of selected subsets in a repository; and real-time selection of the subset from the repository based on the object metadata. 7. The method of claim 6, wherein a candidate subset's cumulative trained Pc is evaluated against a threshold after each image processing algorithm in the chain and disqualified if the trained Pc does not exceed the threshold. 8. The method of claim 7, wherein the plurality of candidate subsets' cumulate trained Pc are evaluated against a different threshold at each level of a multi-path tree to disqualify algorithms within the subset chain at that level and to identify one or more candidate algorithms to replace the disqualified algorithm at that level to continue the chain, complete the subset, and ensure that at least one subset's cumulative Pc exceeds a final threshold. 9. The method of claim 6, further comprising using a stochastic math model (SMM) to compute the cumulative trained Pc and using a discrete event simulator (DES) to implement the SMM and perform a Monte Carlo simulation on multiple instances of the object in the source data to generate the confidence interval corresponding to the Pc at each level. 10. The method of claim 6, wherein the expected processing load is based on both the processing resources required to run each algorithm and a nominal culling percentage for that algorithm. 11. The method of claim 6, further comprising: selecting multiple subsets each configured to detect the object, each said subset's algorithms configured to process a different data type of source data; retrieving multiple source data for each data type; processing the source data according to the selected subset for the data type to obtain one or more results for each subset indicating whether the object was detected; and fusing the one or more results for each subset to obtain one or more results indicating whether the object was detected 12. The method of claim 11, further comprising: computing and displaying a cumulative fused Pc and confidence interval for the detected object. 13. The method of claim 6, wherein the algorithms in the candidate subsets are configured to reduce the number of retrieved source data by a nominal culling percentage in total. 14. The method of claim 13, further comprising: receiving a mission time critical targeting (TCT) requirement; and reconfiguring one or more of the algorithms to adjust the nominal culling percentage based on the mission TCT requirement. 15. The method of claim 6, further comprising: computing and displaying a cumulative object Pc and confidence interval for the detected object. 16. The method of claim 15, further comprising: for each algorithm in the chain, computing and displaying a cumulative algorithm Pc and confidence interval. 17. The method of claim 6, further comprising: if the object metadata is not a match for a subset, selecting in real-time a subset of algorithms based on the object and algorithm metadata and ordering the algorithms to reduce an expected processing load. 18. The method of claim 6, further comprising: receiving operator feedback as to whether the detected object was correct or incorrect. 19. 
The method of claim 6, further comprising: receiving operator feedback as to whether the result for each said algorithm was correct or incorrect. 20. The method of claim 6, further comprising using a non-real-time evaluation global object recognition server and cluster of processing nodes to perform the non-real-time evaluation and using a client device to perform the real-time processing. 21. A computer-implemented method for global object recognition comprising: receiving, by the one or more hardware processors, object metadata including a plurality of characteristics that define the object to be located; receiving, by one or more hardware processors, for each of a plurality of algorithms configured to process source data of a given data type, algorithm metadata including a plurality of algorithm characteristics that describe the algorithm; retrieving a plurality of training source data of the given data type; selecting, from the plurality of algorithms, based on the object and algorithm metadata a plurality of candidate subsets of algorithms to be used in processing the retrieved source data; for each candidate subset, ordering the algorithms in the chain based on algorithm metadata to reduce an expected processing load; for each candidate subset, processing the retrieved source data in order according to the chain of algorithms to obtain a plurality of results and to reduce the number of training source data that is processed by the next algorithm in the chain, at least one result indicating whether the object was identified in corresponding source data output from the last algorithm in the chain; for each candidate subset, computing a cumulative trained probability of correctness (Pc) and corresponding confidence interval that each of the algorithms, which are processed in the chain and conditioned upon the result of the preceding algorithms, produce a correct result; selecting a candidate subset based on its trained Pc and corresponding confidence interval and expected processing load; pairing the selected candidate subset of algorithms with the object to be detected; and repeating the steps for a plurality of different objects. 22. 
A computer-implemented method for global object recognition comprising: receiving, by the one or more hardware processors, object metadata including a plurality of characteristics that define the object to be detected; receiving, by one or more hardware processors, search metadata including a plurality of context parameters that define a search for the object; receiving, by one or more hardware processors, a plurality algorithms, algorithm metadata including a plurality of algorithm characteristics that describe each algorithm, and a plurality of defined subsets of chained algorithms configured to detect different objects, each said defined subset selected based on a cumulative trained probability of correctness (Pc) and corresponding confidence interval that each one of the algorithms, which are processed in the chain and conditioned upon the result of the preceding algorithms, produce a correct result and an expected processing load of the chain; selecting, from the plurality of defined subsets, based on the object metadata one of the defined subsets, said algorithms in the selected subsets configured to process source data of a given data type; if none of the defined subsets match the object to be located, based on the object and algorithm metadata selecting and ordering a plurality of algorithms, configured to process source data of a given data type, to define a selected subset; retrieving, based on the plurality of context parameters, a plurality of source data of the given data type; processing the retrieved one or more source data in order according to the chain of the selected subset of algorithms to obtain a plurality of results and to reduce the number of source data that is processed by the next algorithm in the chain, at least one result indicating whether the object was detected in corresponding source data output from the last algorithm in the chain; and determining a cumulative object Pc and confidence interval representing whether the object was detected in one or more of the retrieved source data output from the last algorithm based on the at least one result. |
|
PENDING | Please predict whether this patent is acceptable.PATENT ABSTRACT: A system and method improves the probability of correctly detecting an object from a collection of source data and reduces the processing load. A plurality of algorithms for a given data type are selected and ordered based on a cumulative trained probability of correctness (Pc) that each of the algorithms, which are processed in a chain and conditioned upon the result of the preceding algorithms, produce a correct result and a processing load. The algorithms cull the source data to pass forward a reduced subset of source data in which the conditional probability of detecting the object is higher than the a priori probability of the algorithm detecting that same object. The Pc and its confidence interval are suitably computed and displayed for each algorithm and the chain and the final object detection. |
|
G06N7005 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A system and method improves the probability of correctly detecting an object from a collection of source data and reduces the processing load. A plurality of algorithms for a given data type are selected and ordered based on a cumulative trained probability of correctness (Pc) that each of the algorithms, which are processed in a chain and conditioned upon the result of the preceding algorithms, produce a correct result and a processing load. The algorithms cull the source data to pass forward a reduced subset of source data in which the conditional probability of detecting the object is higher than the a priori probability of the algorithm detecting that same object. The Pc and its confidence interval are suitably computed and displayed for each algorithm and the chain and the final object detection. |
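As a rough illustration of the chained-Pc idea in the abstract above, the sketch below multiplies per-stage conditional Pc values and charges each stage only for the items the previous stages pass forward. The stage names, Pc values, culling fractions, and costs are invented for the example, and the exhaustive ordering search stands in for whatever selection procedure an implementation would actually use.

```python
# Minimal sketch of scoring a chain of culling algorithms by cumulative
# probability of correctness (Pc) and expected processing load.
# All stage names and numbers are illustrative assumptions.

from itertools import permutations

# Each stage: (name, conditional Pc given the previous stages were correct,
#              fraction of inputs it passes forward, cost per input item)
STAGES = [
    ("coarse_filter",   0.97, 0.40, 1.0),
    ("shape_match",     0.92, 0.25, 5.0),
    ("fine_classifier", 0.88, 0.10, 20.0),
]


def chain_metrics(stages, num_items=1000):
    """Cumulative Pc is the product of the per-stage conditional Pc values;
    expected load charges each stage only for the items that reach it."""
    cumulative_pc = 1.0
    remaining = float(num_items)
    load = 0.0
    for _, pc, pass_fraction, cost in stages:
        cumulative_pc *= pc
        load += remaining * cost
        remaining *= pass_fraction        # culling shrinks the next stage's input
    return cumulative_pc, load


def best_ordering(stages, num_items=1000):
    """Exhaustively pick the ordering with the lowest expected load
    (cumulative Pc is order-independent in this simplified model)."""
    return min(permutations(stages),
               key=lambda order: chain_metrics(order, num_items)[1])


if __name__ == "__main__":
    order = best_ordering(STAGES)
    pc, load = chain_metrics(order)
    print("chain order:", [name for name, *_ in order])
    print("cumulative Pc: %.3f, expected load: %.0f cost units" % (pc, load))
```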
|
In an embodiment, an improved computer-implemented method of efficiently determining actions to perform based on data from streaming continuous queries in a distributed computer system comprises, at a central control computer, receiving a streaming continuous query and a rule-set; wherein the rule-set comprises decision data representing decisions based on attributes produced by the query, and action data representing end actions based on the decisions, wherein the attributes comprise data processed by one or more networked computers; separating the streaming continuous query into a sub-query executable at one or more edge computers; categorizing end actions from the set based on decisions requiring attributes available from the sub-query into a set of one or more edge expressions that are configured to be evaluated at an edge agent to cause an action; providing the set of edge expressions and the sub-query to at least one edge computer with instructions to process visible attributes on the edge computer and to evaluate the set of one or more edge expressions independently from the central control computer; wherein the method is performed by one or more computing devices. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. An improved computer-implemented method of efficiently determining actions to perform based on data from streaming continuous queries in a distributed computer system, the method comprising: at a central control computer: receiving a streaming continuous query and a rule-set; wherein the rule-set comprises decision data representing a plurality of decisions based on a plurality of attributes produced by the streaming continuous query, and action data representing end actions based on the plurality of decisions, wherein the plurality of attributes comprise data processed by one or more networked computers; separating the streaming continuous query into a sub-query executable at one or more edge computers; categorizing each rule from the rule-set into a set of one or more edge expressions that are configured to be evaluated at an edge computer to cause an action; providing the set of one or more edge expressions and the sub-query to at least one edge computer with instructions to process visible attributes on the edge computer and to evaluate the set of one or more edge expressions independently from the central control computer; wherein the method is performed by one or more computing devices. 2. The method of claim 1, wherein separating the streaming continuous query further comprises: retrieving a first set of attributes available at a particular edge computer of the one or more edge computers; comparing the first set of attributes available at the particular edge computer with a second set of attributes requested in the streaming continuous query; creating, from the streaming continuous query, the sub-query to request a third set of attributes, wherein the third set of attributes comprises an intersection of attributes from the first set of attributes available at the particular edge computer and the second set of attributes requested in the streaming continuous query. 3. The method of claim 2, wherein retrieving the first set of attributes available at the particular edge computer includes scanning the particular edge computer for metadata of visible attributes at the particular edge computer. 4. 
The method of claim 1, wherein separating the streaming continuous query comprises separating the streaming continuous query into the sub-query executable at the one or more edge computers, and a super-query comprising at least some attributes and syntax from the streaming continuous query and not in the sub-query; wherein the super-query aggregates attributes provided by a propagation action performed at a plurality of edge computers including the one or more edge computers. 5. The method of claim 1, wherein the rule-set is expressed as a decision tree, wherein each branch in the tree represents a true outcome of a decision applied to a set of one or more attributes and each leaf in the tree represents an end action taken on a networked computer, wherein each rule in the rule-set is derived by combining each branch in a path to an end action with an AND operator, and combining multiple paths to a single end action with an OR operator. 6. The method of claim 1, wherein the categorizing step includes applying a set of one or more computer specific constraints to the plurality of attributes within each expression to determine whether evaluation of any expression results permanently in a false decision such that the central control computer determines not to provide that particular expression to the edge computer. 7. The method of claim 1, wherein categorizing each rule from the rule-set includes parsing each rule into a first set of expressions based on decisions requiring attributes available at the edge computer and a second set of expressions based on decisions requiring attributes from a plurality of edge computers; wherein the first set of expressions are categorized into the set of one or more edge expressions that are configured to be evaluated at the edge computer to cause the action. 8. The method of claim 7, further comprising creating a separate rule for a propagation action when parsing a given rule from the rule-set results in a first decision from the given rule in the first set of expressions and a second decision from the given rule in the second set of expressions. 9. The method of claim 8, further comprising combining the separate rule for the propagation action with another rule for the propagation action when both rules result in the propagation action of a same attribute. 10. The method of claim 1, wherein the networked computers comprise a multi-tiered hierarchy, wherein edge specific attributes with respect to an intermediate computer represent attributes from more than one computer with respect to a lower tiered computer, and the intermediate computer represents the central control computer with respect to the lower tiered computer; wherein the intermediate computer represents the edge computer to a higher tiered computer; wherein the steps are applied recursively to available networked computers except for any computer on a lowest tier. 11. 
A system comprising: a controller computer, coupled to one or more edge computers; receiving logic, in the controller computer, that is configured to receive a streaming continuous query and a rule-set; wherein the rule-set comprises decisions based on attributes produced by the query, and end actions based on the decisions, wherein the attributes comprise data processed by one or more computers on a network; separating logic, in the controller computer, that is configured to separate the streaming continuous query into a sub-query executable at one or more edge computers; categorizing logic, in the controller computer, that is configured to categorize each rule from the rule-set based on decisions requiring attributes available from the sub-query into a set of one or more edge expressions evaluable at an edge computer to cause an action; distributing logic, in the controller computer, that is configured to provide the set of one or more edge expressions and the sub-query to at least one edge computer to enable processing of visible attributes on the edge computer and evaluation of an action independently from the controller computer. 12. The system of claim 11, wherein the receiving logic is configured to receive the rule-set expressed as a decision tree, wherein each branch in the tree represents a true outcome of a decision applied to a set of one or more attributes and each leaf in the tree represents an end action taken on the network, wherein the rule-set is derived by combining each branch in a path to an end action with an AND operator, and combining multiple paths to a single end action with an OR operator. 13. The system of claim 11, wherein the separating logic, in the controller computer, is configured to separate the streaming continuous query by looping through each edge computer from the one or more edge computers on the network to determine attributes that are visible at a particular edge computer, and creating the sub-query for the particular edge computer by removing, from the streaming continuous query, a statement requiring an attribute unavailable at the particular edge computer. 14. The system of claim 13, wherein the separating logic, in the controller computer, is configured to separate the streaming continuous query by separating the streaming continuous query into the sub-query executable at one or more edge computers, and a super-query comprising at least some attributes and syntax from the streaming continuous query but not in the sub-query. 15. The system of claim 11, wherein the categorizing logic, in the controller computer, is configure to review metadata on each computer to determine actions requiring expressions based on edge specific attributes and expressions based on attributes from more than one computer, wherein an action requiring both expressions is separated into two actions, wherein a propagation action is created for expressions based on edge specific attributes. 16. 
A system comprising: a controller computer, coupled to one or more intermediate computers each of which is coupled to one or more lower tiered computers; receiving logic, in the controller computer, that is configured to receive a streaming continuous query and a rule-set; wherein the rule-set comprises decisions based on attributes produced by the query, and end actions based on the decisions, wherein the attributes comprise data processed by one or more computers on a network; separating logic, in the controller computer, that is configured to separate the streaming continuous query into a sub-query executable at the intermediate computer; categorizing logic, in the controller computer, that is configured to categorize each rule from the rule-set based on decisions requiring attributes available from the sub-query into a set of one or more edge expressions evaluable at the intermediate computer to cause an action; distributing logic, in the controller computer, that is configured to provide the set of one or more edge expressions and the sub-query to at least one intermediate computer to enable processing of visible attributes on the intermediate computer and evaluation of an action independently from the controller computer. 17. The system of claim 16, wherein the receiving logic is configured to receive the rule-set expressed as a decision tree, wherein each branch in the tree represents a true outcome of a decision applied to a set of one or more attributes and each leaf in the tree represents an end action taken on the network, wherein the rule-set is derived by combining each branch in a path to an end action with an AND operator, and combining multiple paths to a single end action with an OR operator. 18. The system of claim 16, wherein the separating logic, in the controller computer, is configured to separate the streaming continuous query by looping through each edge computer from the one or more edge computers on the network to determine attributes that are visible at a particular edge computer, and creating the sub-query for the particular edge computer by removing, from the streaming continuous query, a statement requiring an attribute unavailable at the particular edge computer. 19. The system of claim 18, wherein the separating logic, in the controller computer, is configured to separate the streaming continuous query by separating the streaming continuous query into the sub-query executable at one or more edge computers, and a super-query comprising at least some attributes and syntax from the streaming continuous query but not in the sub-query. 20. The system of claim 16, wherein the categorizing logic, in the controller computer, is configure to review metadata on each computer to determine actions requiring expressions based on edge specific attributes and expressions based on attributes from more than one computer, wherein an action requiring both expressions is separated into two actions, wherein a propagation action is created for expressions based on edge specific attributes. |
|
ACCEPTED | Please predict whether this patent is acceptable.PATENT ABSTRACT: In an embodiment, an improved computer-implemented method of efficiently determining actions to perform based on data from streaming continuous queries in a distributed computer system comprises, at a central control computer, receiving a streaming continuous query and a rule-set; wherein the rule-set comprises decision data representing decisions based on attributes produced by the query, and action data representing end actions based on the decisions, wherein the attributes comprise data processed by one or more networked computers; separating the streaming continuous query into a sub-query executable at one or more edge computers; categorizing end actions from the set based on decisions requiring attributes available from the sub-query into a set of one or more edge expressions that are configured to be evaluated at an edge agent to cause an action; providing the set of edge expressions and the sub-query to at least one edge computer with instructions to process visible attributes on the edge computer and to evaluate the set of one or more edge expressions independently from the central control computer; wherein the method is performed by one or more computing devices. |
|
G06N5025 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: In an embodiment, an improved computer-implemented method of efficiently determining actions to perform based on data from streaming continuous queries in a distributed computer system comprises, at a central control computer, receiving a streaming continuous query and a rule-set; wherein the rule-set comprises decision data representing decisions based on attributes produced by the query, and action data representing end actions based on the decisions, wherein the attributes comprise data processed by one or more networked computers; separating the streaming continuous query into a sub-query executable at one or more edge computers; categorizing end actions from the set based on decisions requiring attributes available from the sub-query into a set of one or more edge expressions that are configured to be evaluated at an edge agent to cause an action; providing the set of edge expressions and the sub-query to at least one edge computer with instructions to process visible attributes on the edge computer and to evaluate the set of one or more edge expressions independently from the central control computer; wherein the method is performed by one or more computing devices. |
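A small sketch of the central-controller split described above: the attributes requested by the query are intersected with the attributes visible at an edge computer to form the sub-query, and a rule becomes an edge expression only if every attribute it needs is covered by that sub-query. The attribute names, rule tuples, and helper functions are hypothetical illustrations, not the patent's own API.

```python
# Hedged sketch of separating a continuous query and a rule-set between a
# central controller and an edge computer.  Names and data are assumptions.

QUERY_ATTRIBUTES = {"cpu_load", "packet_rate", "error_count", "tenant_id"}

RULES = [
    # (rule name, attributes its decisions need, end action)
    ("throttle_local", {"cpu_load", "packet_rate"}, "throttle_interface"),
    ("alert_global",   {"error_count", "tenant_id"}, "notify_operator"),
]


def split_query(query_attrs, edge_visible_attrs):
    """Sub-query = attributes both requested and visible at this edge node."""
    sub_query = query_attrs & edge_visible_attrs
    super_query = query_attrs - edge_visible_attrs   # left for the controller
    return sub_query, super_query


def categorize_rules(rules, sub_query_attrs):
    """A rule becomes an edge expression only if every attribute it needs
    is produced by the edge sub-query; otherwise it stays central."""
    edge_expressions, central_rules = [], []
    for name, needed, action in rules:
        if needed <= sub_query_attrs:
            edge_expressions.append((name, needed, action))
        else:
            central_rules.append((name, needed, action))
    return edge_expressions, central_rules


if __name__ == "__main__":
    edge_visible = {"cpu_load", "packet_rate", "error_count"}
    sub_q, super_q = split_query(QUERY_ATTRIBUTES, edge_visible)
    edge_exprs, central = categorize_rules(RULES, sub_q)
    print("sub-query attributes pushed to edge:", sorted(sub_q))
    print("evaluated at edge:", [r[0] for r in edge_exprs])
    print("kept at controller:", [r[0] for r in central])
```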
|
To enable safe work in a space where a robot and a worker coexist without defining an area in a work space using a monitoring boundary or the like and thus improve productivity, there is provided a robot controlling apparatus which controls the robot by detecting time-series states of the worker and the robot, and comprises: a detecting unit configured to detect a state of the worker; a learning information holding unit configured to hold learning information obtained by learning the time-series states of the robot and the worker; and a controlling unit configured to control an operation of the robot based on the state of the worker output from the detecting unit and the learning information output from the learning information holding unit. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A robot controlling apparatus which controls a robot by detecting time-series states of a worker and the robot, comprising: a detecting unit configured to detect a state of the worker; a learning information holding unit configured to hold learning information obtained by learning the time-series states of the robot and the worker; and a controlling unit configured to control an operation of the robot based on the state of the worker output from the detecting unit and the learning information output from the learning information holding unit. 2. The robot controlling apparatus according to claim 1, wherein the controlling unit further comprises a deciding unit configured to obtain the time-series state of the worker from the detected state of the worker, and decide whether or not the obtained time-series state of the worker is similar to the time-series state of the worker included in the learning information, and the controlling unit controls the operation of the robot based on a decision result of the deciding unit. 3. The robot controlling apparatus according to claim 2, wherein, in a case where it is decided by the deciding unit that the obtained time-series state of the worker is not similar to the time-series state of the worker included in the learning information, the controlling unit stops or decelerates the operation of the robot. 4. The robot controlling apparatus according to claim 3, further comprising a notifying unit configured to, in the case where it is decided by the deciding unit that the obtained time-series state of the worker is not similar to the time-series state of the worker included in the learning information, notify the worker of information for urging to restart work after the operation of the robot is stopped or decelerated. 5. The robot controlling apparatus according to claim 3, wherein in the case where it is decided by the deciding unit that the obtained time-series state of the worker is not similar to the time-series state of the worker included in the learning information, the controlling unit decides whether or not the worker pays attention to the robot, in a case where it is decided that the worker pays attention to the robot, the controlling unit continues the current operation of the robot, and in a case where it is decided that the worker does not pay attention to the robot, the controlling unit stops or decelerates the operation of the robot. 6. 
The robot controlling apparatus according to claim 2, wherein, in a case where it is decided by the deciding unit that the obtained time-series state of the worker is similar to the time-series state of the worker included in the learning information, the controlling unit continues the operation of the robot. 7. The robot controlling apparatus according to claim 1, wherein the state of the worker includes at least either a position and orientation of a predetermined part of the worker or a position and orientation of an object grasped by the worker. 8. The robot controlling apparatus according to claim 2, wherein the deciding unit further decides the operation of the robot based on a state of the robot. 9. The robot controlling apparatus according to claim 8, wherein the state of the robot corresponds to position information of a hand or a joint of the robot. 10. The robot controlling apparatus according to claim 1, further comprising a learning information updating unit configured to update the learning information based on the state of the worker output from the detecting unit and the learning information output from the learning information holding unit, and output the updated learning information to the learning information holding unit. 11. A robot controlling method which controls a robot by detecting time-series states of a worker and the robot, comprising: detecting a state of the worker; holding learning information obtained by learning the time-series states of the robot and the worker; and controlling an operation of the robot based on the detected state of the worker and the held learning information. 12. A non-transitory computer-readable storage medium which stores a program for causing a computer to perform a robot controlling method of controlling a robot by detecting time-series states of a worker and the robot, the controlling method comprising: detecting a state of the worker; holding learning information obtained by learning the time-series states of the robot and the worker; and controlling an operation of the robot based on the detected state of the worker and the held learning information.
|
PENDING | Please predict whether this patent is acceptable.PATENT ABSTRACT: To enable safe work in a space where a robot and a worker coexist without defining an area in a work space using a monitoring boundary or the like and thus improve productivity, there is provided a robot controlling apparatus which controls the robot by detecting time-series states of the worker and the robot, and comprises: a detecting unit configured to detect a state of the worker; a learning information holding unit configured to hold learning information obtained by learning the time-series states of the robot and the worker; and a controlling unit configured to control an operation of the robot based on the state of the worker output from the detecting unit and the learning information output from the learning information holding unit. |
|
G06N99005 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: To enable safe work in a space where a robot and a worker coexist without defining an area in a work space using a monitoring boundary or the like and thus improve productivity, there is provided a robot controlling apparatus which controls the robot by detecting time-series states of the worker and the robot, and comprises: a detecting unit configured to detect a state of the worker; a learning information holding unit configured to hold learning information obtained by learning the time-series states of the robot and the worker; and a controlling unit configured to control an operation of the robot based on the state of the worker output from the detecting unit and the learning information output from the learning information holding unit. |
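The decision step in the robot-control abstract above — compare the observed time series of worker states against learned patterns and continue or stop accordingly — could look roughly like the following. The distance measure, the similarity threshold, and the sample trajectories are assumptions made for the sketch, not details from the patent.

```python
# Small sketch of deciding a robot command from time-series worker states.
# The threshold, distance metric, and trajectories are illustrative only.

import math

# Learned information: time series of (x, y) hand positions seen during
# normal cooperative work (one short pattern kept for clarity).
LEARNED_PATTERNS = [
    [(0.0, 0.0), (0.1, 0.0), (0.2, 0.1), (0.3, 0.1)],
]

SIMILARITY_THRESHOLD = 0.15   # mean per-sample distance regarded as "similar"


def mean_distance(observed, pattern):
    """Mean Euclidean distance between two equally long position sequences."""
    return sum(math.dist(a, b) for a, b in zip(observed, pattern)) / len(pattern)


def decide_robot_command(observed_states, learned_patterns):
    """Return 'continue' when the observed worker motion matches a learned
    pattern, otherwise 'stop_or_decelerate' as a conservative default."""
    for pattern in learned_patterns:
        if (len(observed_states) == len(pattern)
                and mean_distance(observed_states, pattern) < SIMILARITY_THRESHOLD):
            return "continue"
    return "stop_or_decelerate"


if __name__ == "__main__":
    familiar_motion = [(0.0, 0.0), (0.1, 0.05), (0.2, 0.1), (0.3, 0.15)]
    unexpected_motion = [(0.0, 0.0), (0.4, 0.4), (0.8, 0.8), (1.2, 1.2)]
    print("familiar motion ->", decide_robot_command(familiar_motion, LEARNED_PATTERNS))
    print("unexpected motion ->", decide_robot_command(unexpected_motion, LEARNED_PATTERNS))
```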
|
An interface apparatus, system and method for providing interaction over a communication network between a user and network entities are described. The interface apparatus includes a front-end communication system configured for receiving user input information and for outputting output signals in response to the input information. The interface apparatus also includes a communication processing system for coding the input information and forwarding it to the network entity. The interface apparatus can also include a front-end monitoring system for generating user state patterns indicative of the state of the user, a decision-making system for processing the patterns and taking a decision as to how to respond thereto. The interface apparatus includes a configuration and control system configured for reconfiguration and control of functionality of the interface apparatus and for reconfiguration and control of functionality of the network entities. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. An interface apparatus for providing interaction over a communication network between a user and a plurality of network entities cooperating with said interface apparatus under a predetermined service agreement stored in the interface apparatus, the interface apparatus comprising: a front-end communication system including: at least one front-end communication input device configured for interaction with the user for receiving user input information and generating user information input signals, and at least one front-end communication output device configured for interaction with the user for outputting user information output signals obtained as a reaction to the user input information; a communication processing system coupled to the front-end communication system and configured for (i) receiving said user information input signals for coding thereof to a format suitable for data transferring the coded information input signals to at least one network entity selected from said plurality of network entities over said communication network to handle said coded information input signals at the end of said at least one network entity, thereby to generate information coded output signals responsive to said user information input signals; and (ii) receiving said information coded output signals generated by said at least one network entity; and decoding these signals to a format suitable for outputting thereof by said at least one front-end output device; and a configuration and control system configured for (i) automatic reconfiguration and control of a functionality of the interface apparatus, including: selecting desired functional characteristics of the interface apparatus; and adjusting said interface apparatus to operating conditions of the communication network, including network availability; and (ii) automatic reconfiguration and control of functionality of interaction of said at least one network entity with the interface apparatus, including adjusting said interaction to predetermined requirements imposed on said at least one network entity for desired cooperation with said interface apparatus in accordance with said predetermined service agreement; and a wireless network connector electrically coupled to said communication processing system, and to said configuration and control system; said wireless network connector configured for providing a wireless signal linkage between the interface apparatus and said plurality of network entities over the communication 
network. 2. The interface apparatus of claim 1, further comprising: a front-end monitoring system including at least one front-end monitoring device configured for interacting with the user, collecting user state information related to a state of the user and generating user state patterns indicative of the state of the user; a decision-making system coupled to said front-end monitoring system and to wireless network connector, and configured for receiving the user state patterns collected by said at least one front-end monitoring device, and processing thereof for taking a decision as to how to respond to the received user state patterns. 3. The interface apparatus of claim 2, further comprising an interface for remote monitoring coupled to said wireless network connector, said communication processing system and to said decision-making system, and configured for interaction of the interface apparatus with said plurality of network entities via said wireless network connector. 4. The interface apparatus of claim 1, wherein said at least one front-end communication input device of the front-end communication system is selected from a microphone configured for receiving said user input information provided verbally and converting said user information into the user information input signals corresponding to the user verbal input information; and a video camera configured for receiving said user information provided visually and converting said user information into the user information input signals corresponding to the visual user information. 5. The interface apparatus of claim 4, wherein said at least one front-end communication output device of the front-end communication system is selected from a speaker configured for audio outputting said user information output signals, and a display configured for video outputting said user information output signals, wherein said user information output signals are indicative of a reaction of said at least one network entity to said user information input signals. 6. The interface apparatus of claim 5, wherein said communication processing system comprises: an encoding and decoding module coupled to said at least one front-end communication input device and to said at least one front-end communication output device of the front-end communication system, said encoding and decoding module configured (i) for receiving the user information input signals including audio and video signals from said at least one front-end communication input device, coding thereof to obtain coded information input signals and forwarding said coded information input signals to the wireless network connector for relaying the coded information input signals to said at least one network entity; and (ii) for receiving coded information output signals and decoding these signals to obtain said user information output signals; a speech synthesizer coupled to the speaker and to the encoding and decoding module for encoding and decoding audio signals, and configured to receive decoded information output signals and to generate electrical signals in a format suitable for audio outputting thereof by the speaker; and a view synthesizer coupled to the display and to the encoding and decoding module for encoding and decoding video signals, and configured to receive decoded information output signals and to generate electrical signals in a format suitable for video outputting thereof by the display. 7. 
The interface apparatus of claim 3, further comprising a local dialogue organization device coupled to the speech synthesizer and to said interface for remote monitoring and configured for organization of local dialogues between the user and the interface apparatus. 8. The interface apparatus of claim 3, wherein said at least one front-end monitoring device of the front-end monitoring system is selected from the list including: a tactile sensor configured to provide user state information indicative of a force applied by the user to the interface apparatus; at least one user physiological parameter sensor configured for measuring at least one vital sign of the user; a user location sensor configured for determination of a location of the interface apparatus; an accelerometer configured for detecting motion of the interface apparatus; and a gyroscope configured for measuring orientation of the interface apparatus in space. 9. The interface apparatus of claim 8, wherein said at least one user physiological parameter sensor is selected from the list including: a temperature sensor, a pulse rate sensor, a blood pressure sensor, a pulse oximetry sensor, and a plethysmography sensor. 10. The interface apparatus of claim 3, wherein said decision-making system comprises: a sensor data collection device configured for receiving the user state patterns measured by the front-end monitoring system and formatting thereof; a pattern recognition device coupled to the sensor data collection device and configured for comparing the user state patterns with reference state patterns stored in the interface apparatus, and generating an identification signal indicative of whether at least one of the user state patterns matches or does not match at least one reference state pattern, said reference state patterns being indicative of various predetermined states of the user and being used as a reference for determining a monitored state of the user; a pattern storage device coupled to the pattern recognition device and configured for storing said reference state patterns; a decision maker device coupled to said pattern recognition device, and configured for receiving said identification signal from the pattern recognition device, and in response to said identification signal, generating said coded information output signals indicative of at least one policy for taking said decision; and a policy storage device coupled to the decision maker device and configured for storing policies for the taking of the decision. 11. The interface apparatus of claim 10, wherein the policy for the taking of the decision includes: (i) if at least one of the user state patterns matches at least one reference state pattern, to generate said coded information output signals including advice of the decision-making system as a reaction to the monitored state of the user, and provide said coded information output signals to at least one receiver selected from a corresponding at least one network entity selected from said plurality of network entities configured for handling the advice, and said communication processing module of the interface apparatus further configured for decoding said coded information output signals for extracting the advice, and outputting the advice to the user; and (ii) if none of the user state patterns matches at least one reference state pattern, to forward the monitored user state patterns to at least one network entity being configured for handling the user patterns. 12. 
The interface apparatus of claim 3, wherein said configuration and control system includes: a cyber certificate database comprising at least one record selected from: a record with a description of functional characteristics of the interface apparatus, a record with a description of functional characteristics of the network entities selected to cooperate with the interface apparatus for a predetermined purpose; a record with a description of functional characteristics of said plurality of network entities providing services to which the interface apparatus has a right to access; an archive record for interaction of the user with the interface apparatus; and a cyber portrait of the user including at least one kind of characteristics selected from: cognitive characteristics of the user, behavioral characteristics of the user, physiological characteristics of the user, and mental characteristics of the user; a cyber certificate database controller coupled to the cyber certificate database, and configured for controlling an access to said at least one record stored in the cyber certificate database to read and update said at least one record; and a reconfiguration device coupled to said cyber certificate database controller, and configured for dynamic reconfiguration of functionality of the interface apparatus, and interaction of said at least one network entity with the interface apparatus in accordance with said predetermined service agreement. 13. The interface apparatus of claim 12, wherein said dynamic reconfiguration of the functionality includes at least one of the following actions: receiving external signals for (i) adjusting said interface apparatus to the operating conditions of the communication network, and (ii) adjusting operation of said at least one external entity to said predetermined requirements imposed on said at least one external entity for cooperation with said interface apparatus in accordance with said predetermined service agreement; and providing instruction signals to said cyber certificate database controller to read and update said at least one record. 14. The interface apparatus of claim 13, wherein said at least one entity includes an entities control system configured for receiving from said configuration and control system of the interface apparatus a request for finding at least one network entity providing services desired to the interface apparatus in accordance with the conditions of the predetermined service agreement, conducting a semantic search of said at least one network entity, adjusting the interaction between the interface apparatus and said at least one network entity to the conditions of the predetermined service agreement, and providing addresses and access conditions of said at least one network entity to said configuration and control system. 15. 
The interface apparatus of claim 14, wherein said at least one entity providing services desired to the interface apparatus in accordance with the conditions of the predetermined service agreement includes at least one system selected from: (a) an external dialogue system configured for organization and conduction of natural language dialogues with the user, configured for receiving at least one type of input signals selected from said coded information input signals originating from the front-end communication system, and said user state patterns provided from the decision-making system; and analyzing said at least one type of the input signals and generating said coded information output signals indicative of reaction on said coded information input signals; (b) a supervisor communication support system configured for finding a supervisor communication device used by a supervisor of the user and supporting communication of said at least one user interface apparatus with the supervisor communication device; and (c) a peer communication support system configured for finding at least one other interface apparatus used by a peer to the user, and for supporting communication between the interface apparatus of the user and said at least one other interface apparatus. 16. The interface apparatus of claim 14, wherein said at least one entity providing cloud services desired to the interface apparatus in accordance with the conditions of the predetermined service agreement includes a situation identification system configured for receiving said coded information input signals from the front-end communication system and said user state patterns forwarded by the decision-making system, and providing analysis thereof for identifying various situations occurring with the user and notifying said supervisor communication support system of the situations as they are discovered. 17. The interface apparatus of claim 15, wherein said external dialogue system comprises a speech recognition system configured for receiving said coded information input signals originating from the front-end communication system and transforming these signals into data suitable for computer processing, and a dialogue manager coupled to the speech recognition system, and configured to process said data and to generate said coded information output signals produced as a reaction to said coded information input signals. 18. The interface apparatus of claim 17, wherein said coded information input signals include a query signal; and wherein the external dialogue system further comprises a search engine associated with the dialogue manager and configured for receiving a processed query signal from the dialogue manager, conducting a search based on a query related to said query signal and providing search results to the dialogue manager for targeting thereof to the user; wherein said search results are included in said coded information output signals. 19. The interface apparatus of claim 17, wherein said coded information input signals include said user state patterns forwarded by the decision-making system, and wherein the external dialogue system is further configured to analyze said user state patterns forwarded by the decision-making system, and generate advice of the entity as a reaction to the monitored state of the user, wherein the entity advice is included in said coded information output signals. 20. 
The interface apparatus of claim 15, wherein the user is a child, and the supervisor is a parent of the child, and said supervisor communication device is a communication device of the parent. 21. The interface apparatus of claim 16, wherein the situation identification system is configured to communicate with at least one system providing a medical diagnostics service. 22. The interface apparatus of claim 15, wherein the user is a child, and the peer is another child. 23. A method for providing interaction of users with a plurality of network entities over a communication network by the interface apparatus configured to provide interaction between a user and a plurality of network entities cooperating with said interface apparatus under a predetermined service agreement stored in the interface apparatus, the method comprising at the interface apparatus end: automatically reconfiguring and controlling functionality of the interface apparatus, including automatic selecting of desired functional characteristics of the interface apparatus; and adjusting said interface apparatus to operating conditions in the communication network, including network availability; automatically reconfiguring and controlling functionality of interaction of said at least one network entity with the interface apparatus, including adjusting said interaction to predetermined requirements imposed on said at least one network entity for desired cooperation with said interface apparatus in accordance with said predetermined agreement; receiving user input information from the user; processing said user input information and forwarding the corresponding processed signal to at least one entity selected from said plurality of entities configured for handling a communication with the user; and receiving coded information output signals from said at least one entity, processing thereof to obtain user information output signals in a format suitable for outputting to the user. 24. The method of claim 23, comprising: collecting user state information related to a state of the user and generating user state patterns indicative of the state of the user; receiving the user state patterns and processing thereof; and taking a decision as to how to respond to the received user state patterns; wherein said taking of the decision as to how to respond to the received user state patterns comprises: (i) if at least one of the user state patterns matches at least one reference state pattern, taking a decision to generate said coded information output signals including advice indicative of reaction on the monitored state of the user, and processing said coded information output signals for extracting the advice and outputting it to the user; and (ii) if none of the user state patterns matches at least one reference state pattern, forwarding the monitored user state patterns to a corresponding at least one entity configured for handling the user patterns. 25. The method of claim 24, wherein the processing of the user state patterns includes comparing the user state patterns with reference state patterns stored in the interface apparatus, said reference state patterns being indicative of various predetermined states of the user and being used as a reference for determining a monitored state of the user; and taking a decision as to how to respond to the received user state patterns. 26. 
The method of claim 23, further comprising at the end of at least one entity: receiving said coded information input signals from the interface apparatus; analyzing said coded information input signals and generating said coded information output signals indicative of reaction on said coded information input signals; and relaying said coded information output signals to the interface apparatus. 27. The method of claim 24, further comprising at the end of at least one entity: receiving said user state patterns from the interface apparatus; analyzing said user state patterns and generating said coded information output signals indicative of reaction on said coded information input signals; and relaying said coded information output signals to the interface apparatus. 28. The method of claim 23, comprising at the end of at least one entity: receiving said coded information input signals from the interface apparatus; providing analysis thereof for identifying various situations occurring with the user; finding a supervisor communication device used by a supervisor of the user; and providing communication of the supervisor communication device with the interface apparatus of the user. 29. The method of claim 24, comprising at the end of at least one entity: receiving said user state patterns from the interface apparatus; providing analysis thereof for identifying various situations occurring with the user; finding a supervisor communication device used by a supervisor of the user; and providing communication of the supervisor communication device with the interface apparatus of the user. 30. The method of claim 23, comprising at the end of at least one entity: receiving said coded information input signals from the interface apparatus; finding at least one other interface apparatus used by a peer to the user; and providing communication between the interface apparatus of the user and said at least one other interface apparatus. |
|
ACCEPTED | Please predict whether this patent is acceptable.PATENT ABSTRACT: An interface apparatus, system and method for providing interaction over a communication network between a user and network entities are described. The interface apparatus includes a front-end communication system configured for receiving user input information and for outputting output signals in response to the input information. The interface apparatus also includes a communication processing system for coding the input information and forwarding it to the network entity. The interface apparatus can also include a front-end monitoring system for generating user state patterns indicative of the state of the user, a decision-making system for processing the patterns and taking a decision as to how to respond thereto. The interface apparatus includes a configuration and control system configured for reconfiguration and control of functionality of the interface apparatus and for reconfiguration and control of functionality of the network entities. |
|
G06N3004 | Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: An interface apparatus, system and method for providing interaction over a communication network between a user and network entities are described. The interface apparatus includes a front-end communication system configured for receiving user input information and for outputting output signals in response to the input information. The interface apparatus also includes a communication processing system for coding the input information and forwarding it to the network entity. The interface apparatus can also include a front-end monitoring system for generating user state patterns indicative of the state of the user, a decision-making system for processing the patterns and taking a decision as to how to respond thereto. The interface apparatus includes a configuration and control system configured for reconfiguration and control of functionality of the interface apparatus and for reconfiguration and control of functionality of the network entities. |
|
A computer-implemented method for knowledge based ontology editing, is provided. The method receives a language instance to update a knowledge base, using a computer. The method semantically parses the language instance to detect an ontology for editing. The method maps one or more nodes for the ontology for editing based on an ontology database and the knowledge base. The method determines whether the mapped nodes are defined or undefined within the knowledge base. The method calculates a first confidence score based on a number of the defined and undefined mapped nodes. Furthermore, the method updates the knowledge base when the first confidence score meets a pre-defined threshold. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer system for knowledge based ontology editing, comprising: one or more computer devices each having one or more processors and one or more tangible storage devices; and a program embodied on at least one of the one or more storage devices, the program having a plurality of program instructions for execution by the one or more processors, the program instructions comprising instructions for: receiving, from a remote server through a network, by spoken words of a user into a microphone of the computer, a language instance to update a knowledge base, using a computer, the network being an Internet connection; semantically parsing the language instance to detect an ontology for editing using predetermined grammatical values comprising identifying a subject for the received language instance as the ontology to be edited and POS tags, wherein POS tags comprise word-category disambiguation and the marking up a word in text corpus as corresponding to particular part of speech, based on the word's definition and context; mapping one or more nodes for the ontology for editing based on an ontology database and the knowledge base comprising a Linking Open data (LOD) web data form; determining whether the mapped nodes are defined or undefined within the Linking Open data (LOD) web data form; inquiring the user as to information regarding the undefined nodes by displaying a question regarding the undefined nodes on the user interface on the computer; receiving an answer to the question regarding the undefined nodes on the user interface from the user; calculating a first confidence score based on a number of the defined and undefined mapped nodes; and updating the Linking Open data (LOD) web data form when the first confidence score meets a pre-defined threshold. |
|
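The confidence step in the claim above amounts to counting how many mapped nodes are already defined in the knowledge base and updating only when the resulting score clears a threshold. A small sketch, under the assumption that the score is the simple defined/(defined + undefined) ratio and that the knowledge base can be queried for membership (the claim does not fix a formula):

```python
def confidence_score(mapped_nodes, knowledge_base):
    # Ratio of mapped ontology nodes that are already defined in the KB.
    defined = sum(1 for node in mapped_nodes if node in knowledge_base)
    undefined = len(mapped_nodes) - defined
    return defined / (defined + undefined) if mapped_nodes else 0.0

def maybe_update(mapped_nodes, new_facts, knowledge_base, threshold=0.75):
    # Update the knowledge base only when the confidence meets the threshold.
    if confidence_score(mapped_nodes, knowledge_base) >= threshold:
        knowledge_base.update(new_facts)
        return True
    return False  # caller could instead ask the user about the undefined nodes

kb = {"Person", "City", "livesIn"}
updated = maybe_update(["Person", "City", "livesIn", "Metropolis"], {"Metropolis"}, kb)
# updated is True (score 0.75 meets the threshold); kb now contains "Metropolis".
```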
ACCEPTED | Please predict whether this patent is acceptable. PATENT ABSTRACT: A computer-implemented method for knowledge based ontology editing, is provided. The method receives a language instance to update a knowledge base, using a computer. The method semantically parses the language instance to detect an ontology for editing. The method maps one or more nodes for the ontology for editing based on an ontology database and the knowledge base. The method determines whether the mapped nodes are defined or undefined within the knowledge base. The method calculates a first confidence score based on a number of the defined and undefined mapped nodes. Furthermore, the method updates the knowledge base when the first confidence score meets a pre-defined threshold. |
|
G06N502 | Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: A computer-implemented method for knowledge based ontology editing, is provided. The method receives a language instance to update a knowledge base, using a computer. The method semantically parses the language instance to detect an ontology for editing. The method maps one or more nodes for the ontology for editing based on an ontology database and the knowledge base. The method determines whether the mapped nodes are defined or undefined within the knowledge base. The method calculates a first confidence score based on a number of the defined and undefined mapped nodes. Furthermore, the method updates the knowledge base when the first confidence score meets a pre-defined threshold. |
|
An approach is provided for providing predictive classification of actionable network alerts. The approach includes receiving the plurality of alerts. Each alert of the plurality of alerts indicates an alarm condition occurring at a monitored network system, and is a data record comprising one or more data fields describing the alarm condition. The approach also includes classifying said each alert using a predictive machine learning model. The predictive machine learning model is trained to classify said each alert as actionable or non-actionable using the one or more data fields of said each alert as one or more respective classification features, and to calculate a respective probability that said each alert is actionable or non-actionable. The approach further includes presenting the plurality of alerts in a network monitoring user interface based on the respective probability of said each alert. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of presenting a plurality of alerts based on a predictive classification, comprising: receiving the plurality of alerts, wherein each alert of the plurality of alerts indicates an alarm condition occurring at a monitored network system, and wherein said each alert is a data record comprising one or more data fields describing the alarm condition; classifying said each alert using a predictive machine learning model, wherein the predictive machine learning model is trained to classify said each alert as actionable or non-actionable using the one or more data fields of said each alert as one or more respective classification features, and to calculate a respective probability that said each alert is actionable or non-actionable; and initiating a presentation of the plurality of alerts in a network monitoring user interface based on the respective probability of said each alert. 2. The method of claim 1, further comprising: appending said each alert with an additional data field storing the respective probability. 3. The method of claim 2, further comprising: transmitting said each appended alert to a network monitoring service, wherein the network monitoring user interface is presented by the network monitoring service. 4. The method of claim 1, further comprising: receiving a set of historical alerts that are labeled as actionable or non-actionable, wherein the predictive machine learning model is trained using the set of historical alerts. 5. The method of claim 4, further comprising: transforming the one or more data fields of the set of historical alerts into the numeric matrix based on a variable type of the one or more data fields, wherein the predictive machine learning model is trained using the numeric matrix. 6. The method of claim 5, further comprising: binarizing the one or more data fields into a categorical vector based on one or more categorical labels when the variable type is a categorical variable type, wherein the categorical vector is included in the numeric matric for training of the predictive machine learning model. 7. The method of claim 5, further comprising: extracting one or more keywords from the one or more data fields when the variable type is a text variable type; and generating a hashed vector of the one or more keywords, wherein the hashed vector is included in the numeric matrix for training of the predictive machine learning model. 8. 
The method of claim 5, wherein the variable type is a decision tree variable type, the method further comprising: transforming the one or more data fields using a decision tree to correlate the one or more data fields to a likelihood of being associated with an actionable alert, wherein the decision tree includes one or more decision rules indicating a non-linear relationship between the one or more data fields and the likelihood of being associated with the actionable alert. 9. The method of claim 8, wherein the number variable type is a temporal variable type including a time-of-day variable. 10. The method of claim 1, further comprising: receiving a feedback input that specifies a labeled classification for at least one of the plurality of alerts; and updating a training of the predictive machine learning model based on the feedback input. 11. A non-transitory computer-readable non-transitory storage medium to present a plurality of alerts based on a predictive classification, carrying one or more sequences of one or more instructions which, when executed by one or more processors, cause an apparatus to perform: receiving a plurality of alerts, wherein each alert in the plurality of alerts indicates an alarm condition occurring in a monitored network system, and wherein said each alert is labeled as either an actionable alert or a non-actionable alert; and training a predictive machine language model to classify a subsequent alert as actionable or non-actionable using the one or more data fields of said each alert as one or more respective classification features, wherein the predictive machine language model is configured to calculate a probability that the subsequent alert is actionable or non-actionable; and wherein the subsequent alert is presented in a network monitoring user interface based on the probability. 12. The non-transitory computer-readable storage medium of claim 11, wherein the apparatus is further caused to perform: pre-processing said each alert to add one or more additional data fields to record contextual information about the alarm condition, the monitored system, or a combination thereof, wherein the predictive machine learning model is further trained using the one or more additional data fields. 13. The non-transitory computer-readable storage medium of claim 11, wherein the apparatus is further caused to perform: segmenting the plurality of alerts based on a training window and a validation window, wherein the plurality of alerts falling within the training window is used to train the predictive machine learning model; and wherein the plurality of alerts falling within the validation window are used to validate the predictive machine learning model after training. 14. The non-transitory computer-readable storage medium of claim 11, wherein the apparatus is further caused to perform: determining that said each alert is labeled as an actionable alert when said each alert is associated with an incident number. 15. The non-transitory computer-readable storage medium of claim 11, wherein the apparatus is further caused to perform: training the predictive machine language model by applying a regression analysis on the one or more respective classification features. 16. The non-transitory computer-readable storage medium of claim 11, wherein the apparatus is further caused to perform: selecting the one or more data fields to designate as the one or more respective classification features based on a variance threshold value. 17. 
The non-transitory computer-readable storage medium of claim 11, wherein the apparatus is further caused to perform: initiating a retraining of the predictive machine learning model based on a change in the monitored network system, an addition of a new monitored network system, or a combination thereof. 18. An apparatus to present a plurality of network alerts based on a predictive classification, comprising: at least one processor; and at least one memory including computer program code for one or more programs, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following, receive the plurality of alerts, wherein each alert of the plurality of alerts indicates an alarm condition occurring in a monitored network system, and wherein said each alert is a data record comprising one or more data fields describing the alarm condition; designate the one or more data fields as one or more classification features of a predictive machine learning model configured to classify said each alert as an actionable alert or a non-actionable alert; calculate a respective probability that said each alert is actionable or non-actionable using the predictive machine learning model; and present the plurality of alerts in a network monitoring user interface based on the calculated respective probability of said each alert. 19. The apparatus of claim 18, wherein the apparatus is further caused to: append said each alert with an additional data field storing the respective probability. 20. The apparatus of claim 18, wherein the apparatus is further caused to: determining whether to be present said each alert in the network monitoring user interface, a sort order for presenting said each alert, or a combination thereof based on the respective probability. |
|
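The pipeline described in the claims above — one-hot encode categorical alert fields (claim 6), hash keywords extracted from free-text fields (claim 7), fit a regression-style classifier (claim 15), then rank alerts by their predicted probability of being actionable — could be sketched with scikit-learn roughly as follows. The column names, feature choices, and use of logistic regression are illustrative assumptions, not the patented implementation:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Historical alerts; 'actionable' is 1 when the alert was tied to an incident number.
history = pd.DataFrame({
    "source":      ["router", "db", "router", "app"],
    "severity":    ["major", "minor", "major", "minor"],
    "description": ["link down", "slow query", "link flap", "heap warning"],
    "actionable":  [1, 0, 1, 0],
})

features = ColumnTransformer([
    ("categorical", OneHotEncoder(handle_unknown="ignore"), ["source", "severity"]),
    ("text", HashingVectorizer(n_features=2**10), "description"),  # hashed keyword vector
])

model = Pipeline([("features", features), ("clf", LogisticRegression(max_iter=1000))])
model.fit(history.drop(columns="actionable"), history["actionable"])

# Score new alerts and sort the monitoring view by probability of being actionable.
new_alerts = pd.DataFrame({
    "source": ["db"], "severity": ["major"], "description": ["replication lag rising"],
})
new_alerts["p_actionable"] = model.predict_proba(new_alerts)[:, 1]
ranked = new_alerts.sort_values("p_actionable", ascending=False)
```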
PENDING | Please predict whether this patent is acceptable. PATENT ABSTRACT: An approach is provided for providing predictive classification of actionable network alerts. The approach includes receiving the plurality of alerts. Each alert of the plurality of alerts indicates an alarm condition occurring at a monitored network system, and is a data record comprising one or more data fields describing the alarm condition. The approach also includes classifying said each alert using a predictive machine learning model. The predictive machine learning model is trained to classify said each alert as actionable or non-actionable using the one or more data fields of said each alert as one or more respective classification features, and to calculate a respective probability that said each alert is actionable or non-actionable. The approach further includes presenting the plurality of alerts in a network monitoring user interface based on the respective probability of said each alert. |
|
G06N7005 | Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: An approach is provided for providing predictive classification of actionable network alerts. The approach includes receiving the plurality of alerts. Each alert of the plurality of alerts indicates an alarm condition occurring at a monitored network system, and is a data record comprising one or more data fields describing the alarm condition. The approach also includes classifying said each alert using a predictive machine learning model. The predictive machine learning model is trained to classify said each alert as actionable or non-actionable using the one or more data fields of said each alert as one or more respective classification features, and to calculate a respective probability that said each alert is actionable or non-actionable. The approach further includes presenting the plurality of alerts in a network monitoring user interface based on the respective probability of said each alert. |
|
A method is provided for calculating a relation indicator for a relation between entities based on an optimization procedure. The method combines the strong relational learning ability and the good scalability of the RESCAL model with the linear regression model, which may deal with observed patterns to model known relations. The method may be used to determine relations between objects, for instance entries in a database, such as a shopping platform, medical treatments, production processes, or in the context of the Internet of Things, in a fast and precise manner. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for calculating a relation indicator for a relation between entities, the method comprising: providing a measurement tensor X of measurement tensor components, Xijk, with i, j=1 . . . N, comprising measurement data as relation indicators, wherein the relation indicator Xijk indicates a k-th relation between an i-th and a j-th of a number, N, of entities; providing a rules tensor M of rules tensor components, Mijn, describing a prediction of an n-th rule; calculating a weighting tensor of weighting tensor components, Wnk, indicating relative weights of the rules for the k-th relation between the entities; calculating a relationship tensor R of relationship tensor components, Rabk, with a, b=1 . . . r, indicating relations between a set of a number, r, of properties of the entities; calculating a transformation tensor A of transformation tensor components, Aia, describing the i-th entity via r latent properties, wherein the transformation tensor A, the weighting tensor W, and the relationship tensor R are calculated as minimum solutions to the following equation: min A , R , W X ijk - X ijk ′ F 2 + λ A A F 2 + λ R R F 2 + λ W W F 2 , with λA, λR, and λW as Lagrange parameters and with result tensor components Xijk′ of a result tensor X′ given by: Xijk′=Σa,b,n(AiaRabkAbjT+MijnWnk), where AT is the transposed tensor corresponding to the transformation tensor A; and calculating a value of the relation indicator for the k-th relation between the i-th and the j-th entity based on the result tensor component Xijk′. 2. The method of claim 1, further comprising: generating at least one control signal, based on the predicted value of the relation indicator, for controlling one or more of an actuator, a sensor, a controller, a field device, or a display. 3. The method of claim 2, wherein a visual signal, an acoustic signal, or the visual and the acoustic signal are created based on the control signal. 4. The method of claim 1, further comprising: expanding the measurement tensor with additional measurement tensor components Xi(N+1)k for i=1 . . . N, X(N+1)jk for j=1 . . . N, and X(N+1)(N+1)k, comprising measurement data as relation indicators between the (N+1)-th additional entity and the entities; and expanding the rules tensor with additional rules tensor components, Mi(N+1)n for i=1 . . . N, M(N+1)jn for j=1 . . . N and M(N+1)(N+1)n, wherein a value of a relation indicator to be predicted is set to a predetermined value. 5. The method of claim 1, further comprising: monitoring a relation between at least two of the entities; and setting a value of at least one relation indicator based on the monitored relation between the at least two of the entities. 6. 
The method of claim 1, wherein at least some of the measurement data are provided by at least one sensor, are read out from at least one database, or are both provided by the at least one sensor and read out from the at least one database. 7. The method of claim 1, wherein the calculating of the result tensor comprises using an alternating least-squares method, wherein the transformation tensor, the relationship tensor, and the weighting tensor are updated alternatingly until convergence. 8. A computer program for calculating a relation indicator for a relation between entities, comprising program instructions configured to, when executed: provide a measurement tensor X of measurement tensor components, Xijk, with i, j=1 . . . N, comprising measurement data as relation indicators, wherein the relation indicator Xijk indicates a k-th relation between an i-th and a j-th of a number, N, of entities; provide a rules tensor M of rules tensor components, Mijn, describing a prediction of an n-th rule; calculate a weighting tensor W of weighting tensor components, Wnk, indicating relative weights of the rules for the k-th relation between the entities; calculate a relationship tensor R of relationship tensor components, Rabk, with a, b=1 . . . r, indicating relations between a set of a number, r, of properties of the entities; calculate a transformation tensor A of transformation tensor components, Aia, describing the i-th entity via r latent properties, wherein the transformation tensor A, the weighting tensor W, and the relationship tensor R are calculated as minimum solutions to the following equation: min A , R , W X ijk - X ijk ′ F 2 + λ A A F 2 + λ R R F 2 + λ W W F 2 , with λA, λR, and λW as Lagrange parameters and with result tensor components Xijk′ of a result tensor X′ given by Xijk′=Σa,b,n(AiaRabkAbjT+MijnWnk), where AT is the transposed tensor corresponding to the transformation tensor A; and calculate a value of the relation indicator for the k-th relation between the i-th and the j-th entity based on the result tensor component Xijk′. 9. A computer-readable, non-transitory storage medium comprising stored program instructions configured to, when executed: provide a measurement tensor X of measurement tensor components, Xijk, with i, j=1 . . . N, comprising measurement data as relation indicators, wherein the relation indicator Xijk indicates a k-th relation between an i-th and a j-th of a number, N, of entities; provide a rules tensor M of rules tensor components, Mijn, describing a prediction of an n-th rule; calculate a weighting tensor W of weighting tensor components, Wnk, indicating relative weights of the rules for the k-th relation between the entities; calculate a relationship tensor R of relationship tensor components, Rabk, with a, b=1 . . . 
r, indicating relations between a set of a number, r, of properties of the entities; calculate a transformation tensor A of transformation tensor components, Aia, describing the i-th entity via r latent properties, wherein the transformation tensor A, the weighting tensor W, and the relationship tensor R are calculated as minimum solutions to the following equation: min A , R , W X ijk - X ijk ′ F 2 + λ A A F 2 + λ R R F 2 + λ W W F 2 , with λA, λR, and λW as Lagrange parameters and with result tensor components Xijk′ of a result tensor X′ given by: Xijk′=Σa,b,n(AiaRabkAbjT+MijnWnk), where AT is the transposed tensor corresponding to the transformation tensor A; and calculate a value of the relation indicator for the k-th relation between the i-th and the j-th entity based on the result tensor component Xijk′. 10. An apparatus for calculating a relation indicator for a relation between entities, comprising: a measurement tensor module configured to provide a measurement tensor X of measurement tensor components, Xijk, with i, j=1 . . . N, comprising measurement data as relation indicators, wherein the relation indicator Xijk indicates a k-th relation between an i-th and a j-th of a number, N, of entities; a rules tensor module M configured to provide a rules tensor of rules tensor components, Mijn, describing a prediction of an n-th rule; a weighting tensor module configured to calculate a weighting tensor W of weighting tensor components, Wnk, indicating relative weights of the rules for the k-th relation between the entities; a relationship tensor module configured to calculate a relationship tensor R of relationship tensor components, Rabk, with a, b=1 . . . r, indicating relations between a set of a number, r, of properties of the entities; a transformation tensor module configured to calculate a transformation tensor A of transformation tensor components, Aia, describing the i-th entity via r latent variables, wherein the transformation tensor A, the weighting tensor W, and the relationship tensor R are calculated as minimum solutions to the following equation: min A , R , W X ijk - X ijk ′ F 2 + λ A A F 2 + λ R R F 2 + λ W W F 2 , with λA, λR, and λW as Lagrange parameters and with result tensor components Xijk′ of a result tensor X′ given by: Xijk′=Σa,b,n(AiaRabkAbjT+MijnWnk), where AT is the transposed tensor corresponding to the transformation tensor A; a result tensor calculation module configured to calculate a result tensor X′ of result tensor components, Xijk′; and a relation indicator calculation module configured to calculate a value of the relation indicator for the k-th relation between the i-th and the j-th entity based on the result tensor component Xijk′. 11. The apparatus of claim 10, further comprising: a control signal generation module configured to generate at least one control signal, based on the predicted value of the relation indicator, for controlling one or more of an actuator, a sensor, a controller, a field device, or a display. 12. The apparatus of claim 11, further comprising: an output module configured to create a visual signal, an acoustic signal, or the visual signal and the acoustic signal based on the control signal. 13. The apparatus of claim 10, further comprising: a measurement tensor expansion module configured to: (1) expand the measurement tensor with additional measurement tensor components Xi(N+1)k for i=1 . . . N, X(N+1)jk for j=1 . . . 
N and X(N+1)(N+1)k, comprising measurement data as relation indicators between the (N+1)-th additional entity and the entities, and (2) set a value of a relation indicator to be predicted to a predetermined value; and a rules tensor expansion module configured to expand the rules tensor with additional rules tensor components, Mi(N+1)n for i=1 . . . N, M(N+1)jn for j=1 . . . N and M(N+1)(N+1)n. 14. The apparatus of claim 10, further comprising: a monitoring module configured to monitor a relation between at least two of the entities; and a setting module configured to set a value of at least one relation indicator based on the monitored relation between the at least two of the entities. 15. The apparatus of claim 10, further comprising: a measurement module configured to provide at least some of the measurement data to the measurement tensor module. 16. The apparatus of claim 10, further comprising: at least one database; and a readout module configured to read out at least some of the measurement data from the at least one database. 17. The method of claim 10, wherein the result tensor calculation module is configured to use an alternating least-squares method, wherein the transformation tensor, the relationship tensor, and the weighting tensor are updated alternatingly until convergence. 18. A system for calculating a relation indicator for a relation between entities, comprising: a number, N, of entities; a measurement tensor module configured to provide a measurement tensor X of measurement tensor components, Xijk, with i, j=1 . . . N, comprising measurement data as relation indicators, wherein the relation indicator Xijk indicates a k-th relation between an i-th and a j-th of the number of entities; a rules tensor module configured to provide a rules tensor M of rules tensor components, Mijn, describing a prediction of an n-th rule; a weighting tensor module configured to calculate a weighting tensor W of weighting tensor components, Wnk, indicating relative weights of the rules for the k-th relation between the entities; a relationship tensor module configured to calculate a relationship tensor R of relationship tensor components, Rabk, with a, b=1 . . . r, indicating relations between a set of a number, r, of properties of the entities; a transformation tensor module configured to calculate a transformation tensor A of transformation tensor components, Aia, describing the i-th entity via r latent properties, wherein the transformation tensor A, the weighting tensor W, and the relationship tensor R are calculated as minimum solutions to the following equation: min A , R , W X ijk - X ijk ′ F 2 + λ A A F 2 + λ R R F 2 + λ W W F 2 , with λA, λR, and λW as Lagrange parameters and with result tensor components Xijk′ of a result tensor X′ given by: Xijk′=Σa,b,n(AiaRabkAbjT+MijnWnk), where AT is the transposed tensor corresponding to the transformation tensor A; a result tensor calculation module configured to calculate a result tensor X′ of result tensor components, Xijk′, and a relation indicator calculation module configured to calculate a value of the relation indicator for the k-th relation between the i-th and the j-th entity based on the result tensor component Xijk′. 19. The system of claim 18, wherein at least one of the entities is a sensor, an actuator, a field device, a controller, a display, or a section of a conveyer belt assembly. 20. 
The system of claim 18, further comprising: a control signal generation module configured to generate at least one control signal, based on the predicted value of the relation indicator, for controlling at least one of the entities of the system. |
|
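The optimization problem spelled out in claims 1, 8, 9, 10, and 18 above reads more easily in standard notation. One plausible restatement, assuming ‖·‖_F denotes the Frobenius norm and that the bilinear term is not meant to be repeated over the rule index n:

```latex
\min_{A,\,R,\,W}\;
  \lVert X - X' \rVert_F^{2}
  + \lambda_A \lVert A \rVert_F^{2}
  + \lambda_R \lVert R \rVert_F^{2}
  + \lambda_W \lVert W \rVert_F^{2},
\qquad
X'_{ijk} \;=\; \sum_{a,b=1}^{r} A_{ia}\, R_{abk}\, A_{jb}
          \;+\; \sum_{n} M_{ijn}\, W_{nk}.
```

Read this way, it is a RESCAL-style factorization A R_k A^T of the measurement tensor X, augmented with a linear term over the rule-prediction tensor M and rule weights W, which matches the abstract's combination of the RESCAL model with a linear regression model; claims 7 and 17 add that A, R, and W are updated alternately by least squares until convergence.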
PENDING | Please predict whether this patent is acceptable. PATENT ABSTRACT: A method is provided for calculating a relation indicator for a relation between entities based on an optimization procedure. The method combines the strong relational learning ability and the good scalability of the RESCAL model with the linear regression model, which may deal with observed patterns to model known relations. The method may be used to determine relations between objects, for instance entries in a database, such as a shopping platform, medical treatments, production processes, or in the context of the Internet of Things, in a fast and precise manner. |
|
G06N700 | Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: A method is provided for calculating a relation indicator for a relation between entities based on an optimization procedure. The method combines the strong relational learning ability and the good scalability of the RESCAL model with the linear regression model, which may deal with observed patterns to model known relations. The method may be used to determine relations between objects, for instance entries in a database, such as a shopping platform, medical treatments, production processes, or in the context of the Internet of Things, in a fast and precise manner. |
|
A mobile user borne brain activity data and surrounding environment data correlation system comprising a brain activity sensing subsystem, a recording subsystem, a measurement computer subsystem, a user sensing subsystem, a surrounding environment sensing subsystem, a correlation subsystem, a user portable electronic device, a non-transitory computer readable medium, and a computer processing device. The mobile user borne system collects and records brain activity data and surrounding environment data and statistically correlates and processes the data for communicating the data into a recipient biological, mechanical, or bio-mechanical system. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A user borne system comprising: a brain activity sensing subsystem configured to collect data corresponding to brain activity of a user; a measurement computer subsystem configured to quantify perceptions of the user; a user sensing subsystem configured to collect data corresponding to user events; a surrounding environment sensing subsystem configured to collect data corresponding to the user's surrounding environment; a recording subsystem configured to record said data; a user mobile electronic device in communication with said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, and recording subsystem, said user mobile electronic device including an interactive graphic user interface and being configured to: operate as a host computer processing subsystem for command, control, and processing of signals to and from said brain activity sensing subsystem, user sensing subsystem, surrounding environment sensing subsystem, and correlation subsystem; command said brain activity sensing subsystem to transmit brain activity and pattern data to said correlation subsystem; and command said user sensing subsystem and surrounding environment sensing subsystem to transmit processed sensor data to said correlation subsystem; a correlation subsystem configured to: create relationships between said data corresponding to said brain activity of said user and said data corresponding to said user events and surrounding environment; and receive and perform correlation processing operations to determine an extent of neural relationships between data received from said user mobile electronic device and said brain activity sensing subsystem, user sensing subsystem, and surrounding environment sensing subsystem to derive neural correlates of consciousness of conscious precepts of the user; a non-transitory computer readable medium configured to store data from said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and correlation subsystem for performing queries on real-time and near real-time data received from said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and correlation subsystem for determining whether to keep or disregard said data based on pre-established rule-sets and user interactive command and control from said user mobile electronic device; and a computer processing device configured to process and communicate at least a portion of said data logged and derived by said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing 
subsystem, recording subsystem, and correlation subsystem into at least one of a recipient biological, mechanical, or bio-mechanical system. 2. The user borne system according to claim 1, wherein said user borne system and recipient system include at least one of intrusion detection software, hardware, or firmware application and information security software, hardware, or firmware application that provides at least one of firewall protection, virus protection, privacy protection, or user authentication capabilities. 3. The user borne system according to claim 1, wherein said user borne system and recipient biological, mechanical, or bio-mechanical system further comprise at least one remote sensing technology that measures distance by illuminating a target with a laser and analyzing the reflected light for at least one of remote sensing, contour mapping, or aiding in determining distance to the target in a surrounding environment. 4. The user borne system according to claim 1, wherein said brain activity sensing subsystem is configured to record signatures and derive measurements on at least one of a whole, region, neural, or electro-chemical interaction in the synaptic cleft of the brain at the molecular level in order to identify neural correlates of consciousness such that various components of perception across the brain that form the totality of a conscious precept identify the minimal set of components of neural material, chemical, electrical, and associated activity that define a thought or memory of the conscious precept in the users mind or the surrounding environment, said correlation subsystem being configured to correlate and filter said components of perception to create a relational database for input into an artificial neural network that at least one mimics, supplements, or enhances the brain of said user or said recipient biological, mechanical, or bio-mechanical system. 5. The user borne system according to claim 1, wherein said brain activity sensing subsystem includes at least one of a neuro-stimulation fiber optic light emitter or neuro-stimulation micro-electrode for at least one of diagnostic purposes, performance enhancement, or detecting performance degradation of said user's brain functions. 6. The user borne system according to claim 1, wherein said user borne system is designed to blend into the user's natural appearance by incorporating at least one of a prosthetic device with human skin color and shape, grafted skin, synthetic skin, a display, a fashion accessory, body art, hair piece, skull cap, jewelry, cap, hat, material covering, or clothing. 7. The user borne system according to claim 1, wherein said recipient biological, mechanical, or bio-mechanical system looks substantially like a human. 8. The user borne system according to claim 1, wherein said recipient biological, mechanical, or bio-mechanical is configured to act substantially like a human. 9. The user borne system according to claim 1, wherein said surrounding environment sensing subsystem data comprises video recording play back capability for playing back video derived from brain activity signatures for comparison of real world recorded imagery versus brain signal motion imagery of the user. 10. The user borne system according to claim 1, wherein at least a portion of said user borne system is configured to be supported by an exoskeleton worn by said user. 11. The user borne system according to claim 1, wherein said user borne system includes an e-commerce payment system. 12. 
The user borne system according to claim 1, wherein said user borne system further comprises a spherical field-of-view camera sensor and camera sensor supporting structure configured to automatically extend in front of the user when a phone call is initiated for face-to-face video teleconferencing and retract when the phone call is completed. 13. The user borne system according to claim 1, further comprising a video teleconferencing system configured to overlay video representations of teleconference users over geographical information or imagery at each of said teleconference user's geographical or spatial location and allow said teleconference users to interact with the geographical information or imagery. 14. The user borne system according to claim 1, wherein said user borne system is configured to operate on real-time and near real-time data from said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and correlation subsystem to determine a threat to said user or said surrounding environment. 15. The user borne system according to claim 1, wherein said user borne system includes a cognitive memory system comprising a neuromorphic computing system including at least one of an analog and/or digital circuit, Application Specific Integrated Circuit (ASIC), microprocessor, or other logic in hardware, software, or firmware with auto associative artificial neural networks that at least receive and process some portion of said data logged and derived by said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and correlation subsystem to organize, remember, and update said data communicated to at least one of said user borne system and recipient biological, mechanical, or bio-mechanical systems such that said user borne system and/or recipient biological, mechanical, or bio-mechanical system learn through experience. 16. The user borne system according to claim 1, wherein said user mobile electronic device is a head mounted device housing said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and correlation subsystem. 17. The user borne system according to claim 1, wherein said user borne system includes a cognitive model realized as a modern dynamic system with behavioral dynamics coded into a neural network system that achieves embodied cognition in at least one of said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and correlation subsystem, information from said at least one of brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and/or correlation subsystem informing the user's behavior and the user's actions through back propagation of said neural network in an iterative repetitive manner such that the user affects the environment and the environment affects the user in perception-action cycles. 18. The user borne system according to claim 1, wherein said user borne system includes an artificial neural network configured to process said logged or derived data. 19. 
The user borne system according to claim 1, wherein said user borne system includes an auto-associative neural network. 20. The user borne system according to claim 1, wherein said user borne system includes a neuromorphic system configured to process said logged or derived data. 21. The user borne system according to claim 1, wherein said user borne system includes a self-learning neural network. 22. The user borne system according to claim 1, wherein said user borne system further comprises a portable magnetoencephalography (MEG) brain activity system configured to derive some of said brain activity data. 23. The user borne system according to claim 1, wherein said user borne system is configured to perform artificial neural network backward propagation algorithms and processing for achieving iterative learning as said user borne system receives new data from said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, correlation subsystem, or recipient biological, mechanical, or bio-mechanical system. 24. The user borne system according to claim 1, wherein said user borne system includes an auto-association neural network configured to perform repetitive, iterative, supervised, and unsupervised machine learning to yield an output action potential into said user borne system and/or recipient biological, mechanical, or bio-mechanical system. 25. The user borne system according to claim 1, wherein said user borne system includes a cognitive model realized as a dynamic system with behavioral dynamics coded into a neural network system that achieves embodied cognition in at least one of said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, correlation subsystem, wherein said information embodies the user's behavior and actions through back propagation of the neural network in an iterative and repetitive manner such that the user affects the environment and the environment affects the user in perception action cycles. 26. The user borne system according to claim 1, wherein said brain activity data is operated upon when a thought generated by said user is translated by said system into at least one of text, audio, imagery, or machine language that is communicated wirelessly to said user or recipient biological, mechanical, or bio-mechanical system. 27. The user borne system according to claim 1, wherein said user borne system or said recipient is configured to operate on data from said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, correlation subsystem, or user mobile electronic device to operate at least one actuator to assist said user or recipient system in responding to an event. 28. A system according to claim 1, wherein at least some portion of said data logged and derived by said system is communicated to and operated upon by a biological, mechanical, or bio-mechanical system that performs diagnostic medicine or life support. 29. A system according to claim 1, wherein said user portable system or said recipient system includes an integrated wireless communication system for command and control of said user or said recipient system from a remote location. 30. 
The system according to claim 1, wherein said user portable system includes a graphical user interface software or firmware application that depicts a user's body from which the user may interactively select which of said user brain activity sensing subsystem, user periphery sensing subsystem, and surrounding environment sensing subsystems and associated sensors said user wants to control or turn on and off. 31. A user borne system comprising: a robotic system; a computer system for operating said robotic system, said computer system including a neural network, said robotic system being configured to train at least a portion of said neural network and use output from said neural network to learn, negotiate, and survive in an environment; and a biological or bio-mechanical life-logging database installed on said computer system and operated upon on a non-transitory computer readable medium, said database being logged by sensors borne by a user to record perceptions of said user and surrounding environment perceptions. 32. A biological or bio-mechanical system user borne system comprising: a brain activity sensing subsystem configured to collect data corresponding to brain activity of a biological or bio-mechanical system user; a measurement computer subsystem configured to quantify perceptions of the biological or bio-mechanical system user; a user sensing subsystem configured to collect data corresponding to biological or bio-mechanical system user events; a surrounding environment sensing subsystem configured to collect data corresponding to the biological or bio-mechanical system user's surrounding environment; a recording subsystem configured to record said data; a user mobile electronic device in communication with said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, and recording subsystem, said user mobile electronic device including an interactive graphic user interface and being configured to: operate as a host computer processing subsystem for command, control, and processing of signals to and from said brain activity sensing subsystem, user sensing subsystem, surrounding environment sensing subsystem, and correlation subsystem; command said brain activity sensing subsystem to transmit brain activity and pattern data to said correlation subsystem; and command said user sensing subsystem and surrounding environment sensing subsystem to transmit processed sensor data to said correlation subsystem; a correlation subsystem configured to: create relationships between said data corresponding to said brain activity of said biological or bio-mechanical system user and said data corresponding to said biological or bio-mechanical system user events and surrounding environment; and receive and perform correlation processing operations to determine an extent of neural relationships between data received from said user mobile electronic device and said brain activity sensing subsystem, user sensing subsystem, and surrounding environment sensing subsystem to derive neural correlates of consciousness of conscious precepts of the biological or bio-mechanical system user; a non-transitory computer readable medium configured to store data from said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and correlation subsystem for performing queries on real-time and near real-time data received from said brain 
activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and correlation subsystem for determining whether to keep or disregard said data based on pre-established rule-sets and biological or bio-mechanical system user interactive command and control from said user mobile electronic device; and a computer processing device configured to process and communicate at least a portion of said data logged and derived by said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and correlation subsystem into said biological or bio-mechanical system user. |
|
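The correlation subsystem in the claims above is described only functionally, but the basic operation — statistically correlating time-aligned brain-activity channels with surrounding-environment features and keeping the strongest pairings — can be illustrated with a short Pearson-correlation sketch. The array shapes, the use of Pearson's r, and the threshold are all assumptions rather than anything taken from the claims:

```python
import numpy as np

def correlate_streams(brain, environment, threshold=0.6):
    """Pearson-correlate each brain-activity channel with each environment
    feature over a shared time window and keep the strongest pairings.

    brain:       array of shape (time, n_channels)
    environment: array of shape (time, n_features)
    Returns (channel, feature, r) tuples with |r| >= threshold, strongest first.
    """
    joint = np.corrcoef(np.hstack([brain, environment]), rowvar=False)
    n = brain.shape[1]
    block = joint[:n, n:]  # channel-vs-feature correlations only
    hits = []
    for c in range(block.shape[0]):
        for f in range(block.shape[1]):
            if abs(block[c, f]) >= threshold:
                hits.append((c, f, float(block[c, f])))
    return sorted(hits, key=lambda t: -abs(t[2]))

# Synthetic example: channel 0 tracks environment feature 0, the rest is noise.
rng = np.random.default_rng(0)
env = rng.normal(size=(500, 3))
brain = np.hstack([env[:, :1] * 0.8 + rng.normal(scale=0.3, size=(500, 1)),
                   rng.normal(size=(500, 2))])
print(correlate_streams(brain, env)[:3])
```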
ACCEPTED | Please predict whether this patent is acceptable. PATENT ABSTRACT: A mobile user borne brain activity data and surrounding environment data correlation system comprising a brain activity sensing subsystem, a recording subsystem, a measurement computer subsystem, a user sensing subsystem, a surrounding environment sensing subsystem, a correlation subsystem, a user portable electronic device, a non-transitory computer readable medium, and a computer processing device. The mobile user borne system collects and records brain activity data and surrounding environment data and statistically correlates and processes the data for communicating the data into a recipient biological, mechanical, or bio-mechanical system. |
|
G06N308 | Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: A mobile user borne brain activity data and surrounding environment data correlation system comprising a brain activity sensing subsystem, a recording subsystem, a measurement computer subsystem, a user sensing subsystem, a surrounding environment sensing subsystem, a correlation subsystem, a user portable electronic device, a non-transitory computer readable medium, and a computer processing device. The mobile user borne system collects and records brain activity data and surrounding environment data and statistically correlates and processes the data for communicating the data into a recipient biological, mechanical, or bio-mechanical system. |
|
A method for managing and automating user customization of a device based on observed user behavior is disclosed. First, the method collects data on the user's activities on a device for a period of time. Second, the method learns about the user's behavior for routine repetitive operations by analyzing the user's activities data. Third, the method generates automation settings of the device based on the user's behavior for routine repetitive and predictive operations, and then presents the automation settings of the device to the user for customization of the device. These automation settings help to make the device operate more efficiently and more conveniently for the user, because they help perform the user's own routine repetitive operations. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for managing and automating customization of a device based on learning about a user's behavior, the method comprising: collecting data on the user's activities; learning about the user's behavior by analyzing the data on the user's activities; generating an automation setting of the device based on the user's behavior; and presenting the automation setting of the device to the user for customizing the device. 2. The method of claim 1, wherein collecting data on the user's activities comprises: collecting data of the user's activities based on the user's usual behavior of using options on the device or turning on/off options on the device. 3. The method of claim 2, wherein collecting data on the user's activities further comprises: collecting data of the user's activities that includes one or more of the following variables: time, location, and a device state. 4. The method of claim 3, wherein collecting data on the user's activities comprises: collecting data of the user's activities for a period of time. 5. The method of claim 4, wherein the period of time is associated with a given number of repetitive operations. 6. The method of claim 4, wherein the period of time is for a given number of days. 7. The method of claim 2, wherein learning about the user's behavior by analyzing the data on the user's activities comprises: learning about the user's behavior by analyzing the data on the user's activities that are routine. 8. The method of claim 7, wherein the automation setting of the device comprises: a workflow of operations that is automated based on the user's activities. 9. The method of claim 1 further comprising: implementing the automation setting of the device after the user accepts the automation setting. 10. The method of claim 1 further comprising: implementing the automation setting of the device after the user fine tunes the automation setting. 11. A device that manages and automates customization based on learning about a user's behavior, the device comprising: a processor; and a memory storing computer executable instructions that when executed by the processor causes the processor to: collect data on the user's activities; learn about the user's behavior by analyzing the data on the user's activities; generate an automation setting of the device based on the user's behavior; and present the automation setting of the device to the user for customizing the device. 12. The device of claim 11, wherein the data on the user's activities is collected for a period of time. 13. The device of claim 11, wherein the automation setting of the device comprises: a workflow of operations that is automated based on the user's activities. 14. 
The device of claim 11, wherein the memory further stores computer executable instructions that when executed by the processor causes the processor to: implement the automation setting of the device after the user accepts the automation setting. 15. The device of claim 11, wherein the memory further stores computer executable instructions that when executed by the processor causes the processor to: implement the automation setting of the device after the user fine tunes the automation setting. 16. A computer program product encoded in a non-transitory computer readable medium for managing and automating customization of a device based on learning about a user's behavior, the computer program product comprising: computer code for collecting data on the user's activities; computer code for learning about the user's behavior by analyzing the data on the user's activities; computer code for generating an automation setting of the device based on the user's behavior; and computer code for presenting the automation setting of the device to the user for customizing the device. 17. The computer program product of claim 16, wherein the data on the user's activities is collected for a period of time. 18. The computer program product of claim 16, wherein the automation setting of the device comprises: a workflow of operations that is automated based on the user's activities. 19. The computer program product of claim 16, wherein the computer program product further comprises: computer code for implementing the automation setting of the device after the user accepts the automation setting. 20. The computer program product of claim 16, wherein the computer program product further comprises: computer code for implementing the automation setting of the device after the user fine tunes the automation setting. |
|
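The core loop of the method claimed above — log (context, action) events for a period of time, detect operations that repeat routinely, and present them as automation settings the user can accept or fine-tune — can be sketched with a plain frequency count. The hour-of-day context, repetition threshold, and rule format below are assumptions; the claims do not prescribe a particular learning technique:

```python
from collections import Counter

def propose_automation(activity_log, min_repeats=5):
    """activity_log: iterable of (hour_of_day, action) events collected over an
    observation period. Returns proposed settings the user can accept, fine-tune,
    or reject before they are applied."""
    counts = Counter((hour, action) for hour, action in activity_log)
    proposals = []
    for (hour, action), n in counts.items():
        if n >= min_repeats:  # looks like a routine, repetitive operation
            proposals.append({"when": f"{hour:02d}:00", "do": action, "seen": n})
    return sorted(proposals, key=lambda p: -p["seen"])

log = [(7, "enable_silent_mode")] * 6 + [(22, "turn_on_wifi")] * 2
print(propose_automation(log))
# -> [{'when': '07:00', 'do': 'enable_silent_mode', 'seen': 6}]
```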
ACCEPTED | Please predict whether this patent is acceptable. PATENT ABSTRACT: A method for managing and automating user customization of a device based on observed user behavior is disclosed. First, the method collects data on the user's activities on a device for a period of time. Second, the method learns about the user's behavior for routine repetitive operations by analyzing the user's activities data. Third, the method generates automation settings of the device based on the user's behavior for routine repetitive and predictive operations, and then presents the automation settings of the device to the user for customization of the device. These automation settings help to make the device operate more efficiently and more conveniently for the user, because they help perform the user's own routine repetitive operations. |
|
G06N99005 | Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: A method for managing and automating user customization of a device based on observed user behavior is disclosed. First, the method collects data on the user's activities on a device for a period of time. Second, the method learns about the user's behavior for routine repetitive operations by analyzing the user's activities data. Third, the method generates automation settings of the device based on the user's behavior for routine repetitive and predictive operations, and then presents the automation settings of the device to the user for customization of the device. These automation settings help to make the device operate more efficiently and more conveniently for the user, because they help perform the user's own routine repetitive operations. |
|
Future data is precisely predicted using time-series data even when the number of pieces of time-series data is small. When the future data is predicted using the time-series data, whether present time data is used is determined based on prediction variation or a data transition, and then the prediction of the future data is performed. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A prediction device comprising: a data acquisition unit configured to acquire present time data; a data generation unit configured to generate time-series data from the data acquired by the data acquisition unit at plurality of times; a determination unit configured to determine whether the generated time-series data satisfies a predetermined condition; and a final prediction unit configured to predict future data, based on past time data without using the present time data, when the predetermined condition is determined to be satisfied, and to predict the future data, based on the present time data and the past time data, when the predetermined condition is determined not to be satisfied, wherein past time data is previously acquired present time data which has been acquired prior to the present time data. 2. The prediction device according to claim 1, wherein, when the predetermined condition is determined not to be satisfied by the determination unit, the final prediction unit uses, as past time data, previously acquired present time data that falls within a predetermined range. 3. The prediction device according to claim 1, wherein the determination unit includes: a present time prediction unit configured to calculate a prediction result in a present time from the present time data and the past time data; and a comparison unit configured to determine whether the predetermined condition is satisfied, based on a first difference between the calculated prediction result in the present time and the prediction result in a past time calculated by the present time prediction unit in a past, and a second difference between the prediction results in past times calculated by the present time prediction unit in the past. 4. The prediction device according to claim 3, wherein the comparison unit determines that the predetermined condition is satisfied when the first difference is larger than the second difference. 5. The prediction device according to claim 3, wherein the comparison unit determines whether the predetermined condition is satisfied based on the first difference, and a statistic of a plurality of the second differences. 6. The prediction device according to claim 5, wherein the statistic is any of a maximum value, an average value, a median, and a most frequent value. 7. The prediction device according to claim 3, wherein the comparison unit determines whether the time-series data satisfies the predetermined condition, based on the first difference, and a transition of a plurality of the second differences. 8. The prediction device according to claim 1, wherein the determination unit includes: a detection unit configured to detect a transition of data similar to a transition of a part of the time-series data from past time-series data; and a transition determination unit configured to determine whether the predetermined condition is satisfied based on the detected similar transition of data. 9. 
The prediction device according to claim 1, wherein the determination unit includes: a detection unit configured to detect a transition of data similar to a transition of a part of the time-series data from time-series data of another machine number; and a transition determination unit configured to determine whether the predetermined condition is satisfied based on the detected similar transition of data. 10. The prediction device according to claim 8, wherein the detection unit detects a plurality of the similar transitions of data, and wherein the transition determination unit determines whether the predetermined condition is satisfied based on the plurality of similar transitions of data. 11. The prediction device according to claim 8, wherein the transition determination unit determines whether the predetermined condition is satisfied based on the plurality of weighted similar transitions of data. 12. The prediction device according to claim 1, wherein the future data is data about a degree of consumption, a degree of deterioration, or a possibility of occurrence of failure, of a component that configures a product. 13. A prediction method comprising: acquiring present time data; generating time-series data from the data acquired at a plurality of times; determining whether the generated time-series data satisfies a predetermined condition; and predicting future data, based on past time data without using the present time data, when the predetermined condition is determined to be satisfied, and predicting the future data, based on the present time data and the past time data, when the predetermined condition is determined not to be satisfied, wherein past time data is previously acquired present time data which has been acquired prior to the present time data. 14. A non-transitory computer-readable recording medium that stores a program for causing a computer to function as the units of a prediction device comprising: a data acquisition unit configured to acquire present time data; a data generation unit configured to generate time-series data from the data acquired by the data acquisition unit at plurality of times; a determination unit configured to determine whether the generated time-series data satisfies a predetermined condition; and a final prediction unit configured to predict future data, based on past time data without using the present time data, when the predetermined condition is determined to be satisfied, and to predict the future data, based on the present time data and the past time data, when the predetermined condition is determined not to be satisfied, wherein past time data is previously acquired present time data which has been acquired prior to the present time data. |
|
PENDING | Please predict whether this patent is acceptable.PATENT ABSTRACT: Future data is precisely predicted using time-series data even when the number of pieces of time-series data is small. When the future data is predicted using the time-series data, whether present time data is used is determined based on prediction variation or a data transition, and then the prediction of the future data is performed.
|
G06N504 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Future data is precisely predicted using time-series data even when the number of pieces of time-series data is small. When the future data is predicted using the time-series data, whether present time data is used is determined based on prediction variation or a data transition, and then the prediction of the future data is performed.
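The prediction record above (claims 3 to 6 in particular) decides whether to use the present sample by comparing a first difference (how much the present sample shifts the prediction) against second differences (how much past predictions shifted among themselves). The sketch below assumes a plain linear-trend extrapolation as the underlying predictor and the maximum as the comparison statistic; both are stand-ins, since the claims fix neither choice.

```python
import numpy as np

def extrapolate(values, horizon=1):
    """Fit a straight line to the series and extend it by `horizon` steps
    (a stand-in predictor; the claims do not specify one)."""
    t = np.arange(len(values))
    slope, intercept = np.polyfit(t, values, 1)
    return slope * (len(values) - 1 + horizon) + intercept

def predict_future(past, present, horizon=1):
    """Predict future data, dropping the present sample when it shifts the
    prediction more than past predictions shifted among themselves."""
    # Predictions that would have been made at earlier points in time.
    past_preds = [extrapolate(past[: i + 1], horizon) for i in range(2, len(past))]
    second_diffs = np.abs(np.diff(past_preds))        # variation among past predictions
    with_present = extrapolate(list(past) + [present], horizon)
    first_diff = abs(with_present - past_preds[-1])   # shift caused by the present sample
    if len(second_diffs) and first_diff > np.max(second_diffs):
        return extrapolate(past, horizon)             # predict from past data only
    return with_present

if __name__ == "__main__":
    past = [1.0, 1.1, 1.2, 1.3, 1.4]
    print(predict_future(past, present=1.5))   # in-trend present sample
    print(predict_future(past, present=9.0))   # outlying present sample
```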
|
A non-transitory computer-readable recording medium stores a program that causes a computer to execute a process including: performing a first conversion processing to convert a value indicating each event, and to convert, based on conversion information that indicates a group of the value and an identification value that corresponds to values belonging to the group; constructing information with occurrence probabilities by connecting identification values; performing second conversion processing to convert a value indicating each event included in event data, and to convert values that belong to a group indicated in the conversion information into an identical identification value corresponding to the group based on the conversion information; and detecting an anomaly based on a result of comparison between the constructed information and the identification value. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A non-transitory computer-readable recording medium having stored therein a detection program that causes a computer to execute a process including: performing a first conversion processing to convert a value indicating each event that is included in history log into an identification value corresponding to the value, and to convert, based on conversion information that indicates a group of the value and an identification value that corresponds to values belonging to the group, values that belong to a group indicated in the conversion information into an identical identification value that corresponds to the group; constructing information with occurrence probabilities by connecting identification values that are obtained by conversion by the first conversion processing in order of occurrence of the event sequentially from a root, and by assigning an occurrence probability of an event that corresponds to the identification value per identification value; performing second conversion processing to convert a value indicating each event included in event data that is input according to an event has occurred into an identification value corresponding to the value, and to convert values that belong to a group indicated in the conversion information into an identical identification value corresponding to the group based on the conversion information; and detecting an anomaly based on a result of comparison between the constructed information with occurrence probabilities and the identification value that is obtained by conversion by the second conversion processing. 2. The non-transitory computer-readable recording medium according to claim 1, wherein the conversion information indicates a range of values as the group, and the first and the second conversion processing converts values within the range indicated in the conversion information into an identical identification value that corresponds to the range. 3. The non-transitory computer-readable recording medium according to claim 2, wherein the process further including: calculating a statistical distribution of values indicating respective events that are included in the history log, and of creating conversion information in which a range of the values and an identification value corresponding to the range is defined, wherein the first and the second conversion processing performs conversion processing based on the created conversion information. 4. 
The non-transitory computer-readable recording medium according to claim 1, wherein the conversion information indicates order of array of values as the group, and the first and the second conversion processing converts values arranged in the order of array indicated in the conversion information into an identical identification value corresponding to the order of array. 5. The non-transitory computer-readable recording medium according to claim 4, wherein the process further including: calculating an appearance frequency according to order of array of values that indicate respective events included in the history log, and of creating conversion information in which the order of array having the appearance frequency equal to or higher than a predetermined value and an identification value that corresponds to the order of array are defined, wherein the first and the second conversion processing performs conversion processing based on the created conversion information. 6. A detection method comprising: performing a first conversion processing to convert a value indicating each event that is included in history log into an identification value corresponding to the value, and to convert, based on conversion information that indicates a group of the value and an identification value that corresponds to values belonging to the group, values that belong to a group indicated in the conversion information into an identical identification value that corresponds to the group by a processor; constructing information with occurrence probabilities by connecting identification values that are obtained by conversion by the first conversion processing in order of occurrence of the event sequentially from a root, and by assigning an occurrence probability of an event that corresponds to the identification value per identification value by the processor; performing second conversion processing to convert a value indicating each event included in event data that is input according to an event has occurred into an identification value corresponding to the value, and to convert values that belong to a group indicated in the conversion information into an identical identification value corresponding to the group based on the conversion information by the processor; and detecting an anomaly based on a result of comparison between the constructed information with occurrence probabilities and the identification value that is obtained by conversion by the second conversion processing by the processor. 7. The detection method according to claim 6, wherein the conversion information indicates a range of values as the group, and the first and the second conversion processing converts values within the range indicated in the conversion information into an identical identification value that corresponds to the range. 8. The detection method according to claim 7, further comprising: calculating a statistical distribution of values indicating respective events that are included in the history log, and of creating conversion information in which a range of the values and an identification value corresponding to the range is defined, by the processor, wherein the first and the second conversion processing performs conversion processing based on the created conversion information. 9. 
The detection method according to claim 6, wherein the conversion information indicates order of array of values as the group, and the first and the second conversion processing converts values arranged in the order of array indicated in the conversion information into an identical identification value corresponding to the order of array. 10. The detection method according to claim 9, further comprising: calculating an appearance frequency according to order of array of values that indicate respective events included in the history log, and of creating conversion information in which the order of array having the appearance frequency equal to or higher than a predetermined value and an identification value that corresponds to the order of array are defined, by the processor, wherein the first and the second conversion processing performs conversion processing based on the created conversion information. 11. A detection apparatus comprising a processor that executes a process comprising: performing a first conversion processing to convert a value indicating each event that is included in history log into an identification value corresponding to the value, and to convert, based on conversion information that indicates a group of the value and an identification value that corresponds to values belonging to the group, values that belong to a group indicated in the conversion information into an identical identification value that corresponds to the group; constructing information with occurrence probabilities by connecting identification values that are obtained by conversion by the first conversion processing in order of occurrence of the event sequentially from a root, and by assigning an occurrence probability of an event that corresponds to the identification value per identification value; performing second conversion processing to convert a value indicating each event included in event data that is input according to an event has occurred into an identification value corresponding to the value, and to convert values that belong to a group indicated in the conversion information into an identical identification value corresponding to the group based on the conversion information; and detecting an anomaly based on a result of comparison between the constructed information with occurrence probabilities and the identification value that is obtained by conversion by the second conversion processing. 12. The detection apparatus according to claim 11, wherein the conversion information indicates a range of values as the group, and the first and the second conversion processing converts values within the range indicated in the conversion information into an identical identification value that corresponds to the range. 13. The detection apparatus according to claim 12, wherein the process further comprising: calculating a statistical distribution of values indicating respective events that are included in the history log, and of creating conversion information in which a range of the values and an identification value corresponding to the range is defined, by the processor, wherein the first and the second conversion processing performs conversion processing based on the created conversion information. 14. 
The detection apparatus according to claim 11, wherein the conversion information indicates order of array of values as the group, and the first and the second conversion processing converts values arranged in the order of array indicated in the conversion information into an identical identification value corresponding to the order of array. 15. The detection apparatus according to claim 14, wherein the process further comprising: calculating an appearance frequency according to order of array of values that indicate respective events included in the history log, and of creating conversion information in which the order of array having the appearance frequency equal to or higher than a predetermined value and an identification value that corresponds to the order of array are defined, by the processor, wherein the first and the second conversion processing performs conversion processing based on the created conversion information. |
|
PENDING | Please predict whether this patent is acceptable.PATENT ABSTRACT: A non-transitory computer-readable recording medium stores a program that causes a computer to execute a process including: performing a first conversion processing to convert a value indicating each event, and to convert, based on conversion information that indicates a group of the value and an identification value that corresponds to values belonging to the group; constructing information with occurrence probabilities by connecting identification values; performing second conversion processing to convert a value indicating each event included in event data, and to convert values that belong to a group indicated in the conversion information into an identical identification value corresponding to the group based on the conversion information; and detecting an anomaly based on a result of comparison between the constructed information and the identification value. |
|
G06N7005 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A non-transitory computer-readable recording medium stores a program that causes a computer to execute a process including: performing a first conversion processing to convert a value indicating each event, and to convert, based on conversion information that indicates a group of the value and an identification value that corresponds to values belonging to the group; constructing information with occurrence probabilities by connecting identification values; performing second conversion processing to convert a value indicating each event included in event data, and to convert values that belong to a group indicated in the conversion information into an identical identification value corresponding to the group based on the conversion information; and detecting an anomaly based on a result of comparison between the constructed information and the identification value. |
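The anomaly-detection record above converts raw event values to identification values via value ranges, learns occurrence probabilities from a history log, and scores incoming event data against them. The sketch below keeps the range-based conversion but uses a flat first-order transition table instead of the root-anchored tree of identification values described in the claims; the range boundaries, the floor probability, and the demo values are assumptions.

```python
import math
from collections import defaultdict

def to_id(value, ranges):
    """Map a raw event value to the identification value of the range it falls in."""
    for ident, (lo, hi) in ranges.items():
        if lo <= value < hi:
            return ident
    return "other"

def build_model(history, ranges):
    """Estimate occurrence probabilities of ID-to-ID transitions from a history log."""
    counts = defaultdict(lambda: defaultdict(int))
    ids = [to_id(v, ranges) for v in history]
    for prev, nxt in zip(ids, ids[1:]):
        counts[prev][nxt] += 1
    return {prev: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
            for prev, nxts in counts.items()}

def anomaly_score(events, model, ranges, floor=1e-6):
    """Negative log-likelihood of an incoming event sequence; higher is more anomalous."""
    ids = [to_id(v, ranges) for v in events]
    score = 0.0
    for prev, nxt in zip(ids, ids[1:]):
        score += -math.log(max(model.get(prev, {}).get(nxt, floor), floor))
    return score

if __name__ == "__main__":
    ranges = {"low": (0, 10), "mid": (10, 100), "high": (100, 1000)}
    history = [1, 2, 15, 20, 3, 18, 2, 22, 4, 19]
    model = build_model(history, ranges)
    print(anomaly_score([2, 17, 3, 21], model, ranges))   # resembles the history log
    print(anomaly_score([500, 600, 700], model, ranges))  # pattern never seen before
```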
|
Systems and methods are provided herein for generating personalized timeline-based feeds to a user. A computer-implemented method for generating feeds to a user may be provided. The method may include generating a timeline comprising a plurality of milestones and needs associated with an event, and providing the feeds based on community wisdom. The feeds may be provided for each milestone on the time-line specific to the user, and may be configured to address the user's needs at each milestone. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-implemented method for assisting a plurality of users in navigating one or more life events, the method comprising: providing interactive media to a plurality of computing devices associated with the plurality of users, wherein the interactive media is provided via an event navigation portal that is designed to aid the plurality of users in navigating the one or more life events, wherein the interactive media comprises a set of visual objects associated with the one or more life events, and wherein the set of visual objects are displayed to the users on graphical displays of the computing devices; receiving input data from the computing devices when the users interact with the set of visual objects in the event navigation portal; and analyzing the input data derived from the users' interaction with the set of visual objects to: (1) determine the life event(s) that each user is currently experiencing, has experienced, or is likely to experience, (2) predict one or more steps relating to the life event(s) for each user, wherein each step further comprises information to aid the user in navigating said step, and (3) map the step(s) relating to the life event(s) for each user on a timeline, wherein the timeline and the step(s) are included in the set of visual objects and displayed to the users on the graphical displays of the computing devices. 2. The method of claim 1, wherein the input data comprises questions, answers, comments, and/or insights in the form of text, audio, video, and/or photographs that are (1) provided by the plurality of users and (2) associated with the one or more life events. 3. The method of claim 2, wherein the input data is obtained from a social media or a social networking website visited by the plurality of users. 4. The method of claim 1, wherein the users interact with the set of visual objects on the graphical displays using at least one of the following input devices: a mouse, a keyboard, a touchscreen monitor, a voice recognition software, or a virtual reality and/or augmented reality headset. 5. The method of claim 1, wherein the timeline and the step(s) are configured to be manipulated on the graphical displays by the users, and wherein said manipulation by the users comprises at least one of the following: (1) modifying the timeline to display a desired time period, (2) increasing or decreasing a duration of the timeline, (3) modifying the location of each step along the timeline, (4) displaying the information included within each step, (5) modifying the information displayed within each step, (6) overlaying a plurality of timelines for the plurality of users onto a common timeline, or (7) linking different timelines for different life events. 6. The method of claim 1, wherein the one or more life events include at least one of the following: diagnosis with a terminal illness, death, marriage, divorce, or retirement. 7. 
The method of claim 1, wherein the input data is analyzed using a natural language processing (NLP) algorithm. 8. The method of claim 1, wherein the input data further comprises information indicative of the physical locations of the plurality of users, and wherein the physical locations of the users are extracted from the input data. 9. The method of claim 8, wherein the information indicative of the physical locations of the users is dynamically updated in real-time as the users move between different places. 10. The method of claim 1, further comprising: generating a predictive model by applying machine learning to the input data received from the plurality of users, wherein the predictive model is used to predict the one or more steps relating to the life event(s) for each user. 11. The method of claim 10, wherein the predictive model is configured to predict each user's needs at each step along the timeline, and wherein the information in each step is customized for each user depending on the user's predicted needs. 12. The method of claim 10, wherein the predictive model is configured to extract temporal status of the users based on the users' interactions within the event navigation portal, wherein the temporal status of the users corresponds to mental states or physical states of the users at a given moment in time, and wherein the temporal status of the users is extracted based on thoughts, feelings, opinions, statements, and/or comments made by the users in the interactions within the event navigation portal. 13. The method of claim 1, wherein two or more of the steps relating to the life event(s) are mapped in chronological order on the timeline for each user. 14. The method of claim 1, wherein two or more of the steps occur at a same point in time or at different points in time along the timeline for each user. 15. The method of claim 1, wherein the step(s) and the information in each step are updated dynamically on the timeline as the users experience the life event(s). 16. The method of claim 1, wherein the information in each step comprises insights or comments that are provided by one or more users about the corresponding step. 17. The method of claim 1, further comprising: filtering one or more user insights from the input data, and matching the one or more user insights to the one or more steps. 18. The method of claim 17, wherein said matching is based on at least one of the following: (1) a crowd-sourced rating of each user insight, (2) a credentials rating of a user associated with a user insight, (3) a popularity rating of each user insight, or (4) a popularity rating of a user associated with the corresponding user insight. 19. The method of claim 17, wherein said matching is based on a plurality of predefined topics associated with the life event(s). 20. The method of claim 17, further comprising: determining a frequency at which each user insight is matched to the corresponding step, and ranking the matched user insights based on their frequencies. 21. The method of claim 1, further comprising: displaying a plurality of different possible journeys to the users on the graphical displays of the computing devices, wherein the plurality of different possible journeys are generated through different combinations of the steps on the timeline. 22. The method of claim 21, wherein the plurality of different journeys and steps are selectable by the users, to allow the users to observe the effects of selecting different journeys and/or steps for the life event(s). 23. 
The method of claim 21, wherein the plurality of different journeys and steps are included in the set of visual objects that are displayed on the graphical displays of the computing devices. 24. The method of claim 21, wherein the plurality of different journeys and steps are configured to be spatially manipulated by the users on the graphical displays of the computing devices using drag-and-drop functions. 25. The method of claim 24, wherein at least some of the journeys and/or steps are configured to be (1) expanded into a plurality of sub-journeys and/or sub-steps, or (2) collapsed into a main journey and/or main step. 26. The method of claim 1, wherein the timeline further comprises a graphical plot indicative of a significance level of each step on the timeline to each user. 27. The method of claim 1, wherein the users' interactions with the set of visual objects comprise the users entering alphanumeric text, image data, and/or audio data via one or more of the visual objects on the graphical displays. 28. The method of claim 1, wherein the set of visual objects are provided on the graphical displays in a plurality of different colors, shapes, dimensions, and/or sizes, and wherein the timeline and step(s) for different users are displayed in different visual coding schemes. 29. A system for implementing an event navigation portal that is designed to aid a plurality of users in navigating one or more life events, the system comprising: a server in communication with a plurality of computing devices associated with a plurality of users, wherein the server comprises a memory for storing interactive media and a first set of software instructions, and one or more processors configured to execute the first set of software instructions to: provide the interactive media via the event navigation portal to the plurality of computing devices associated with the plurality of users, wherein the interactive media comprises a set of visual objects associated with the one or more life events, and wherein the set of visual objects are displayed to the users on graphical displays of the computing devices; receive input data from the computing devices when the users interact with the set of visual objects in the event navigation portal; and analyze the input data derived from the users' interaction with the set of visual objects to: (1) determine the life event(s) that each user is currently experiencing, has experienced, or is likely to experience, (2) predict one or more steps relating to the life event(s) for each user, wherein each step further comprises information to aid the user in navigating said step, and (3) map the step(s) relating to the life event(s) for each user on a timeline, wherein the timeline and the step(s) are included in the set of visual objects; and wherein the plurality of computing devices comprise a memory for storing a second set of software instructions, and one or more processors configured to execute the second set of software instructions to: receive the interactive media from the server; display the set of visual objects visually on the graphical displays of the computing devices to the users; generate the input data when the users interact with the set of visual objects in the event navigation portal; transmit the input data to the server for analysis of the input data; receive the analyzed input data comprising the timeline and the step(s); and display the timeline and the step(s) on the graphical displays of the computing devices to the users. 30. 
A tangible computer readable medium storing instructions that, when executed by one or more servers, causes the one or more servers to perform a computer-implemented method for assisting a plurality of users in navigating one or more life events, the method comprising: providing interactive media to a plurality of computing devices associated with the plurality of users, wherein the interactive media is provided via an event navigation portal that is designed to aid the plurality of users in navigating the one or more life events, wherein the interactive media comprises a set of visual objects associated with the one or more life events, and wherein the set of visual objects are displayed to the users on graphical displays of the computing devices; receiving input data from the computing devices when the users interact with the set of visual objects in the event navigation portal; and analyzing the input data derived from the users' interaction with the set of visual objects to: (1) determine the life event(s) that each user is currently experiencing, has experienced, or is likely to experience, (2) predict one or more steps relating to the life event(s) for each user, wherein each step further comprises information to aid the user in navigating said step, and (3) map the step(s) relating to the life event(s) for each user on a timeline, wherein the timeline and the step(s) are included in the set of visual objects and displayed to the users on the graphical displays of the computing devices. |
|
ACCEPTED | Please predict whether this patent is acceptable.PATENT ABSTRACT: Systems and methods are provided herein for generating personalized timeline-based feeds to a user. A computer-implemented method for generating feeds to a user may be provided. The method may include generating a timeline comprising a plurality of milestones and needs associated with an event, and providing the feeds based on community wisdom. The feeds may be provided for each milestone on the time-line specific to the user, and may be configured to address the user's needs at each milestone. |
|
G06N7005 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Systems and methods are provided herein for generating personalized timeline-based feeds to a user. A computer-implemented method for generating feeds to a user may be provided. The method may include generating a timeline comprising a plurality of milestones and needs associated with an event, and providing the feeds based on community wisdom. The feeds may be provided for each milestone on the time-line specific to the user, and may be configured to address the user's needs at each milestone. |
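Claims 17 to 20 of the record above match free-text user insights to timeline steps (for example via predefined topics) and rank the matches by frequency. Below is a very small sketch of that matching and ranking step; the step names, topic word lists, and sample insights are invented placeholders, and a real portal would use an NLP model rather than bag-of-words overlap.

```python
from collections import Counter

# Invented example topics; a real portal would derive these per life event.
STEP_TOPICS = {
    "diagnosis": {"doctor", "tests", "results"},
    "treatment": {"chemo", "therapy", "medication"},
    "recovery": {"rehab", "exercise", "support"},
}

def match_insight(text, step_topics=STEP_TOPICS):
    """Assign a free-text insight to the step whose topic words it overlaps most."""
    words = set(text.lower().split())
    best_step, best_overlap = None, 0
    for step, topics in step_topics.items():
        overlap = len(words & topics)
        if overlap > best_overlap:
            best_step, best_overlap = step, overlap
    return best_step

def rank_matched_steps(insights):
    """Count how often insights match each step and rank the steps by that frequency."""
    matches = [match_insight(text) for text in insights]
    return Counter(m for m in matches if m).most_common()

if __name__ == "__main__":
    insights = [
        "ask the doctor which tests to expect",
        "chemo schedules vary, ask about medication timing",
        "light exercise and support groups helped my recovery",
        "bring someone along when the doctor reads the results",
    ]
    print(rank_matched_steps(insights))
```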
|
An analysis device which analyzes a system that inputs input data including a plurality of input parameters and outputs output data, including an acquisition unit that acquires learning data including a plurality of sets of the input data and the output data, and a learning processing unit that learns, based on the acquired learning data, the amount of difference of output data corresponding to a difference between input parameters of two pieces of input data, an analysis method using the analysis device, and a program used in the analysis device are provided. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. An analysis device for analyzing a system that inputs input data including a plurality of input parameters and outputs output data, the device comprising: an acquisition unit that acquires learning data including a plurality of sets of the input data and the output data; and a learning processing unit that learns, based on the acquired learning data, an amount of difference of output data corresponding to a difference between input parameters of two pieces of input data. 2. The analysis device according to claim 1, wherein the learning processing unit generates an estimation model for learning a distance between two pieces of output data with respect to the difference between the input parameters of the two pieces of input data and estimating a change of the output data with respect to a change of the input parameters. 3. The analysis device according to claim 1, wherein the learning processing unit performs pair-wise regression to perform regression analysis of the relationship between the difference between the input parameters and the amount of difference of the output data for each pair. 4. The analysis device according to claim 2, wherein the learning processing unit generates an estimation model for estimating, by using a degree of change for every range of a value between the two input parameters, the amount of difference of the output data for the every range of the value between the input parameters. 5. The analysis device according to claim 2, further comprising: an estimation unit that estimates, based on the estimation model, an amount of change of output data with respect to an amount of change of the input data. 6. The analysis device according to claim 1, further comprising: a normalization unit that performs, for the plurality of pieces of input data, normalization of the input parameters so that an average of the input parameters is 0 and a variance of the input parameters is 1. 7. The analysis device according to claim 2, further comprising: a display unit that displays, in accordance with an amount of change of the input parameters, an estimated amount of change of the output data. 8. The analysis device according to claim 1, wherein the input parameters include an initial condition in a collision simulation, and wherein the output data includes shape data of an object in the collision simulation. 9. An analysis method for analyzing a system that inputs input data including a plurality of input parameters and outputs output data, the method comprising: an acquisition step of acquiring learning data including a plurality of sets of the input data and the output data; and a learning processing step of learning, based on the acquired learning data, an amount of difference of output data corresponding to a difference between input parameters of two pieces of input data. 10. 
The analysis method according to claim 9, wherein the learning processing step includes generating an estimation model for learning a distance between two pieces of output data with respect to the difference between the input parameters of the two pieces of input data and estimating a change of the output data with respect to a change of the input parameters. 11. The analysis method according to claim 10, wherein the learning processing step includes performing pair-wise regression to perform regression analysis of the relationship between the difference between the input parameters and the amount of difference of the output data for each pair. 12. The analysis method according to claim 10, wherein the learning processing step includes generating an estimation model for estimating, by using a degree of change for every range of a value between the two input parameters, the amount of difference of the output data for every range of the value between the input parameters. 13. The analysis method according to claim 10, further comprising: an estimation step of estimating, based on the estimation model, an amount of change of output data with respect to an amount of change of the input data. 14. The analysis method according to claim 9, further comprising: a normalization step of performing, for the plurality of pieces of input data, normalization of the input parameters so that an average of the input parameters is 0 and a variance of the input parameters is 1. 15. The analysis method according to claim 10, further comprising: a display step of displaying, in accordance with an amount of change of the input parameters, an estimated amount of change of the output data. 16. The analysis method according to claim 9, wherein the input parameters include an initial condition in a collision simulation, and wherein the output data includes shape data of an object in the collision simulation. 17. A computer program product for analyzing a system that inputs input data including a plurality of input parameters and outputs output data, the computer program product comprising at least one computer readable non-transitory storage medium having computer readable program instructions thereon for execution by a processor, the computer readable program instructions comprising program instructions for: acquiring learning data including a plurality of sets of the input data and the output data; and learning, based on the acquired learning data, an amount of difference of output data corresponding to a difference between input parameters of two pieces of input data. 18. The computer program product according to claim 17, wherein the learning includes generating an estimation model for learning a distance between two pieces of output data with respect to the difference between the input parameters of the two pieces of input data and estimating a change of the output data with respect to a change of the input parameters. 19. The computer program product according to claim 18, wherein the learning includes performing pair-wise regression to perform regression analysis of the relationship between the difference between the input parameters and the amount of difference of the output data for each pair. 20. 
The computer program product according to claim 18, wherein the learning includes generating an estimation model for estimating, by using a degree of change for every range of a value between the two input parameters, the amount of difference of the output data for every range of the value between the input parameters.
|
PENDING | Please predict whether this patent is acceptable.PATENT ABSTRACT: An analysis device which analyzes a system that inputs input data including a plurality of input parameters and outputs output data, including an acquisition unit that acquires learning data including a plurality of sets of the input data and the output data, and a learning processing unit that learns, based on the acquired learning data, the amount of difference of output data corresponding to a difference between input parameters of two pieces of input data, an analysis method using the analysis device, and a program used in the analysis device are provided. |
|
G06N99005 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: An analysis device which analyzes a system that inputs input data including a plurality of input parameters and outputs output data, including an acquisition unit that acquires learning data including a plurality of sets of the input data and the output data, and a learning processing unit that learns, based on the acquired learning data, the amount of difference of output data corresponding to a difference between input parameters of two pieces of input data, an analysis method using the analysis device, and a program used in the analysis device are provided. |
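The pair-wise regression described in the record above learns how far apart two outputs are as a function of the difference between the corresponding input parameters. The sketch below fits that relationship with ordinary least squares over all sample pairs; the synthetic data, the absolute-difference features, and the added bias term are modelling assumptions for illustration, not details from the patent.

```python
import numpy as np
from itertools import combinations

def pairwise_regression(X, Y):
    """Fit the output distance |Y_i - Y_j| as a linear function of |X_i - X_j|.

    X: (n_samples, n_params) input parameters, Y: (n_samples, n_outputs) outputs.
    Returns least-squares weights, one per parameter plus a bias term.
    """
    diffs, dists = [], []
    for i, j in combinations(range(len(X)), 2):
        diffs.append(np.abs(X[i] - X[j]))
        dists.append(np.linalg.norm(Y[i] - Y[j]))
    A = np.column_stack([np.array(diffs), np.ones(len(diffs))])
    weights, *_ = np.linalg.lstsq(A, np.array(dists), rcond=None)
    return weights

def estimate_change(weights, delta_params):
    """Estimate how much the output moves for a given change of the input parameters."""
    return float(np.append(np.abs(delta_params), 1.0) @ weights)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 3))                      # e.g. normalized initial conditions
    Y = 2.0 * X[:, :1] + 0.01 * rng.normal(size=(30, 1))
    w = pairwise_regression(X, Y)
    print(estimate_change(w, np.array([0.1, 0.0, 0.0])))  # roughly 2 * 0.1 = 0.2
```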
|
A system and method for predicting search term popularity is disclosed herein. A database system may comprise a first database cluster H and a second database cluster L. A machine learning algorithm is trained to create a predictive model. Thereafter, for each record in a database system, the predictive model is used to calculate a probability of the record being accessed. If the calculated probability of the record being accessed is greater than a threshold value, then the record is placed in the first database cluster H; otherwise, the record is placed in the second database cluster L. Training the machine learning algorithm comprises inputting a training feature vector associated with the record into the machine learning algorithm, inputting a cost vector into the machine learning algorithm, and iteratively operating the machine learning algorithm on each record in the set of records to create a predictive model. Other embodiments are also disclosed herein. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system comprising: one or more processing modules; and one or more non-transitory memory storage modules storing computing instructions configured to run on the one or more processing modules and perform acts of: for each record in a set of distinct records in a database system: inputting a training feature vector associated with the record into a machine learning algorithm, the training feature vector associated with the record comprising a list of characteristics of the record; and inputting a cost vector associated with the record into the machine learning algorithm, the cost vector associated with the record configured to train the machine learning algorithm to reduce a probability of a false negative prediction for the record; and iteratively operating the machine learning algorithm on each record in the set of distinct records to train the machine learning algorithm to create a predictive model. 2. The system of claim 1, wherein: the database system comprises a first database cluster H and a second database cluster L; and the one or more non-transitory memory storage modules storing the computing instructions are further configured to run on the one or more processing modules and perform acts of: for each record of the set of distinct records: using the predictive model to calculate a probability of the record being accessed; if the probability of the record being accessed as calculated is greater than a threshold value, then placing the record in the first database cluster H; and if the probability of the record being accessed as calculated is not greater than the threshold value, then placing the record in the second database cluster L; receiving a request from a requester for at least one record of the set of distinct records; and presenting the at least one record from the set of distinct records to the requester in response to the request. 3. 
The system of claim 2, wherein: the threshold value is determined such that at least approximately 99 percent of predicted accesses will access records of the set of distinct records placed in the first database cluster H; and using the predictive model to calculate the probability of the record being accessed comprises: for each record in the set of distinct records: inputting a list of prediction feature vectors, each prediction feature vector of the list of prediction feature vectors comprising the list of characteristics of the record; and using the predictive model to analyze the list of prediction feature vectors to calculate the probability of the record being accessed. 4. The system of claim 1, wherein the machine learning algorithm comprises at least one of: a decision tree, a bagging technique, a logistic regression, a perceptron, a support vector machine, or a relevance vector machine. 5. The system of claim 1, wherein iteratively operating the machine learning algorithm to train the machine learning algorithm to create the predictive model comprises: operating the machine learning algorithm on a periodic basis; and for each record of the set of distinct records: reviewing historical access data associated with the record; and comparing a probability of the record being accessed as calculated with the historical access data associated with the record. 6. The system of claim 1, wherein: the training feature vector further comprises a label configured to indicate, for each record in the set of distinct records, if the record has been accessed within a pre-defined time period; and the cost vector associated with the record represents an estimate of cost incurred when the record is placed in an incorrect database cluster of the first database cluster H or the second database cluster L. 7. A method implemented via execution of computer instructions configured to run on one or more processing modules and configured to be stored on one or more non-transitory memory storage modules, the method comprising: for each record in a set of distinct records in a database system: inputting a training feature vector associated with the record into a machine learning algorithm, the training feature vector associated with the record comprising a list of characteristics of the record; and inputting a cost vector associated with the record into the machine learning algorithm, the cost vector associated with the record configured to train the machine learning algorithm to reduce a probability of a false negative prediction for the record; and iteratively operating the machine learning algorithm on each record in the set of distinct records to train the machine learning algorithm to create a predictive model. 8. 
The method of claim 7, wherein: the database system comprises a first database cluster H and a second database cluster L; the method further comprises: for each record of the set of distinct records: using the predictive model to calculate a probability of the record being accessed; if the probability of the record being accessed as calculated is greater than a threshold value, then placing the record in the first database cluster H; and if the probability of the record being accessed as calculated is not greater than the threshold value, then placing the record in the second database cluster L; receiving a request from a requester for at least one record of the set of distinct records; and presenting the at least one record from the set of distinct records to the requester in response to the request. 9. The method of claim 8, wherein: the threshold value is determined such that at least approximately 99 percent of predicted accesses will access records of the set of distinct records placed in the first database cluster H; and using the predictive model to calculate the probability of the record being accessed comprises: for each record in the set of distinct records: inputting a list of prediction feature vectors, each prediction feature vector of the list of prediction feature vectors comprising the list of characteristics of the record; and using the predictive model to analyze the list of prediction feature vectors to calculate the probability of the record being accessed. 10. The method of claim 7, wherein the machine learning algorithm comprises at least one of: a decision tree, a bagging technique, a logistic regression, a perceptron, a support vector machine, or a relevance vector machine. 11. The method of claim 7, wherein iteratively operating the machine learning algorithm to train the machine learning algorithm to create the predictive model comprises: operating the machine learning algorithm on a periodic basis; and for each record of the set of distinct records: reviewing historical access data associated with the record; and comparing a probability of the record being accessed as calculated with the historical access data associated with the record. 12. The method of claim 7, wherein: the training feature vector further comprises a label configured to indicate, for each record in the set of distinct records, if the record has been accessed within a pre-defined time period; and the cost vector associated with the record represents an estimate of cost incurred when the record is placed in an incorrect database cluster of the first database cluster H or the second database cluster L. 13. A system comprising: one or more processing modules; and one or more non-transitory memory storage modules storing computing instructions configured to run on the one or more processing modules and perform acts of training a machine learning algorithm to create a predictive model; receiving, from a requesting party, a request to analyze a probability that a record of a database will be requested within a predetermined time period; retrieving a feature vector corresponding to the record; calculating a prediction of the probability that the record will be requested within the predetermined time period, the prediction being based on the predictive model used in conjunction with the feature vector; and presenting, to the requesting user, the prediction of the probability, as calculated. 14. 
The system of claim 13, wherein training the machine learning algorithm to create the predictive model comprises: for each record in a set of distinct records in the database: inputting a training feature vector associated with the record into the machine learning algorithm, the training feature vector comprising a list of characteristics of the record; and inputting a cost vector associated with the record into the machine learning algorithm, the cost vector configured to train the machine learning algorithm to reduce a probability of a false negative prediction for the record; and iteratively operating the machine learning algorithm on each record in the set of records to create the predictive model. 15. The system of claim 14, wherein the machine learning algorithm uses a MetaCost algorithm and a cost-insensitive machine learning algorithm used in conjunction with the MetaCost algorithm. 16. The system of claim 13, wherein the feature vector corresponding to the record comprises a list of characteristics of the record. 17. A method implemented via execution of computer instructions configured to run on one or more processing modules and configured to be stored on one or more non-transitory memory storage modules, the method comprising: training a machine learning algorithm to create a predictive model; receiving, from a requesting party, a request to analyze a probability that a record of a database will be requested within a predetermined time period; retrieving a feature vector corresponding to the record; calculating a prediction of the probability that the record will be requested within the predetermined time period, the prediction being based on the predictive model used in conjunction with the feature vector; and presenting, to the requesting user, the prediction of the probability, as calculated. 18. The method of claim 17, wherein training the machine learning algorithm to create the predictive model comprises: for each record in a set of distinct records in the database: inputting a training feature vector associated with the record into the machine learning algorithm, the training feature vector comprising a list of characteristics of the record; and inputting a cost vector associated with the record into the machine learning algorithm, the cost vector configured to train the machine learning algorithm to reduce a probability of a false negative prediction for the record; and iteratively operating the machine learning algorithm on each record in the set of records to create the predictive model. 19. The method of claim 18, wherein the machine learning algorithm uses a MetaCost algorithm and a cost-insensitive machine learning algorithm used in conjunction with the MetaCost algorithm. 20. The method of claim 17, wherein the feature vector corresponding to the record comprises a list of characteristics of the record.
|
PENDING | Please predict whether this patent is acceptable.PATENT ABSTRACT: A system and method for predicting search term popularity is disclosed herein. A database system may comprise a first database cluster H and a second database cluster L. A machine learning algorithm is trained to create a predictive model. Thereafter, for each record in a database system, the predictive model is used to calculate a probability of the record being accessed. If the calculated probability of the record being accessed is greater than a threshold value, then the record is placed in the first database cluster H; otherwise, the record is placed in the second database cluster L. Training the machine learning algorithm comprises inputting a training feature vector associated with the record into the machine learning algorithm, inputting a cost vector into the machine learning algorithm, and iteratively operating the machine learning algorithm on each record in the set of records to create a predictive model. Other embodiments are also disclosed herein.
|
G06N7005 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A system and method for predicting search term popularity is disclosed herein. A database system may comprise a first database cluster H and a second database cluster L. A machine learning algorithm is trained to create a predictive model. Thereafter, for each record in a database system, the predictive model is used to calculate a probability of the record being accessed. If the calculated probability of the record being accessed is greater than a threshold value, then the record is placed in the first database cluster H; otherwise, the record is placed in the second database cluster L. Training the machine learning algorithm comprises inputting a training feature vector associated with the record into the machine learning algorithm, inputting a cost vector into the machine learning algorithm, and iteratively operating the machine learning algorithm on each record in the set of records to create a predictive model. Other embodiments are also disclosed herein.
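The claims above train an access-prediction model with a cost vector that penalizes false negatives, then place records whose predicted access probability clears a threshold into a hot cluster H that is meant to capture about 99 percent of predicted accesses. The sketch below uses scikit-learn sample weights as a simple stand-in for the cost-sensitive training (it does not implement the MetaCost algorithm the claims mention), and the feature generation, cost value, and coverage target are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_access_model(features, accessed, false_negative_cost=10.0):
    """Train an access predictor, weighting accessed records more heavily so the
    model is reluctant to miss them (a stand-in for cost-sensitive training)."""
    weights = np.where(accessed == 1, false_negative_cost, 1.0)
    model = LogisticRegression(max_iter=1000)
    model.fit(features, accessed, sample_weight=weights)
    return model

def split_clusters(model, features, coverage=0.99):
    """Pick a probability threshold so the records kept in cluster H cover at
    least `coverage` of the predicted accesses; the rest go to cluster L."""
    probs = model.predict_proba(features)[:, 1]
    order = np.argsort(-probs)
    cumulative = np.cumsum(probs[order]) / probs.sum()
    cutoff = min(np.searchsorted(cumulative, coverage), len(probs) - 1)
    threshold = probs[order][cutoff]
    in_cluster_h = probs >= threshold
    return in_cluster_h, threshold

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 4))                                  # per-record feature vectors
    y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0.8).astype(int)   # 1 = record was accessed
    model = train_access_model(X, y)
    hot, threshold = split_clusters(model, X)
    print(f"threshold={threshold:.3f}, records in cluster H: {hot.sum()}/{len(hot)}")
```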
|
A method for measuring technology trends that includes providing from a plurality of inventors in a technology field a baseline of technical documents published in a time period, and detecting a number of technical document publications having at least one inventor in the plurality of inventors in the technology field. The method further includes comparing the number of technical document publications to the baseline of technical documents published in the time period. If the technical document publications exceed the baseline, the number of technical document publications are trending. Comparative analysis of the content for the technical document publications that are trending determines a measurement of similarity in technical field subgroups. Trending technical subgroups are extracted from the technical document publications that are trending with a degree of similarity above a threshold as a target technical group that is a trend. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for measuring technology trends comprising: analyzing for each inventor of a plurality of inventors in a technology field, a time series of publication dates for technical documents in the technology field to provide a baseline of technical documents published in a time period; detecting with a hardware processor based counter a number of technical document publications having at least one inventor in the plurality of inventors in the technology field; comparing the number of technical document publications to said baseline of technical documents filed in the time period, wherein if the technical document publications exceed the baseline of technical documents, the number of technical document publications are trending; performing a comparative analysis of the content for the technical document publications that are trending to determine a measurement of similarity in technical field subgroups described in the technical document publications that are trending; and extracting trending technical subgroups from the technical document publications that are trending with a degree of similarity above a threshold as a technical subgroup that is a trend. 2. The method of claim 1, wherein the technical documents are patent applications published by a patent office. 3. The method of claim 1, wherein the technology field is selected from the classification system of a patent office. 4. The method of claim 1, wherein said detecting with a hardware processor based counter an increase in a number of publications of trending technical documents includes building a time-series model for the number of technical documents of each inventor and comparing an expected value with an actual value. 5. The method of claim 4, wherein the time-series model includes a Poisson process. 6. The method of claim 5, wherein said comparing the number of technical document publications to said baseline of technical documents filed in the time period includes calculating a score S from: S = −log(P(x0)) wherein P(x) is a distribution of a Poisson process modeled using an average number of patent applications published per month for said each inventor of said plurality of inventors in said technology field, and x0 is an observed number of patent applications published per month for said each inventor of said plurality of inventors in said technology field. 7. 
The method of claim 1, wherein said comparative analysis of the content for the trending technical documents to determine a measurement of similarity in technical field subgroups described in the trending technical documents further comprises identifying a keyword as a keyword indicating a trending technology from an extracted subgroup of technical documents by using an index. 8. The method of claim 5, wherein the comparative analysis comprises Kullback-Leibler divergence, Pointwise Mutual Information (PMI) or a combination thereof. 9. The method of claim 1 further comprising providing an index of terms for said technology subgroups that are trending. 10. The method of claim 9, wherein providing the index comprises Term Frequency-Inverse Document Frequency (TFIDF), Pointwise Mutual Information (PMI) or a combination thereof. 11. A system for detecting technology trends comprising: a database of inventors in a technology field; a baseline generator for providing a baseline frequency of technical publications published by each inventor of said database for a specified time period; a counter for determining from technical publications whether there is an increase in technical publications for at least one of the inventors in the database of inventors in the technological field; and a comparison module for determining whether the technical publications providing the increase in the technical publications have technology subgroups with a frequency that is greater than a target trend frequency that indicates a technical subgroup as a trend. 12. The system of claim 11, wherein the technical documents are patent applications published by a patent office. 13. The system of claim 11, wherein the technology field is selected from the classification system of a patent office. 14. The system of claim 11, wherein the counter detects an increase in a number of publications by creating a time-series model for the number of technical documents of each inventor and comparing an expected value of technical publication with an actual value of technical publications. 15. The system of claim 14, wherein said creating the time-series model includes a Poisson process. 16. The system of claim 15, wherein said comparing the number of expected technical document publications to the actual technical documents published in the time period calculating a score s from: S=−log(Px0) wherein P(x) is a distribution of a Poisson process modeled using an average number of patent applications published per month for said each inventor of said plurality of inventors in said technology field, and x0 is an observed number of patent applications published per month for said each inventor of said plurality of inventors in said technology field. 17. The system of claim 15, wherein said comparison module performs a comparative analysis of the content for the technical publications to determine a measurement of similarity in technical field subgroups described that comprises identifying a keyword as a keyword indicating an extracted subgroup of technical documents by using an index. 18. The system of claim 15, wherein the comparison module performs a comparative analysis comprising Kullback-Leibler divergence, Pointwise Mutual Information (PMI) or a combination thereof. 19. The system of claim 15 further comprising a term extractor that provides an index of terms for technology subgroups that are trending. 20. 
A non-transitory computer readable storage medium comprising a computer readable program for determining technology trends, wherein the computer readable program when executed on a computer causes the computer to perform the steps of: analyzing for each inventor of a plurality of inventors in a technology field, a time series of publication dates for technical documents in the technology field to provide a baseline of technical documents published in a time period; detecting with a counter a number of technical document publications having at least one inventor in the plurality of inventors in the technology field; comparing the number of technical document publications to said baseline of technical documents published in the time period, wherein if the technical document publications exceed the baseline of technical documents, the number of technical document publications are trending; performing a comparative analysis of the content for the technical document publications that are trending to determine a measurement of similarity in technical field subgroups described in the technical document publications that are trending; and extracting trending technical subgroups from the technical document publications that are trending with a degree of similarity above a threshold as a technical subgroup that is a trend. |
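The surprise score in claims 6 and 16 above, S = −log P(x₀) with P a Poisson distribution fitted to an inventor's average number of monthly publications, can be sketched as follows. This is an illustrative reading rather than code from the patent; the per-inventor rates, observed counts, and the trending threshold are assumptions.

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson distribution with rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def surprise_score(observed, avg_rate):
    """S = -log P(x0): large when the observed monthly count is unlikely under the baseline."""
    return -math.log(poisson_pmf(observed, avg_rate))

def is_trending(observed, avg_rate, threshold=5.0):
    """Flag an inventor's month as trending when the surprise score exceeds a chosen threshold."""
    return surprise_score(observed, avg_rate) > threshold

# Hypothetical inventor baselines (average publications per month) and observed counts.
inventors = {"inventor_a": (0.5, 4), "inventor_b": (2.0, 2)}
for name, (avg_rate, observed) in inventors.items():
    print(name, round(surprise_score(observed, avg_rate), 2), is_trending(observed, avg_rate))
```

Documents from the months flagged this way would then feed the comparative-analysis step (KL divergence, PMI, or TF-IDF indexing in the later claims), which is omitted here.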
|
PENDING | Please predict whether this patent is acceptable.PATENT ABSTRACT: A method for measuring technology trends that includes providing from a plurality of inventors in a technology field a baseline of technical documents published in a time period, and detecting a number of technical document publications having at least one inventor in the plurality of inventors in the technology field. The method further includes comparing the number of technical document publications to the baseline of technical documents published in the time period. If the technical document publications exceed the baseline, the number of technical document publications are trending. Comparative analysis of the content for the technical document publications that are trending determines a measurement of similarity in technical field subgroups. Trending technical subgroups are extracted from the technical document publications that are trending with a degree of similarity above a threshold as a target technical group that is a trend. |
|
G06N5048 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A method for measuring technology trends that includes providing from a plurality of inventors in a technology field a baseline of technical documents published in a time period, and detecting a number of technical document publications having at least one inventor in the plurality of inventors in the technology field. The method further includes comparing the number of technical document publications to the baseline of technical documents published in the time period. If the technical document publications exceed the baseline, the number of technical document publications are trending. Comparative analysis of the content for the technical document publications that are trending determines a measurement of similarity in technical field subgroups. Trending technical subgroups are extracted from the technical document publications that are trending with a degree of similarity above a threshold as a target technical group that is a trend. |
|
Systems and methods are disclosed that have and implement persona-based decision assistants and graphical user interfaces. The graphical user interfaces may present a view of one or more decision options and may include one or more user-selectable elements through which a selected decision option may be accessed or modified. In certain embodiments, user selections and similar traveler “look-alikes'” purchase behaviors may be processed to refine a persona corresponding to the search, in parallel to a search occurring and after an initial search result has been presented. In certain embodiments, the graphical user interface may show a subset of possible decision options. In certain embodiments, the graphical user interface may provide a selectable element to modify search, persona, and other preferences. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system comprising: a processor; and a memory accessible to the processor and storing instructions that, when executed, cause the processor to: provide, via a graphical user interface, a selected one of a plurality of decision options; and obscure others of the plurality of decision options. 2. The system of claim 1, wherein the memory further comprises instructions that, when executed, cause the processor to selectively reveal others of the plurality of decision options. 3. A system comprising: a processor; and a memory accessible to the processor and storing instructions that, when executed, cause the processor to: provide, via a graphical user interface (GUI), a plurality of decision options as a set of cards, each card representing a decision option of the plurality of decision options; and selectively alter an appearance of the card within the graphical user interface in response to an input. 4. The system of claim 3, wherein an appearance of the card is selectively altered, within the GUI, by providing a view that represents a back side of the card. 5. The system of claim 3, wherein the memory further includes instructions that, when executed cause the processor to: move the image of the card within the GUI in response to the input; store a decision option associated with the card when the card is moved in a first direction; and discard a decision option associated with the card when the card is moved in a second direction. 6. A system comprising: a processor; and a memory accessible to the processor and storing instructions that, when executed, cause the processor to: receive decision options corresponding to a plurality of possible options; provide, via a graphical user interface, a selected one of the plurality of possible options; and obscure others of the plurality of itineraries. 7. The system of claim 6, wherein the memory further includes instructions that, when executed, cause the processor to: include one or more user-selectable elements within the graphical user interface; receive input corresponding to one of the user-selectable elements; and provide one or more options to configure a continuous decision making process related to a selected decision option. |
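Claim 5 of the claim set above stores a card's decision option when the card is moved in a first direction and discards it when moved in a second direction. A toy, framework-free sketch of that store/discard rule follows; mapping the first and second directions to right and left swipes, and the card data structure itself, are assumptions rather than details from the patent.

```python
saved, discarded = [], []

def handle_swipe(card, direction):
    """Store the card's decision option on a right swipe, discard it on a left swipe."""
    if direction == "right":      # assumed first direction: keep the option
        saved.append(card["option"])
    elif direction == "left":     # assumed second direction: throw the option away
        discarded.append(card["option"])
    else:
        raise ValueError(f"unknown swipe direction: {direction}")

# Hypothetical decision-option cards.
handle_swipe({"option": "Flight A, nonstop"}, "right")
handle_swipe({"option": "Flight B, one stop"}, "left")
print("saved:", saved)
print("discarded:", discarded)
```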
|
PENDING | Please predict whether this patent is acceptable.PATENT ABSTRACT: Systems and methods are disclosed that have and implement persona-based decision assistants and graphical user interfaces. The graphical user interfaces may present a view of one or more decision options and may include one or more user-selectable elements through which a selected decision option may be accessed or modified. In certain embodiments, user selections and similar traveler “look-alikes'” purchase behaviors may be processed to refine a persona corresponding to the search, in parallel to a search occurring and after an initial search result has been presented. In certain embodiments, the graphical user interface may show a subset of possible decision options. In certain embodiments, the graphical user interface may provide a selectable element to modify search, persona, and other preferences. |
|
G06N5045 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Systems and methods are disclosed that have and implement persona-based decision assistants and graphical user interfaces. The graphical user interfaces may present a view of one or more decision options and may include one or more user-selectable elements through which a selected decision option may be accessed or modified. In certain embodiments, user selections and similar traveler “look-alikes'” purchase behaviors may be processed to refine a persona corresponding to the search, in parallel to a search occurring and after an initial search result has been presented. In certain embodiments, the graphical user interface may show a subset of possible decision options. In certain embodiments, the graphical user interface may provide a selectable element to modify search, persona, and other preferences. |
|
A system for diagnosing at least one component requiring maintenance in an appliance and/or installation, having a) a device, designed for data and/or message interchange with regard to states of one or more components with an analysis unit, b) a device, designed to receive historic data from the one or more components with regard to their life in collective form, c) a device, designed for data and/or message interchange with a learning machine unit that is designed to deliver a predictive model for identifying at least one component requiring maintenance to the device, d) an evaluation device that is designed to use the data and/or messages coming from the analysis unit, e) a device, designed for data and/or message interchange with a monitoring device that is designed to take the one or more identified components requiring maintenance, which can prompt a visual and/or audible display is provided. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system for diagnosing at least one component requiring maintenance in an appliance and/or installation, having a) a device, designed for data and/or message interchange with regard to states of one or more components with an analysis unit that is designed to monitor the states of the one or more components and/or events arising thereon and to output them to the device in a systematized form, b) a device, designed to receive historic data from the one or more components with regard to their life in collective form, c) a device, designed for data and/or message interchange with a learning machine unit that is designed to deliver a predictive model for identifying at least one component requiring maintenance to the device, d) an evaluation device that is designed to use the data and/or messages coming from the analysis unit in systematic form, the historic data in collective form and to use the predictive model to identify the one or more components requiring maintenance, e) a device, designed for data and/or message interchange with a monitoring device that is designed to take the one or more identified components requiring maintenance as a basis for outputting an error message to the monitoring device, which can prompt a visual and/or audible display. 2. The system as claimed in claim 1, wherein the learning machine unit is designed to identify, within a determined time window, one or more components requiring maintenance on the basis of a target value, specified by the respective affected component, for a training on the basis of classifications that are derivable from the historic data of the appliance and/or installation. 3. The system as claimed in claim 1, wherein events and/or states are provided in a systematized form according to their frequency, if need be in a manner provided with a weighting that corresponds to their relevance, within a time window. 4. The system as claimed in claim 1, wherein said collective form reproduces a correlation between the one or more components and other components of the appliance and/or installation. 5. The system as claimed in claim 1, wherein said life represents an expected life cycle, the average life cycle having been related to the ongoing life cycle. 6. The system as claimed in claim 1, wherein the predictive model is representable by a decision tree in which the leaves represent class tags and branches represent relationships to functions and/or rules that lead to these class tags. 7. 
The system as claimed in claim 1, wherein the evaluation device is integrated in said monitoring device remotely from the system. 8. A method for diagnosing at least one component requiring maintenance in an appliance and/or installation, having the following steps: a) accepting states from one or more components provided in a systematized form, wherein the states of the one or more components and/or events arising thereon are monitored by an analysis device, b) receiving historic data from the one or more components with regard to their life in collective form, c) accepting a predictive model from a learning machine unit that delivers the predictive model for identifying at least one component requiring maintenance, d) using the states coming from the analysis unit in systematic form, the historic data in collective form and using the predictive model to identify the one or more components requiring maintenance, e) outputting an error message on the basis of the identification of the one or more components requiring maintenance. 9. The method as claimed in claim 8, wherein one or more components requiring maintenance are identified, within a determined time window, on the basis of a target value, specified by the respective affected component, for a training on the basis of classifications that are derived from the historic data of the appliance and/or installation. 10. The method as claimed in claim 8, wherein events and/or states are provided in a systematized form according to their frequency, if need be in a manner provided with a weighting that corresponds to their relevance, within a time window. 11. The method as claimed in claim 8, wherein said collective form reproduces a correlation between the one or more components and other components of the appliance and/or installation. 12. The method as claimed in claim 8, wherein said life represents an expected life cycle, the average life cycle being related to the ongoing life cycle. 13. The method as claimed in claim 8, wherein the predictive model is represented by a decision tree in which the leaves represent class tags and branches represent relationships to functions and/or rules that lead to these class tags. 14. A computer program having means for performing the method as claimed in claim 8 when the computer program is executed on a system or on the devices of the system as claimed in one of the aforementioned system claims. |
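One way to read the evaluation step in the claims above (event states systematized by frequency and weighted by relevance, combined with the ratio of the ongoing to the expected life cycle, then handed to a predictive model such as a decision tree) is sketched below. The weighting scheme, the life-ratio feature, and the stand-in thresholding rule are assumptions made only for illustration; the claims merely require that some learned model consumes these inputs.

```python
from collections import Counter

def systematize_events(events, weights):
    """Count events within a time window and apply a relevance weight to each event type."""
    counts = Counter(events)
    return {event: count * weights.get(event, 1.0) for event, count in counts.items()}

def life_ratio(ongoing_hours, expected_life_hours):
    """Fraction of the expected life cycle already consumed by the component."""
    return ongoing_hours / expected_life_hours

def needs_maintenance(weighted_events, used_fraction,
                      event_threshold=5.0, life_threshold=0.8):
    """Toy stand-in for the learned predictive model: flag the component when either
    the weighted error activity or the consumed life fraction is high."""
    return sum(weighted_events.values()) > event_threshold or used_fraction > life_threshold

# Hypothetical component history.
events = ["overtemp", "overtemp", "retry", "retry", "retry"]
weights = {"overtemp": 2.0, "retry": 0.5}
we = systematize_events(events, weights)
print(we, needs_maintenance(we, life_ratio(7000, 10000)))
```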
|
PENDING | Please predict whether this patent is acceptable.PATENT ABSTRACT: A system for diagnosing at least one component requiring maintenance in an appliance and/or installation, having a) a device, designed for data and/or message interchange with regard to states of one or more components with an analysis unit, b) a device, designed to receive historic data from the one or more components with regard to their life in collective form, c) a device, designed for data and/or message interchange with a learning machine unit that is designed to deliver a predictive model for identifying at least one component requiring maintenance to the device, d) an evaluation device that is designed to use the data and/or messages coming from the analysis unit, e) a device, designed for data and/or message interchange with a monitoring device that is designed to take the one or more identified components requiring maintenance, which can prompt a visual and/or audible display is provided. |
|
G06N504 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A system for diagnosing at least one component requiring maintenance in an appliance and/or installation, having a) a device, designed for data and/or message interchange with regard to states of one or more components with an analysis unit, b) a device, designed to receive historic data from the one or more components with regard to their life in collective form, c) a device, designed for data and/or message interchange with a learning machine unit that is designed to deliver a predictive model for identifying at least one component requiring maintenance to the device, d) an evaluation device that is designed to use the data and/or messages coming from the analysis unit, e) a device, designed for data and/or message interchange with a monitoring device that is designed to take the one or more identified components requiring maintenance, which can prompt a visual and/or audible display is provided. |
|
Systems and methods are provided for managing medical adherence. An exemplary method may include managing medical adherence utilizing data aggregating and processing to determine impact on a user's health based on their behavior related to prescribed medication. The method may entail utilizing data related to a medication regimen and patient behavior to determine a patient's compliance to the regimen in terms of dosage and time. These values may be utilized to calculate a medical adherence value representing a patient's adherence to a prescribed regimen. Responsive to determining low medical adherence, a notification may be generated which may result in an intervention with the patient. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-implemented method for managing a patient's medical adherence, the method comprising: receiving, using a processor, data related to the patient, the data including information related to a prescribed medication regimen having one or more medications, patient behavior data, a respective literacy level associated with each of the one or more medications; calculating a compliance to dosage and a compliance to time for each of the one or more medications based on the received data; calculating a drug adherence count associated with each of the one or more medications by summing at least two of the compliance to dosage, the compliance to time, and the respective literacy level associated with each of the one or more medications; determining a daily medication adherence for each of the one or more medications; calculating a daily regimen adherence value by summing the daily medication adherence of all of the one or more medications in the medication regimen; calculating a daily regimen baseline value by re-calculating the daily regimen adherence value by utilizing a maximum potential value for the drug adherence count for each of the respective medications in the regimen associated with the patient; determining a medical adherence value based on the daily regimen adherence value and the daily regimen baseline value; and comparing the medical adherence with a threshold value. 2. The computer-implemented method of claim 1, further comprising: generating a notification when the medial adherence value is less than the threshold value. 3. The computer-implemented method of claim 2, further comprising: transmitting the generated notification to one or more user devices; 4. The computer-implemented method of claim 3, wherein the generated notification contains one or more intervention options. 5. The computer-implemented method of claim 1, wherein: receiving data related to the patient further comprises receiving a respective drug importance factor associated with each of the one or more medications; and determining a daily medication adherence for each of the one or more medications comprises determining the daily medication adherence for each of the one or more medications based on the drug adherence count and the drug importance factor associated with each of the one or more medications. 6. The computer-implemented method of claim 5, wherein determining the daily medication adherence for each of the one or more medications based on the drug adherence count and the drug importance factor associated with each of the one or more medications comprises multiplying the drug adherence count and the drug importance factor. 7. 
The computer-implemented method of claim 1, wherein calculating the compliance to dosage comprises: determining a prescribed dosage for each of the one or more medications; assigning dosage boolean values for actual dosage consumed based on the prescribed dosage and the received data; and calculating the compliance to dosage based on the assigned dosage boolean values. 8. The computer-implemented method of claim 7, wherein calculating the compliance to time comprises: determining a prescribed dosage time for each of the one or more medications; assigning time boolean values for actual consumption time based on the prescribed dosage time and the received data; and calculating the compliance to time based on the assigned time boolean values. 9. The computer-implemented method of claim 1, further comprising: assigning a unique regimen identity to the regimen; and assigning respective unique medication identities to each of the one or more medications based on the unique regimen identity. 10. A system for managing medical adherence, the system comprising: a memory having processor-readable instructions stored therein; and a processor configured to access the memory and execute the processor-readable instructions, which when executed by the processor configures the processor to perform a method, the method comprising: receiving data related to the patient, the data including information related to a prescribed medication regimen having one or more medications, patient behavior data, a respective literacy level associated with each of the one or more medications; calculating a compliance to dosage and a compliance to time for each of the one or more medications based on the received data; calculating a drug adherence count associated with each of the one or more medications by summing at least two of the compliance to dosage, the compliance to time, and the respective literacy level associated with each of the one or more medications; determining a daily medication adherence for each of the one or more medications; calculating a daily regimen adherence value by summing the daily medication adherence of all of the one or more medications in the medication regimen; calculating a daily regimen baseline value by re-calculating the daily regimen adherence value by utilizing a maximum potential value for the drug adherence count for each of the respective medications in the regimen associated with the patient; determining a medical adherence value based on the daily regimen adherence value and the daily regimen baseline value; and comparing the medical adherence with a threshold value. 11. The system of claim 10, wherein the method further comprises: generating a notification when the medial adherence value is less than the threshold value. 12. The system of claim 11, wherein the method further comprises: transmitting the generated notification to one or more user devices; 13. The system of claim 12, wherein the generated notification contains one or more intervention options. 14. The system of claim 13, wherein: receiving data related to the patient further comprises receiving a respective drug importance factor associated with each of the one or more medications; and determining a daily medication adherence for each of the one or more medications comprises determining the daily medication adherence for each of the one or more medications based on the drug adherence count and the drug importance factor associated with each of the one or more medications. 15. 
The system of claim 14, wherein determining the daily medication adherence for each of the one or more medications based on the drug adherence count and the drug importance factor associated with each of the one or more medications comprises multiplying the drug adherence count and the drug importance factor. 16. The system of claim 10, wherein calculating the compliance to dosage comprises: determining a prescribed dosage for each of the one or more medications; assigning dosage boolean values for actual dosage consumed based on the prescribed dosage and the received data; and calculating the compliance to dosage based on the assigned dosage boolean values. 17. The system of claim 16, wherein calculating the compliance to time comprises: determining a prescribed dosage time for each of the one or more medications; assigning time boolean values for actual consumption time based on the prescribed dosage time and the received data; and calculating the compliance to time based on the assigned time boolean values. 18. The system of claim 10, wherein the method further comprises: assigning a unique regimen identity to the regimen; and assigning respective unique medication identities to each of the one or more medications based on the unique regimen identity. 19. A non-transitory computer-readable medium storing instructions, then instructions, when executed by a computer system cause the computer system to perform a method, the method comprising: receiving, using a processor, data related to the patient, the data including information related to a prescribed medication regimen having one or more medications, patient behavior data, a respective literacy level associated with each of the one or more medications; calculating a compliance to dosage and a compliance to time for each of the one or more medications based on the received data; calculating a drug adherence count associated with each of the one or more medications by summing at least two of the compliance to dosage, the compliance to time, and the respective literacy level associated with each of the one or more medications; determining a daily medication adherence for each of the one or more medications; calculating a daily regimen adherence value by summing the daily medication adherence of all of the one or more medications in the medication regimen; calculating a daily regimen baseline value by re-calculating the daily regimen adherence value by utilizing a maximum potential value for the drug adherence count for each of the respective medications in the regimen associated with the patient; determining a medical adherence value based on the daily regimen adherence value and the daily regimen baseline value; and comparing the medical adherence with a threshold value. 20. The non-transitory computer-readable medium of claim 19, wherein the method further comprises: generating a notification when the medial adherence value is less than the threshold value. 21. The non-transitory computer-readable medium of claim 20, wherein the method further comprises: transmitting the generated notification to one or more user devices; 22. The non-transitory computer-readable medium of claim 21, wherein the generated notification contains one or more intervention options. 23. 
The non-transitory computer-readable medium of claim 19, wherein: receiving data related to the patient further comprises receiving a respective drug importance factor associated with each of the one or more medications; and determining a daily medication adherence for each of the one or more medications comprises determining the daily medication adherence for each of the one or more medications based on the drug adherence count and the drug importance factor associated with each of the one or more medications. 24. The non-transitory computer-readable medium of claim 23, wherein determining the daily medication adherence for each of the one or more medications based on the drug adherence count and the drug importance factor associated with each of the one or more medications comprises multiplying the drug adherence count and the drug importance factor. 25. The non-transitory computer-readable medium of claim 19, wherein calculating the compliance to dosage comprises: determining a prescribed dosage for each of the one or more medications; assigning dosage boolean values for actual dosage consumed based on the prescribed dosage and the received data; and calculating the compliance to dosage based on the assigned dosage boolean values. 26. The non-transitory computer-readable medium of claim 25, wherein calculating the compliance to time comprises: determining a prescribed dosage time for each of the one or more medications; assigning time boolean values for actual consumption time based on the prescribed dosage time and the received data; and calculating the compliance to time based on the assigned time boolean values. 27. The non-transitory computer-readable medium of claim 19, wherein the method further comprises: assigning a unique regimen identity to the regimen; and assigning respective unique medication identities to each of the one or more medications based on the unique regimen identity. |
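The arithmetic in claim 1 above can be followed step by step: each drug's adherence count sums compliance-to-dosage, compliance-to-time, and a literacy level; the daily medication adherence multiplies that count by a drug importance factor (claims 5-6); the daily regimen adherence sums across drugs; a baseline recomputes the same sum with each drug's maximum potential count; and a medical adherence value derived from the adherence and baseline values is compared with a threshold. The sketch below assumes the final value is the adherence-to-baseline ratio and uses made-up numbers; both are illustrative assumptions, since the claims do not fix the exact combination.

```python
def drug_adherence_count(compliance_dosage, compliance_time, literacy):
    """Sum of the per-drug compliance and literacy terms (claim 1)."""
    return compliance_dosage + compliance_time + literacy

def daily_regimen_value(drugs, use_max=False):
    """Sum of (count * importance) over the regimen; with use_max=True this is the baseline."""
    total = 0.0
    for d in drugs:
        count = d["max_count"] if use_max else drug_adherence_count(
            d["compliance_dosage"], d["compliance_time"], d["literacy"])
        total += count * d["importance"]
    return total

def medical_adherence(drugs):
    """Assumed combination: daily regimen adherence divided by the maximum-potential baseline."""
    return daily_regimen_value(drugs) / daily_regimen_value(drugs, use_max=True)

# Hypothetical two-drug regimen (all values illustrative).
regimen = [
    {"compliance_dosage": 1.0, "compliance_time": 0.5, "literacy": 1.0,
     "max_count": 3.0, "importance": 2.0},
    {"compliance_dosage": 1.0, "compliance_time": 1.0, "literacy": 0.5,
     "max_count": 3.0, "importance": 1.0},
]
value = medical_adherence(regimen)
print(round(value, 3), "needs intervention" if value < 0.9 else "ok")
```

A value below the threshold is what would trigger the notification and intervention options of the dependent claims.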
|
ACCEPTED | Please predict whether this patent is acceptable.PATENT ABSTRACT: Systems and methods are provided for managing medical adherence. An exemplary method may include managing medical adherence utilizing data aggregating and processing to determine impact on a user's health based on their behavior related to prescribed medication. The method may entail utilizing data related to a medication regimen and patient behavior to determine a patient's compliance to the regimen in terms of dosage and time. These values may be utilized to calculate a medical adherence value representing a patient's adherence to a prescribed regimen. Responsive to determining low medical adherence, a notification may be generated which may result in an intervention with the patient. |
|
G06N5048 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Systems and methods are provided for managing medical adherence. An exemplary method may include managing medical adherence utilizing data aggregating and processing to determine impact on a user's health based on their behavior related to prescribed medication. The method may entail utilizing data related to a medication regimen and patient behavior to determine a patient's compliance to the regimen in terms of dosage and time. These values may be utilized to calculate a medical adherence value representing a patient's adherence to a prescribed regimen. Responsive to determining low medical adherence, a notification may be generated which may result in an intervention with the patient. |
|
Methods and systems are described for processing media consumption information across multiple data spaces over a common media asset space. User preference information is received from two data spaces. User preference information from the first data space includes monitored user interactions of a first plurality of users with respect to a first plurality of media assets and user preference information from the second data space includes levels of enjoyment that a second plurality of users expressly input with respect to a second plurality of media assets. Both sets of preference information are transformed to respective consumption layer preference information and respective attributes indicative of users' preferences are determined. A first and second sentimental similarity values are determined for the first and second preference information respectively. The two sentimental similarity values are compared and an error value is calculated based on the comparison. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for processing media consumption information across multiple data spaces over a common media asset space, the method comprising: receiving, by a consumption model, first preference information of a first plurality of users, wherein the first preference information is associated with a first data space and describes monitored user interactions of the first plurality of users with respect to a first plurality of media assets, and wherein the first plurality of media assets corresponds to the first data space; receiving, by the consumption model, second preference information of a second plurality of users, wherein the second preference information is associated with a second data space and comprises levels of enjoyment that are expressly input by the second plurality of users with respect to a second plurality of media assets, and wherein the second plurality of media assets corresponds to the second data space; transforming the first preference information to first consumption layer preference information, wherein the first consumption layer preference information comprises specific attributes that are indicative of users' preferences; transforming the second preference information to second consumption layer preference information, wherein the second consumption layer preference information comprises specific attributes that are indicative of users' preferences; determining, using a preference model, first user preference details corresponding to a first media asset and a second media asset based on the first consumption layer preference information; determining, using the preference model, second user preference details corresponding to the first media asset and the second media asset based on the second consumption layer preference information; determining, using a similarity model, a first sentimental similarity between the first media asset and the second media asset, wherein the first sentimental similarity corresponds to a degree of similarity between the first media asset and the second media asset based on the first user preference details; determining, using the similarity model, a second sentimental similarity between the first media asset and the second media asset, wherein the second sentimental similarity corresponds to a degree of similarity between the first media asset and the second media asset based on the second user preference details; and determining, using an error model, a difference between the first sentimental 
similarity and the second sentimental similarity. 2. The method of claim 1, wherein the difference is a pair-wise difference, and wherein the method further comprises: adjusting, based on the pair-wise difference between the first sentimental similarity and the second sentimental similarity, the first user preference details and the second user preference details determined from the first and second consumption layer preference information in order to minimize the error value. 3. The method of claim 2, wherein adjusting, based on the difference between the first sentimental similarity and the second sentimental similarity, the user preference details comprises applying a chain rule in order to determine weights associated with trainable parameters of the preference model. 4. The method of claim 1, wherein determining, using the preference model, the user preference details corresponding to the first media asset and the second media asset based on the first consumption layer preference information and the second consumption layer preference information respectively, comprises applying at least one of a linear transformation function, a neural network, and a restricted Boltzmann machine. 5. The method of claim 1, wherein determining, using the similarity model, the first sentimental similarity between the first media asset and the second media asset based on the received user preference details associated with the first data space comprises applying at least one of a Pearson's coefficient and cosine similarity. 6. The method of claim 1, wherein determining, using the error model, the difference between the first sentimental similarity and the second sentimental similarity comprises: calculating a first quality value, wherein the first quality value is associated with the first sentimental similarity; calculating a second quality value, wherein the second quality value is associated with the second sentimental similarity; and determining the difference between the first sentimental similarity and the second sentimental similarity based on the first quality value and the second quality value. 7. The method of claim 6, wherein the first quality value is based on a number of users from the first data space who consumed the first media asset and the second media asset. 8. The method of claim 6, wherein the second quality value is based on a number of users from the second data space who expressly input their level of enjoyment with respect to the first media asset and the second media asset. 9. The method of claim 6, wherein determining, using the error model, the difference between the first sentimental similarity and the second sentimental similarity comprises: determining a first particularity value of the first preference information; determining a second particularity value of the second preference information; and determining the difference between the first sentimental similarity and the second sentimental similarity based on the first particularity value and the second particularity value. 10. 
The method of claim 1, wherein transforming the first preference information and the second preference information to the first consumption layer preference information and the second consumption layer preference information comprises: determining, for the first media asset of the first plurality of media assets whether the first media asset is also within the second plurality of media assets; and in response to determining that the first media asset is also within the second plurality of media assets, generating a record for the first media asset, wherein the record comprises preference information that is retrieved from both the first data space and the second data space. 11. A system for processing media consumption information across multiple data spaces over a common media asset space, the system comprising: control circuitry configured to: receive first preference information of a first plurality of users, wherein the first preference information is associated with a first data space and describes monitored user interactions of the first plurality of users with respect to a first plurality of media assets, and wherein the first plurality of media assets corresponds to the first data space; receive second preference information of a second plurality of users, wherein the second preference information is associated with a second data space and comprises levels of enjoyment that are expressly input by the second plurality of users with respect to a second plurality of media assets, and wherein the second plurality of media assets corresponds to the second data space; transform the first preference information to first consumption layer preference information, wherein the first consumption layer preference information comprises specific attributes that are indicative of users' preferences; transform the second preference information to second consumption layer preference information, wherein the second consumption layer preference information comprises specific attributes that are indicative of users' preferences; determine first user preference details corresponding to a first media asset and a second media asset based on the first consumption layer preference information; determine second user preference details corresponding to the first media asset and the second media asset based on the second consumption layer preference information; determine a first sentimental similarity between the first media asset and the second media asset, wherein the first sentimental similarity corresponds to a degree of similarity between the first media asset and the second media asset based on the first user preference details; determine a second sentimental similarity between the first media asset and the second media asset, wherein the second sentimental similarity corresponds to a degree of similarity between the first media asset and the second media asset based on the second user preference details; and determine a difference between the first sentimental similarity and the second sentimental similarity. 12. The system of claim 11, wherein the difference is a pair-wise difference, and wherein the control circuitry is further configured to: adjust, based on the pair-wise difference between the first sentimental similarity and the second sentimental similarity, the first user preference details and the second user preference details determined from the first and second consumption layer preference information in order to minimize the error value. 13. 
The system of claim 12, wherein the control circuitry configured to adjust, based on the difference between the first sentimental similarity and the second sentimental similarity, the user preference details is further configured to apply a chain rule in order to determine weights associated with trainable parameters of the preference model. 14. The system of claim 11, wherein the control circuitry configured to determine, using the preference model, the user preference details corresponding to the first media asset and the second media asset based on the first consumption layer preference information and the second consumption layer preference information respectively is further configured to apply at least one of a linear transformation function, a neural network, and a restricted Boltzmann machine. 15. The system of claim 11, wherein the control circuitry configured to determine, using the similarity model, the first sentimental similarity between the first media asset and the second media asset based on the received user preference details associated with the first data space is further configured to apply at least one of a Pearson's coefficient and cosine similarity. 16. The system of claim 11, wherein the control circuitry configured to determine, using the error model, the difference between the first sentimental similarity and the second sentimental similarity is further configured to: calculate a first quality value, wherein the first quality value is associated with the first sentimental similarity; calculate a second quality value, wherein the second quality value is associated with the second sentimental similarity; and determine the difference between the first sentimental similarity and the second sentimental similarity based on the first quality value and the second quality value. 17. The system of claim 16, wherein the first quality value is based on a number of users from the first data space who consumed the first media asset and the second media asset. 18. The system of claim 16, wherein the second quality value is based on a number of users from the second data space who expressly input their level of enjoyment with respect to the first media asset and the second media asset. 19. The system of claim 16, wherein the control circuitry configured to determine, using the error model, the difference between the first sentimental similarity and the second sentimental similarity is further configured to: determine a first particularity value of the first preference information; determine a second particularity value of the second preference information; and determine the difference between the first sentimental similarity and the second sentimental similarity based on the first particularity value and the second particularity value. 20. The system of claim 11, wherein the control circuitry configured to transform the first preference information and the second preference information to the first consumption layer preference information and the second consumption layer preference information is further configured to: determine, for the first media asset of the first plurality of media assets whether the first media asset is also within the second plurality of media assets; and in response to determining that the first media asset is also within the second plurality of media assets, generate a record for the first media asset, wherein the record comprises preference information that is retrieved from both the first data space and the second data space. 21-50. (canceled) |
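A compact sketch of the similarity and error comparison in the claims above: within each data space, the preference details for two media assets are compared (cosine similarity is one of the options named in claims 5 and 15), and the error model takes the pair-wise difference of the two per-space similarities (claims 2 and 12). The per-asset preference vectors below are invented for illustration, and the quality and particularity weighting of claims 6-9 is omitted.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length preference vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def sentimental_similarity(prefs, asset_a, asset_b):
    """Similarity of two assets based on one data space's per-user preference details."""
    return cosine_similarity(prefs[asset_a], prefs[asset_b])

# Hypothetical per-asset preference vectors (one entry per user) for each data space.
space1 = {"asset_a": [0.9, 0.1, 0.8], "asset_b": [0.8, 0.2, 0.7]}   # monitored interactions
space2 = {"asset_a": [5.0, 1.0, 4.0], "asset_b": [2.0, 5.0, 1.0]}   # expressly input ratings

s1 = sentimental_similarity(space1, "asset_a", "asset_b")
s2 = sentimental_similarity(space2, "asset_a", "asset_b")
error = abs(s1 - s2)   # pair-wise difference used to drive the adjustment in claim 2
print(round(s1, 3), round(s2, 3), round(error, 3))
```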
|
PENDING | Please predict whether this patent is acceptable.PATENT ABSTRACT: Methods and systems are described for processing media consumption information across multiple data spaces over a common media asset space. User preference information is received from two data spaces. User preference information from the first data space includes monitored user interactions of a first plurality of users with respect to a first plurality of media assets and user preference information from the second data space includes levels of enjoyment that a second plurality of users expressly input with respect to a second plurality of media assets. Both sets of preference information are transformed to respective consumption layer preference information and respective attributes indicative of users' preferences are determined. A first and second sentimental similarity values are determined for the first and second preference information respectively. The two sentimental similarity values are compared and an error value is calculated based on the comparison. |
|
G06N7005 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Methods and systems are described for processing media consumption information across multiple data spaces over a common media asset space. User preference information is received from two data spaces. User preference information from the first data space includes monitored user interactions of a first plurality of users with respect to a first plurality of media assets and user preference information from the second data space includes levels of enjoyment that a second plurality of users expressly input with respect to a second plurality of media assets. Both sets of preference information are transformed to respective consumption layer preference information and respective attributes indicative of users' preferences are determined. A first and second sentimental similarity values are determined for the first and second preference information respectively. The two sentimental similarity values are compared and an error value is calculated based on the comparison. |
|
A graph syntax validation system, method, or computer-readable medium that receives: (i) an input graph, (ii) transformation rules, and (iii) a minimal valid graph. The system/method/computer-readable medium transforms the input graph into the minimal valid graph using the transformation rules that are comprised of source patterns and target patterns. The system/method/computer-readable medium recurrently transforms the input graph until either the input graph has been reduced to the minimal valid graph indicating that the input graph uses a valid syntax, or until it is determined that one or more transformation rules do not match the input graph indicating that the input graph uses an invalid syntax. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for graph syntax validation comprising: receiving, by a processor, (i) an input graph that contains one or more input graph nodes, (ii) transformation rules, and (iii) a minimal valid graph; and transforming, by the processor in response to receiving the input graph, the input graph into the minimal valid graph using the transformation rules, including a source pattern and a target pattern, further comprising, in the transforming step, source pattern-matching by comparing the input graph with the source pattern of the transformation rules and determining whether the input graph matches the source pattern of one or more transformation rules; and rule-executing, by replacing the input graph nodes that are determined to match the source pattern of one or more transformation rules with the target patterns for the one or more transformation rules, wherein the transforming recurs until either the input graph has been determined to be reduced to the minimal valid graph indicating that the input graph uses a valid syntax, or until it is determined that none of the transformation rules match the input graph indicating that the input graph uses an invalid syntax. 2. The method of claim 1, further comprising result-visualizing, by the processor, the transformation result. 3. The method of claim 1, further comprising design-facilitating, by the processor, the minimal valid graph and the transformation rules. 4. The method of claim 3, wherein the design-facilitating includes the use of non-terminal symbol design tools to generate intermediate transformation results. 5. The method of claim 3, wherein the design-facilitating includes placeholder design tools for use in the source pattern and the target patterns. 6. 
A graph syntax validation system, comprising: a processor configured to receive (i) an input graph that contains one or more input graph nodes, (ii) transformation rules, and (iii) a minimal valid graph; transform, in response to receiving the input graph, the input graph into the minimal valid graph using the transformation rules, including a source pattern and a target pattern; source pattern-match by comparing the input graph with the source patterns of the transformation rules and determining whether the input graph matches the source pattern of one or more transformation rules; and rule-execute, by replacing the input graph nodes that are determined to match with the source pattern of one or more transformation rules with the target patterns for the one or more transformation rules, wherein the processor recurrently transforms until either the input graph has been determined to be reduced to the minimal valid graph indicating that the input graph uses a valid syntax, or until it is determined that none of the transformation rules match the input graph indicating that the input graph uses an invalid syntax. 7. The system of claim 6, wherein the processor is further configured to result-visualize the transformation result. 8. The system of claim 6, wherein the processor is further configured to design-facilitate the minimal valid graph and the transformation rules. 9. The system of claim 8, wherein the processor is further configured to design-facilitate using non-terminal symbol design tools to generate intermediate transformation results. 10. The system of claim 8, wherein the processor is further configured to design-facilitate using placeholder design tools in the source pattern and the target patterns. 11. A non-transitory computer readable medium configured to provide a method for graph syntax validation when executable instructions are executed, comprising instructions for: receiving (i) an input graph that contains one or more input graph nodes, (ii) transformation rules, and (iii) a minimal valid graph; and transforming, in response to receiving the input graph, the input graph into the minimal valid graph using the transformation rules, including a source pattern and a target pattern, further comprising, in the transforming, source pattern-matching by comparing the input graph with the source pattern of the transformation rules and determining whether the input graph matches the source pattern of one or more transformation rules; and rule-executing, by replacing the input graph nodes that are determined to match the source pattern of one or more transformation rules with the target patterns for the one or more transformation rules, wherein the transforming recurs until either the input graph has been determined to be reduced to the minimal valid graph indicating that the input graph uses a valid syntax, or until it is determined that none of the transformation rules match the input graph indicating that the input graph uses an invalid syntax. 12. The non-transitory computer-readable medium of claim 11, further comprising result-visualizing, by the processor, the transformation result. 13. The non-transitory computer-readable medium of claim 11, further comprising design-facilitating, by the processor, the minimal valid graph and the transformation rules. 14. The non-transitory computer-readable medium of claim 13, wherein the design-facilitating includes the use of non-terminal symbol design tools to generate intermediate transformation results. 15. 
The non-transitory computer-readable medium of claim 13, wherein the design-facilitating includes placeholder design tools for use in the source pattern and the target patterns. |
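The transform-until-minimal loop described in the claims above can be illustrated with a drastically simplified model in which the "graph" is just a flat list of node labels and source-pattern matching is subsequence matching; real graph pattern matching is more involved, so treat this purely as a sketch of the control flow (match a source pattern, replace it with the target pattern, repeat until the minimal valid graph is reached or no rule matches).

```python
def find_match(nodes, source):
    """Return the start index of the first occurrence of `source` in `nodes`, or -1."""
    n, m = len(nodes), len(source)
    for i in range(n - m + 1):
        if nodes[i:i + m] == source:
            return i
    return -1

def validate(nodes, rules, minimal_valid):
    """Apply rules until the graph equals the minimal valid graph (valid syntax)
    or no rule matches any more (invalid syntax). Rules are assumed to shrink
    the graph so that the loop terminates."""
    nodes = list(nodes)
    while nodes != minimal_valid:
        for source, target in rules:
            i = find_match(nodes, source)
            if i >= 0:
                nodes[i:i + len(source)] = target   # rule execution: replace source with target
                break
        else:
            return False, nodes    # no rule matched: invalid syntax
    return True, nodes             # reduced to the minimal valid graph

# Hypothetical rules: each maps a source pattern to a target pattern.
rules = [(["expr", "+", "expr"], ["expr"]), (["(", "expr", ")"], ["expr"])]
print(validate(["(", "expr", "+", "expr", ")"], rules, ["expr"]))   # valid
print(validate(["expr", "+", "+"], rules, ["expr"]))                # invalid
```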
|
ACCEPTED | Please predict whether this patent is acceptable.PATENT ABSTRACT: A graph syntax validation system, method, or computer-readable medium that receives: (i) an input graph, (ii) transformation rules, and (iii) a minimal valid graph. The system/method/computer-readable medium transforms the input graph into the minimal valid graph using the transformation rules that are comprised of source patterns and target patterns. The system/method/computer-readable medium recurrently transforms the input graph until either the input graph has been reduced to the minimal valid graph indicating that the input graph uses a valid syntax, or until it is determined that one or more transformation rules do not match the input graph indicating that the input graph uses an invalid syntax. |
|
G06N5047 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A graph syntax validation system, method, or computer-readable medium that receives: (i) an input graph, (ii) transformation rules, and (iii) a minimal valid graph. The system/method/computer-readable medium transforms the input graph into the minimal valid graph using the transformation rules that are comprised of source patterns and target patterns. The system/method/computer-readable medium recurrently transforms the input graph until either the input graph has been reduced to the minimal valid graph indicating that the input graph uses a valid syntax, or until it is determined that one or more transformation rules do not match the input graph indicating that the input graph uses an invalid syntax. |
|
A technique relates to configuring a superconducting router. The superconducting router is operated in a first mode. Ports are configured to be in reflection in the first mode in order to reflect a signal. The superconducting router is operated in a second mode. A given pair of the ports is connected together and in transmission in the second mode, such that the signal is permitted to pass between the given pair of the ports. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of configuring a superconducting router, the method comprising: operating the superconducting router in a first mode, wherein ports are configured to be in reflection in the first mode in order to reflect a signal; and operating the superconducting router in a second mode, wherein a given pair of the ports is connected together and in transmission in the second mode, such that the signal is permitted to pass between the given pair of the ports. 2. The method of claim 1, wherein the ports are in reflection in the first mode for a predefined frequency of the signal. 3. The method of claim 1, wherein the given pair of the ports are in transmission in the second mode for a predefined frequency. 4. The method of claim 1, wherein the ports are configured to be isolated from one another, such that any port is controllable to be separated from another port. 5. The method of claim 1, wherein the superconducting router comprises superconducting materials. 6. The method of claim 1, wherein the superconducting router comprises tunable filters associated with the ports, such that the tunable filters are controllable to operate in the first mode and the second mode. 7. The method of claim 6, wherein the tunable filters are configured to be operated as an open switch and a closed switch according to the first and second modes. 8. The method of claim 1, wherein in the second mode, the given pair of the ports is in transmission while other ports of the ports are in reflection for a predefined frequency of the signal. 9. The method of claim 1, wherein the signal is in a microwave domain. 10. The method of claim 1, wherein the given pair of the ports are time dependent, such that a selection of the given pair of the ports are configured to change according to a defined time scheme. 11. The method of claim 1, wherein the given pair of the ports are configured to be arbitrarily selected from the ports. 12. The method of claim 1, wherein the given pair of the ports are configured to be selected from the ports at a defined time. 13. The method of claim 1, wherein the superconducting router is a lossless microwave switch having superconducting materials. 14. A method of configuring a superconducting circulator, the method comprising: operating the superconducting circulator to receive a readout signal at an input port while an output port is in reflection, wherein the readout signal is to be transmitted through a common port to a quantum system, wherein the readout signal is configured to cause a reflected readout signal to resonate back from the quantum system; and operating the superconducting circulator to output the reflected readout signal at the output port while the input port is in reflection. 15. The method of claim 14, wherein a delay line delays transmission of the readout signal and the reflected readout signal. 16. 
The method of claim 15, further comprising providing a transition time to switch between operating the input port in transmission to operating the input port in reflection, wherein the delay line causes the transition time. 17. The method of claim 14, wherein the quantum system includes a readout resonator operatively connected to a qubit. 18. The method of claim 17, wherein the reflected readout signal includes quantum information of the qubit. 19. The method of claim 14, wherein a first tunable filter is connected to the input port, such that the first tunable filter permits the input port to operate in transmission or reflection; and wherein a second tunable filter is connected to the output port, such that the second tunable filter permits the output port to operate in transmission or reflection. 20. The method of claim 14, wherein the circulator is a lossless microwave switch having superconducting materials. 21. A superconducting router comprising: ports configured to operate in a first mode and a second mode, wherein in the first mode the ports are configured to be in reflection in order to reflect a signal; and a given pair of the ports configured to operate in the second mode, wherein in the second mode the given pair of the ports is connected together and in transmission, such that the signal is permitted to pass between the given pair of the ports. 22. The superconducting router of claim 21, wherein the ports are in reflection in the first mode for a predefined frequency of the signal; and wherein the given pair of the ports is in transmission in the second mode for the predefined frequency. 23. The superconducting router of claim 21, wherein tunable filters are associated with the ports, such that the tunable filters are controllable to operate in the first mode and the second mode. 24. A superconducting circulator comprising: an input port connected to a first tunable filter such that the input port is configured to operate in a first mode and a second mode; and an output port connected to a second tunable filter such that the output port is configured to operate in the first mode and the second mode, wherein in the first mode the input port is configured to receive a readout signal while the output port is in reflection, wherein the readout signal is to be transmitted through a common port to a quantum system, wherein the readout signal is configured to cause a reflected readout signal to resonate back from the quantum system, wherein in the second mode the output port is configured to output the reflected readout signal while the input port is in reflection. 25. A system comprising: a quantum system; and a superconducting microwave switch connected to the quantum system, wherein the superconducting microwave switch is configured to receive a readout signal at an input port, wherein the readout signal is to be transmitted through a common port to the quantum system, wherein the superconducting microwave switch is configured to output a reflected readout signal at an output port, the reflected readout signal being from the quantum system. |
|
PENDING | Please predict whether this patent is acceptable.PATENT ABSTRACT: A technique relates to configuring a superconducting router. The superconducting router is operated in a first mode. Ports are configured to be in reflection in the first mode in order to reflect a signal. The superconducting router is operated in a second mode. A given pair of the ports is connected together and in transmission in the second mode, such that the signal is permitted to pass between the given pair of the ports. |
|
G06N99002 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A technique relates to configuring a superconducting router. The superconducting router is operated in a first mode. Ports are configured to be in reflection in the first mode in order to reflect a signal. The superconducting router is operated in a second mode. A given pair of the ports is connected together and in transmission in the second mode, such that the signal is permitted to pass between the given pair of the ports. |
|
Recurrent neural networks are powerful tools for handling incomplete data problems in machine learning thanks to their significant generative capabilities. However, the computational demand for algorithms to work in real time applications requires specialized hardware and software solutions. We disclose a method for adding recurrent processing capabilities into a feedforward network without sacrificing much from computational efficiency. We assume a mixture model and generate samples of the last hidden layer according to the class decisions of the output layer, modify the hidden layer activity using the samples, and propagate to lower layers. For an incomplete data problem, the iterative procedure emulates feedforward-feedback loop, filling-in the missing hidden layer activity with meaningful representations. | Please help me write a proper abstract based on the patent claims. CLAIM: 1-5. (canceled) 6. A computer implemented method for recurrent data processing, comprising the steps of: computing activity of multiple layers of hidden layer nodes in a feed forward neural network, given an input data instance, forming memories of hidden layer activities, utilizing clustering and filtering methods, as a training phase in a recurrent processing, finding memories that are closest to the presented test data instance according to the class decision of the feedforward network, and imputing the test data hidden layer activity with computed closest memories in an iterative fashion, wherein the step of forming memories of hidden layer activities, utilizing clustering and filtering methods, as a training phase in a recurrent processing further comprises the substeps of: computing hidden layer activities of every training data instance, then low-pass filtering and stacking the hidden layer activities in a data structure, keeping a first and second hidden layer activity memory, indexed with the class label, forming both class specific and class independent duster centers as quantized memories of training data's second hidden layer activity, via k-means clustering, using each class data separately; or using all the data together depending on a choice of class specificity, keeping quantized second hidden layer memories, indexed with class labels or non-indexed, depending on the class specificity choice, training a cascade of classifiers for enabling multiple hypotheses generation of a network, via utilizing a subset of the data as the training data, keeping a classifier memory, indexed with the set of data used during training, wherein the step of finding memories that are closest to the presented test data instance according to the class decision of the feedforward network, and imputing the test data hidden layer activity with computed closest memories in an iterative fashion further comprises the substeps of determining the first, second and third class label choices of the neural network as multiple hypotheses, via a cascaded procedure utilizing a sequence of classifier decisions, computing a set of candidate samples for the second layer, that are closest (Euclidian) hidden layer memories to the test data's second hidden layer activity, using the multiple hypotheses class decisions of the network and a corresponding memory database; then assigning the second hidden layer sample as one of the candidate hidden layer memories, via max or averaging operations depending on the choice of multi-hypotheses competition, merging the second hidden layer sample with the test data hidden layer activity via weighted 
averaging operation, creating an updated second hidden layer activity, using the updated second hidden layer activity to compute the closest (Euclidian) first hidden layer memory, and assigning as the first hidden layer sample, merging the first hidden layer sample with the test data first hidden layer activity via weighted averaging operation, creating an updated first hidden layer activity, computing the feedforward second hidden layer activity from updated first hidden layer activity, and merging this feedforward second hidden layer activity with updated second hidden layer activity, via weighted averaging operation, repeating these steps for multiple iterations starting from the step of determining the first, second and third class label choices of the neural network as multiple hypotheses, via a cascaded procedure utilizing a sequence of classifier decisions, and using the output of step of computing the feedforward second hidden layer activity from updated first hidden layer activity, and merging this feedforward second hidden layer activity with updated second hidden layer activity, via weighted averaging operation in the beginning of the next iteration. 7. A computer implemented method according to claim 6 for enabling a feedforward network to mimic a recurrent neural network via making a class decision at the output layer of feedforward neural network, and selecting an appropriate memory to estimate selected model's (class decision) hidden layer activities, then inserting the selected memory to the hidden layer activity as if it is a feedback from a higher layer network in classical recurrent networks. |
|
PENDING | Please predict whether this patent is acceptable.PATENT ABSTRACT: Recurrent neural networks are powerful tools for handling incomplete data problems in machine learning thanks to their significant generative capabilities. However, the computational demand for algorithms to work in real time applications requires specialized hardware and software solutions. We disclose a method for adding recurrent processing capabilities into a feedforward network without sacrificing much from computational efficiency. We assume a mixture model and generate samples of the last hidden layer according to the class decisions of the output layer, modify the hidden layer activity using the samples, and propagate to lower layers. For an incomplete data problem, the iterative procedure emulates feedforward-feedback loop, filling-in the missing hidden layer activity with meaningful representations. |
|
G06N30454 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Recurrent neural networks are powerful tools for handling incomplete data problems in machine learning thanks to their significant generative capabilities. However, the computational demand for algorithms to work in real time applications requires specialized hardware and software solutions. We disclose a method for adding recurrent processing capabilities into a feedforward network without sacrificing much from computational efficiency. We assume a mixture model and generate samples of the last hidden layer according to the class decisions of the output layer, modify the hidden layer activity using the samples, and propagate to lower layers. For an incomplete data problem, the iterative procedure emulates feedforward-feedback loop, filling-in the missing hidden layer activity with meaningful representations. |
|
A method that improves the training of predictive models. Better trained predictive models make better predictions, and can classify transactions with reduced levels of false positives and false negative. Included is an apparatus for executing a data clean-up algorithm that harmonizes a wide range of real world supervised and unsupervised training data into a single, error-free, uniformly formatted record file that has every field coherent and well populated with information. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method that improves the training of predictive models, comprising: converting and transforming a variety of inconsistent and incoherent supervised and unsupervised training data for predictive models received by a network server as electronic data files, and storing that in a computer data storage mechanism, and then into another single, error-free, uniformly formatted record file stored in the computer data storage mechanism with an apparatus for executing a data integrity analysis algorithm that harmonizes a range of supervised and unsupervised training data into flat-data records in which every field of every record file is modified to be coherent and well-populated with information; comparing and correcting any data values in each data field in the inconsistent and incoherent supervised and unsupervised training data according to a user-service consumer preference and a predefined data dictionary of valid data values with an apparatus for executing an algorithm that substitutes data values in the data fields of incoming supervised and unsupervised training data with at least one value representing a minimum, a maximum, a null, an average, and a default; discerning the context of any text included in the inconsistent and incoherent supervised and unsupervised training data with an apparatus for executing a contextual dictionary algorithm that employs a thesaurus of alternative contexts of ambiguous words for find a common context denominator, and to then record the context determined into the computer data storage mechanism for later access by a predictive model; cleaning up inconsistent, missing, and illegal data in each data field by removal or reconstitution with an apparatus for executing an algorithm for cleaning up raw data in stored data records, field-by-field, record-by-record in which some types of fields are restricted in what is legal or allowed, and includes fetching raw data from the computer data storage mechanism and testing each field if a data value reported is numeric or symbolic, and if numeric, a data dictionary is used to see if such data value is previously listed as valid, and if symbolic, using another data dictionary to see if such data value is listed there as valid; cleaning up inconsistent, missing, and illegal data in each data field by removal or reconstitution with an apparatus for executing a Smith-Waterman algorithm for a local-sequence alignment and to determine if there are any similar regions between two strings or sequences, and in which a consistent, coherent terminology is then enforceable in each data field without data loss, and in which the Smith-Waterman algorithm compares segments of all possible lengths and optimizes a similarity measure without looking at any total sequence; cleaning up inconsistent, missing, and illegal data in each data field by removal or reconstitution with an apparatus for replacing a numeric value, wherein a numeric value to use as a replacement depends on any flags 
or preferences that were set to use a default, the average, a minimum, a maximum, or a null; sampling cleaned, raw-data from the flat-data records in the computer data storage mechanism with an apparatus for executing an algorithm that tests if data are supervised, and if so, that creates a plurality of individual data sets for each class with a stratified selection as needed, and then testing if a selected class is abnormal or uncharacteristic, and if not, down-sampling and producing sampled records of the classes and splitting any remaining data into separate training sets, separate test sets, and separate blind sets all then stored in the computer data storage mechanism for later use in subsequent steps to train a predictive model; if the test for each record of each class in supervised data is abnormal or uncharacteristic, then skipping a down-sampling for that instance; and if in a previous step the cleaned, raw-data from the flat-data records in the computer data storage mechanism was determined by the apparatus for executing an algorithm that tests if data are supervised are, in fact, unsupervised, then down-sampling all records and splitting a remaining a sampled record data into a separate a training set, a separate test set, and a separate blind set for later use in subsequent steps to train a predictive model. 2. A method that improves the training of predictive models, comprising: converting and transforming a variety of inconsistent and incoherent supervised and unsupervised training data for predictive models received by a network server as electronic data files, and storing that in a computer data storage mechanism, and then into another single, error-free, uniformly formatted record file stored in the computer data storage mechanism with an apparatus for executing a data integrity analysis algorithm that harmonizes a range of supervised and unsupervised training data into flat-data records in which every field of every record file is modified to be coherent and well-populated with information. 3. The method of claim 2, further comprising: comparing and correcting any data values in each data field in the inconsistent and incoherent supervised and unsupervised training data according to a user-service consumer preference and a predefined data dictionary of valid data values with an apparatus for executing an algorithm that substitutes data values in the data fields of incoming supervised and unsupervised training data with at least one value representing a minimum, a maximum, a null, an average, and a default. 4. The method of claim 2, further comprising: discerning the context of any text included in the inconsistent and incoherent supervised and unsupervised training data with an apparatus for executing a contextual dictionary algorithm that employs a thesaurus of alternative contexts of ambiguous words for find a common context denominator, and to then record the context determined into the computer data storage mechanism for later access by a predictive model. 5. 
The method of claim 2, further comprising: cleaning up inconsistent, missing, and illegal data in each data field by removal or reconstitution with an apparatus for executing an algorithm for cleaning up raw data in stored data records, field-by-field, record-by-record in which some types of fields are restricted in what is legal or allowed, and includes fetching raw data from the computer data storage mechanism and testing each field if a data value reported is numeric or symbolic, and if numeric, a data dictionary is used to see if such data value is previously listed as valid, and if symbolic, using another data dictionary to see if such data value is listed there as valid. 6. The method of claim 2, further comprising: cleaning up inconsistent, missing, and illegal data in each data field by removal or reconstitution with an apparatus for executing a Smith-Waterman algorithm for a local-sequence alignment and to determine if there are any similar regions between two strings or sequences, and in which a consistent, coherent terminology is then enforceable in each data field without data loss, and in which the Smith-Waterman algorithm compares segments of all possible lengths and optimizes a similarity measure without looking at any total sequence. 7. The method of claim 2, further comprising: cleaning up inconsistent, missing, and illegal data in each data field by removal or reconstitution with an apparatus for replacing a numeric value, wherein a numeric value to use as a replacement depends on any flags or preferences that were set to use a default, the average, a minimum, a maximum, or a null. 8. The method of claim 2, further comprising: sampling cleaned, raw-data from the flat-data records in the computer data storage mechanism with an apparatus for executing an algorithm that tests if data are supervised, and if so, that creates a plurality of individual data sets for each class with a stratified selection as needed, and then testing if a selected class is abnormal or uncharacteristic, and if not, down-sampling and producing sampled records of the classes and splitting any remaining data into separate training sets, separate test sets, and separate blind sets all then stored in the computer data storage mechanism for later use in subsequent steps to train a predictive model; and if the test for each record of each class in supervised data is abnormal or uncharacteristic, then skipping a down-sampling for that instance. 9. The method of claim 8, further comprising: if in a previous step the cleaned, raw-data from the flat-data records in the computer data storage mechanism was determined by the apparatus for executing an algorithm that tests if data are supervised are, in fact, unsupervised, then down-sampling all records and splitting a remaining a sampled record data into a separate a training set, a separate test set, and a separate blind set for later use in subsequent steps to train a predictive model. |
|
PENDING | Please predict whether this patent is acceptable.PATENT ABSTRACT: A method that improves the training of predictive models. Better trained predictive models make better predictions, and can classify transactions with reduced levels of false positives and false negative. Included is an apparatus for executing a data clean-up algorithm that harmonizes a wide range of real world supervised and unsupervised training data into a single, error-free, uniformly formatted record file that has every field coherent and well populated with information. |
|
G06N99005 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A method that improves the training of predictive models. Better trained predictive models make better predictions, and can classify transactions with reduced levels of false positives and false negative. Included is an apparatus for executing a data clean-up algorithm that harmonizes a wide range of real world supervised and unsupervised training data into a single, error-free, uniformly formatted record file that has every field coherent and well populated with information. |
|
One embodiment provides a system comprising a memory device for maintaining deterministic neural data relating to a digital neuron and a logic circuit for deterministic neural computation and stochastic neural computation. Deterministic neural computation comprises processing a neuronal state of the neuron based on the deterministic neural data maintained. Stochastic neural computation comprises generating stochastic neural data relating to the neuron and processing the neuronal state of the neuron based on the stochastic neural data generated. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method comprising: maintaining neural data for an electronic neuron; receiving an incoming neuronal firing event targeting the neuron; in response to the incoming neuronal firing event received, updating a membrane potential variable of the neuron based on the neural data; and generating an outgoing neuronal firing event in response to the neuron spiking, wherein the neuron spikes at an average firing rate proportional to a relative magnitude of the membrane potential variable. 2. The method of claim 1, further comprising: in response to the neuron spiking, re-setting the membrane potential variable to one of the following values: a zero value, a stored reset value, a computed reset value based on a threshold value, or a non-reset value. 3. The method of claim 1, wherein the neural data comprises at least one of the following: a leak weight for the neuron, a synaptic weight, or a threshold value for the membrane potential variable. 4. The method of claim 1, wherein the neuron spikes in response to the membrane potential variable meeting or exceeding a threshold value for the membrane potential variable. 5. A system comprising a computer processor, a computer-readable hardware storage medium, and program code embodied with the computer-readable hardware storage medium for execution by the computer processor to implement a method comprising: maintaining neural data for an electronic neuron; receiving an incoming neuronal firing event targeting the neuron; in response to the incoming neuronal firing event received, updating a membrane potential variable of the neuron based on the neural data; and generating an outgoing neuronal firing event in response to the neuron spiking, wherein the neuron spikes at an average firing rate proportional to a relative magnitude of the membrane potential variable. 6. The system of claim 5, further comprising: in response to the neuron spiking, re-setting the membrane potential variable to one of the following values: a zero value, a stored reset value, a computed reset value based on a threshold value, or a non-reset value. 7. The system of claim 5, wherein the neural data comprises at least one of the following: a leak weight for the neuron, a synaptic weight, or a threshold value for the membrane potential variable. 8. The system of claim 5, wherein the neuron spikes in response to the membrane potential variable meeting or exceeding a threshold value for the membrane potential variable. 9. 
A computer program product comprising a computer-readable hardware storage device having program code embodied therewith, the program code being executable by a computer to implement a method comprising: maintaining neural data for an electronic neuron; receiving an incoming neuronal firing event targeting the neuron; in response to the incoming neuronal firing event received, updating a membrane potential variable of the neuron based on the neural data; and generating an outgoing neuronal firing event in response to the neuron spiking, wherein the neuron spikes at an average firing rate proportional to a relative magnitude of the membrane potential variable. 10. The computer program product of claim 9, further comprising: in response to the neuron spiking, re-setting the membrane potential variable to one of the following values: a zero value, a stored reset value, a computed reset value based on a threshold value, or a non-reset value. 11. The computer program product of claim 9, wherein the neural data comprises at least one of the following: a leak weight for the neuron, a synaptic weight, or a threshold value for the membrane potential variable. 12. The computer program product of claim 9, wherein the neuron spikes in response to the membrane potential variable meeting or exceeding a threshold value for the membrane potential variable. |
|
ACCEPTED | Please predict whether this patent is acceptable.PATENT ABSTRACT: One embodiment provides a system comprising a memory device for maintaining deterministic neural data relating to a digital neuron and a logic circuit for deterministic neural computation and stochastic neural computation. Deterministic neural computation comprises processing a neuronal state of the neuron based on the deterministic neural data maintained. Stochastic neural computation comprises generating stochastic neural data relating to the neuron and processing the neuronal state of the neuron based on the stochastic neural data generated. |
|
G06N3063 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: One embodiment provides a system comprising a memory device for maintaining deterministic neural data relating to a digital neuron and a logic circuit for deterministic neural computation and stochastic neural computation. Deterministic neural computation comprises processing a neuronal state of the neuron based on the deterministic neural data maintained. Stochastic neural computation comprises generating stochastic neural data relating to the neuron and processing the neuronal state of the neuron based on the stochastic neural data generated. |
|
Embodiments are directed to a two-terminal resistive processing unit (RPU) having a first terminal, a second terminal and an active region. The active region effects a non-linear change in a conduction state of the active region based on at least one first encoded signal applied to the first terminal and at least one second encoded signal applied to the second terminal. The active region is configured to locally perform a data storage operation of a training methodology based at least in part on the non-linear change in the conduction state. The active region is further configured to locally perform a data processing operation of the training methodology based at least in part on the non-linear change in the conduction state. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A two-terminal resistive processing unit (RPU) comprising: a first terminal; a second terminal; and an active region having a conduction state; wherein the conduction state identifies a weight of a training methodology applied to the RPU; wherein the active region is configured to locally perform a data storage operation of the training methodology; and wherein the active region is further configured to locally perform a data processing operation of the training methodology. 2. The two-terminal RPU of claim 1, wherein the data storage operation comprises a change in the conduction state that is based at least in part on a result of the data processing operation. 3. The two-terminal RPU of claim 2, wherein the change in the conduction state comprises a non-linear change based on at least one first encoded signal applied to the first terminal and at least one second encoded signal applied to the second terminal. 4. The two-terminal RPU of claim 3, wherein: the active region is further configured to locally perform the data storage operation of the training methodology based at least in part on the non-linear change in the conduction state; and the active region is further configured to locally perform the data processing operation of the training methodology based at least in part on the non-linear change in the conduction state. 5. The two-terminal RPU of claim 1, wherein the training methodology comprises at least one of: an online neural network training; a matrix inversion; and a matrix decomposition. 6. A two-terminal resistive processing unit (RPU) comprising: a first terminal; a second terminal; and an active region having a conduction state; wherein the active region is configured to effect a non-linear change in the conduction state based on at least one first encoded signal applied to the first terminal and at least one second encoded signal applied to the second terminal; wherein the active region is further configured to locally perform a data storage operation of a training methodology based at least in part on the non-linear change in the conduction state; and wherein the active region is further configured to locally perform a data processing operation of the training methodology based at least in part on the non-linear change in the conduction state. 7. The two-terminal RPU of claim 6, wherein the encoding of the at least one first encoded signal and the at least one second encoded signal comprises a stochastic sequence of pulses. 8. The two-terminal RPU of claim 6, wherein the encoding of the at least one first encoded signal and the at least one second encoded signal comprise s a magnitude modulation. 9. 
The two-terminal RPU of claim 6, wherein the non-linear change comprises a rectifying non-linear change or a saturating non-linear change. 10. The two-terminal RPU of claim 6, wherein the non-linear change comprises an exponential non-linear change. 11. A trainable crossbar array comprising: a set of conductive row wires; a set of conductive column wires configured to form a plurality of crosspoints at intersections between the set of conductive row wires and the set of conductive column wires; and a two-terminal resistive processing unit (RPU) at each of the plurality of crosspoints; wherein the RPU is configured to locally perform a data storage operation of a training methodology applied to the trainable crossbar array; wherein the RPU is further configured to locally perform a data processing operation of the training methodology. 12. The array of claim 11, wherein: the two-terminal RPU comprises a first terminal, a second terminal and an active region having a conduction state; and the conduction state identifies a weight of the training methodology applied to the RPU. 13. The array of claim 12, wherein: the data storage operation comprises a change in the conduction state that is based at least in part on a result of the data processing operation; and the change in the conduction state comprises a non-linear change based on at least one first encoded signal applied to the first terminal and at least one second encoded signal applied to the second terminal. 14. The array of claim 13, wherein: the active region is further configured to locally perform the data storage operation of the training methodology based at least in part on the non-linear change in the conduction state; and the active region is further configured to locally perform the data processing operation of the training methodology based at least in part on the non-linear change in the conduction state. 15. The array of claim 11, wherein the training methodology comprises at least one of: an online neural network training; a matrix inversion; and a matrix decomposition. 16-25. (canceled) |
|
PENDING | Please predict whether this patent is acceptable.PATENT ABSTRACT: Embodiments are directed to a two-terminal resistive processing unit (RPU) having a first terminal, a second terminal and an active region. The active region effects a non-linear change in a conduction state of the active region based on at least one first encoded signal applied to the first terminal and at least one second encoded signal applied to the second terminal. The active region is configured to locally perform a data storage operation of a training methodology based at least in part on the non-linear change in the conduction state. The active region is further configured to locally perform a data processing operation of the training methodology based at least in part on the non-linear change in the conduction state. |
|
G06N3088 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Embodiments are directed to a two-terminal resistive processing unit (RPU) having a first terminal, a second terminal and an active region. The active region effects a non-linear change in a conduction state of the active region based on at least one first encoded signal applied to the first terminal and at least one second encoded signal applied to the second terminal. The active region is configured to locally perform a data storage operation of a training methodology based at least in part on the non-linear change in the conduction state. The active region is further configured to locally perform a data processing operation of the training methodology based at least in part on the non-linear change in the conduction state. |
|
A system and method configured to provide persistent evidence based multi-ontology context dependent decision support, eligibility assessment and feature scoring. Decisions are achieved via a probabilistic functional extension of both potentiality and plausibility towards nouns in all data forms. Plausibility refers to the full set of values garnered by the evidence accumulation process while potentiality is a mechanism to set the various match threshold values. The thresholds define acceptable confidence levels for decision-making and wherein both plausibility and potentiality are implemented through statistical applications which model and estimate the distribution of random vectors by estimating margins and copula separately from all data types. Evidence is filtered by margins and copula on a persistent basis from the scoring of newly harvested content and refined results are computed on the basis of partial matching of feature vector elements for separate and distinct feature weightings associated with the given entity and each of the reference entities within the compressed copula. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A decision support system configured to provide single source and centralized decision making, the decision support system comprising: one or more processors configured to execute computer program modules, the computer program modules comprising: a content harvesting module configured to receive persistent content; a plausibility scoring module configured to perform hypothesis validation and refutation functions and generate a plausibility scoring value; a potentiality scoring module configured to set confidence thresholds for decision making and generate a potentiality scoring value; and a decision determination module configured to adjudicate the potentiality scoring value and the plausibility scoring value as against threshold values and render a decision based thereon. 2. The decision support system of claim 1 wherein said persistent content comprises nouns. 3. The decision support system of claim 1 wherein said persistent content comprises noun based phrases. 4. The decision support system of claim 1 wherein said plausibility scoring value is determined based upon a confidence level related to whether or not said content includes sufficient information to identify said content as well as said content's association with a specified ontology. 5. The decision support system of claim 1 wherein said content is represented as feature vector elements. 6. The decision support system of claim 1 further comprising a reference data storage module, said reference data storage module storing reference data which is matched as against said persistent content. 7. The decision support system of claim 6 wherein said reference data is stored in the form of feature vector elements. 8. The decision support system of claim 1 wherein said plausibility scoring module generates said plausibility scoring value by employing at least one copula function to identify and model applicable dependence structures. 9. The decision support system of claim 1 wherein said potentiality scoring module generates said potentiality scoring value by employing at least one copula function to identify and model applicable dependence structures. 10. The decision support system of claim 1 wherein said decision represents a patient eligibility determination. 11. 
A computer-implemented method of providing decision support, the method being implemented in a computer system comprising one or more processors configured to execute computer program modules, the method comprising: receiving persistent content; performing hypothesis validation and refutation functions and generating a plausibility scoring value; setting confidence thresholds for decision making and generating a potentiality scoring value; and adjudicating the potentiality scoring value and the plausibility scoring value as against threshold values and rendering a decision based thereon. 12. The method of claim 11 wherein said persistent content comprises nouns. 13. The method of claim 11 wherein said persistent content comprises noun based phrases. 14. The method of claim 11 further comprising the step of determining said plausibility scoring value based upon a confidence level related to whether or not said content includes sufficient information to identify said content as well as said content's association with a specified ontology. 15. The method of claim 11 wherein said content is represented as feature vector elements. 16. The method of claim 11 further comprising the step of storing reference data which is matched as against said persistent content. 17. The method of claim 16 wherein said reference data is stored in the form of feature vector elements. 18. The method of claim 11 wherein said plausibility scoring module generates said plausibility scoring value by employing at least one copula function to identify and model applicable dependence structures. 19. The method of claim 11 wherein said potentiality scoring module generates said potentiality scoring value by employing at least one copula function to identify and model applicable dependence structures. 20. The method of claim 11 wherein said decision represents a patient eligibility determination. |
|
REJECTED | Please predict whether this patent is acceptable.PATENT ABSTRACT: A system and method configured to provide persistent evidence based multi-ontology context dependent decision support, eligibility assessment and feature scoring. Decisions are achieved via a probabilistic functional extension of both potentiality and plausibility towards nouns in all data forms. Plausibility refers to the full set of values garnered by the evidence accumulation process while potentiality is a mechanism to set the various match threshold values. The thresholds define acceptable confidence levels for decision-making and wherein both plausibility and potentiality are implemented through statistical applications which model and estimate the distribution of random vectors by estimating margins and copula separately from all data types. Evidence is filtered by margins and copula on a persistent basis from the scoring of newly harvested content and refined results are computed on the basis of partial matching of feature vector elements for separate and distinct feature weightings associated with the given entity and each of the reference entities within the compressed copula. |
|
G06N502 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A system and method configured to provide persistent evidence based multi-ontology context dependent decision support, eligibility assessment and feature scoring. Decisions are achieved via a probabilistic functional extension of both potentiality and plausibility towards nouns in all data forms. Plausibility refers to the full set of values garnered by the evidence accumulation process while potentiality is a mechanism to set the various match threshold values. The thresholds define acceptable confidence levels for decision-making and wherein both plausibility and potentiality are implemented through statistical applications which model and estimate the distribution of random vectors by estimating margins and copula separately from all data types. Evidence is filtered by margins and copula on a persistent basis from the scoring of newly harvested content and refined results are computed on the basis of partial matching of feature vector elements for separate and distinct feature weightings associated with the given entity and each of the reference entities within the compressed copula. |
|
A computer-implemented method of determining an approximated value of a parameter in a first domain is described. The parameter is dependent on one or more variables which vary in a second domain, and the parameter is determined by a function which relates sets of values of the one or more variables in the second domain to corresponding values in the first domain. The method is implemented on a computer system including a processor, and the method comprises: determining a plurality of anchor points in the second domain, wherein each anchor point comprises a set of values of the one or more variables in the second domain; evaluating, at each anchor point, the function to generate corresponding values of the parameter in the first domain; generating an approximation function to the function by fitting a series of orthogonal functions or an approximation to a series of orthogonal functions to the corresponding values of the parameter in the first domain; and using the approximation function to generate the approximated value of the parameter in the first domain. | Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-implemented method of determining an approximated value of a parameter in a first domain, the parameter being dependent on one or more variables which vary in a second domain, and the parameter being determined by a function which relates sets of values of the one or more variables in the second domain to corresponding values in the first domain, the method being implemented on a computer system including a processor, and the method comprising: determining a plurality of anchor points in the second domain, wherein each anchor point comprises a set of values of the one or more variables in the second domain; evaluating, at each anchor point, the function to generate corresponding values of the parameter in the first domain; generating an approximation function to the function by fitting a series of orthogonal functions or an approximation to a series of orthogonal functions to the corresponding values of the parameter in the first domain; and using the approximation function to generate the approximated value of the parameter in the first domain. 2. The computer-implemented method of claim 1, further comprising: transforming the function into an intermediary function before the generating step, wherein the intermediary function is a linear transformation of the function, and wherein the generating step comprises approximating the intermediary function. 3. The computer-implemented method of claim 2, wherein the transforming step comprises linearly transforming the function into a predetermined range. 4. 
The computer-implemented method of claim 1, wherein the series of orthogonal functions is one of the group comprising: sines, cosines, sines and cosines, Bessel functions, Gegenbauer polynomials, Hermite polynomials, Laguerre polynomials, Chebyshev polynomials, Jacobi polynomials, Spherical harmonics, Walsh functions, Legendre polynomials, Zernike polynomials, Wilson polynomials, Meixner-Pollaczek polynomials, continuous Hahn polynomials, continuous dual Hahn polynomials, a classical polynomials described by the Askey scheme, Askey-Wilson polynomials, Racah polynomials, dual Hahn polynomials, Meixner polynomials, piecewise constant interpolants, linear interpolants, polynomial interpolants, gaussian process based interpolants, spline interpolants, barycentric interpolants, Krawtchouk polynomials, Charlier polynomials, sieved ultraspherical polynomials, sieved Jacobi polynomials, sieved Pollaczek polynomials, Rational interpolants, Trigonometric interpolants, Hermite interpolants, Cubic interpolants, and Rogers-Szegö polynomials. 5. The computer-implemented method of claim 1, wherein the series of orthogonal functions is a series of orthogonal polynomials. 6. The computer-implemented method of claim 1, wherein the approximation to a series of orthogonal functions is an approximation to a series of orthogonal polynomials. 7. The computer-implemented method of claim 1, further comprising: selecting a series of orthogonal functions or an approximation to a series of orthogonal functions before the generating step, and wherein the generating step comprises using the selected series of orthogonal functions or the approximation to the series of orthogonal functions. 8. The computer-implemented method of claim 1, wherein the number of anchor points, N_E, is so that: N_S·T / (N_E·T + N_S·t) > 1, wherein N_S is the number of scenarios, t is the time taken to run the approximation function, and T is the time taken to run the function. 9. The computer-implemented method of claim 1, further comprising: holding the values of all but one variable in the set to be constant in the evaluating step. 10. The computer-implemented method of claim 1, further comprising using the approximated values of the parameter to generate standard metrics. 11. The computer-implemented method of claim 1, wherein the values of the variables vary stochastically. 12. The computer-implemented method of claim 1, wherein the using step comprises using the approximation function a plurality of times to generate at least one scenario or one time step, each scenario or each time step comprising a plurality of approximated values. 13. The computer-implemented method of claim 12, wherein the number of scenarios or time steps is significantly greater than the number of anchor points. 14. The computer-implemented method of claim 1, wherein an output of the function is a parameter of a financial product. 15. The computer-implemented method of claim 14, wherein the financial product is a financial derivative including one or more of a group comprising: an option pricing function, a swap pricing function and a combination thereof. 16. 
The computer-implemented method of claim 1, wherein the function is one of the group comprising: a Black-Scholes model, a Longstaff-Schwartz model, a binomial options pricing model, a Black model, a Garman-Kohlhagen model, a Vanna-Volga method, a Chen model, a Merton's model, a Vasicek model, a Rendleman-Bartter model, a Cox-Ingersoll-Ross model, a Ho-Lee model, a Hull-White model, a Black-Derman-Toy model, a Black-Karasinski model, a Heston model, a Monte Carlo based pricing model, a binomial pricing model, a trinomial pricing model, a tree based pricing model, a finite-difference based pricing model, a Heath-Jarrow-Morton model, a variance gamma model, a Fuzzy pay-off method, a Single-index model, a Chepakovich valuation model, a Markov switching multifractal, a Datar Mathews method, and a Kalotay-Williams-Fabozzi model. 17. The computer-implemented method of claim 1, wherein the anchor points are the zeros of each of orthogonal function, or a subset of the zeros. 18. The computer-implemented method of claim 17, wherein the approximating function is generated using an interpolation scheme. 19. The computer-implemented method of claim 18, wherein the interpolation scheme is one from the group comprising: piecewise constant interpolants, linear interpolants, polynomial interpolants, gaussian process based interpolants, spline interpolants, barycentric interpolants, Rational interpolants, Trigonometric interpolants, Hermite interpolants and Cubic interpolants. 20. The computer-implemented method of claim 1, wherein the anchor points are integration points of a numerical integration scheme. 21. The computer-implemented method of claim 20, wherein the numerical integration scheme is one from the group comprising: Newton-Cotes methods, a trapezoidal method, a Simpson's method, a Boole's method, a Romberg integration method, Gaussian quadrature methods, Chenshaw-Curtis methods, a Fejer method, a Gaus-Kronrod method, Fourier Transform methods, an Adaptive quadrature method, a Richardson extrapolation, a Monte Carlo and Quasi Monte Carlo method, a Markov chain Monte Carlo, a Metropolis Hastings algorithm, a Gibbs Sampling, and Fast Fourier Transform methods. 22. The computer-implemented method of claim 1, wherein the speed of the calculation is increased with no loss of accuracy in the standard metrics, and/or the accuracy of the standard metrics increase with no decrease in the speed of the calculation when using the approximation function. 23. The computer-implemented method of claim 1, wherein there is a difference between the approximated value of the parameter and the parameter in the 2nd significant figure when compared at the same point in the first domain. 24. The computer-implemented method of claim 1, wherein there is a difference between the approximated value of the parameter and the parameter in between the 4th and the 6th significant figure when compared at the same point in the first domain. 25. The computer-implemented method of claim 1, wherein there is a difference between the approximated value of the parameter and the parameter in the 15th significant figure when compared at the same point in the first domain. 26. A financial derivative comprising a parameter, wherein a value of the parameter is determined using the computer-implemented method of claim 1. 27. 
A computer system comprising a processor configured to determine an approximated value of a parameter in a first domain, the parameter being dependent on one or more variables which vary in a second domain, and the parameter being determined by a function which relates sets of values of the one or more variables in the second domain to corresponding values in the first domain, the computer system comprising: a determination module arranged to determine a plurality of anchor points in the second domain, wherein each anchor point comprises a set of values of the one or more variables in the second domain; an evaluation module arranged to evaluate, at each anchor point, the function to generate corresponding values of the parameter in the first domain; a generation module arranged to generate an approximation function to the function by fitting a series of orthogonal functions or an approximation to a series of orthogonal functions to the corresponding values of the parameter in the first domain; and an approximation module arranged to use the approximation function to generate the approximated value of the parameter in the first domain. 28. A computer-implemented method of determining an approximated value of a parameter in a first domain, the parameter being dependent on one or more variables which vary in a second domain, and the parameter being determined by a function which relates sets of values of the one or more variables in the second domain to corresponding values in the first domain, the method using a series of orthogonal functions or an approximation to a series of orthogonal functions to approximate the function. |
|
PENDING | Please predict whether this patent is acceptable.PATENT ABSTRACT: A computer-implemented method of determining an approximated value of a parameter in a first domain is described. The parameter is dependent on one or more variables which vary in a second domain, and the parameter is determined by a function which relates sets of values of the one or more variables in the second domain to corresponding values in the first domain. The method is implemented on a computer system including a processor, and the method comprises: determining a plurality of anchor points in the second domain, wherein each anchor point comprises a set of values of the one or more variables in the second domain; evaluating, at each anchor point, the function to generate corresponding values of the parameter in the first domain; generating an approximation function to the function by fitting a series of orthogonal functions or an approximation to a series of orthogonal functions to the corresponding values of the parameter in the first domain; and using the approximation function to generate the approximated value of the parameter in the first domain. |
|
G06N708 | Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A computer-implemented method of determining an approximated value of a parameter in a first domain is described. The parameter is dependent on one or more variables which vary in a second domain, and the parameter is determined by a function which relates sets of values of the one or more variables in the second domain to corresponding values in the first domain. The method is implemented on a computer system including a processor, and the method comprises: determining a plurality of anchor points in the second domain, wherein each anchor point comprises a set of values of the one or more variables in the second domain; evaluating, at each anchor point, the function to generate corresponding values of the parameter in the first domain; generating an approximation function to the function by fitting a series of orthogonal functions or an approximation to a series of orthogonal functions to the corresponding values of the parameter in the first domain; and using the approximation function to generate the approximated value of the parameter in the first domain. |