Dataset columns: output (string, lengths 7 to 3.46k); input (string, 1 class); instruction (string, lengths 129 to 114k).
A deep neural network to which data category information is added is established locally, to-be-identified data is input to an input layer of the deep neural network generated based on the foregoing data category information, and information of a category to which the to-be-identified data belongs is acquired, where the information of the category is output by an output layer of the deep neural network. A deep neural network is established based on data category information, such that category information of to-be-identified data is conveniently and rapidly obtained using the deep neural network, thereby implementing a category identification function of the deep neural network, and facilitating discovery of an underlying law of the to-be-identified data according to the category information of the to-be-identified data.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A data category identification method based on a deep neural network, comprising: establishing an initial deep neural network; generating a linear category analysis function after data category information is added to a locally saved initial linear category analysis function according to an input training sample vector set; acquiring an optimization function of the initial deep neural network according to a locally saved unsupervised auto-encoding model optimization function and the linear category analysis function; acquiring a parameter of the initial deep neural network according to the optimization function of the initial deep neural network; establishing the deep neural network according to a locally saved classification neural network, the initial deep neural network, and the parameter of the initial deep neural network, wherein the deep neural network has a multi-layer network structure comprising at least an input layer and an output layer; inputting to-be-identified data to the input layer of the deep neural network; and acquiring information of a category to which the to-be-identified data belongs, wherein the information of the category is output by the output layer of the deep neural network. 2. The method according to claim 1, wherein generating the linear category analysis function after data category information is added to the locally saved initial linear category analysis function according to the input training sample vector set comprises: performing relaxation processing on the initial linear category analysis function using a relaxation algorithm; performing normalization processing on the input training sample vector set to generate a normalized training sample vector set; and substituting the normalized training sample vector set into the initial linear category analysis function on which relaxation processing has been performed in order to generate the linear category analysis function. 3. The method according to claim 2, wherein the linear category analysis function is: ζlda(W) = ∑k=1..K { ∑(xi,xj)∈M wkT xi xjT wk − ∑(xi,xj)∈C wkT xi xjT wk }, wherein ζlda(W) is the linear category analysis function, wherein W is a parameter of the deep neural network, wherein W is a matrix comprising multiple elements, and the matrix is obtained by learning the normalized training sample vector set, wherein wk is any column vector in the matrix W, wherein wkT is a transposition of the column vector wk, wherein both xi and xj are training sample vectors in the normalized training sample vector set, wherein xjT is a transposition of xj, wherein M is a vector pair set comprising at least one pair of training sample vectors that belong to different categories, wherein C is a vector pair set comprising at least one pair of training sample vectors that belong to a same category, wherein (xi,xj)∈M indicates that xi and xj belong to different categories, wherein (xi,xj)∈C indicates that xi and xj belong to a same category, and wherein K is the total number of column vectors in the matrix W. 4. 
The method according to claim 1, wherein the optimization function of the initial deep neural network is: ζ=αζae(W)+(1−α)ζlda(W) wherein α is a coefficient of the optimization function of the initial deep neural network, and is preset and acquired according to an application scenario, wherein ζae(W) is the unsupervised auto-encoding model optimization function, wherein ζlda(W) is the linear category analysis function, and wherein ζ is the optimization function of the initial deep neural network. 5. The method according to claim 1, wherein acquiring the parameter of the initial deep neural network according to the optimization function of the initial deep neural network comprises: acquiring, according to the optimization function of the initial deep neural network and using a backpropagation algorithm, a gradient corresponding to the optimization function of the initial deep neural network; and acquiring, using a gradient descent algorithm or a quasi-Newton algorithm, the parameter of the initial deep neural network according to the gradient corresponding to the optimization function of the initial deep neural network. 6. The method according to claim 1, wherein establishing the deep neural network according to the locally saved classification neural network, the initial deep neural network, and the parameter of the initial deep neural network comprises: superimposing the classification neural network onto the initial deep neural network in order to generate an initial deep neural network that is obtained after superimposition processing; and establishing, using a backpropagation algorithm, the deep neural network according to the parameter of the initial deep neural network and the initial deep neural network that is obtained after superimposition processing. 7. A data category identification apparatus based on a deep neural network, comprising: a first establishing unit configured to establish an initial deep neural network; a generating unit configured to generate a linear category analysis function after data category information is added to a locally saved initial linear category analysis function according to an input training sample vector set; an optimization function acquiring unit configured to acquire an optimization function of the initial deep neural network according to a locally saved unsupervised auto-encoding model optimization function and the linear category analysis function; a parameter acquiring unit configured to acquire a parameter of the initial deep neural network according to the optimization function of the initial deep neural network; a second establishing unit configured to establish the deep neural network according to a locally saved classification neural network, the initial deep neural network, and the parameter of the initial deep neural network, wherein the deep neural network has a multi-layer network structure comprising at least an input layer and an output layer; and a data category identifying unit configured to: input to-be-identified data to the input layer of the deep neural network; and acquire information of a category to which the to-be-identified data belongs, wherein the information of the category is output by the output layer of the deep neural network. 8. 
The apparatus according to claim 7, wherein the generating unit is configured to: perform relaxation processing on the initial linear category analysis function using a relaxation algorithm; perform normalization processing on the input training sample vector set; and substitute the training sample vector set on which normalization processing has been performed into the initial linear category analysis function on which relaxation processing has been performed in order to generate the linear category analysis function. 9. The apparatus according to claim 8, wherein the linear category analysis function generated by the generating unit is: ζlda(W) = ∑k=1..K { ∑(xi,xj)∈M wkT xi xjT wk − ∑(xi,xj)∈C wkT xi xjT wk }, wherein ζlda(W) is the linear category analysis function, wherein W is a parameter of the deep neural network, wherein W is a matrix comprising multiple elements, and the matrix is obtained by learning the training sample vector set on which normalization processing has been performed, wherein wk is any column vector in the matrix W, wherein wkT is a transposition of the column vector wk, wherein both xi and xj are training sample vectors in the training sample vector set on which normalization processing has been performed, wherein xjT is a transposition of xj, wherein M is a vector pair set comprising at least one pair of training sample vectors that belong to different categories, wherein C is a vector pair set comprising at least one pair of training sample vectors that belong to a same category, wherein (xi,xj)∈M indicates that xi and xj belong to different categories, wherein (xi,xj)∈C indicates that xi and xj belong to a same category, and wherein K is the total number of column vectors comprised in the matrix W. 10. The apparatus according to claim 7, wherein the optimization function, which is acquired by the optimization function acquiring unit, of the initial deep neural network is: ζ=αζae(W)+(1−α)ζlda(W) wherein α is a coefficient of the optimization function of the initial deep neural network, and is preset and acquired according to an application scenario, wherein ζae(W) is the unsupervised auto-encoding model optimization function, wherein ζlda(W) is the linear category analysis function, and wherein ζ is the optimization function of the initial deep neural network. 11. The apparatus according to claim 7, wherein the parameter acquiring unit is configured to: acquire, according to the optimization function of the initial deep neural network and using a backpropagation algorithm, a gradient corresponding to the optimization function of the initial deep neural network; and acquire, using a gradient descent algorithm or a quasi-Newton algorithm, the parameter of the initial deep neural network according to the gradient corresponding to the optimization function of the initial deep neural network. 12. The apparatus according to claim 7, wherein the second establishing unit is configured to: superimpose the classification neural network onto the initial deep neural network to generate an initial deep neural network that is obtained after superimposition processing, and establish, using a backpropagation algorithm, the deep neural network according to the parameter of the initial deep neural network and the initial deep neural network that is obtained after superimposition processing. 13. 
A non-transitory computer readable medium storing codes which, when executed by a processor of a network system, perform steps of: establishing an initial deep neural network; generating a linear category analysis function after data category information is added to a locally saved initial linear category analysis function according to an input training sample vector set; acquiring an optimization function of the initial deep neural network according to a locally saved unsupervised auto-encoding model optimization function and the linear category analysis function; acquiring a parameter of the initial deep neural network according to the optimization function of the initial deep neural network; establishing a deep neural network according to a locally saved classification neural network, the initial deep neural network, and the parameter of the initial deep neural network, wherein the deep neural network has a multi-layer network structure comprising at least an input layer and an output layer; inputting to-be-identified data to the input layer of the deep neural network; and acquiring information of a category to which the to-be-identified data belongs, wherein the information of the category is output by the output layer of the deep neural network. 14. The non-transitory computer readable medium according to claim 13, wherein generating the linear category analysis function after data category information is added to the locally saved initial linear category analysis function according to the input training sample vector set comprises: performing relaxation processing on the initial linear category analysis function using a relaxation algorithm, and performing normalization processing on the input training sample vector set to generate a normalized training sample vector set; and substituting the normalized training sample vector set into the initial linear category analysis function on which relaxation processing has been performed in order to generate the linear category analysis function.
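The objective in claims 3 and 4 combines an unsupervised auto-encoding term with the linear category analysis term, weighted by α. A minimal NumPy sketch of that combination follows; it is illustrative only: the claims do not specify the auto-encoding model, so a tied-weight reconstruction error stands in for ζae, and the helper names (zeta_lda, pairs_M, pairs_C) are hypothetical.

```python
import numpy as np

def zeta_lda(W, diff_pairs, same_pairs):
    """Relaxed linear category analysis term from claim 3:
    sum over columns wk of W of  sum_{(xi,xj) in M} wkT xi xjT wk
                               - sum_{(xi,xj) in C} wkT xi xjT wk."""
    total = 0.0
    for w in W.T:                              # wk: one column of W
        for xi, xj in diff_pairs:              # pairs from different categories (M)
            total += (w @ xi) * (xj @ w)
        for xi, xj in same_pairs:              # pairs from the same category (C)
            total -= (w @ xi) * (xj @ w)
    return total

def zeta_ae(W, X):
    """Stand-in auto-encoding term: tied-weight reconstruction error
    (an assumption; the claims only name an unsupervised model)."""
    X_hat = np.tanh(X @ W) @ W.T
    return np.mean((X - X_hat) ** 2)

def objective(W, X, diff_pairs, same_pairs, alpha=0.5):
    """Claim 4: zeta = alpha * zeta_ae(W) + (1 - alpha) * zeta_lda(W)."""
    return alpha * zeta_ae(W, X) + (1 - alpha) * zeta_lda(W, diff_pairs, same_pairs)

# Toy usage on normalized training vectors (claim 2).
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4)); X /= np.linalg.norm(X, axis=1, keepdims=True)
W = rng.normal(size=(4, 3))
pairs_M = [(X[0], X[3]), (X[1], X[4])]         # different-category pairs
pairs_C = [(X[0], X[1]), (X[3], X[4])]         # same-category pairs
print(objective(W, X, pairs_M, pairs_C, alpha=0.7))
```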
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A deep neural network to which data category information is added is established locally, to-be-identified data is input to an input layer of the deep neural network generated based on the foregoing data category information, and information of a category to which the to-be-identified data belongs is acquired, where the information of the category is output by an output layer of the deep neural network. A deep neural network is established based on data category information, such that category information of to-be-identified data is conveniently and rapidly obtained using the deep neural network, thereby implementing a category identification function of the deep neural network, and facilitating discovery of an underlying law of the to-be-identified data according to the category information of the to-be-identified data.
G06N308
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A deep neural network to which data category information is added is established locally, to-be-identified data is input to an input layer of the deep neural network generated based on the foregoing data category information, and information of a category to which the to-be-identified data belongs is acquired, where the information of the category is output by an output layer of the deep neural network. A deep neural network is established based on data category information, such that category information of to-be-identified data is conveniently and rapidly obtained using the deep neural network, thereby implementing a category identification function of the deep neural network, and facilitating discovery of an underlying law of the to-be-identified data according to the category information of the to-be-identified data.
A rule management device includes: a display unit which displays a plurality of condition objects that respectively indicate a plurality of conditions on a screen; an input receiving unit which receives an input for grouping the plurality of condition objects on the screen; and a rule registration unit which performs processing, for each of the group, for, when the group includes the plurality of condition objects, generating a rule that links a predetermined conclusion to a first condition in which all the plurality of conditions that are identified by the plurality of condition objects are combined by ANDs and registering the rule, and, when the group includes only one of the condition objects, generating a rule that links a predetermined conclusion to one of the conditions that is identified by the one of the condition objects and registering the rule.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A rule management device comprising: a display unit which displays a plurality of condition objects that respectively indicate a plurality of conditions on a screen; an input receiving unit which receives an input for grouping the plurality of condition objects on the screen; and a rule registration unit which performs processing, for each of the group, for, when the group includes the plurality of condition objects, generating a rule that links a predetermined conclusion to a first condition in which all the plurality of conditions that are identified by the plurality of condition objects are combined by ANDs and registering the rule, and, when the group includes only one of the condition objects, generating a rule that links a predetermined conclusion to one of the conditions that is identified by the one of the condition objects and registering the rule. 2. The rule management device according to claim 1, wherein the display unit displays one or a plurality of conclusion objects that indicate the predetermined conclusion on the screen; the input receiving unit receives an input for including one of the conclusion objects in each of the group; and the rule registration unit generates a rule that links the predetermined conclusion identified by the conclusion object included in each of the group to a condition included in each group, and registers the rule. 3. The rule management device according to claim 1, wherein the display unit displays a multi-condition contained object that groups a part of the plurality of condition objects on the screen; the input receiving unit can receive an input for including the multi-condition contained object in the group; the rule registration unit, when the group includes the multi-condition contained object, generates a second condition in which all the plurality of conditions that are identified by the plurality of condition objects grouped by the multi-condition contained object are combined by ORs, when the group includes another one of the condition objects, generates the first condition in which a condition that is identified by the other one of the condition objects and the second condition are combined by ANDs, and, when the group includes no other one of the condition objects, generates a rule that links the predetermined conclusion to the second condition. 4. The rule management device according to claim 1, wherein the input receiving unit receives an input for linking the objects mutually with a line on the screen; and the rule registration unit generates the rule by treating the plurality of objects that are mutually associated with the line as the objects included in one of the groups. 5. The rule management device according to claim 1, wherein the display unit displays a definition object that indicates a definition of a condition wording included in the conditions on the screen. 6. The rule management device according to claim 5, wherein the condition wording includes a first condition wording of which definition differs for each of the conditions; and the display unit displays the definition object that indicates a definition of the first condition wording and the condition object that indicates the condition which is a premise of the definition by relating them. 7. 
The rule management device according to claim 6, wherein the display unit displays the definition object that indicates a definition of the first condition wording and the condition object that indicates the condition which is a premise of the definition by linking them with a line. 8. The rule management device according to claim 1, further comprising: a rule display unit which displays a plurality of condition objects that respectively indicate a plurality of conditions on a screen, and displays the condition objects that respectively indicate one or a plurality of conditions included in each rule with grouping based on the rule registered by the rule registration unit; and a selection receiving unit which receives one selection input among the plurality of condition objects displayed by the rule display unit, wherein the rule display unit, when the selection receiving unit receives a selection input of one of the condition objects, displays one or a plurality of groups that include the selected condition object in a manner of distinguishing it or them from other groups. 9. The rule management device according to claim 1, further comprising: a rule description generation and output unit which generates and outputs a description that indicates each rule based on the rule registered by the rule registration unit. 10. A computer readable medium embodying a program, the program causing a rule management device to perform a method, the method comprising: displaying a plurality of condition objects that respectively indicate a plurality of conditions on a screen; receiving an input for grouping the plurality of condition objects on the screen; and performing processing, for each of the group, for, when the group includes the plurality of condition objects, generating a rule that links a predetermined conclusion to a first condition in which all the plurality of conditions that are identified by the plurality of condition objects are combined by ANDs and registering the rule, and, when the group includes only one of the condition objects, generating a rule that links a predetermined conclusion to one of the conditions that is identified by the one of the condition objects and registering the rule. 11. A rule management method for a rule management device comprising: displaying a plurality of condition objects that respectively indicate a plurality of conditions on a screen; receiving an input for grouping the plurality of condition objects on the screen; and performing processing, for each of the group, for, when the group includes the plurality of condition objects, generating a rule that links a predetermined conclusion to a first condition in which all the plurality of conditions that are identified by the plurality of condition objects are combined by ANDs and registering the rule, and, when the group includes only one of the condition objects, generating a rule that links a predetermined conclusion to one of the conditions that is identified by the one of the condition objects and registering the rule. 12. 
A rule management device comprising: display means for displaying a plurality of condition objects that respectively indicate a plurality of conditions on a screen; input receiving means for receiving an input for grouping the plurality of condition objects on the screen; and rule registration means for performing processing, for each of the group, for, when the group includes the plurality of condition objects, generating a rule that links a predetermined conclusion to a first condition in which all the plurality of conditions that are identified by the plurality of condition objects are combined by ANDs and registering the rule, and, when the group includes only one of the condition objects, generating a rule that links a predetermined conclusion to one of the conditions that is identified by the one of the condition objects and registering the rule.
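Claim 1 (and claim 3 for OR-groups) describes turning grouped condition objects into IF-THEN rules: conditions within a group are combined by ANDs, and a multi-condition contained object contributes an OR of its members. The sketch below is one possible reading; the Group and register_rule names are invented for illustration and are not from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Group:
    conditions: list                                   # plain condition strings in the group
    or_blocks: list = field(default_factory=list)      # multi-condition contained objects (OR-ed, claim 3)
    conclusion: str = "conclusion"                     # conclusion object linked to the group (claim 2)

def register_rule(group: Group) -> str:
    """Generate one rule per group: AND across the group, OR inside a contained object."""
    parts = list(group.conditions)
    for block in group.or_blocks:
        parts.append("(" + " OR ".join(block) + ")")
    premise = " AND ".join(parts) if len(parts) > 1 else parts[0]
    return f"IF {premise} THEN {group.conclusion}"

print(register_rule(Group(["temp > 30"], [["humidity > 80", "raining"]], "send alert")))
# IF temp > 30 AND (humidity > 80 OR raining) THEN send alert
```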
REJECTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: A rule management device includes: a display unit which displays a plurality of condition objects that respectively indicate a plurality of conditions on a screen; an input receiving unit which receives an input for grouping the plurality of condition objects on the screen; and a rule registration unit which performs processing, for each of the group, for, when the group includes the plurality of condition objects, generating a rule that links a predetermined conclusion to a first condition in which all the plurality of conditions that are identified by the plurality of condition objects are combined by ANDs and registering the rule, and, when the group includes only one of the condition objects, generating a rule that links a predetermined conclusion to one of the conditions that is identified by the one of the condition objects and registering the rule.
G06N504
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A rule management device includes: a display unit which displays a plurality of condition objects that respectively indicate a plurality of conditions on a screen; an input receiving unit which receives an input for grouping the plurality of condition objects on the screen; and a rule registration unit which performs processing, for each of the group, for, when the group includes the plurality of condition objects, generating a rule that links a predetermined conclusion to a first condition in which all the plurality of conditions that are identified by the plurality of condition objects are combined by ANDs and registering the rule, and, when the group includes only one of the condition objects, generating a rule that links a predetermined conclusion to one of the conditions that is identified by the one of the condition objects and registering the rule.
A method of system monitoring or, more particularly, novelty detection, based on extreme value theory in particular a points-over-threshold POT method which is applicable to multimodal multivariate data. Multimodal multivariate data points collected by continuously monitoring a system are transformed into probability space by obtaining their probability density function (pdf) values from a statistical model of normality, such as a pdf fitted to a training data set of normal data. Extremal data is defined as that whose pdf value is below a predetermined threshold and a new analytic function, in particular the Generalised Pareto Distribution (GPD) is fitted to that extremal data only. The fitted GPD can be compared to a GPD fitted to the extremal datapoints of the training data set of normal data to determine if the monitored system is in a normal state. Alternatively a threshold can be set by calculating an extreme value distribution of the GPD fitted to the extremal data of the training data set and setting as the threshold the pdf value which separates a desired proportion, e.g., 0.99 of the probability mass from the remainder. If the minimum pdf value of a set of data points collected from the system is below the threshold, the system may be abnormal.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of system monitoring to automatically detect abnormal states of a system, the method comprising the steps of: (a) repeatedly measuring a plurality of system parameters to produce multi-parameter data points each representing the state of the system at a particular time; (b) comparing each data point to a statistical model giving the probability density function of the normal states of the system to obtain a probability density function value for each data point; and (d) determining whether or not the system state is normal by comparing the obtained probability density function values to a threshold based on a distribution function fitted to those probability density function values of a set of data points known to represent low probability normal states of the system. 2. A method according to claim 1 wherein the step (d) of determining whether or not the system state is normal comprises comparing the distribution of the obtained probability density function values to the fitted distribution function. 3. A method according to claim 1 wherein the step (d) of determining whether or not the system state is normal comprises comparing a distribution function fitted to the obtained probability density function values with the distribution function fitted to those probability density function values of a set of data points known to represent low probability normal states of the system. 4. A method according to claim 3 wherein the set of data points known to represent low probability normal states of the system are selected from a training data set of measurements on the system in a normal state as points which correspond to a probability density function value lower than a first predetermined threshold. 5. A method according to claim 1 wherein the step (d) of determining whether or not the system state is normal comprises comparing the pdf value of the datapoint to a threshold calculated by: fitting a distribution function to the pdf values of a set of data points known to represent low probability normal states of the system, then calculating an extreme value distribution of the fitted distribution function, and setting the threshold on the extreme value distribution as that value which separates a selected proportion of the higher probability mass from the lower probability remainder in the extreme value distribution. 6. A method according to claim 5 wherein the extreme value distribution is calculated by generating a plurality of sets of values from the fitted distribution function, selecting the extremum of each of said sets and fitting an analytic extreme value distribution to the selected extrema. 7. A method according to claim 6 wherein the analytic extreme value distribution is the Weibull distribution. 8. A method according to claim 1 wherein the distribution function is the Generalised Pareto Distribution. 9. A method according to claim 1 wherein the statistical model is multimodal. 10. A method according to claim 1 wherein the statistical model is multivariate, each variable of the statistical model corresponding to one parameter of said multi-parameter data points, each parameter being a measurement of an output of a sensor on the system. 11. 
A system monitor for monitoring the state of a system in accordance with the method of claim 1, the monitor storing said statistical model and being adapted to perform said repeated measurements of the state of the system to execute said method to classify the system state as normal or abnormal. 12. A system monitor according to claim 11 adapted to acquire measurements of said system state continually and to execute said method on a rolling window of m successive data points. 13. A system monitor according to claim 11 further adapted to store measurements of the system state classified as normal for use in retraining the statistical model. 14. A patient monitor comprising a system monitor according to claim 11 wherein said system is a human patient and said measurements of system parameters comprise measurements of at least two of: heart rate, breathing rate, oxygen saturation, body temperature, systolic blood pressure and diastolic blood pressure.
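Claims 5 to 8 describe the threshold construction: fit a Generalised Pareto Distribution to the low-probability (extremal) pdf values of normal training data, derive an extreme value distribution by fitting a Weibull to the extrema of sampled batches, and set the threshold at the value separating a chosen proportion of the probability mass. A rough Python sketch follows, under assumptions: the model of normality is a Gaussian mixture (not specified by the claims), the extremum of each batch is taken as its minimum, and scipy's genpareto/weibull_min stand in for the named distributions.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train = rng.normal(size=(2000, 3))                        # multi-parameter "normal state" data

model = GaussianMixture(n_components=2, random_state=0).fit(train)  # assumed model of normality
pdf_vals = np.exp(model.score_samples(train))             # transform data points into probability space

tail = pdf_vals[pdf_vals < np.quantile(pdf_vals, 0.05)]   # extremal data: pdf below a first threshold
gpd_params = stats.genpareto.fit(tail)                    # fit a GPD to the extremal pdf values

# Extreme value distribution of the fitted GPD (claims 6-7): sample batches,
# keep each batch's minimum, and fit a Weibull to those minima.
batches = stats.genpareto.rvs(*gpd_params, size=(5000, 50), random_state=0)
weibull_params = stats.weibull_min.fit(batches.min(axis=1))

# Threshold separating e.g. 0.99 of the probability mass from the remainder.
threshold = stats.weibull_min.ppf(0.01, *weibull_params)

# Monitoring: a window of new data is flagged if its minimum pdf value is below the threshold.
window = rng.normal(loc=2.5, size=(50, 3))                # deliberately shifted data
abnormal = np.exp(model.score_samples(window)).min() < threshold
print(abnormal)
```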
REJECTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: A method of system monitoring or, more particularly, novelty detection, based on extreme value theory in particular a points-over-threshold POT method which is applicable to multimodal multivariate data. Multimodal multivariate data points collected by continuously monitoring a system are transformed into probability space by obtaining their probability density function (pdf) values from a statistical model of normality, such as a pdf fitted to a training data set of normal data. Extremal data is defined as that whose pdf value is below a predetermined threshold and a new analytic function, in particular the Generalised Pareto Distribution (GPD) is fitted to that extremal data only. The fitted GPD can be compared to a GPD fitted to the extremal datapoints of the training data set of normal data to determine if the monitored system is in a normal state. Alternatively a threshold can be set by calculating an extreme value distribution of the GPD fitted to the extremal data of the training data set and setting as the threshold the pdf value which separates a desired proportion, e.g., 0.99 of the probability mass from the remainder. If the minimum pdf value of a set of data points collected from the system is below the threshold, the system may be abnormal.
G06N7005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A method of system monitoring or, more particularly, novelty detection, based on extreme value theory in particular a points-over-threshold POT method which is applicable to multimodal multivariate data. Multimodal multivariate data points collected by continuously monitoring a system are transformed into probability space by obtaining their probability density function (pdf) values from a statistical model of normality, such as a pdf fitted to a training data set of normal data. Extremal data is defined as that whose pdf value is below a predetermined threshold and a new analytic function, in particular the Generalised Pareto Distribution (GPD) is fitted to that extremal data only. The fitted GPD can be compared to a GPD fitted to the extremal datapoints of the training data set of normal data to determine if the monitored system is in a normal state. Alternatively a threshold can be set by calculating an extreme value distribution of the GPD fitted to the extremal data of the training data set and setting as the threshold the pdf value which separates a desired proportion, e.g., 0.99 of the probability mass from the remainder. If the minimum pdf value of a set of data points collected from the system is below the threshold, the system may be abnormal.
A reconfigurable neural circuit includes a two dimensional array including a plurality of processing nodes, wherein each processing node includes a neuron circuit, a synapse circuit, a spike timing dependent plasticity (STDP) circuit, a weight memory for storing synaptic weights, the weight memory coupled to the synapse circuit, an interconnect fabric for interconnections to and from and between the neuron circuit, the synapse circuit, the STDP circuit, the weight memory, and between a respective node in the array and other processing nodes in the array, and a connectivity memory for storing interconnect routing controls coupled to the interconnect fabric.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A reconfigurable neural circuit comprising: a two dimensional array comprising a plurality of processing nodes; wherein each processing node comprises: a neuron circuit; a synapse circuit; a spike timing dependent plasticity (STDP) circuit; a weight memory for storing synaptic weights, the weight memory coupled to the synapse circuit; an interconnect fabric for interconnections to and from and between the neuron circuit, the synapse circuit, the STDP circuit, the weight memory, and between a respective node in the array and other processing nodes in the array; and a connectivity memory for storing interconnect routing controls coupled to the interconnect fabric. 2. The reconfigurable neural circuit of claim 1 wherein each processing node comprises: a time multiplexed synapse circuit. 3. The reconfigurable neural circuit of claim 1 wherein: an output of the synapse circuit is coupled to an input of the neuron circuit; an input to the synapse circuit is coupled to the STDP circuit; an output of the neuron circuit is coupled to the STDP circuit; an output of the STDP circuit is coupled to the weight memory. 4. The reconfigurable neural circuit of claim 1 wherein: the neuron circuit comprises an integrate and fire circuit. 5. The reconfigurable neural circuit of claim 1 wherein: the weight memory comprises a memristor memory, flip flops, or a static random access memory. 6. The reconfigurable neural circuit of claim 1 wherein: the connectivity memory comprises flip flops or a static random access memory. 7. The reconfigurable neural circuit of claim 1 wherein: the interconnect fabric comprises a plurality of switches for changing the interconnections to and from the neuron circuit, the synapse circuit, the STDP circuit, the weight memory, and other processing nodes in the array. 8. The reconfigurable neural circuit of claim 7 wherein: the plurality of switches comprise a plurality of uni-directional and bi-directional switches. 9. The reconfigurable neural circuit of claim 1 wherein: the weight memory stores N synaptic conductance values or weights for N virtual synapse circuits. 10. The reconfigurable neural circuit of claim 9 wherein: the connectivity memory stores interconnect routing controls for N time periods; wherein the interconnect fabric is reconfigurable for each of the N time periods. 11. The reconfigurable neural circuit of claim 10 wherein: one of the N synaptic conductance values or weights is read from the weight memory for each of the N time periods and coupled to the synapse circuit. 12. The reconfigurable neural circuit of claim 11 wherein: an output of the STDP circuit is coupled to the weight memory; and a synaptic conductance value or weight read from the weight memory during a respective time period of the N time periods is updated or changed in the weight memory by writing the weight memory in the respective time period according to the output of the STDP circuit. 13. The reconfigurable neural circuit of claim 1 wherein: the STDP element comprises a biologically inspired spike timing dependent plasticity (STDP) learning rule. 14. 
A method of providing a reconfigurable neural network comprising: forming a two dimensional array of plurality of processing nodes, wherein each processing node comprises: a synapse; a neuron coupled to the synapse; and a spike timing dependent plasticity (STDP) element; storing N synaptic weights for each processing node; accessing a synaptic weight for each processing node during each of N time periods and forming a virtual synapse within each processing node during each of the N time periods using the synapse and a respective accessed synaptic weight; and controlling connections to and from and between the neuron, the synapse, and the STDP element in a processing node, and connections between each respective processing node and other processing nodes in the array. 15. The method of claim 14 further comprising: time multiplexing the synapse to form N virtual synapses. 16. The method of claim 14 wherein within each processing node: an output of the synapse is coupled to an input of the neuron; an input to the synapse is coupled to the STDP element; an output of the neuron is coupled to the STDP element; an output of the STDP element is coupled to the weight memory. 17. The method of claim 14 wherein: the neuron comprises an integrate and fire circuit. 18. The method of claim 14 wherein controlling connections to and from and between the neuron, the synapse, and the STDP element in a processing node, and connections between each respective processing node and other processing nodes in the array comprises: controlling a plurality of switches. 19. The method of claim 14 further comprising: storing controls for N time periods for controlling connections to and from and between the neuron, the synapse, and the STDP element in a processing node, and connections between each respective processing node and other processing nodes in the array. 20. The method of claim 14 further comprising: updating or changing a synaptic weight read from the weight memory during a respective time period of the N time periods by writing the weight memory in the respective time period according to an output of the STDP element. 21. The method of claim 20 wherein: the STDP element updates or changes the synaptic weight according to a biologically inspired spike timing dependent plasticity (STDP) learning rule. 22. The method of claim 14 wherein storing N synaptic weights for each processing node comprises: storing the N synaptic weight in each processing node using a memristor memory, flip flops, or a static random access memory.
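Claims 9 to 12 (and 14 to 15) describe time-multiplexing one physical synapse over N virtual synapses, with the weight memory read and written once per time period under an STDP rule. The following is a software sketch of that behaviour for a single node, not a description of the circuit itself; the leaky integrate-and-fire dynamics and STDP constants are illustrative assumptions.

```python
import numpy as np

N = 8                                    # virtual synapses per processing node
weights = np.full(N, 0.5)                # weight memory: one entry per virtual synapse
pre_last = np.full(N, -np.inf)           # last pre-synaptic spike time per slot
v, v_th, leak = 0.0, 1.0, 0.9            # leaky integrate-and-fire state and constants
A_plus, A_minus, tau = 0.05, 0.06, 20.0  # STDP constants (illustrative)
rng = np.random.default_rng(0)

def stdp(dt):
    """Pair-based STDP rule: potentiate when pre precedes post, otherwise depress."""
    return A_plus * np.exp(-dt / tau) if dt >= 0 else -A_minus * np.exp(dt / tau)

for t in range(200):
    slot = t % N                         # time multiplexing: one virtual synapse per time period
    if rng.random() < 0.1:               # incoming pre-synaptic spike on this slot
        pre_last[slot] = t
        v += weights[slot]               # weight read from memory drives the neuron
    v *= leak                            # leaky integration
    if v >= v_th:                        # neuron fires
        v = 0.0
        # STDP output written back to the weight memory in the same time period
        weights[slot] = np.clip(weights[slot] + stdp(t - pre_last[slot]), 0.0, 1.0)
```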
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A reconfigurable neural circuit includes a two dimensional array including a plurality of processing nodes, wherein each processing node includes a neuron circuit, a synapse circuit, a spike timing dependent plasticity (STDP) circuit, a weight memory for storing synaptic weights, the weight memory coupled to the synapse circuit, an interconnect fabric for interconnections to and from and between the neuron circuit, the synapse circuit, the STDP circuit, the weight memory, and between a respective node in the array and other processing nodes in the array, and a connectivity memory for storing interconnect routing controls coupled to the interconnect fabric.
G06N308
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A reconfigurable neural circuit includes a two dimensional array including a plurality of processing nodes, wherein each processing node includes a neuron circuit, a synapse circuit, a spike timing dependent plasticity (STDP) circuit, a weight memory for storing synaptic weights, the weight memory coupled to the synapse circuit, an interconnect fabric for interconnections to and from and between the neuron circuit, the synapse circuit, the STDP circuit, the weight memory, and between a respective node in the array and other processing nodes in the array, and a connectivity memory for storing interconnect routing controls coupled to the interconnect fabric.
A device may receive data associated with an event. The device may identify a context of the event based on receiving the data. The device may identify a similar event based on performing a comparison of the context of the event and a context of the similar event. The device may determine a set of pre-events associated with the event based on identifying a pre-event that occurred before the similar event. The set of pre-events may include at least one pre-event similar to the pre-event that occurred before the similar event. The device may determine a set of post-events associated with the event based on determining the set of pre-events and identifying a post-event that occurred after the similar event. The set of post-events may include at least one post-event similar to the post-event. The device may perform an action based on the set of post-events.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A device, comprising: one or more processors to: receive data associated with a first event; identify a first context of the first event based on the data; identify a plurality of second events based on the first context and a second context of the plurality of second events, the first context being similar to the second context; determine a set of pre-events associated with the first event based on a plurality of pre-events associated with the plurality of second events, one or more pre-events of the set of pre-events being similar to the plurality of pre-events; determine a set of post-events associated with the first event based on the set of pre-events and based on a plurality of post-events associated with the plurality of second events, the set of post-events including one or more post-events predicted to occur after the first event, the one or more post-events being similar to the plurality of post-events; and perform an action related to the first event based on determining the set of post-events. 2. The device of claim 1, where the one or more processors are further to: determine a timing of steps of the plurality of second events; and where the one or more processors, when determining the set of pre-events, are to: determine the set of pre-events based on the timing of the steps of the plurality of second events. 3. The device of claim 1, where the one or more processors are to: use a canonical correlation technique to identify the plurality of second events based on receiving the data; apply a filter to the plurality of second events based on identifying the plurality of second events, the filter including a temporal filter or a geographic filter; and where the one or more processors, when identifying the plurality of second events, are to: identify the plurality of second events based on applying the filter to the plurality of second events. 4. The device of claim 1, where the one or more processors are further to: use a homophily technique to determine a score for the first event and the plurality of second events based on identifying the plurality of second events, the score indicating a semantic similarity between the first event and the plurality of second events based on a knowledge graph; determine an order for the plurality of second events based on the score; and where the one or more processors, when determining the set of pre-events, are to: determine the set of pre-events based on the order for the plurality of second events. 5. The device of claim 1, where the one or more processors, are further to: identify the plurality of post-events associated with the plurality of second events based on identifying the plurality of second events in a knowledge graph; identify another plurality of post-events associated with a plurality of third events based on identifying the plurality of third events in the knowledge graph, a third context of the plurality of third events being similar to the first context; and where the one or more processors, when determining the set of post-events, are to: determine the set of post-events based on identifying the plurality of post-events and the other plurality of post-events. 6. 
The device of claim 1, where the one or more processors are further to: identify a second action associated with the plurality of second events based on identifying the plurality of second events; determine a set of actions associated with the first event based on identifying the second action; and where the one or more processors, when performing the action, are to: perform the action based on determining the set of actions, the action being included in the set of actions. 7. The device of claim 1, where the one or more processors are further to: determine a severity of the set of post-events; and where the one or more processors, when performing the action, are to: perform the action based on the severity of the set of post-events. 8. A method, comprising: receiving, by a device, data associated with an event; identifying, by the device, a context of the event based on receiving the data; identifying, by the device, a similar event based on performing a comparison of the context of the event and a context of the similar event; determining, by the device, a set of pre-events associated with the event based on identifying a pre-event that occurred before the similar event, the set of pre-events including at least one pre-event similar to the pre-event that occurred before the similar event; determining, by the device, a set of post-events associated with the event based on determining the set of pre-events and identifying a post-event that occurred after the similar event, the set of post-events including at least one post-event similar to the post-event; and performing, by the device, an action based on the set of post-events. 9. The method of claim 8, further comprising: receiving other data associated with the similar event prior to receiving the data; identifying the pre-event, the post-event, or the context of the similar event based on receiving the other data; and storing information identifying the pre-event, the post-event, or the context of the similar event based on identifying the pre-event, the post-event, or the context of the similar event. 10. The method of claim 8, further comprising: processing the data to identify a term included in the data based on receiving the data; and where identifying the context of the event comprises: identifying the context of the event based on processing the data to identify the term. 11. The method of claim 10, further comprising: identifying a first term included in the data based on receiving the data, the first term being associated with the context of the event; identifying a second term included in other data based on receiving the other data, the second term being associated with the context of the similar event; determining whether the first term and the second term are similar based on identifying the first term and the second term; and where identifying the similar event comprises: identifying the similar event based on determining whether the first term and the second term are similar. 12. 
The method of claim 8, further comprising: identifying one or more pre-events associated with one or more similar events based on identifying the one or more similar events, the one or more similar events including the similar event; determining whether the one or more pre-events are associated with a threshold quantity of the one or more similar events based on identifying the one or more pre-events; and where determining the set of pre-events comprises: determining the set of pre-events based on determining whether the one or more pre-events are associated with the threshold quantity of the one or more similar events. 13. The method of claim 8, further comprising: identifying another similar event based on receiving other data associated with the other similar event; determining whether the other similar event is the post-event associated with the similar event based on identifying the other similar event; and where determining the set of post-events comprises: determining the set of post-events based on determining whether the other similar event is the post-event associated with the similar event. 14. The method of claim 8, further comprising: determining the action to perform based on a similar action associated with the post-event; determining an outcome of the post-event using a term associated with the post-event based on determining the action; modifying the action based on determining the outcome of the post-event; and where performing the action comprises: performing the action based on modifying the action. 15. A non-transitory computer-readable medium storing instructions, the instructions comprising: one or more instructions that, when executed by one or more processors, cause the one or more processors to: receive, from multiple devices, data associated with an event; process the data to identify a context of the event, the context of the event being based on a term or a tag identified in the data; identify a historical event associated with a similar context as the context of the event based on processing the data, the context of the historical event being based on another term similar to the term or another tag similar to the tag; determine a set of pre-events associated with the event based on identifying a pre-event associated with the historical event, the set of pre-events being semantically similar to the pre-event; determine a set of post-events associated with the event based on the set of pre-events and based on a post-event associated with the historical event, the set of post-events being semantically similar to the post-event; and perform an action related to the event based on determining the set of post-events. 16. The non-transitory computer-readable medium of claim 15, where the one or more instructions, when executed by the one or more processors, further cause the one or more processors to: identify a first pre-event associated with a first historical event and a second pre-event associated with a second historical event based on identifying the first historical event and the second historical event; normalize a first temporal distance of the first pre-event from the first historical event and a second temporal distance of the second pre-event from the second historical event; and where the one or more instructions, that cause the one or more processors to determine the set of pre-events, cause the one or more processors to: determine the set of pre-events based on normalizing the first temporal distance and the second temporal distance. 17. 
The non-transitory computer-readable medium of claim 15, where the one or more instructions, when executed by the one or more processors, further cause the one or more processors to: determine a likelihood of the set of post-events based on determining the set of post-events; determine a set of actions to perform based on the likelihood of the set of post-events, the set of actions including the action; and where the one or more instructions, that cause the one or more processors to perform the action, cause the one or more processors to: perform the action based on determining the set of actions. 18. The non-transitory computer-readable medium of claim 15, where the one or more instructions, when executed by the one or more processors, further cause the one or more processors to: determine a set of actions based on the set of post-events, the set of actions including the action; determine a likelihood of a historical action, similar to the action, being associated with one or more historical events using a Bayesian probability technique based on determining the set of actions; and where the one or more instructions, that cause the one or more processors to perform the action, cause the one or more processors to: perform the action based on the likelihood satisfying a threshold. 19. The non-transitory computer-readable medium of claim 15, where the one or more instructions, when executed by the one or more processors, further cause the one or more processors to: use a predictive modeling technique to predict the set of post-events based on the post-event; and where the one or more instructions, that cause the one or more processors to determine the set of post-events, cause the one or more processors to: determine the set of post-events based on using the predictive modeling technique to predict the set of post-events. 20. The non-transitory computer-readable medium of claim 15, where the one or more instructions, when executed by the one or more processors, further cause the one or more processors to: identify first pre-events associated with the historical event and second pre-events associated with the historical event, the first pre-events being associated with a first set of semantically similar terms, the second pre-events being associated with a second set of semantically similar terms; determine a first group of pre-events for the first pre-events and a second group of pre-events for the second pre-events based on identifying the first pre-events and the second pre-events; and where the one or more instructions, that cause the one or more processors to determine the set of pre-events, cause the one or more processors to: determine the set of pre-events based on the first group of pre-events and the second group of pre-events.
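The claims match a new event to historical events by context similarity and then derive sets of pre-events and predicted post-events from those matches. The sketch below uses a simple Jaccard similarity over context terms as a stand-in for the canonical correlation and homophily techniques named in the claims; the event records and helper names are invented examples.

```python
def jaccard(a, b):
    """Similarity of two term sets; stands in for the patent's context comparison."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical historical events with their contexts, pre-events and post-events.
history = [
    {"context": {"flood", "river", "rain"},  "pre": ["storm warning"], "post": ["road closure"]},
    {"context": {"outage", "grid", "storm"}, "pre": ["high winds"],    "post": ["repair crew dispatch"]},
]

def predict_post_events(event_context, history, sim_threshold=0.2):
    """Find similar historical events, then derive the sets of pre- and post-events."""
    similar = [h for h in history if jaccard(event_context, h["context"]) >= sim_threshold]
    pre_events = {p for h in similar for p in h["pre"]}     # set of pre-events for the new event
    post_events = {p for h in similar for p in h["post"]}   # predicted post-events -> drive an action
    return pre_events, post_events

print(predict_post_events({"rain", "river", "rising"}, history))
```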
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A device may receive data associated with an event. The device may identify a context of the event based on receiving the data. The device may identify a similar event based on performing a comparison of the context of the event and a context of the similar event. The device may determine a set of pre-events associated with the event based on identifying a pre-event that occurred before the similar event. The set of pre-events may include at least one pre-event similar to the pre-event that occurred before the similar event. The device may determine a set of post-events associated with the event based on determining the set of pre-events and identifying a post-event that occurred after the similar event. The set of post-events may include at least one post-event similar to the post-event. The device may perform an action based on the set of post-events.
G06N502
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A device may receive data associated with an event. The device may identify a context of the event based on receiving the data. The device may identify a similar event based on performing a comparison of the context of the event and a context of the similar event. The device may determine a set of pre-events associated with the event based on identifying a pre-event that occurred before the similar event. The set of pre-events may include at least one pre-event similar to the pre-event that occurred before the similar event. The device may determine a set of post-events associated with the event based on determining the set of pre-events and identifying a post-event that occurred after the similar event. The set of post-events may include at least one post-event similar to the post-event. The device may perform an action based on the set of post-events.
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for detecting trends in event streams. One method includes generating a first set of parameters of a machine learning model from a first system processing an event stream, the first system comprising a first central modeler that receives aggregated information from a first plurality of local modelers; generating a second set of parameters of the machine learning model from a second system processing the event stream, the second system comprising a second central modeler that receives aggregated information from a second plurality of local modelers; determining a difference between the first set of parameters and the second set of parameters; and determining that the difference is greater than a threshold amount and as a consequence outputting information identifying a trend in the event stream.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method comprising: receiving events of an event stream that are each processed by one of a plurality of first local modelers and by one of a plurality of second local modelers, wherein the first and second local modelers each execute on a system of one or more computers; aggregating, by each first local modeler, information associated with each event received by the first local modeler to generate respective first locally aggregated information; aggregating, by each second local modeler, information associated with each event received by the second local modeler to generate respective second locally aggregated information; providing, by one or more of the first local modelers, to a first central modeler, first locally aggregated information generated by the one or more first local modelers; providing, by one or more of the second local modelers, to a second central modeler, second locally aggregated information generated by the one or more second local modelers, wherein the first and the second central modelers execute on the system of one or more computers; aggregating, by the first central modeler and the second central modeler, respectively, locally aggregated information received by the first central modeler and the second central modeler, respectively, to generate first centrally aggregated information and second centrally aggregated information, respectively; wherein the aggregating by the first local modelers or the first central modeler or both is done according to a first learning rate parameter and the aggregating by the second local modelers or the second central modeler or both is done according to a second learning rate parameter different from the first learning rate parameter, wherein each learning rate parameter specifies one or more respective weights to be applied to aggregated information associated with events; determining, by the first central modeler, first parameters of a machine learning model using the first centrally aggregated information; determining, by the second central modeler, second parameters of the machine learning model using the second centrally aggregated information; and determining a difference between the first parameters and the second parameters determining that the difference is greater than a threshold amount and as a consequence outputting information identifying a change in trend in the event stream. 2. The method of claim 1, wherein: the information identifying the change in trend includes an identification of one or more parameter of the first parameters and the second parameters that is different by more than a threshold amount. 3. The method of claim 1, wherein: each event has a time stamp; and the first learning rate parameter and the second learning rate parameter each specify a first function and a different second function, respectively, that output a weight to be applied to information associated with an event given a time stamp of the event. 4. The method of claim 3, wherein the first function and the second function are applied by the first local modelers and the second local modelers, respectively. 5. The method of claim 4, wherein the first local modelers weight older events lower than the second local modelers do. 6. The method of claim 4, wherein the first local modelers weight older events higher than the second local modelers do. 7. 
The method of claim 1, wherein: the first central modeler and the second central modeler determine the first parameters and the second parameters to represent the parameters of the machine learning model at respective different points in time according to the first learning rate parameter and the second learning rate parameter. 8. The method of claim 1, wherein: determining a difference between the first parameters and the second parameters comprises determining an L1-norm or an L2-norm difference between the first parameters and the second parameters. 9. A method comprising: generating a first set of parameters of a machine learning model from a first system processing an event stream, the first system comprising a first central modeler that receives aggregated information from a first plurality of local modelers; generating a second set of parameters of the machine learning model from a second system processing the event stream, the second system comprising a second central modeler that receives aggregated information from a second plurality of local modelers; determining a difference between the first set of parameters and the second set of parameters; and determining that the difference is greater than a threshold amount and as a consequence outputting information identifying a change in trend in the event stream. 10. The method of claim 9, wherein: the first set of parameters and the second set of parameters represent the parameters of the machine learning model at different points in time. 11. The method of claim 9, wherein: determining a difference between the first set of parameters and the second set of parameters comprises determining an L1-norm or an L2-norm difference between the first parameters and the second parameters. 12. The method of claim 9, wherein: the information identifying the change in trend includes an identification of one or more parameter of the first parameters and the second parameters that is different by more than a threshold amount. 13. The method of claim 9, wherein: the first system and the second system operate according to a first learning rate parameter and a different second learning rate parameter, respectively. 14. 
A system comprising: one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising: receiving events of an event stream that are each processed by one of a plurality of first local modelers and by one of a plurality of second local modelers, wherein the first and second local modelers each execute on a system of one or more computers; aggregating, by each first local modeler, information associated with each event received by the first local modeler to generate respective first locally aggregated information; aggregating, by each second local modeler, information associated with each event received by the second local modeler to generate respective second locally aggregated information; providing, by one or more of the first local modelers, to a first central modeler, first locally aggregated information generated by the one or more first local modelers; providing, by one or more of the second local modelers, to a second central modeler, second locally aggregated information generated by the one or more second local modelers, wherein the first and the second central modelers execute on the system of one or more computers; aggregating, by the first central modeler and the second central modeler, respectively, locally aggregated information received by the first central modeler and the second central modeler, respectively, to generate first centrally aggregated information and second centrally aggregated information, respectively; wherein the aggregating by the first local modelers or the first central modeler or both is done according to a first learning rate parameter and the aggregating by the second local modelers or the second central modeler or both is done according to a second learning rate parameter different from the first learning rate parameter, wherein each learning rate parameter specifies one or more respective weights to be applied to aggregated information associated with events; determining, by the first central modeler, first parameters of a machine learning model using the first centrally aggregated information; determining, by the second central modeler, second parameters of the machine learning model using the second centrally aggregated information; and determining a difference between the first parameters and the second parameters determining that the difference is greater than a threshold amount and as a consequence outputting information identifying a change in trend in the event stream. 15. The system of claim 14, wherein: the information identifying the change in trend includes an identification of one or more parameter of the first parameters and the second parameters that is different by more than a threshold amount. 16. The system of claim 14, wherein: each event has a time stamp; and the first learning rate parameter and the second learning rate parameter each specify a first function and a different second function, respectively, that output a weight to be applied to information associated with an event given a time stamp of the event. 17. The system of claim 16, wherein the first function and the second function are applied by the first local modelers and the second local modelers, respectively. 18. The system of claim 17, wherein the first local modelers weight older events lower than the second local modelers do. 19. 
The system of claim 17, wherein the first local modelers weight older events higher than the second local modelers do. 20. The system of claim 14, wherein: the first central modeler and the second central modeler determine the first parameters and the second parameters to represent the parameters of the machine learning model at respective different points in time according to the first learning rate parameter and the second learning rate parameter.
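A minimal NumPy sketch of the comparison step spelled out in claims 8 and 11 above, assuming the two central modelers' parameters are already available as flat vectors: the L1- or L2-norm of their difference is compared against a threshold, and a trend change is flagged when it is exceeded. The example values, the threshold, the per-parameter reporting heuristic, and the helper name trend_change are illustrative, not taken from the patent.

import numpy as np

def trend_change(first_params, second_params, threshold, norm="l2"):
    diff = np.asarray(first_params) - np.asarray(second_params)
    distance = np.linalg.norm(diff, ord=1 if norm == "l1" else 2)
    if distance > threshold:
        # Report which individual parameters moved the most (cf. claims 2 and 15);
        # the per-parameter cutoff used here is an assumption for the sketch.
        changed = np.flatnonzero(np.abs(diff) > threshold / len(diff))
        return True, float(distance), changed.tolist()
    return False, float(distance), []

fast_model = [0.80, 0.10, 0.35]   # parameters learned under the faster learning rate
slow_model = [0.55, 0.12, 0.33]   # parameters learned under the slower learning rate
print(trend_change(fast_model, slow_model, threshold=0.2))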
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for detecting trends in event streams. One method includes generating a first set of parameters of a machine learning model from a first system processing an event stream, the first system comprising a first central modeler that receives aggregated information from a first plurality of local modelers; generating a second set of parameters of the machine learning model from a second system processing the event stream, the second system comprising a second central modeler that receives aggregated information from a second plurality of local modelers; determining a difference between the first set of parameters and the second set of parameters; and determining that the difference is greater than a threshold amount and as a consequence outputting information identifying a trend in the event stream.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for detecting trends in event streams. One method includes generating a first set of parameters of a machine learning model from a first system processing an event stream, the first system comprising a first central modeler that receives aggregated information from a first plurality of local modelers; generating a second set of parameters of the machine learning model from a second system processing the event stream, the second system comprising a second central modeler that receives aggregated information from a second plurality of local modelers; determining a difference between the first set of parameters and the second set of parameters; and determining that the difference is greater than a threshold amount and as a consequence outputting information identifying a trend in the event stream.
Systems, methods, and non-transitory computer readable media configured to determine scores for content items published in an online environment based on at least one machine learning model trained with features associated with the content items. The scores can be associated with probabilities that the content items include objectionable material. A subset of the content items can be selected based on scores of the subset of the content items and satisfaction of a threshold value. It can be determined whether the subset of the content items includes objectionable material.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-implemented method comprising: determining, by a computing system, scores for content items published in an online environment based on at least one machine learning model trained with features associated with the content items, the scores associated with probabilities that the content items include objectionable material; selecting, by the computing system, a subset of the content items based on scores of the subset of the content items and satisfaction of a threshold value; and determining, by the computing system, whether the subset of the content items includes objectionable material. 2. The computer-implemented method of claim 1, wherein the features reflect contextual information regarding the content items. 3. The computer-implemented method of claim 2, wherein the features relate to at least one of a user who flagged a content item and a user who uploaded a flagged content item. 4. The computer-implemented method of claim 3, wherein the features include at least one of reporting accuracy, abuse history, gender, age, profile completeness, profile verification, locale, friends counts, account age, number of reporters, language, and topics reflected by the content items. 5. The computer-implemented method of claim 1, wherein the content items include flagged content items. 6. The computer-implemented method of claim 1, wherein the at least one machine learning model is based on a random forest technique. 7. The computer-implemented method of claim 1, wherein the at least one machine learning model includes different machine learning models, the method further comprising developing the different machine learning models to identify objectionable material in different types of content items. 8. The computer-implemented method of claim 1, further comprising sorting the content items based on the scores. 9. The computer-implemented method of claim 1, wherein the determining whether the subset of the content items includes objectionable material comprises: presenting, via a computer enabled user interface, the subset of the content items for manual review; and receiving labels regarding whether the subset of the content items includes objectionable material based on the manual review. 10. The computer-implemented method of claim 9, further comprising retraining the at least one machine learning model based on the labels. 11. A system comprising: at least one processor; and a memory storing instructions that, when executed by the at least one processor, cause the system to perform: determining scores for content items published in an online environment based on at least one machine learning model trained with features associated with the content items, the scores associated with probabilities that the content items include objectionable material; selecting a subset of the content items based on scores of the subset of the content items and satisfaction of a threshold value; and determining whether the subset of the content items includes objectionable material. 12. The system of claim 11, wherein the features reflect contextual information regarding the content items. 13. The system of claim 12, wherein the features relate to at least one of a user who flagged a content item and a user who uploaded a flagged content item. 14. 
The system of claim 13, wherein the features include at least one of reporting accuracy, abuse history, gender, age, profile completeness, profile verification, locale, friends counts, account age, number of reporters, language, and topics reflected by the content items. 15. The system of claim 11, wherein the content items include flagged content items. 16. A non-transitory computer-readable storage medium including instructions that, when executed by at least one processor of a computing system, cause the computing system to perform a method comprising: determining scores for content items published in an online environment based on at least one machine learning model trained with features associated with the content items, the scores associated with probabilities that the content items include objectionable material; selecting a subset of the content items based on scores of the subset of the content items and satisfaction of a threshold value; and determining whether the subset of the content items includes objectionable material. 17. The non-transitory computer-readable storage medium of claim 16, wherein the features reflect contextual information regarding the content items. 18. The non-transitory computer-readable storage medium of claim 17, wherein the features relate to at least one of a user who flagged a content item and a user who uploaded a flagged content item. 19. The non-transitory computer-readable storage medium of claim 18, wherein the features include at least one of reporting accuracy, abuse history, gender, age, profile completeness, profile verification, locale, friends counts, account age, number of reporters, language, and topics reflected by the content items. 20. The non-transitory computer-readable storage medium of claim 16, wherein the content items include flagged content items.
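The scoring-and-selection flow in the claims above (a random forest per claim 6, sorting per claim 8, manual review per claim 9) might be sketched as follows. The feature columns, training labels, and threshold value are invented for illustration, and scikit-learn is assumed as the model library; the patent does not prescribe this particular implementation.

from sklearn.ensemble import RandomForestClassifier
import numpy as np

# Assumed features per item: [reporting_accuracy, reporter_abuse_history, account_age_days]
train_X = np.array([[0.9, 0, 400], [0.2, 3, 10], [0.8, 1, 200], [0.1, 5, 5]])
train_y = np.array([0, 1, 0, 1])  # 1 = objectionable, from earlier manual labels

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(train_X, train_y)

items_X = np.array([[0.15, 4, 8], [0.85, 0, 365]])   # newly flagged content items
scores = model.predict_proba(items_X)[:, 1]          # probability of objectionable material
order = np.argsort(-scores)                          # sort items by score (claim 8)
THRESHOLD = 0.5
review_queue = [int(i) for i in order if scores[i] >= THRESHOLD]
print(scores, review_queue)                          # queued items go to manual review (claim 9)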
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Systems, methods, and non-transitory computer readable media configured to determine scores for content items published in an online environment based on at least one machine learning model trained with features associated with the content items. The scores can be associated with probabilities that the content items include objectionable material. A subset of the content items can be selected based on scores of the subset of the content items and satisfaction of a threshold value. It can be determined whether the subset of the content items includes objectionable material.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Systems, methods, and non-transitory computer readable media configured to determine scores for content items published in an online environment based on at least one machine learning model trained with features associated with the content items. The scores can be associated with probabilities that the content items include objectionable material. A subset of the content items can be selected based on scores of the subset of the content items and satisfaction of a threshold value. It can be determined whether the subset of the content items includes objectionable material.
To represent selection behavior of a cognitively-biased consumer as a learnable model having high prediction accuracy, there is provided a processing apparatus including a parameter storing unit configured to store first weight values set among nodes between an input layer and an intermediate layer and second weight values set among nodes between the intermediate layer and an output layer, an acquiring unit configured to acquire a plurality of input values to a plurality of input nodes, and a calculating unit configured to calculate a plurality of output values from a plurality of output nodes corresponding to the plurality of input values using a prediction model in which the influence of the second weight value set between the output node and the intermediate node corresponding to the input node whose input value is equal to or smaller than a threshold is reduced.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A processing apparatus for processing a prediction model including an input layer including a plurality of input nodes, an output layer including a plurality of output nodes, and an intermediate layer including a plurality of intermediate nodes, the processing apparatus comprising: a parameter storing unit configured to store first weight values set among the nodes between the input layer and the intermediate layer and second weight values set among the nodes between the intermediate layer and the output layer; an acquiring unit configured to acquire a plurality of input values to the plurality of input nodes; and a calculating unit configured to calculate a plurality of output values from the plurality output nodes corresponding to the plurality of input values using the prediction model in which an influence of the second weight value set between the output node and the intermediate node corresponding to the input node whose input value is equal to or smaller than a threshold is reduced. 2. The processing apparatus according to claim 1, wherein the calculating unit reduces a magnitude of the second weight value set between the output node not corresponding to the input node whose input value is larger than the threshold, and the intermediate node without changing a magnitude of the second weight value set between the output node corresponding to the input node whose input value is larger than the threshold, and the intermediate node. 3. The processing apparatus according to claim 2, wherein the calculating unit sets the magnitude of the second weight value set between the output node not corresponding to the input node whose input value is larger than the threshold, and the intermediate node to 0. 4. The processing apparatus according to claim 3, wherein the calculating unit sets, in the calculation of the plurality of output values from the plurality of output nodes corresponding to the plurality of input values, the output value from the output node corresponding to the input node whose input value is 0, to 0. 5. The processing apparatus according to claim 3, wherein the acquiring unit acquires learning data including the plurality of input values and a plurality of output values that should be output to the plurality of output nodes to correspond to the plurality of input values, the processing apparatus comprises a learning processing unit configured to learn the prediction model on the basis of the plurality of input values and the plurality of output values for learning, and the learning processing unit sets the second weight value set between the output node corresponding to the input node whose input value for learning is 0, and the intermediate node to 0 and learns the prediction model. 6. The processing apparatus according to claim 5, wherein the prediction model is a selection model obtained by modeling selection behavior of a target with respect to a given choice, and the processing apparatus comprises: an input vector generating unit configured to generate an input vector that indicates whether each of a plurality of kinds of choices is included in input choices; and an output vector generating unit configured to generate an output vector that indicates whether each of the plurality of kinds of choices is included in output choices for learning. 7. 
The processing apparatus according to claim 6, wherein the learning processing unit learns the prediction model including selection behavior corresponding to a cognitive bias of the target. 8. The processing apparatus according to claim 7, wherein the learning processing unit learns the prediction model in which a ratio of selection probabilities of choices included in the input choices is variable depending on a combination of other choices included in the input choices. 9. The processing apparatus according to claim 8, wherein in the prediction model, input biases, intermediate biases, and output biases are further set for the nodes included in the input layer, the intermediate layer, and the output layer, and the learning processing unit learns the first weight values, the second weight values, the input biases, the intermediate biases, and the output biases. 10. The processing apparatus according to claim 9, further comprising a probability calculating unit configured to calculate, on the basis of parameters including the first weight values, the second weight values, the input biases, the intermediate biases, and the output biases, probabilities that the respective choices are selected according to the input choices. 11. The processing apparatus according to claim 10, wherein the learning processing unit updates the parameters to increase the possibilities that the output choices are selected according to the input choices concerning each of the kinds of selection behavior for learning. 12. The processing apparatus according to claim 11, wherein the prediction model is a selection model obtained by modeling selection behavior of a target with respect to a given choice, the target is a user, and the choices are choices of a commodity or a service given to the user, the acquiring unit acquires the learning data including, as selection behavior for learning, a choice selected by the user from the choices of the commodity or the service given to the user, and the learning processing unit learns the prediction model obtained by modeling the selection behavior of the user corresponding to the choices of the commodity or the service. 13. The processing apparatus according to claim 12, comprising: a designation input unit configured to receive designation of a commodity or a service promoted for sale among a plurality of kinds of commodities or services; a selecting unit configured to select, out of the plurality of kinds of choices corresponding to the plurality of kinds of commodities or services, a plurality of input choices including the commodity or the service promoted for sale as a choice; and a specifying unit configured to specify, among the plurality of input choices, an input choice with which a probability that the choice corresponding to the commodity or the service promoted for sale is selected is higher. 14. The processing apparatus according to claim 5, wherein the prediction model is a selection model obtained by modeling selection behavior of a target with respect to a given choice, the target is a user, and the choices are presented to the user on a web site. 15. A program product for causing a computer to function as the processing apparatus according to claim 14. 16. 
A processing method for processing a prediction model including an input layer including a plurality of input nodes, an output layer including a plurality of output nodes, and an intermediate layer including a plurality of intermediate nodes, the processing method comprising: a parameter storing step for storing first weight values set among the nodes between the input layer and the intermediate layer and second weight values set among the nodes between the intermediate layer and the output layer; an acquiring step for acquiring a plurality of input values to the plurality of input nodes; and a calculating step for calculating a plurality of output values from the plurality of output nodes corresponding to the plurality of input values using the prediction model in which an influence of the second weight value set between the output node and the intermediate node corresponding to the input node whose input value is equal to or smaller than a threshold is reduced.
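A minimal NumPy sketch, with assumed layer sizes and randomly initialized weights, of the forward pass in claim 1 above: the second-layer weights feeding an output node are zeroed whenever the matching input node's value is at or below the threshold (claims 2 to 4), so only the choices actually presented can receive selection probability. It illustrates the masking mechanism only, not the patented learning procedure.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_choice_probs(x, W1, b1, W2, b2, threshold=0.0):
    h = sigmoid(W1 @ x + b1)                 # intermediate layer
    mask = (x > threshold).astype(float)     # 1 only for choices actually offered
    W2_masked = W2 * mask[:, None]           # zero second weights of absent choices (claims 2-3)
    out = W2_masked @ h + b2
    out = out * mask                         # claim 4: output of absent choices is 0
    exp = np.exp(out) * mask
    return exp / exp.sum()                   # selection probabilities over offered choices

rng = np.random.default_rng(0)
n_choices, n_hidden = 4, 3                   # assumed sizes for the example
W1 = rng.normal(size=(n_hidden, n_choices))
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_choices, n_hidden))
b2 = np.zeros(n_choices)

x = np.array([1.0, 1.0, 0.0, 1.0])           # choices 0, 1 and 3 are offered; choice 2 is not
print(predict_choice_probs(x, W1, b1, W2, b2))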
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: To represent selection behavior of a cognitively-biased consumer as a learnable model having high prediction accuracy, there is provided a processing apparatus including a parameter storing unit configured to store first weight values set among nodes between an input layer and an intermediate layer and second weight values set among nodes between the intermediate layer and an output layer, an acquiring unit configured to acquire a plurality of input values to a plurality of input nodes, and a calculating unit configured to calculate a plurality of output values from a plurality of output nodes corresponding to the plurality of input values using a prediction model in which the influence of the second weight value set between the output node and the intermediate node corresponding to the input node whose input value is equal to or smaller than a threshold is reduced.
G06N308
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: To represent selection behavior of a cognitively-biased consumer as a learnable model having high prediction accuracy, there is provided a processing apparatus including a parameter storing unit configured to store first weight values set among nodes between an input layer and an intermediate layer and second weight values set among nodes between the intermediate layer and an output layer, an acquiring unit configured to acquire a plurality of input values to a plurality of input nodes, and a calculating unit configured to calculate a plurality of output values from a plurality of output nodes corresponding to the plurality of input values using a prediction model in which the influence of the second weight value set between the output node and the intermediate node corresponding to the input node whose input value is equal to or smaller than a threshold is reduced.
Embodiments of techniques and systems for performance of predicted actions are described. In embodiments, a predicted action performance engine (“PAE”) may receive one or more probabilities of potential actions that may be performed on a computing device. The PAE may also receive a system context for the computing device describing available resources on the computing device, workload, etc. Based on these probabilities and the system context, the PAE may determine one or more predicted actions and/or resource utilizations which are likely to occur and which may be performed ahead of time. The PAE may then facilitate performance of these actions and/or resource utilizations. Other embodiments may be described and claimed.
Please help me write a proper abstract based on the patent claims. CLAIM: 1-30. (canceled) 31. One or more non-transitory computer-readable media comprising instructions, which when executed by a computer device, cause the computer device to: identify a current system context of the computer device, wherein the current system context comprises information about one or more applications executing on the computer device; determine probabilities for one or more potential actions for the computer device based on an indication of a plurality of states and transitions between individual states of the plurality of states; select a predicted action to perform based on the current system context and the determined probabilities; and pre-fetch executable code for the predicted action prior to receipt of a command to obtain the executable code for the predicted action. 32. The one or more computer-readable media of claim 31, wherein the current system context further comprises a current state of the computer device, a resource availability of the computer device, a logical environment of the computer device, a physical location of the computer device, and environmental information for the computer device. 33. The one or more computer-readable media of claim 31, wherein a flow structure is to indicate the plurality of states and transitions between individual states, and wherein the flow structure comprises an indication of potential actions that are ordered by probability and distance in time to performance from a current state of the computer device, and wherein to determine the probabilities, execution of the instructions by the computer device, is to cause the computer device to process the flow structure. 34. The one or more computer-readable media of claim 31, wherein execution of the instructions by the computer device, is to cause the computer device to: select one or more support actions and/or resource utilizations for the predicted action to support execution of the executable code to perform the predicted action. 35. The one or more computer-readable media of claim 34, wherein to select the one or more actions to support execution, execution of the instructions by the computer device, is to cause the computer device to: identify the one or more support actions and/or resource utilizations from among capabilities indicated by the current system context. 36. The one or more computer-readable media of claim 31, wherein to pre-fetch the executable code, execution of the instructions by the computer device, is to cause the computer device to load cache data associated with the selected predicted action from one or more resources into cache memory, or access data associated with the selected predicted action over a network. 37. The one or more computer-readable media of claim 31, wherein to pre-fetch the executable code, execution of the instructions by the computer device, is to cause the computer device to: obtain the executable code for an application implemented by the computer device; or obtain the executable code from another computer device separate from the computer device. 38. The one or more computer-readable media of claim 31, wherein the probabilities are based on one or more tags associated with individual potential actions of the one or more potential actions. 39. 
A computer system comprising: one or more processors coupled with one or more memory devices, wherein the one or more processors are to execute instructions to: identify a current system context of the computer device, wherein the current system context comprises information about one or more applications executing on the computer device; determine probabilities for one or more potential actions for the computer device based on an indication of a plurality of states and transitions between individual states of the plurality of states; select a predicted action to perform based on the current system context and the determined probabilities; and pre-fetch executable code for the predicted action prior to receipt of a command to obtain the executable code for the predicted action. 40. The computer system of claim 39, further comprising: one or more communications interfaces, and wherein the current system context further comprises: a current state of the computer system including one or more applications executing on the computer system, power consumed by of the computer system, a resource availability of the one or more memory devices, or a current workload of the one or more processors, a logical environment of the computer device including network connectivity of the one or more communications interfaces or data received by the one or more communications interfaces over a network, a physical location of the computer device, and environmental information for the computer device including a temperature of the computer system. 41. The computer system of claim 39, wherein a flow structure is to indicate the plurality of states and transitions between individual states, and wherein the flow structure comprises an indication of potential actions that are ordered by probability and distance in time to performance from a current state of the computer device, and wherein to determine the probabilities, the one or more processors are to execute the instructions to process the flow structure. 42. The computer system of claim 39, wherein the one or more processors are to execute the instructions to: select one or more support actions and/or resource utilizations for the predicted action to support execution of the executable code to perform the predicted action. 43. The computer system of claim 42, wherein to select the one or more actions to support execution, the one or more processors are to execute the instructions to: identify the one or more support actions and/or resource utilizations from among capabilities indicated by the current system context. 44. The computer system of claim 39, wherein to pre-fetch the executable code, the one or more processors are to execute the instructions to load cache data associated with the selected predicted action from one or more resources into cache memory, or access data associated with the selected predicted action over a network. 45. The computer system of claim 39, wherein to pre-fetch the executable code, the one or more processors are to execute the instructions to: obtain the executable code for an application implemented by the computer device; or obtain the executable code from another computer device separate from the computer device. 46. 
An apparatus for predicting activities of a computer device, the apparatus comprising: a probabilities engine to be operated by one or more computer processors to: obtain context information from one or more applications executing on the computer device; identify a current action currently being performed by the computer device; and determine probabilities for one or more potential actions for the computer device based on the context information and the current action; and a predicted action engine to be operated by the one or more computer processors to: identify a current system context of the computer device; select a predicted action to perform based on the current system context and the determined probabilities; and pre-fetch executable code for the predicted action prior to receipt of a command to obtain the executable code for the action. 47. The apparatus of claim 46, wherein the current system context comprises a current state of the computer device, a resource availability of the computer device, a logical environment of the computer device, a physical location of the computer device, and environmental information for the computer device. 48. The apparatus of claim 46, wherein a flow structure is to indicate the plurality of states and transitions between individual states, and wherein the flow structure comprises an indication of potential actions that are ordered by probability and distance in time to performance from a current state of the computer device, and wherein to determine the probabilities, the probabilities engine is to process the flow structure. 49. The apparatus of claim 46, wherein the predicted action engine is to: identify one or more support actions and/or resource utilizations from among capabilities indicated by the current system context; and select the one or more support actions and/or resource utilizations for the predicted action to support execution of the executable code for the predicted action. 50. The apparatus of claim 46, wherein to pre-fetch the executable code, the predicted action engine is to load cache data associated with the selected predicted action from one or more resources into cache memory, access data associated with the selected predicted action over a network, obtain the executable code for an application implemented by the computer device; or obtain the executable code from another computer device separate from the computer device. 51. A method for predicting activities of a first computer device, the method comprising identifying, by a second computer device, a current system context of the first computer device, wherein the current system context comprises information about one or more applications executing on the first computer device; determining, by the second computer device, probabilities for one or more potential actions for the first computer device based on an indication of a plurality of states and transitions between individual states of the plurality of states; selecting, by the second computer device, a predicted action to perform based on the current system context and the determined probabilities; and pre-fetching, by the second computer device, executable code for the predicted action prior to receipt of a command to obtain the executable code for the predicted action, wherein the pre-fetching comprises providing, by the second computer device, the executable code to the first computer device. 52. 
The method of claim 51, wherein the current system context further comprises a current state of the first computer device, a resource availability of the first computer device, a logical environment of the first computer device, a physical location of the first computer device, and environmental information for the first computer device. 53. The method of claim 51, wherein pre-fetching the executable code comprises one or more of: loading, by the second computer device, cache data associated with the selected predicted action from one or more resources into cache memory; accessing and obtaining, by the second computer device, data over a network for the first computer device; and sending, by the second computer device to the first computer device, data over the network or another network. 54. The method of claim 51, wherein pre-fetching the executable code comprises: obtaining, by the second computer device, the executable code from an application implemented by the second computer device or the first computer device; or obtaining, by the second computer device, the executable code from a third computer device separate from the first computer device. 55. The method of claim 54, wherein the first and second computer devices are implemented as a same computer device, or the first and second computer devices are implemented as different computer devices.
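As a rough, hypothetical sketch of the select-and-pre-fetch behavior in claims 31 and 39 above: the most probable potential action that the current system context can support is chosen, and its executable code is fetched before any command arrives. The context fields, the capability check, and the fetch_code helper are assumptions made for the example, not elements of the patent.

code_cache = {}

def fetch_code(action):
    # Stand-in for loading executable code from local storage or over a network.
    return f"<bytecode for {action}>"

def select_and_prefetch(potential_actions, context):
    """potential_actions: list of (action_name, probability, required_memory_mb)."""
    supported = [
        (name, prob) for name, prob, need_mb in potential_actions
        if need_mb <= context["free_memory_mb"]           # assumed capability check
    ]
    if not supported:
        return None
    action, _ = max(supported, key=lambda item: item[1])  # highest-probability action
    code_cache[action] = fetch_code(action)               # pre-fetch before any command
    return action

context = {"free_memory_mb": 256, "running_apps": ["mail"], "on_battery": False}
actions = [("open_browser", 0.7, 128), ("render_video", 0.2, 512)]
print(select_and_prefetch(actions, context), list(code_cache))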
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: Embodiments of techniques and systems for performance of predicted actions are described. In embodiments, a predicted action performance engine (“PAE”) may receive one or more probabilities of potential actions that may be performed on a computing device. The PAE may also receive a system context for the computing device describing available resources on the computing device, workload, etc. Based on these probabilities and the system context, the PAE may determine one or more predicted actions and/or resource utilizations which are likely to occur and which may be performed ahead of time. The PAE may then facilitate performance of these actions and/or resource utilizations. Other embodiments may be described and claimed.
G06N7005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Embodiments of techniques and systems for performance of predicted actions are described. In embodiments, a predicted action performance engine (“PAE”) may receive one or more probabilities of potential actions that may be performed on a computing device. The PAE may also receive a system context for the computing device describing available resources on the computing device, workload, etc. Based on these probabilities and the system context, the PAE may determine one or more predicted actions and/or resource utilizations which are likely to occur and which may be performed ahead of time. The PAE may then facilitate performance of these actions and/or resource utilizations. Other embodiments may be described and claimed.
There is provided a system and method for training and utilizing a boundary graph machine learning algorithm. The system includes a processor configured to receive a plurality of entry nodes, each of the plurality of entry nodes including an entry node input and an entry node output, add each of the plurality of entry nodes to a graph using the entry node input and the entry node output, receive a plurality of training nodes, each of the plurality of training nodes including a training node input and a training node output, and add each of the plurality of training nodes to the graph when the training node input of each of the plurality of training nodes is similar to the training node input of a closest node and the training node output of each of the plurality of training nodes is different than the training node output of the closest node.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system, the system comprising: a processor configured to execute a machine learning algorithm; and a memory configured to store the machine learning algorithm, the machine learning algorithm including a graph with nodes, wherein each of the nodes is associated with an input data and an output data, and pairs of the nodes are connected by a boundary when the input data associated with a first node from the nodes is similar to the input data associated with a second node from the nodes and the output data associated with the first node from the nodes is different than the output data associated with the second node from the nodes. 2. The system of claim 1, wherein the processor is further configured to: receive a test input data; measure a distance between the test input data and the input data associated with each of a plurality of entry nodes from the nodes; select a first entry node from the plurality of entry nodes, the first entry node corresponding to one of the plurality of entry nodes with a first smallest distance value; traverse through the graph starting with the first entry node and ending at a first end node from the nodes; and compute a first estimate based on the output data associated with the first end node. 3. The system of claim 2, wherein the processor is further configured to: select a second entry node from the plurality of entry nodes, the second entry node corresponding to a second of the plurality of entry nodes with a second smallest distance value; traverse through the graph starting with the second entry node and ending at a second end node from the nodes; compute a second estimate based on the output data associated with the second end node; and update the first estimate based on the second estimate. 4. The system of claim 2, wherein the processor is further configured to: receive a test output data; and add a new node to the nodes when the test output data does not match the first estimate, wherein the input data of the new node corresponds to the test input data and the output data of the new node corresponds to the test output data. 5. The system of claim 2, wherein the machine learning algorithm further includes a second graph with second nodes, wherein each of the second nodes is associated with a second input data and a second output data, and wherein the processor is further configured to: measure a second distance between the test input data and the second input data associated with each of a plurality of second entry nodes from the second nodes; select a second entry node from the plurality of second entry nodes, the second entry node corresponding to one of the plurality of second entry nodes closest to the test input data; traverse through the second graph starting with the second entry node and ending at a second end node from the second nodes; compute a second estimate based on the second output data associated with the second end node; and update the first estimate with the second estimate. 6. The system of claim 1, wherein the graph corresponds to a network. 7. 
A system, the system comprising: a memory for storing a machine learning algorithm, the machine learning algorithm including a graph; and a processor configured to: receive a first entry node, the first entry node including a first input data and a first output data; add the first entry node to the graph using the first input data and the first output data; receive a first training node, the first training node including a second input data and a second output data; add the first training node to the graph when the second input data is similar to the first input data and the second output data is different than the first output data; and connect the first entry node to the first training node using a first boundary. 8. The system of claim 7, wherein the processor is further configured to: receive a second training node, the second training node including a third input data and a third output data, wherein a first distance between the second training node and the first training node is smaller than a second distance between the second training node and the first entry node; traverse through the graph starting from the first entry node and ending at the first training node; add the second training node to the graph when the third output data is different than the second output data; and connect the first training node to the second training node using a second boundary. 9. The system of claim 7, wherein the processor is further configured to: receive a test input data; traverse through the graph starting with the first entry node and ending at the first training node, the first training node having a distance closer to the test input data than the first entry node; and compute a first estimate based on the second output data of the first training node. 10. The system of claim 9, wherein the processor is further configured to: receive a test output data; and add a new node to the graph when the test output data is different than the first estimate. 11. The system of claim 7, wherein before receiving the first training node the processor is further configured to: receive a plurality of entry nodes, each of the plurality of entry nodes including entry input data and entry output data; and add each of the plurality of entry nodes to the graph using the entry input data and the entry output data of each of the plurality of entry nodes. 12. The system of claim 11, wherein the processor is further configured to: receive a plurality of training nodes, each of the plurality of training nodes including training input data and training output data; and add each of the plurality of training nodes to the graph when the training input data of each of the plurality of training nodes is similar to a closest input data for a locally closest node and the training output data of each of the plurality of training nodes is different than a closest output data for the locally closest node. 13. The system of claim 12, wherein each of the plurality of training nodes added to the graph is connected to the locally closest one of the plurality of training nodes using a connection boundary. 14. The system of claim 7, wherein the graph corresponds to a network. 15. 
A method, the method comprising: receiving a first entry node, the first entry node including a first input data and a first output data; adding the first entry node to a graph using the first input data and the first output data; receiving a first training node, the first training node including a second input data and a second output data; adding the first training node to the graph when the second input data is similar to the first input data and the second output data is different than the first output data; and connecting the first entry node to the first training node using a first boundary. 16. The method of claim 15, further comprising: receiving a second training node, the second training node including a third input data and a third output data, wherein a first distance between the second training node and the first training node is smaller than a second distance between the second training node and the first entry node; traversing through the graph starting from the first entry node and ending at the first training node; adding the second training node to the graph when the third output data is different than the second output data; and connecting the first training node to the second training node using a second boundary. 17. The method of claim 15, further comprising: receiving a test input data; traversing through the graph starting with the first entry node and ending at the first training node, the first training node having a distance closer to the test input data than the first entry node; computing a first estimate based on the second output data of the first training node; receiving a test output data; and adding a new node to the graph when the test output data is different than the first estimate. 18. The method of claim 15, wherein before receiving the first training node the method further comprises: receiving a plurality of entry nodes, each of the plurality of entry nodes including entry input data and entry output data; and adding each of the plurality of entry nodes to the graph using the entry input data and the entry output data of each of the plurality of entry nodes. 19. The method of claim 18, the method further comprising: receiving a plurality of training nodes, each of the plurality of training nodes including training input data and training output data; and adding each of the plurality of training nodes to the graph when the training input data of each of the plurality of training nodes is similar to a closest input data for a locally closest node and the training output data of each of the plurality of training nodes is different than a closest output data for the locally closest node. 20. The method of claim 15, wherein the graph corresponds to a network.
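One way to picture the boundary-graph construction and traversal in the claims above is the following sketch; the Euclidean similarity test, the SIMILAR cutoff, and the greedy walk toward the test input are assumptions chosen for illustration rather than the patented algorithm.

import math

SIMILAR = 1.0  # assumed distance under which two inputs count as "similar"

class BoundaryGraph:
    def __init__(self):
        self.nodes = []    # each node: (input_vector, output_label)
        self.edges = {}    # node index -> set of neighbouring node indices
        self.entries = []  # indices of entry nodes, used to start a traversal

    def _dist(self, a, b):
        return math.dist(a, b)

    def add_entry(self, x, y):
        self.nodes.append((x, y))
        idx = len(self.nodes) - 1
        self.edges[idx] = set()
        self.entries.append(idx)

    def add_training(self, x, y):
        closest = min(range(len(self.nodes)), key=lambda i: self._dist(x, self.nodes[i][0]))
        cx, cy = self.nodes[closest]
        if self._dist(x, cx) <= SIMILAR and y != cy:      # similar input, different output
            self.nodes.append((x, y))
            new = len(self.nodes) - 1
            self.edges[new] = {closest}
            self.edges[closest].add(new)                  # boundary connecting the pair

    def estimate(self, x):
        current = min(self.entries, key=lambda i: self._dist(x, self.nodes[i][0]))
        while True:                                       # greedy walk toward the test input
            nearer = [n for n in self.edges[current]
                      if self._dist(x, self.nodes[n][0]) < self._dist(x, self.nodes[current][0])]
            if not nearer:
                return self.nodes[current][1]
            current = min(nearer, key=lambda n: self._dist(x, self.nodes[n][0]))

g = BoundaryGraph()
g.add_entry((0.0, 0.0), "A")
g.add_training((0.6, 0.0), "B")   # input is similar to the entry node's but the output differs
print(g.estimate((0.9, 0.1)))     # the walk ends at the boundary node labelled "B"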
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: There is provided a system and method for training and utilizing a boundary graph machine learning algorithm. The system includes a processor configured to receive a plurality of entry nodes, each of the plurality of entry nodes including an entry node input and an entry node output, add each of the plurality of entry nodes to a graph using the entry node input and the entry node output, receive a plurality of training nodes, each of the plurality of training nodes including a training node input and a training node output, and add each of the plurality of training nodes to the graph when the training node input of each of the plurality of training nodes is similar to the training node input of a closest node and the training node output of each of the plurality of training nodes is different than the training node output of the closest node.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: There is provided a system and method for training and utilizing a boundary graph machine learning algorithm. The system includes a processor configured to receive a plurality of entry nodes, each of the plurality of entry nodes including an entry node input and an entry node output, add each of the plurality of entry nodes to a graph using the entry node input and the entry node output, receive a plurality of training nodes, each of the plurality of training nodes including a training node input and a training node output, and add each of the plurality of training nodes to the graph when the training node input of each of the plurality of training nodes is similar to the training node input of a closest node and the training node output of each of the plurality of training nodes is different than the training node output of the closest node.
The present disclosure provides a planning method for learning applied to a planning system for learning, and the planning system for learning includes a storage, a monitor and a processor. The planning method for learning includes the following steps: recording learning information of a plurality of subjects and storing the learning information in the storage via the monitor; calculating weighting parameters of the subjects according to the learning information and calculating weighting scores of the subjects according to the weighting parameters via the processor; and performing a fuzzy process to the weighting scores via the processor to transform the weighting scores into score levels of the subjects, so as to establish a learning sequence of the subjects to establish a learning plan.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A planning method for learning applied to a planning system for learning, wherein the planning system for learning comprises a storage, a monitor, and a processor, and the planning method for learning comprises: recording learning information of a plurality of subjects and storing the learning information in the storage via the monitor; calculating weighting parameters of the subjects according to the learning information and calculating weighting scores of the subjects according to the weighting parameters via the processor; and performing a fuzzy process to the weighting scores via the processor to transform the weighting scores into score levels of the subjects so as to establish a learning sequence among the subjects to establish a learning plan. 2. The planning method for learning of claim 1, wherein calculating the weighting parameters of the subjects according to the learning information via the processor comprises: calculating the weighting parameters according to the number of times of learning and learning time of the learning information via the processor. 3. The planning method for learning of claim 1, wherein performing the fuzzy process to the weighting scores via the processor to transform the weighting scores into the score levels of the subjects so as to establish the learning sequence among the subjects to establish the learning plan comprises: transforming the weighting score corresponding to a first subject into a first score level via the processor when the weighting score corresponding to the first subject of the subjects is lower than or equal to a first threshold value; and transforming the weighting score corresponding to a second subject into a second score level via the processor when the weighting score corresponding to the second subject of the subjects is higher than the first threshold value. 4. The planning method for learning of claim 3, wherein performing the fuzzy process to the weighting scores via the processor to transform the weighting scores into the score levels of the subjects so as to establish the learning sequence among the subjects to establish the learning plan comprises: establishing a forward learning sequence from the second subject to the first subject via the processor after the processor transforms the weighting score corresponding to the first subject into the first score level and transforms the weighting score corresponding to the second subject into the second score level. 5. The planning method for learning of claim 1, further comprising: updating the learning information of the subjects immediately and storing the updated learning information in the storage via the monitor; and re-establishing the learning plan according to the updated learning information and the updated learning sequence via the processor. 6. A planning system for learning comprising: a storage; a monitor, configured to record learning information of a plurality of subjects and store the learning information in the storage; and a processor, configured to calculate weighting parameters of the subjects according to the learning information and calculate weighting scores of the subjects according to the weighting parameters, wherein the processor performs a fuzzy process to the weighting scores to transform the weighting scores into score levels of the subjects so as to establish a learning sequence of the subjects to establish a learning plan. 7. 
The planning system for learning of claim 6, wherein the learning information of the subjects comprises the number of times of learning and learning time of the learning information, and the processor is configured to calculate the weighting parameters according to the number of times of learning and the learning time of the learning information. 8. The planning system for learning of claim 6, wherein when the weighting score corresponding to a first subject of the subjects is lower than or equal to a first threshold value, the processor transforms the weighting score corresponding to the first subject into a first score level; when the weighting score corresponding to a second subject of the subjects is higher than the first threshold value, the processor transforms the weighting score corresponding to the second subject into a second score level. 9. The planning system for learning of claim 8, wherein after the processor transforms the weighting score corresponding to the first subject into the first score level and converts the weighting score corresponding to the second subject into the second score level, the processor establishes a forward learning sequence from the second subject to the first subject. 10. The planning system for learning of claim 6, wherein the monitor is configured to immediately update the learning information of the subjects, and store the updated learning information in the storage, and the processor re-establishes the learning plan according to the updated learning information and the updated learning sequence.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: The present disclosure provides a planning method for learning applied to a planning system for learning, and the planning system for learning includes a storage, a monitor and a processor. The planning method for learning includes the following steps: recording learning information of a plurality of subjects and storing the learning information in the storage via the monitor; calculating weighting parameters of the subjects according to the learning information and calculating weighting scores of the subjects according to the weighting parameters via the processor; and performing a fuzzy process to the weighting scores via the processor to transform the weighting scores into score levels of the subjects, so as to establish a learning sequence of the subjects to establish a learning plan.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: The present disclosure provides a planning method for learning applied to a planning system for learning, and the planning system for learning includes a storage, a monitor and a processor. The planning method for learning includes the following steps: recording learning information of a plurality of subjects and storing the learning information in the storage via the monitor; calculating weighting parameters of the subjects according to the learning information and calculating weighting scores of the subjects according to the weighting parameters via the processor; and performing a fuzzy process to the weighting scores via the processor to transform the weighting scores into score levels of the subjects, so as to establish a learning sequence of the subjects to establish a learning plan.
A pattern recognition system includes a learning unit, a likelihood calculation unit, a threshold calculation unit, and a determining unit. The learning unit learns, based on learned data of a first pattern, a model for determining whether recognition object data is the first pattern. The likelihood calculation unit calculates likelihood indicating how likely the recognition object data is the first pattern by using the model learned by the learning unit. The threshold calculation unit calculates a threshold to be compared with the likelihood to determine whether the recognition object data is the first pattern, based on first likelihood that is calculated with respect to learned data of the first pattern and second likelihood that is calculated with respect to learned data of a second pattern. The determining unit determines whether the recognition object data is the first pattern by using the threshold.
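For illustration, here is a minimal Python sketch of the threshold rule the claims describe: take the most frequent likelihood value (mode) on first-pattern training data and on second-pattern training data, and place the threshold between them, optionally biased by a sensitivity setting. The histogram binning, the 0.5 default, and the comparison direction are assumptions, not the patented implementation.

```python
import numpy as np

def mode_of(values, bins=50):
    # Value with the highest frequency among the likelihoods (the claim-3 "first/second value").
    counts, edges = np.histogram(np.asarray(values), bins=bins)
    i = int(np.argmax(counts))
    return 0.5 * (edges[i] + edges[i + 1])

def calculate_threshold(first_likelihoods, second_likelihoods, sensitivity=0.5):
    # Pick a point between the mode of likelihoods computed on first-pattern
    # (e.g. abnormal-sound) training data and the mode computed on second-pattern
    # (normal-sound) data. `sensitivity` in [0, 1] slides the threshold toward the
    # normal mode (more detections) or the abnormal mode (fewer); 0.5 is assumed.
    m_first = mode_of(first_likelihoods)
    m_second = mode_of(second_likelihoods)
    return m_second + sensitivity * (m_first - m_second)

def is_first_pattern(likelihood, threshold):
    # Assumes a higher likelihood means "more like the first pattern".
    return likelihood >= threshold
```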
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A pattern recognition system comprising: a learning unit to learn, based on learned data of a first pattern, a model for determining whether recognition object data is the first pattern; a likelihood calculation unit to calculate likelihood indicating how likely the recognition object data is the first pattern by using the model learned by the learning unit; a threshold calculation unit to calculate a threshold to be compared with the likelihood to determine whether the recognition object data is the first pattern, based on first likelihood that is calculated with respect to learned data of the first pattern and second likelihood that is calculated with respect to learned data of a second pattern; and a determining unit to determine whether the recognition object data is the first pattern by using the threshold. 2. The pattern recognition system according to claim 1, wherein the learning unit learns the model based on a plurality of pieces of learned data of the first pattern classified into any one of a plurality of categories, and the threshold calculation unit calculates the threshold for each of the categories. 3. The pattern recognition system according to claim 1, wherein the threshold calculation unit calculates the threshold having a value between a first value and a second value, the first value having highest frequency of a plurality of values of first likelihood calculated with respect to a plurality of pieces of learned data of the first pattern, and the second value having highest frequency of a plurality of values of second likelihood calculated with respect to a plurality of pieces of learned data of the second pattern. 4. The pattern recognition system according to claim 1, wherein the threshold calculation unit calculates the threshold having a value of an intersection of distribution of a plurality of values of first likelihood calculated with respect to a plurality of pieces of learned data of the first pattern and distribution of a plurality of values of second likelihood calculated with respect to a plurality of pieces of learned data of the second pattern. 5. The pattern recognition system according to claim 1, wherein the threshold calculation unit calculates the threshold having a specified value among values between a first value and a second value, the first value having highest frequency of a plurality of values of first likelihood calculated with respect to a plurality of pieces of learned data of the first pattern, and the second value having highest frequency of a plurality of values of second likelihood calculated with respect to a plurality of pieces of learned data of the second pattern. 6. The pattern recognition system according to claim 1, wherein the first pattern is a pattern of abnormal sound, the second pattern is a pattern of normal sound, and the threshold calculation unit calculates the threshold having a value determined in accordance with detection sensitivity specified as sensitivity in detecting the abnormal sound, among values between a first value and a second value, the first value having highest frequency of a plurality of values of first likelihood calculated with respect to a plurality of pieces of learned data of the first pattern, and the second value having highest frequency of a plurality of values of second likelihood calculated with respect to a plurality of pieces of learned data of the second pattern. 7. 
The pattern recognition system according to claim 1, wherein the first pattern is a pattern of abnormal sound, the second pattern is a pattern of normal sound, and the threshold calculation unit calculates the threshold having a value determined in accordance with a degree of danger specified as a degree of danger of the abnormal sound, among values between a first value and a second value, the first value having highest frequency of a plurality of values of first likelihood calculated with respect to a plurality of pieces of learned data of the first pattern, and the second value having highest frequency of a plurality of values of second likelihood calculated with respect to a plurality of pieces of learned data of the second pattern. 8. A computer program product comprising a non-transitory computer-readable medium including programmed instructions, the instructions causing a computer to function as: a learning unit to learn, based on learned data of a first pattern, a model for determining whether recognition object data is the first pattern; a likelihood calculation unit to calculate likelihood indicating how likely the recognition object data is the first pattern by using the model learned by the learning unit; a threshold calculation unit to calculate a threshold to be compared with the likelihood to determine whether the recognition object data is the first pattern, based on first likelihood that is calculated with respect to learned data of the first pattern and second likelihood that is calculated with respect to learned data of a second pattern; and a determining unit to determine whether the recognition object data is the first pattern by using the threshold. 9. The computer program product according to claim 8, wherein the learning unit learns the model based on a plurality of pieces of learned data of the first pattern classified into any one of a plurality of categories, and the threshold calculation unit calculates the threshold for each of the categories. 10. The computer program product according to claim 8, wherein the threshold calculation unit calculates the threshold having a value between a first value and a second value, the first value having highest frequency of a plurality of values of first likelihood calculated with respect to a plurality of pieces of learned data of the first pattern, and the second value having highest frequency of a plurality of values of second likelihood calculated with respect to a plurality of pieces of learned data of the second pattern. 11. The computer program product according to claim 8, wherein the threshold calculation unit calculates the threshold having a value of an intersection of distribution of a plurality of values of first likelihood calculated with respect to a plurality of pieces of learned data of the first pattern and distribution of a plurality of values of second likelihood calculated with respect to a plurality of pieces of learned data of the second pattern. 12. The computer program product according to claim 8, wherein the threshold calculation unit calculates the threshold having a specified value among values between a first value and a second value, the first value having highest frequency of a plurality of values of first likelihood calculated with respect to a plurality of pieces of learned data of the first pattern, and the second value having highest frequency of a plurality of values of second likelihood calculated with respect to a plurality of pieces of learned data of the second pattern. 13. 
The computer program product according to claim 8, wherein the first pattern is a pattern of abnormal sound, the second pattern is a pattern of normal sound, and the threshold calculation unit calculates the threshold having a value determined in accordance with detection sensitivity specified as sensitivity in detecting the abnormal sound, among values between a first value and a second value, the first value having highest frequency of a plurality of values of first likelihood calculated with respect to a plurality of pieces of learned data of the first pattern, and the second value having highest frequency of a plurality of values of second likelihood calculated with respect to a plurality of pieces of learned data of the second pattern. 14. The computer program product according to claim 8, wherein the first pattern is a pattern of abnormal sound, the second pattern is a pattern of normal sound, and the threshold calculation unit calculates the threshold having a value determined in accordance with a degree of danger specified as a degree of danger of the abnormal sound, among values between a first value and a second value, the first value having highest frequency of a plurality of values of first likelihood calculated with respect to a plurality of pieces of learned data of the first pattern, and the second value having highest frequency of a plurality of values of second likelihood calculated with respect to a plurality of pieces of learned data of the second pattern. 15. A pattern recognition method comprising: learning, based on learned data of a first pattern, a model for determining whether recognition object data is the first pattern; calculating likelihood indicating how likely the recognition object data is the first pattern by using the model learned in the learning; calculating a threshold to be compared with the likelihood to determine whether the recognition object data is the first pattern, based on first likelihood that is calculated with respect to learned data of the first pattern and second likelihood that is calculated with respect to learned data of a second pattern; and determining whether the recognition object data is the first pattern by using the threshold.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A pattern recognition system includes a learning unit, a likelihood calculation unit, a threshold calculation unit, and a determining unit. The learning unit learns, based on learned data of a first pattern, a model for determining whether recognition object data is the first pattern. The likelihood calculation unit calculates likelihood indicating how likely the recognition object data is the first pattern by using the model learned by the learning unit. The threshold calculation unit calculates a threshold to be compared with the likelihood to determine whether the recognition object data is the first pattern, based on first likelihood that is calculated with respect to learned data of the first pattern and second likelihood that is calculated with respect to learned data of a second pattern. The determining unit determines whether the recognition object data is the first pattern by using the threshold.
G06N5047
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A pattern recognition system includes a learning unit, a likelihood calculation unit, a threshold calculation unit, and a determining unit. The learning unit learns, based on learned data of a first pattern, a model for determining whether recognition object data is the first pattern. The likelihood calculation unit calculates likelihood indicating how likely the recognition object data is the first pattern by using the model learned by the learning unit. The threshold calculation unit calculates a threshold to be compared with the likelihood to determine whether the recognition object data is the first pattern, based on first likelihood that is calculated with respect to learned data of the first pattern and second likelihood that is calculated with respect to learned data of a second pattern. The determining unit determines whether the recognition object data is the first pattern by using the threshold.
The embodiments provide systems and methods for efficiently and accurately differentiating requests directed to uncacheable content from requests directed to cacheable content based on identifiers from the requests. The differentiation occurs without analysis or retrieval of the content being requested. Some embodiments hash identifiers of prior requests that resulted in uncacheable content being served in order to set indices within a bloom filter. The bloom filter then tracks prior uncacheable requests without storing each of the identifiers so that subsequent requests for uncacheable content can be easily identified based on a hash of the request identifier and set indices of the bloom filter. Some embodiments produce a predictive model identifying uncacheable content requests by tracking various characteristics found in identifiers of prior requests that resulted in uncacheable content being served. Subsequent requests with identifiers having similar characteristics to those of the predictive model can then be differentiated.
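A simplified Python sketch of the Bloom-filter path described here follows, assuming SHA-256-based hashing, a fixed bit-array size, and a two-pool server layout; none of these specifics come from the patent.

```python
import hashlib
import itertools

class BloomFilter:
    """Minimal Bloom filter remembering identifiers (e.g. URLs) of requests that
    previously produced uncacheable responses, without storing the URLs themselves."""

    def __init__(self, size: int = 1 << 20, num_hashes: int = 3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = bytearray(size // 8 + 1)

    def _indices(self, identifier: str):
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{identifier}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, identifier: str) -> None:
        for i in self._indices(identifier):
            self.bits[i // 8] |= 1 << (i % 8)

    def __contains__(self, identifier: str) -> bool:
        return all(self.bits[i // 8] & (1 << (i % 8)) for i in self._indices(identifier))

uncacheable_seen = BloomFilter()
round_robin = itertools.count()

def route(url: str, cache_servers: list, dynamic_servers: list) -> str:
    # Requests whose URL hashed into the filter before (a prior response was
    # uncacheable) are spread round-robin; everything else is routed with a
    # persistent hash so repeats of the same cacheable URL hit the same server.
    if url in uncacheable_seen:
        return dynamic_servers[next(round_robin) % len(dynamic_servers)]
    return cache_servers[hash(url) % len(cache_servers)]

def record_response(url: str, was_cacheable: bool) -> None:
    # After serving, remember uncacheable URLs so later requests can be
    # differentiated without inspecting or fetching the content.
    if not was_cacheable:
        uncacheable_seen.add(url)
```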
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method comprising: receiving a plurality of requests at a request distribution server; extracting an identifier from each request of the plurality of requests at the request distribution server; detecting based on the identifier of each request, a first set of the plurality of requests directed to cacheable content and a different second set of the plurality of requests directed to uncacheable content, wherein said detecting is performed without inspecting or obtaining requested content, wherein cacheable content comprises common content that is presented to a plurality of different users, and wherein uncacheable content comprises content that is unique in some form for each user of a plurality of different users; distributing the first set of requests from the request distribution device across a set of servers according to a first request distribution scheme; and distributing the second set of requests from the request distribution device across the set of servers according to a different second request distribution scheme. 2. The method of claim 1, wherein distributing the first set of requests comprises performing a persistent distribution with requests directed to same content being passed to a same server of the set of servers, and wherein distributing the second set of requests comprises performing a round-robin distribution of the second set of requests across the set of servers. 3. The method of claim 1 further comprising producing a bloom filter index from hashing the identifier of each request from the plurality of requests. 4. The method of claim 3, wherein said detecting comprises identifying a particular request from the plurality of requests as an uncacheable content request based on the bloom filter index produced from hashing the identifier of the particular request mapping to a set index within a bloom filter and identifying the particular request as a cacheable content request based on the bloom filter index mapping to an index within the bloom filter that is not set. 5. The method of claim 1 further comprising identifying a first common characteristic within at least a subset of the first set of requests and a second common characteristic within at least a subset of the second set of requests. 6. The method of claim 5, wherein said detecting comprises identifying a particular request from the plurality of requests as a cacheable content request based on the particular request identifier comprising the first common characteristic and identifying the particular request as an uncacheable content request based on the particular request identifier comprising the second common characteristic. 7. The method of claim 1, wherein distributing the first set of requests comprises restricting distribution of the first set of requests across a first subset of the set of servers based on the first request distribution scheme creating a first routing domain within the set of servers, and wherein distributing the second set of requests comprises restricting distribution of the second set of requests across a second subset of the set of servers based on the second request distribution scheme creating a different second routing domain within the set of servers. 8. 
The method of claim 7, wherein the first subset of servers is optimized for cacheable content as a result of each server of the first subset of servers comprising a large storage cache, and wherein the second subset of servers is optimized for uncacheable content as a result of each server of the second subset of servers comprising a small storage cache. 9. A method comprising: tracking each request of a first set of requests to a different index of a first plurality of indices at a request distribution server, wherein each request of the first set of requests comprises an identifier requesting cacheable content, wherein cacheable content is common content that is served to two or more users; tracking each request of a different second set of requests to a different index of a second plurality of indices at a request distribution server, wherein each request of the second set of requests comprises an identifier requesting uncacheable content, wherein uncacheable content is content that is customized for each requesting user; receiving a request at the request distribution server; selecting a particular index from each of the first and second plurality of indices from hashing an identifier of said request; routing the request from the request distribution server to a first server from a plurality of content delivery servers in response to the particular index having a set value in the first plurality of indices; and routing the request from the request distribution server to a different second server from the plurality of content delivery servers in response to the particular index having a set value in the second plurality of indices. 10. The method of claim 9, wherein routing the request to the first server comprises selecting the first server according to a first distribution scheme, and wherein routing the request to the second server comprises selecting the second server according to a different second distribution scheme. 11. The method of claim 9 further comprising monitoring content passed in response to said request from either the first or second server. 12. The method of claim 11 further comprising setting the particular index in the first plurality of indices based on said monitoring identifying cacheable content being served in response to said request, and setting the particular index in the second plurality of indices based on said monitoring identifying uncacheable content being served in response to said request. 13. The method of claim 9, wherein the first plurality of indices form a first bloom filter tracking cacheable content requests previously received by the request distribution server and the second plurality of indices form a second bloom filter tracking uncacheable content requests previously received by the request distribution server. 14. The method of claim 9 further comprising selecting from the plurality of content delivery servers, a cacheable content optimized server in response to the particular index having a set value in the first plurality of indices, wherein the cacheable content optimized server comprises a large storage cache, and wherein the cacheable content optimized server is the first server. 15. 
The method of claim 14 further comprising selecting from the plurality of content delivery servers, an uncacheable content optimized server in response to the particular index having a set value in the second plurality of indices, wherein the uncacheable content optimized server comprises a server with low overall load or a small storage cache, and wherein the uncacheable content optimized server is the second server. 16. A method comprising: monitoring at a request distribution server, different cacheable content returned from a plurality of content delivery servers in response to a first set of requests and different uncacheable content returned from the plurality of content delivery servers in response to a second set of requests; producing at a request distribution server, a first predictive model based on a first set of common characteristics found in Uniform Resource Locators (URLs) of the first set of requests and a second predictive model based on a different second set of common characteristics found in URLs of the second set of requests, wherein the first and second sets of common characteristics comprise one or more of a common URL domain, URL path, URL filename, URL file extension, and URL query string parameter found in two or more requests; receiving a particular request at the request distribution server; predictively routing the particular request as a cacheable content request based on the particular request comprising a URL having at least one characteristic in common with the first set of common characteristics of the first predictive model, wherein predictively routing the particular request as a cacheable content request comprises routing the request from the request distribution server across the plurality of content delivery servers according to a first distribution scheme; and predictively routing the particular request as an uncacheable content request based on the particular request comprising a URL having at least one characteristic in common with the second set of common characteristics of the second predictive model, wherein predictively routing the particular request as an uncacheable content request comprises routing the request from the request distribution server across the plurality of content delivery servers according to a different second distribution scheme. 17. The method of claim 16, wherein producing the first predictive model comprises tracking a frequency with which a plurality of characteristics appear in the URLs of the first set of requests and wherein producing the second predictive model comprises tracking a frequency with which the plurality of characteristics appear in the URLs of the second set of requests. 18. The method of claim 17, wherein predictively routing the particular request as a cacheable content request comprises determining that the particular request URL comprises at least one characteristic appearing with a higher probability in identifiers of the first set of requests than in identifiers of the second set of requests. 19. The method of claim 16 further comprising extracting a URL and temporarily storing the URL from each request of the first and second sets of requests prior to said monitoring. 20. The method of claim 19, wherein said producing the first predictive model comprises retrieving the URLs temporarily stored for the first set of requests and analyzing the URLs of the first set of requests for said first set of common characteristics.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: The embodiments provide systems and methods for efficiently and accurately differentiating requests directed to uncacheable content from requests directed to cacheable content based on identifiers from the requests. The differentiation occurs without analysis or retrieval of the content being requested. Some embodiments hash identifiers of prior requests that resulted in uncacheable content being served in order to set indices within a bloom filter. The bloom filter then tracks prior uncacheable requests without storing each of the identifiers so that subsequent requests for uncacheable content can be easily identified based on a hash of the request identifier and set indices of the bloom filter. Some embodiments produce a predictive model identifying uncacheable content requests by tracking various characteristics found in identifiers of prior requests that resulted in uncacheable content being served. Subsequent requests with identifiers having similar characteristics to those of the predictive model can then be differentiated.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: The embodiments provide systems and methods for efficiently and accurately differentiating requests directed to uncacheable content from requests directed to cacheable content based on identifiers from the requests. The differentiation occurs without analysis or retrieval of the content being requested. Some embodiments hash identifiers of prior requests that resulted in uncacheable content being served in order to set indices within a bloom filter. The bloom filter then tracks prior uncacheable requests without storing each of the identifiers so that subsequent requests for uncacheable content can be easily identified based on a hash of the request identifier and set indices of the bloom filter. Some embodiments produce a predictive model identifying uncacheable content requests by tracking various characteristics found in identifiers of prior requests that resulted in uncacheable content being served. Subsequent requests with identifiers having similar characteristics to those of the predictive model can then be differentiated.
Embodiments of a gas turbine engine lifecycle decision assistant apply a probabilistic-based process founded on a Bayesian mathematical framework to intelligently combine analytical models, expert judgment, and data during the development and field management of gas turbine engines. The process integrates physics-based and high-fidelity models with data and expert judgment that evolves over the course of the gas turbine engine lifecycle. Among other things, embodiments of the gas turbine engine lifecycle decision assistant can improve future predictive models and understanding while at the same time reducing risk and uncertainty in the service management of existing products.
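As a toy illustration of the Bayesian update this abstract describes (not the actual decision assistant), the snippet below links one design-phase variable to one field-phase variable, computes their joint distribution, and back-propagates a piece of field evidence; the variable names and probabilities are invented.

```python
import numpy as np

# Toy two-variable network connecting a pre-production variable to an in-service
# variable through a common dependency (all names and numbers are illustrative):
#   Q = material quality (good / marginal), prior from supplier data
#   F = in-service crack found (yes / no), conditional on Q from a physics model
p_q = np.array([0.9, 0.1])               # P(Q = good), P(Q = marginal)
p_f_given_q = np.array([[0.02, 0.98],    # Q = good:     P(crack), P(no crack)
                        [0.20, 0.80]])   # Q = marginal: P(crack), P(no crack)

# Joint distribution over the whole (tiny) network: joint[q, f] = P(Q=q) * P(F=f | Q=q)
joint = p_q[:, None] * p_f_given_q
print("P(crack in service) =", joint[:, 0].sum())        # forward prediction: 0.038

# New evidence from the field: a crack was found. Back-propagate it to update the
# belief about the design-phase variable via Bayes' rule on the same joint.
posterior_q = joint[:, 0] / joint[:, 0].sum()
print("P(Q | crack found) =", posterior_q)               # roughly [0.47, 0.53]
```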
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A gas turbine engine lifecycle decision assistance system for understanding and quantifying uncertainties during the lifecycle of one or more gas turbine engine components, the system comprising, embodied in one or more machine-accessible storage media: a bidirectional probabilistic analysis subsystem comprising a probabilistic model of conditional dependencies between a plurality of random variables associated with a plurality of different sources of uncertainty in the gas turbine engine component lifecycle, the probabilistic model arranged to connect at least two of the plurality of different sources of uncertainty by a common random variable, the bidirectional probabilistic analysis subsystem to: compute a joint probability distribution for the probabilistic model; periodically receive new evidence from one or more of the different sources of uncertainty over the course of the component lifecycle; and in response to the new evidence, re-compute the joint probability distribution. 2. The system of claim 1, wherein the bidirectional probabilistic analysis subsystem computes the joint probability distribution in response to a request for a quantification of uncertainty relating to an aspect of the gas turbine engine component lifecycle. 3. The system of claim 2, wherein the bidirectional probabilistic analysis subsystem receives the request from a component design subsystem and/or a field management subsystem and the bidirectional probabilistic analysis subsystem communicates the re-computed joint probability distribution to the component design subsystem and/or the field management subsystem. 4. The system of claim 1, wherein the bidirectional probabilistic analysis subsystem connects the plurality of different sources of uncertainty in the gas turbine engine component lifecycle to the probabilistic model, and wherein at least one of the different sources of uncertainty relates to a pre-production certification phase of the component lifecycle and at least one of the different sources of uncertainty relates to a post-production certification phase of the component lifecycle. 5. The system of claim 1, wherein the bidirectional probabilistic analysis subsystem connects the plurality of different sources of uncertainty in the gas turbine engine component lifecycle to the probabilistic model, and wherein at least one of the different sources of uncertainty relates to a pre-production certification phase of the component lifecycle including one or more of a design phase, a manufacture phase, and a test phase. 6. The system of claim 1, wherein the bidirectional probabilistic analysis subsystem connects the plurality of different sources of uncertainty in the gas turbine engine component lifecycle to the probabilistic model, and wherein at least one of the different sources of uncertainty relates to a post-production certification phase of the component lifecycle including one or more of a use phase, and a service phase. 7. The system of claim 1, wherein the bidirectional probabilistic analysis subsystem connects the plurality of different sources of uncertainty in the gas turbine engine component lifecycle to the probabilistic model, and wherein the plurality of different sources of uncertainty include at least one analytical model, at least one source of empirical data, and at least one source of expert knowledge. 8. 
A method for quantifying uncertainty during different phases of the lifecycle of a manufactured component, the method comprising, with at least one computing device: identifying at least two sources of uncertainty that are associated with different phases of the manufactured component lifecycle; connecting the at least two sources of uncertainty by a common random variable in a Bayesian network; and computing a joint probability distribution for the Bayesian network using the common random variable and at least one random variable associated with each of the at least two sources of uncertainty. 9. The method of claim 8, comprising receiving new evidence relating to at least one of the random variables and propagating the new evidence through the Bayesian network. 10. The method of claim 9, comprising forward-propagating the new evidence through the Bayesian network if the new evidence relates to an early phase of the gas turbine engine lifecycle. 11. The method of claim 10, comprising back-propagating the new evidence through the probabilistic model if the new evidence relates to a later phase of the gas turbine engine lifecycle. 12. The method of claim 8, comprising computing the joint probability distribution in response to a request for a quantification of uncertainty relating to an aspect of the manufactured component lifecycle. 13. The method of claim 12, comprising receiving the request from a component design subsystem and/or a field management subsystem and communicating the joint probability distribution to the component design subsystem and/or the field management subsystem. 14. The method of claim 8, comprising connecting a source of uncertainty relating to a pre-production certification phase of the component lifecycle to the Bayesian network. 15. The method of claim 8, comprising connecting a source of uncertainty relating to a post-production certification phase of the component lifecycle to the Bayesian network. 16. The method of claim 8, comprising connecting at least one analytical model, at least one source of empirical data, and at least one source of expert knowledge to the Bayesian network. 17. The method of claim 16, comprising propagating output of the at least one analytical model, output of the at least one source of empirical data, and output of the at least one source of expert knowledge through the Bayesian network. 18. A gas turbine engine lifecycle decision assistant for understanding and quantifying uncertainties during the lifecycle of a gas turbine engine component, the gas turbine engine lifecycle decision assistant comprising: computer program instructions embodied in one or more machine-accessible storage media and executable by at least one processor to: create a Bayesian network of conditional dependencies between a plurality of random variables associated with a plurality of different sources of uncertainty in the gas turbine engine component lifecycle, the Bayesian network arranged to connect at least two of the plurality of different sources of uncertainty by a common random variable; compute a joint probability distribution for the Bayesian network; receive new evidence from one or more of the different sources of uncertainty over the course of the component lifecycle; and in response to the new evidence, re-compute the joint probability distribution. 19. 
The gas turbine engine lifecycle decision assistant of claim 18, wherein the computer program instructions are to compute the joint probability distribution in response to a request for a quantification of uncertainty relating to an aspect of the gas turbine engine component lifecycle, and wherein the computer program instructions are to receive the request from a component design subsystem and/or a field management subsystem and communicate the re-computed joint probability distribution to the component design subsystem and/or the field management subsystem. 20. The gas turbine engine lifecycle decision assistant of claim 18, wherein the computer program instructions are to connect the plurality of different sources of uncertainty in the gas turbine engine component lifecycle to the Bayesian network, and wherein at least one of the different sources of uncertainty relates to a pre-production certification phase of the component lifecycle and at least one of the different sources of uncertainty relates to a post-production certification phase of the component lifecycle, and wherein the plurality of different sources of uncertainty include at least one analytical model, at least one source of empirical data, and at least one source of expert knowledge.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Embodiments of a gas turbine engine lifecycle decision assistant apply a probabilistic-based process founded on a Bayesian mathematical framework to intelligently combine analytical models, expert judgment, and data during the development and field management of gas turbine engines. The process integrates physics-based and high-fidelity models with data and expert judgment that evolves over the course of the gas turbine engine lifecycle. Among other things, embodiments of the gas turbine engine lifecycle decision assistant can improve future predictive models and understanding while at the same time reducing risk and uncertainty in the service management of existing products.
G06N5045
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Embodiments of a gas turbine engine lifecycle decision assistant apply a probabilistic-based process founded on a Bayesian mathematical framework to intelligently combine analytical models, expert judgment, and data during the development and field management of gas turbine engines. The process integrates physics-based and high-fidelity models with data and expert judgment that evolves over the course of the gas turbine engine lifecycle. Among other things, embodiments of the gas turbine engine lifecycle decision assistant can improve future predictive models and understanding while at the same time reducing risk and uncertainty in the service management of existing products.
According to one exemplary embodiment, a method for adjusting taste balance in culinary recipes is provided. The method may include receiving a template recipe, and a new recipe. The method may include determining a first taste profile corresponding with the template recipe. The method may include determining a second taste profile corresponding with the new recipe. The method may include identifying a taste to boost based on comparing the first taste profile to the second taste profile. The method may include determining a boosting ingredient from a substitution ingredients list. The method may include determining a boosting ingredient quantity based on the boosting ingredient and comparing the first taste profile to the second taste profile. The method may include determining a step alteration for the new recipe based on the boosting ingredient and the boosting ingredient quantity.
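A minimal Python sketch of the comparison-and-boost steps follows, under assumed data shapes for the ingredient/taste database and the substitution list; the quantity rule that closes the taste gap is an assumption rather than the claimed algorithm.

```python
TASTES = ("salty", "sweet", "bitter", "sour", "umami")

def taste_profile(ingredients, taste_db):
    # ingredients: {name: grams}; taste_db: {name: {taste: score per gram}} (assumed shapes).
    profile = dict.fromkeys(TASTES, 0.0)
    for name, grams in ingredients.items():
        for taste, score in taste_db.get(name, {}).items():
            profile[taste] += grams * score
    return profile

def taste_to_boost(template_profile, new_profile):
    # The taste where the new recipe falls furthest short of the template.
    return max(TASTES, key=lambda t: template_profile[t] - new_profile[t])

def boosting_ingredient(taste, substitutions, new_ingredients):
    # Strongest substitution-list ingredient for that taste not already in the recipe.
    candidates = [s for s in substitutions
                  if s["tastes"].get(taste, 0.0) > 0.0 and s["name"] not in new_ingredients]
    return max(candidates, key=lambda s: s["tastes"][taste], default=None)

def boosting_quantity(taste, template_profile, new_profile, ingredient):
    # Quantity (grams) that closes the gap for the boosted taste -- an assumed rule.
    gap = template_profile[taste] - new_profile[taste]
    return max(gap, 0.0) / ingredient["tastes"][taste]
```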
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A processor-implemented method for adjusting taste balance in culinary recipes, the method comprising: receiving, by a processor, a template recipe having a first plurality of ingredients and a first plurality of recipe steps, and a new recipe having a second plurality of ingredients and a second plurality of recipe steps; determining a first taste profile corresponding with the received template recipe based on the first plurality of ingredients; determining a second taste profile corresponding with the received new recipe based on the second plurality of ingredients; identifying a taste to boost based on comparing the first taste profile to the second taste profile; determining a boosting ingredient from a substitution ingredients list based on the second plurality of ingredients and the identified taste; determining a boosting ingredient quantity based on the determined boosting ingredient and comparing the first taste profile to the second taste profile; and determining a step alteration based on the first plurality of ingredients, the first plurality of steps, the determined boosting ingredient, the determined boosting ingredient quantity, the second plurality of ingredients, and the second plurality of ingredient steps. 2. The method of claim 1, wherein determining the first taste profile comprises analyzing the first plurality of ingredients and determining a first set of taste scores, and wherein determining the second taste profile comprises analyzing the second plurality of ingredients and determining a second set of taste scores. 3. The method of claim 2, wherein the first set of taste scores and the second set of taste scores comprises a score for each taste category within a plurality of taste categories, wherein the plurality of taste categories includes saltiness, sweetness, bitterness, sourness, and umami. 4. The method of claim 3, wherein the plurality of taste categories further includes spiciness. 5. The method of claim 3, further comprising: presenting a visual representation of the first taste profile to a user displaying each taste category within the plurality of taste categories. 6. The method of claim 5, wherein the visual representation allows the user to adjust a taste magnitude of each taste category within the plurality of taste categories, and wherein the first taste profile is altered to match the user's adjustments. 7. The method of claim 1, wherein identifying the taste to boost based on comparing the first taste profile to the second taste profile comprises selecting the taste that has the greatest magnitude of divergence between the first taste profile and the second taste profile. 8. The method of claim 1, wherein determining the step alteration based on the first plurality of ingredients, the first plurality of steps, the determined boosting ingredient, the determined boosting ingredient quantity, the second plurality of ingredients, and the second plurality of ingredient steps further comprises applying the determined step alteration to the second plurality of ingredient steps by adding the determined boosting ingredient in the determined boosting ingredient quantity.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: According to one exemplary embodiment, a method for adjusting taste balance in culinary recipes is provided. The method may include receiving a template recipe, and a new recipe. The method may include determining a first taste profile corresponding with the template recipe. The method may include determining a second taste profile corresponding with the new recipe. The method may include identifying a taste to boost based on comparing the first taste profile to the second taste profile. The method may include determining a boosting ingredient from a substitution ingredients list. The method may include determining a boosting ingredient quantity based on the boosting ingredient and comparing the first taste profile to the second taste profile. The method may include determining a step alteration for the new recipe based on the boosting ingredient and the boosting ingredient quantity.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: According to one exemplary embodiment, a method for adjusting taste balance in culinary recipes is provided. The method may include receiving a template recipe, and a new recipe. The method may include determining a first taste profile corresponding with the template recipe. The method may include determining a second taste profile corresponding with the new recipe. The method may include identifying a taste to boost based on comparing the first taste profile to the second taste profile. The method may include determining a boosting ingredient from a substitution ingredients list. The method may include determining a boosting ingredient quantity based on the boosting ingredient and comparing the first taste profile to the second taste profile. The method may include determining a step alteration for the new recipe based on the boosting ingredient and the boosting ingredient quantity.
Disclosed herein are systems, methods, and computer-readable media for classifying a set of inputs via a supervised classifier model that utilizes a novel activation function that provides the capability to learn a scale parameter in addition to a bias parameter and other weight parameters.
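For illustration, here is a small NumPy sketch of a forward pass in which the scale parameter carries its own learned per-node weight alongside the bias and input weights; the scaled-sigmoid form used here is one plausible reading of "applying the activation function to the combined input and the weighted scale parameter", not necessarily the patented activation.

```python
import numpy as np

def layer_forward(x, W, w_bias, w_scale, bias=1.0, scale=1.0):
    # Combined input per node: weighted inputs plus a weighted bias term.
    z = x @ W + bias * w_bias
    # The scale parameter is weighted by its own learned weight per node; the
    # activation is then applied to the combined input and the weighted scale.
    s = scale * w_scale
    return 1.0 / (1.0 + np.exp(-s * z))   # assumed scaled sigmoid

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                 # batch of 4 inputs, 3 features
W = rng.normal(size=(3, 5))                 # 5 nodes in the current layer
w_bias = rng.normal(size=5)                 # learned weight on the bias parameter
w_scale = rng.uniform(0.5, 2.0, size=5)     # learned weight on the scale parameter
print(layer_forward(x, W, w_bias, w_scale).shape)   # (4, 5): one activation per node
```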
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-implemented method for training a neural network (NN) comprising a plurality of layers including at least one hidden layer, wherein each of the plurality of layers includes a respective set of nodes, the method comprising: determining, for a current layer of the NN for a current iteration of the training, i) a set of inputs, the set of inputs including a set of training inputs or a set of activation results associated with the respective set of nodes of a prior layer of the NN, ii) a bias parameter, iii) a scale parameter, and iv) a respective set of weights to be applied to the set of inputs, the bias parameter, and the scale parameter for each node of the current layer of the NN; determining, for a particular node of the current layer during the current iteration of the training, a combined input based at least in part on the set of inputs, the respective set of weights associated with the particular node, and the bias parameter; weighting the scale parameter using a corresponding weight in the respective set of weights to obtain a weighted scale parameter, wherein the corresponding weight is learned from one or more prior iterations of the training; executing, for the current iteration of the training, an activation function for the particular node, wherein executing the activation function comprises applying the activation function to the combined input and the weighted scale parameter to generate an activation result for the particular node; determining whether the current layer is a final layer; and outputting, based at least in part on determining whether the current layer is a final layer, the activation result as a classifier output of the NN or providing the activation result as input to a next layer of the NN. 2. The computer-implemented method of claim 1, wherein it is determined that the current layer is the final layer, and wherein the activation result is output as the classifier output of the NN. 3. The computer-implemented method of claim 2, further comprising: determining a difference between an actual target output and the classifier output; and updating the respective set of weights to be applied during a next iteration of the training based at least in part on the difference between the actual target output and the classifier output. 4. The computer-implemented method of claim 3, further comprising: determining a cumulative error associated with all nodes in the NN; determining that the cumulative error exceeds a threshold value; and determining that the next iteration of the training should be performed in response to determining that the cumulative error exceeds the threshold value. 5. The computer-implemented method of claim 4, wherein the updating is performed in response to determining that the next iteration of the training should be performed. 6. The computer-implemented method of claim 2, further comprising: determining the set of inputs from an acoustic signal; and decoding a set of classifier outputs including the classifier output to determine a character string corresponding to the acoustic signal. 7. The computer-implemented method of claim 1, further comprising: determining that a threshold number of iterations of the training have been performed, wherein a respective final set of weights to be applied when executing the activation function for each node in the NN is obtained after performing a final iteration of the training. 8. 
A system for training a neural network (NN) comprising a plurality of layers including at least one hidden layer, wherein each of the plurality of layers includes a respective set of nodes, the system comprising: at least one memory storing computer-executable instructions; and at least one processor configured to access the at least one memory and execute the computer-executable instructions to: determine, for a current layer of the NN for a current iteration of the training, i) a set of inputs, the set of inputs including a set of training inputs or a set of activation results associated with the respective set of nodes of a prior layer of the NN, ii) a bias parameter, iii) a scale parameter, and iv) a respective set of weights to be applied to the set of inputs, the bias parameter, and the scale parameter for each node of the current layer of the NN; determine, for a particular node of the current layer during the current iteration of the training, a combined input based at least in part on the set of inputs, the respective set of weights associated with the particular node, and the bias parameter; weight the scale parameter using a corresponding weight in the respective set of weights to obtain a weighted scale parameter, wherein the corresponding weight is learned from one or more prior iterations of the training; execute, for the current iteration of the training, an activation function, wherein executing the activation function comprises applying the activation function to the combined input and the weighted scale parameter to generate an activation result for the particular node; determine whether the current layer is a final layer; and output, based at least in part on determining whether the current layer is a final layer, the activation result as a classifier output of the NN or providing the activation result as input to a next layer of the NN. 9. The system of claim 8, wherein it is determined that the current layer is the final layer, and wherein the activation result is output as the classifier output of the NN. 10. The system of claim 9, wherein the at least one processor is further configured to execute the computer-executable instructions to: determine a difference between an actual target output and the classifier output; and update the respective set of weights to be applied during a next iteration of the training based at least in part on the difference between the actual target output and the classifier output. 11. The system of claim 10, wherein the at least one processor is further configured to execute the computer-executable instructions to: determine a cumulative error associated with all nodes in the NN; determine that the cumulative error exceeds a threshold value; and determine that the next iteration of the training should be performed in response to determining that the cumulative error exceeds the threshold value. 12. The system of claim 11, wherein the updating is performed in response to determining that the next iteration of the training should be performed. 13. The system of claim 9, wherein the at least one processor is further configured to execute the computer-executable instructions to: determine the set of inputs from an acoustic signal; and decode a set of classifier outputs including the classifier output to determine a character string corresponding to the acoustic signal. 14. 
The system of claim 8, wherein the at least one processor is further configured to execute the computer-executable instructions to: determine that a threshold number of iterations of the training have been performed, wherein a respective final set of weights to be applied when executing the activation function for each node in the NN is obtained after performing a final iteration of the training. 15. A computer program product for training a neural network (NN) comprising a plurality of layers including at least one hidden layer, wherein each of the plurality of layers includes a respective set of nodes, the computer program product comprising a non-transitory storage medium readable by a processing circuit, the storage medium storing instructions executable by the processing circuit to cause a method to be performed, the method comprising: determining, for a current layer of the NN for a current iteration of the training, i) a set of inputs, the set of inputs including a set of training inputs or a set of activation results associated with the respective set of nodes of a prior layer of the NN, ii) a bias parameter, iii) a scale parameter, and iv) a respective set of weights to be applied to the set of inputs, the bias parameter, and the scale parameter for each node of the current layer of the NN; determining, for a particular node of the current layer during the current iteration of the training, a combined input based at least in part on the set of inputs, the respective set of weights associated with the particular node, and the bias parameter; weighting the scale parameter using a corresponding weight in the respective set of weights to obtain a weighted scale parameter, wherein the corresponding weight is learned from one or more prior iterations of the training; executing, for the current iteration of the training, an activation function for the particular node, wherein executing the activation function comprises applying the activation function to the combined input and the weighted scale parameter to generate an activation result for the particular node; determining whether the current layer is a final layer; and outputting, based at least in part on determining whether the current layer is a final layer, the activation result as a classifier output of the NN or providing the activation result as input to a next layer of the NN. 16. The computer program product of claim 15, wherein it is determined that the current layer is the final layer, and wherein the activation result is output as the classifier output of the NN. 17. The computer program product of claim 16, the method further comprising: determining a difference between an actual target output and the classifier output; and updating the respective set of weights to be applied during a next iteration of the training based at least in part on the difference between the actual target output and the classifier output. 18. The computer program product of claim 17, the method further comprising: determining a cumulative error associated with all nodes in the NN; determining that the cumulative error exceeds a threshold value; and determining that the next iteration of the training should be performed in response to determining that the cumulative error exceeds the threshold value. 19. The computer program product of claim 18, wherein the updating is performed in response to determining that the next iteration of the training should be performed. 20. 
The computer program product of claim 16, the method further comprising: determining the set of inputs from an acoustic signal; and decoding a set of classifier outputs including the classifier output to determine a character string corresponding to the acoustic signal.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Disclosed herein are systems, methods, and computer-readable media for classifying a set of inputs via a supervised classifier model that utilizes a novel activation function that provides the capability to learn a scale parameter in addition to a bias parameter and other weight parameters.
G06N308
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Disclosed herein are systems, methods, and computer-readable media for classifying a set of inputs via a supervised classifier model that utilizes a novel activation function that provides the capability to learn a scale parameter in addition to a bias parameter and other weight parameters.
A personal taste assessment system recommends and predicts a person's preference for a consumable or other item. The system accesses a user profile for a person. The user profile includes a preference model representing associations between the person's ratings of items and a set of item characteristics. The system also accesses a database of characteristic values for a group of items, uses the identifying information to identify a candidate item having characteristic values whose properties match characteristics associated with the rated items that the person found to be appealing, and processes the characteristic values of the candidate item with the user profile to generate a predicted rating as a prediction of how the person would rate the identified candidate item. The system then causes an electronic device to output an identification of the candidate item and the predicted rating.
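As a hedged illustration, the sketch below treats the preference model as a simple linear map from item characteristics to a predicted rating and ranks candidate items with it; the linear form, the characteristic names, and the example numbers are assumptions, since the abstract only says the profile associates ratings with characteristics.

```python
import numpy as np

def predict_ratings(user_weights, candidate_matrix, candidate_ids, top_k=3):
    # user_weights: learned association between item characteristics and the person's
    # past ratings (modeled here as linear coefficients, which is an assumption).
    # candidate_matrix: one row of characteristic values per candidate item.
    scores = candidate_matrix @ user_weights
    order = np.argsort(scores)[::-1][:top_k]
    return [(candidate_ids[i], float(scores[i])) for i in order]

# Hypothetical example: four taste/smell/touch traits per item.
user_weights = np.array([0.8, -0.3, 0.5, 0.1])      # e.g. sweetness, tannin, acidity, oak
candidates = np.array([[0.7, 0.2, 0.6, 0.1],
                       [0.1, 0.9, 0.3, 0.8],
                       [0.5, 0.4, 0.9, 0.2]])
print(predict_ratings(user_weights, candidates, ["item A", "item B", "item C"]))
```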
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of generating a recommendation for a consumable item, comprising: by one or more processors: accessing a user profile for a person, wherein the user profile comprises a preference model representing associations between the person's ratings of items and a plurality of item characteristics; accessing a database of characteristic values for a plurality of candidate items; using the identifying information to identify a candidate item having characteristic values whose properties match characteristics associated with the rated items that the person found to be appealing; processing the characteristic values of the candidate item with the user profile to generate a predicted rating as a prediction of how the person would rate the identified candidate item; causing an electronic device to output an identification of the identified candidate item and the predicted rating. 2. The method of claim 1, wherein using the identifying information to identify a candidate item also comprises constraining the plurality of candidate items to those having characteristics that satisfy a user-supplied criterion. 3. The method of claim 1, wherein using the identifying information to identify a candidate item also comprises constraining the plurality of candidate items to those having characteristics that satisfy a situational criterion. 4. The method of claim 1, wherein the situational criterion comprises available inventory at a retailer. 5. The method of claim 1, wherein: each of the candidate items is a consumable item; and the characteristic values of the identified candidate item comprise values of traits that are associated with at least one of the following senses: taste, smell and touch. 6. The method of claim 1, wherein: the characteristic values of the identified candidate item correspond to one or more indicators of preference in the preference model; and generating the predicted rating comprises: entering values of the retrieved characteristics of the identified candidate item into the preference model, and returning a result of the preference model as the prediction. 7. The method of claim 1, wherein processing the retrieved characteristics with the user profile to generate the predicted rating comprises: analyzing dependency patterns in the preference model of the user profile; and using the dependency patterns to generate the predicted rating. 8. The method of claim 6, wherein analyzing the dependency patterns comprises: analyzing consistent patterns of dependency in the preference model and identifying a representative rating based on the consistent patterns; and using the representative rating as the predicted rating. 9. The method of claim 6, wherein analyzing the dependency patterns comprises: analyzing contrasting or polarizing patterns of dependency in the preference model; processing values of characteristics of the identified candidate item in the preference model having contrasting or polarizing patterns of dependency to yield a result; and using the result as the predicted rating. 10. The method of claim 1, wherein processing the retrieved characteristics with the user profile to generate the predicted rating comprises: determining that no preference model in the user profile can be matched with, or is reflective of, the characteristics of the identified candidate item; and generating an inconclusive result as the predicted rating. 11. 
A system for predicting a person's preference for an item, comprising: one or more processors: a database of characteristic values for a plurality of candidate items; a computer-readable memory storing a user profile for a person, wherein the user profile comprises a preference model representing associations between the person's ratings of items and a plurality of item characteristics; and a computer-readable memory containing programming instructions that, when executed, cause one or more of the processors to: access a user profile for a person, wherein the user profile comprises a preference model representing associations between the person's ratings of items and a plurality of item characteristics, access a database of characteristic values for a plurality of candidate items, use the identifying information to identify a candidate item having characteristic values whose properties match characteristics associated with the rated items that the person found to be appealing, process the characteristic values of the candidate item with the user profile to generate a predicted rating as a prediction of how the person would rate the identified candidate item, and cause an electronic device to output an identification of the identified candidate item and the predicted rating. 12. The system of claim 11, wherein the instructions to use the identifying information to identify a candidate item also comprise instructions to constrain the plurality of candidate items to those having characteristics that satisfy a user-supplied criterion. 13. The system of claim 11, wherein the instructions to use the identifying information to identify a candidate item also comprise instructions to constrain the plurality of candidate items to those having characteristics that satisfy a situational criterion. 14. The method of claim 12, wherein the situational criterion comprises available inventory at a retailer. 15. The system of claim 11, wherein: each of the candidate items is a consumable item; and the characteristic values of the identified candidate item comprise values of traits that are associated with at least one of the following senses: taste, smell and touch. 16. The system of claim 11, wherein: the characteristic values of the identified candidate item correspond to one or more indicators of preference in the preference model; and the instructions to generate the predicted rating comprise instructions to: enter values of the retrieved characteristics of the identified candidate item into the preference model, and return a result of the preference model as the prediction. 17. The system of claim 11, wherein the instructions to process the retrieved characteristics with the user profile to generate the predicted rating comprise instructions to: analyze dependency patterns in the preference model of the user profile; and use the dependency patterns to generate the predicted rating. 18. The system of claim 17, wherein the instructions to analyze the dependency patterns comprise instructions to: analyze consistent patterns of dependency in the preference model and identifying a representative rating based on the consistent patterns; and use the representative rating as the predicted rating. 19. 
The system of claim 17, wherein the instructions to analyze the dependency patterns comprise instructions to: analyze contrasting or polarizing patterns of dependency in the preference model; process values of characteristics of the identified candidate item in the preference model having contrasting or polarizing patterns of dependency to yield a result; and use the result as the predicted rating. 20. The system of claim 11, wherein the instructions to process the retrieved characteristics with the user profile to generate the predicted rating comprise instructions to: determine that no preference model in the user profile can be matched with, or is reflective of, the characteristics of the identified candidate item; and generate an inconclusive result as the predicted rating.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A personal taste assessment system recommends and predicts a person's preference for a consumable or other item. The system accesses a user profile for a person. The user profile includes a preference model representing associations between the person's ratings of items and a set of item characteristics. The system also accesses a database of characteristic values for a group of items, uses the identifying information to identify a candidate item having characteristic values whose properties match characteristics associated with the rated items that the person found to be appealing, and processes the characteristic values of the candidate item with the user profile to generate a predicted rating as a prediction of how the person would rate the identified candidate item. The system then causes an electronic device to output an identification of the candidate item and the predicted rating.
G06N504
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A personal taste assessment system recommends and predicts a person's preference for a consumable or other item. The system accesses a user profile for a person. The user profile includes a preference model representing associations between the person's ratings of items and a set of item characteristics. The system also accesses a database of characteristic values for a group of items, uses the identifying information to identify a candidate item having characteristic values whose properties match characteristics associated with the rated items that the person found to be appealing, and processes the characteristic values of the candidate item with the user profile to generate a predicted rating as a prediction of how the person would rate the identified candidate item. The system then causes an electronic device to output an identification of the candidate item and the predicted rating.
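As a rough illustration of the recommendation flow described above, the sketch below assumes a simple linear preference model and hypothetical helper names (`predict_rating`, `recommend`); the patent does not specify the model form, and the constraint predicate here merely stands in for the claimed user-supplied or situational criteria.

```python
import numpy as np

def predict_rating(profile_weights, profile_bias, item_features):
    # A linear preference model: rating ≈ weights · characteristic values + bias (assumed form).
    return float(profile_weights @ item_features + profile_bias)

def recommend(profile, items, constraint=None):
    """Return (item_id, predicted_rating) pairs, best first.

    profile    : dict with 'weights' and 'bias' learned from the person's past ratings
    items      : dict mapping item_id -> characteristic-value vector
    constraint : optional predicate standing in for a user-supplied or situational criterion
    """
    scored = []
    for item_id, features in items.items():
        if constraint is not None and not constraint(item_id, features):
            continue
        scored.append((item_id, predict_rating(profile["weights"], profile["bias"], features)))
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Hypothetical consumable items described by [sweetness, acidity, tannin].
items = {"wine_a": np.array([0.8, 0.3, 0.2]),
         "wine_b": np.array([0.2, 0.7, 0.6])}
profile = {"weights": np.array([1.5, -0.4, 0.1]), "bias": 2.0}
print(recommend(profile, items))
```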
Systems, methods, and non-transitory computer-readable media can determine a respective latent representation for each entity in a set of entities that are accessible through the social networking system, wherein a latent representation for an entity is determined based at least in part on a topic model associated with the entity, each latent representation for an entity having a lower dimensionality than a topic model of the entity. One or more candidate entities that are related to a first entity can be determined based at least in part on the respective latent representations for the candidate entities and the first entity. At least a first candidate entity from the one or more candidate entities can be provided as a recommendation to a user that formed a connection with the first entity.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-implemented method comprising: determining, by a social networking system, a respective latent representation for each entity in a set of entities that are accessible through the social networking system, wherein a latent representation for an entity is determined based at least in part on a topic model associated with the entity, each latent representation for an entity having a lower dimensionality than a topic model of the entity; determining, by the social networking system, one or more candidate entities that are related to a first entity based at least in part on the respective latent representations for the candidate entities and the first entity; and providing, by the social networking system, at least a first candidate entity from the one or more candidate entities as a recommendation to a user that formed a connection with the first entity. 2. The computer-implemented method of claim 1, wherein determining the respective latent representation for each entity in the set of entities further comprises: obtaining, by the social networking system, a respective topic model for each entity in the set of entities; and training, by the social networking system, at least one model using the topic models to output a respective latent representation that corresponds to a topic model. 3. The computer-implemented method of claim 2, wherein the at least one model is implemented as a restricted Boltzmann machine. 4. The computer-implemented method of claim 2, wherein the model includes at least a set of input nodes and a set of hidden nodes, each input node corresponding to a topic and being configured to receive a value indicating whether the topic was identified in an entity, and each hidden node being configured to output a value determined based at least in part on values provided to one or more of the input nodes. 5. The computer-implemented method of claim 1, wherein determining the one or more candidate entities that are related to the first entity further comprises: determining, by the social networking system, that a distance between a respective latent representation for the candidate entity and a latent representation for the first entity satisfies a threshold distance. 6. The computer-implemented method of claim 1, wherein providing at least the first candidate entity from the one or more candidate entities as a recommendation further comprises: determining, by the social networking system, that at least one geographic centroid associated with the first candidate entity has a threshold amount of overlap with at least one geographic centroid associated with the first entity. 7. The computer-implemented method of claim 1, wherein providing at least the first candidate entity from the one or more candidate entities as a recommendation further comprises: determining, by the social networking system, that a reconstruction score for the first candidate entity satisfies a threshold score, the reconstruction score being determined using a model trained to output a latent representation for the first candidate entity, wherein the reconstruction score measures an accuracy of a reconstruction of a topic model of the first candidate entity through the model using the latent representation. 8. 
The computer-implemented method of claim 1, wherein providing at least the first candidate entity from the one or more candidate entities as a recommendation further comprises: determining, by the social networking system, that a difference between a number of fans associated with the first candidate entity and a number of fans associated with the first entity satisfies a threshold value. 9. The computer-implemented method of claim 1, wherein providing at least the first candidate entity from the one or more candidate entities as a recommendation further comprises: determining, by the social networking system, that at least a threshold number of users have fanned both the first candidate and the first entity. 10. The computer-implemented method of claim 1, wherein an entity corresponds to at least a page, user profile, group, story, or status update that is accessible through the social networking system. 11. A system comprising: at least one processor; and a memory storing instructions that, when executed by the at least one processor, cause the system to perform: determining a respective latent representation for each entity in a set of entities that are accessible through the social networking system, wherein a latent representation for an entity is determined based at least in part on a topic model associated with the entity, each latent representation for an entity having a lower dimensionality than a topic model of the entity; determining one or more candidate entities that are related to a first entity based at least in part on the respective latent representations for the candidate entities and the first entity; and providing at least a first candidate entity from the one or more candidate entities as a recommendation to a user that formed a connection with the first entity. 12. The system of claim 11, wherein determining the respective latent representation for each entity in the set of entities further causes the system to perform: obtaining a respective topic model for each entity in the set of entities; and training at least one model using the topic models to output a respective latent representation that corresponds to a topic model. 13. The system of claim 12, wherein the at least one model is implemented as a restricted Boltzmann machine. 14. The system of claim 12, wherein the model includes at least a set of input nodes and a set of hidden nodes, each input node corresponding to a topic and being configured to receive a value indicating whether the topic was identified in an entity, and each hidden node being configured to output a value determined based at least in part on values provided to one or more of the input nodes. 15. The system of claim 11, wherein determining the one or more candidate entities that are related to the first entity further causes the system to perform: determining that a distance between a respective latent representation for the candidate entity and a latent representation for the first entity satisfies a threshold distance. 16. 
A non-transitory computer-readable storage medium including instructions that, when executed by at least one processor of a computing system, cause the computing system to perform a method comprising: determining a respective latent representation for each entity in a set of entities that are accessible through the social networking system, wherein a latent representation for an entity is determined based at least in part on a topic model associated with the entity, each latent representation for an entity having a lower dimensionality than a topic model of the entity; determining one or more candidate entities that are related to a first entity based at least in part on the respective latent representations for the candidate entities and the first entity; and providing at least a first candidate entity from the one or more candidate entities as a recommendation to a user that formed a connection with the first entity. 17. The non-transitory computer-readable storage medium of claim 16, wherein determining the respective latent representation for each entity in the set of entities further causes the computing system to perform: obtaining a respective topic model for each entity in the set of entities; and training at least one model using the topic models to output a respective latent representation that corresponds to a topic model. 18. The non-transitory computer-readable storage medium of claim 17, wherein the at least one model is implemented as a restricted Boltzmann machine. 19. The non-transitory computer-readable storage medium of claim 17, wherein the model includes at least a set of input nodes and a set of hidden nodes, each input node corresponding to a topic and being configured to receive a value indicating whether the topic was identified in an entity, and each hidden node being configured to output a value determined based at least in part on values provided to one or more of the input nodes. 20. The non-transitory computer-readable storage medium of claim 16, wherein determining the one or more candidate entities that are related to the first entity further causes the system to perform: determining that a distance between a respective latent representation for the candidate entity and a latent representation for the first entity satisfies a threshold distance.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Systems, methods, and non-transitory computer-readable media can determine a respective latent representation for each entity in a set of entities that are accessible through the social networking system, wherein a latent representation for an entity is determined based at least in part on a topic model associated with the entity, each latent representation for an entity having a lower dimensionality than a topic model of the entity. One or more candidate entities that are related to a first entity can be determined based at least in part on the respective latent representations for the candidate entities and the first entity. At least a first candidate entity from the one or more candidate entities can be provided as a recommendation to a user that formed a connection with the first entity.
G06N502
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Systems, methods, and non-transitory computer-readable media can determine a respective latent representation for each entity in a set of entities that are accessible through the social networking system, wherein a latent representation for an entity is determined based at least in part on a topic model associated with the entity, each latent representation for an entity having a lower dimensionality than a topic model of the entity. One or more candidate entities that are related to a first entity can be determined based at least in part on the respective latent representations for the candidate entities and the first entity. At least a first candidate entity from the one or more candidate entities can be provided as a recommendation to a user that formed a connection with the first entity.
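The claims above describe mapping each entity's topic model to a lower-dimensional latent representation (e.g., via a restricted Boltzmann machine) and selecting candidates whose latent representations fall within a threshold distance. The sketch below is a simplified stand-in: the projection uses fixed random weights rather than a trained RBM, and all entity names and data are illustrative.

```python
import numpy as np

def latent_representation(topic_vector, W, b_hidden):
    # Hidden-unit activations of an RBM-style model: lower-dimensional than the topic vector.
    return 1.0 / (1.0 + np.exp(-(W @ topic_vector + b_hidden)))

def candidates_within_threshold(entity_id, latents, threshold):
    """Entities whose latent representation lies within `threshold` of the given entity's."""
    anchor = latents[entity_id]
    related = []
    for other_id, vec in latents.items():
        if other_id == entity_id:
            continue
        if np.linalg.norm(vec - anchor) <= threshold:
            related.append(other_id)
    return related

# Toy example: 3 entities, 5 binary topic indicators, 2 hidden units (untrained weights).
rng = np.random.default_rng(1)
W, b = rng.normal(size=(2, 5)), np.zeros(2)
topics = {"page_a": rng.integers(0, 2, 5),
          "page_b": rng.integers(0, 2, 5),
          "page_c": rng.integers(0, 2, 5)}
latents = {name: latent_representation(t, W, b) for name, t in topics.items()}
print(candidates_within_threshold("page_a", latents, threshold=0.5))
```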
A method for determining the future operational condition of an object includes receiving reference data that indicates the normal operational state of the object, and receiving input multi-dimensional pattern arrays. Each input pattern array has a plurality of input vectors, while each input vector represents a plurality of parameters indicating the condition of the object obtained from one or more first sensors at any time. Estimate values are generated based on a calculation that uses an input pattern array and the reference data to determine a similarity measure between the input values and reference data. The similarity measure accounts for the predetermined and ordered time relationship. The estimate values, in the form of an estimate matrix, include at least one estimate vector of inferred estimate values. A current outcome of the object is determined based upon the inferred estimate values.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for determining the future operational condition of an object, comprising: receiving reference data that indicates the normal operational state of the object, receiving input multi-dimensional pattern arrays, each input pattern array having a plurality of input vectors, each input vector representing a time point and having input values representing a plurality of parameters indicating the condition of the object obtained from one or more first sensors at any time, each input vector arranged and maintained in a predetermined and time ordered relationship with the others, and generating estimate values based on a calculation that uses an input pattern array and the reference data to determine a similarity measure between the input values and reference data, the similarity measure accounting for the predetermined and ordered time relationship, wherein the estimate values are in the form of an estimate matrix that includes at least one estimate vector of inferred estimate values for at least one future point in time or a plurality of second sensors being different from the one or more first sensors, each estimate vector arranged and maintained in the predetermined and ordered time relationship with the others, the reference data being grouped in multi-dimensional training arrays including reference vectors, such that each reference vector in any training array is arranged and maintained in the predetermined and ordered time relationship with the others and such that the generation does not destroy any time information of the estimate matrix or the reference data; and determining a current outcome of the object based upon the inferred estimate values. 2. The method of claim 1 further comprising using the inferred estimate values to determine a future condition of the object. 3. The method of claim 1 wherein the estimate matrices only include estimate vectors that represent time points that are not represented by the input vectors. 4. The method of claim 1 wherein the estimate matrices include at least one estimate vector that represents the same time point represented by the input vectors and at least one estimate vector that represents a time point that is not represented by the input vectors. 5. The method of claim 1 wherein the estimate matrices include estimate values that represent parameters that indicate the condition of the object and that are not represented by the input values. 6. The method of claim 1 wherein each estimate matrix represents a primary current time point and time points not represented by the input vectors and that are succeeding time points relative to the current time point. 7. The method of claim 1 comprising generating weight values by using the similarity measures, and uses the weight values in a calculation with the reference data to generate the estimate matrix. 8. The method of claim 7 wherein the weights values are in the form of a weight vector. 9. The method of claim 7 wherein the reference data used in the calculation with weight values comprises reference values that represent time points that are not represented by the input pattern arrays. 10. The method of claim 9 wherein the reference data used in the calculation with weight values represents a primary current time point and the time points not represented by the input vectors are succeeding time points relative to the current time point. 11. 
The method of claim 7 wherein the reference data used in the calculation with the weight values is in the form of a three-dimensional collection of learned sequential pattern matrices, each learned sequential pattern matrix comprising reference vectors of reference values, wherein each reference vector represents a different time point within the learned sequential pattern matrix. 12. The method of claim 11 wherein each learned sequential pattern matrix comprises a primary current time point and time points that represent succeeding time points relative to the primary current time point and that are not represented by the input pattern arrays. 13. The method of claim 1 wherein the same time point is represented in multiple estimate matrices. 14. The method of claim 2 comprising using the most recent estimate matrix to update the estimate values for use in determining the condition of the object. 15. The method of claim 2 comprising providing values for a single estimate vector to represent a single time point across multiple estimate matrices. 16. The method of claim 15 wherein the single estimate vector is an average, a weighted average, or a weighted norm of all of the estimate vectors at the single time point. 17. The method of claim 2 comprising providing values for a single estimate vector to represent each estimate matrix. 18. The method of claim 17 wherein the single estimate vector is an average, weighted average, or weighted norm of the estimate vectors within the estimate matrix. 19. The method of claim 2 comprising forming a trend line for at least one parameter represented by the inferred estimate values to indicate the expected behavior of the object. 20. The method of claim 19 comprising forming a new trend line with each new estimate matrix. 21. The method of claim 19 comprising forming boundary trend lines to define a range of expected behavior of the object. 22. The method of claim 19 comprising forming an upper boundary trend line with maximum estimate values from the time points, and a lower boundary trend line with minimum estimate values from the time points.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A method for determining the future operational condition of an object includes receiving reference data that indicates the normal operational state of the object, and receiving input multi-dimensional pattern arrays. Each input pattern array has a plurality of input vectors, while each input vector represents a plurality of parameters indicating the condition of the object obtained from one or more first sensors at any time. Estimate values are generated based on a calculation that uses an input pattern array and the reference data to determine a similarity measure between the input values and reference data. The similarity measure accounts for the predetermined and ordered time relationship. The estimate values, in the form of an estimate matrix, include at least one estimate vector of inferred estimate values. A current outcome of the object is determined based upon the inferred estimate values.
G06N5048
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A method for determining the future operational condition of an object includes receiving reference data that indicates the normal operational state of the object, and receiving input multi-dimensional pattern arrays. Each input pattern array has a plurality of input vectors, while each input vector represents a plurality of parameters indicating the condition of the object obtained from one or more first sensors at any time. Estimate values are generated based on a calculation that uses an input pattern array and the reference data to determine a similarity measure between the input values and reference data. The similarity measure accounts for the predetermined and ordered time relationship. The estimate values, in the form of an estimate matrix, include at least one estimate vector of inferred estimate values. A current outcome of the object is determined based upon the inferred estimate values.
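One way to picture the estimate-matrix generation described above is as kernel-weighted averaging of learned sequential pattern matrices, where the learned patterns also contain succeeding time points so the weighted result yields inferred estimates for future time points. The sketch below assumes a Gaussian similarity kernel, which the patent does not prescribe; all array shapes and names are illustrative.

```python
import numpy as np

def similarity(a, b, width=1.0):
    # Gaussian kernel on flattened pattern arrays; the time-ordered layout is preserved.
    return np.exp(-np.linalg.norm(a.ravel() - b.ravel()) ** 2 / (2.0 * width ** 2))

def estimate_matrix(input_array, learned_arrays, learned_extended):
    """Weight extended learned patterns (current + future time points) by similarity.

    input_array      : (n_params, n_lookback) current input pattern array
    learned_arrays   : list of (n_params, n_lookback) learned sequential pattern matrices
    learned_extended : list of (n_params, n_lookback + n_ahead) matrices that also
                       contain the succeeding (future) time points
    """
    w = np.array([similarity(input_array, L) for L in learned_arrays])
    w = w / w.sum()
    return sum(wi * Li for wi, Li in zip(w, learned_extended))

rng = np.random.default_rng(2)
learned = [rng.normal(size=(3, 4)) for _ in range(5)]                   # 3 sensors, 4 past steps
extended = [np.hstack([L, rng.normal(size=(3, 2))]) for L in learned]   # plus 2 future steps
est = estimate_matrix(learned[0] + 0.05 * rng.normal(size=(3, 4)), learned, extended)
print(est.shape)   # (3, 6): estimates for past time points and inferred future time points
```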
The present disclosure pertains to a system and method for predictive modeling of data clusters. The system and method include creating a dataset from a data source comprising data points, identifying a number of clusters based at least in part on a similarity metric between the data points, generating a model for each of the number of clusters based at least in part on identifying the number of clusters, visually displaying the number of clusters, receiving an indication of selection of a particular cluster, and replacing the visual display of the identified number of clusters with a visual display of the model corresponding to the particular cluster in response to receiving an indication of selection of a model icon.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method, comprising: creating, using a computing device, a dataset from a data source comprising data points; identifying, using the computing device, a number of clusters based at least in part on a similarity metric between the data points; generating, using the computing device, a model for each of the number of clusters based at least in part on identifying the number of clusters; visually displaying, using the computing device, the number of clusters on a display device; and replacing, using the computing device, the visual display of the number of clusters on the display device with a visual display of the model corresponding to a particular cluster in response to receiving an indication of selection of a model icon. 2. The method of claim 1, wherein the visually displaying the number of clusters occurs in response to selection of a create cluster icon. 3. The method of claim 1, wherein visually displaying the number of clusters further comprises modifying the visual display of the number of clusters to ensure that none of the clusters overlaps another cluster. 4. The method of claim 1, wherein visually displaying the number of clusters further comprises representing each cluster with a size proportional to a number of data points comprised therein. 5. The method of claim 1, wherein identifying the number of clusters occurs in response to receiving an indication of selection of a generate cluster icon; and wherein generating the model for each of the number of clusters occurs in response to receiving an indication of a selection of a generate model icon. 6. The method of claim 1, wherein the model for each of the number of clusters is configured to predict whether a new data point belongs to the corresponding cluster. 7. The method of claim 1, further comprising: storing the model for each of the number of clusters in a memory device; and retrieving the model for the particular cluster from the memory device prior to visually displaying the model for the particular cluster on the display device. 8. A system, comprising: a memory device configured to store instructions; and one or more processors configured to execute the instructions stored in the memory device to: create a dataset from a data source comprising data points; identify a number of clusters based at least in part on a similarity metric between the data points; generate a model for each of the number of clusters based at least in part on identifying the number of clusters; visually display the number of clusters on a display device; and replace the visual display of the number of clusters on the display device with a visual display of the model corresponding to a particular cluster in response to receiving an indication of selection of a model icon. 9. The system of claim 8, wherein the one or more processors is configured to execute the instructions stored in the memory device further to visually display the number of clusters in response to selection of a create cluster icon. 10. The system of claim 8, wherein the one or more processors is configured to execute the instructions stored in the memory device further to modify the visual display of the number of clusters to ensure that none of the clusters overlaps another cluster. 11.
The system of claim 8, wherein the one or more processors is configured to execute the instructions stored in the memory device further to visually represent each cluster with a size proportional to a number of data points comprised therein. 12. The system of claim 8, wherein the one or more processors is configured to execute the instructions stored in the memory device further to: identify the number of clusters in response to selection of a generate cluster icon; and generate the model for each of the number of clusters in response to selection of a generate model icon. 13. The system of claim 8, wherein the one or more processors is configured to execute the instructions stored in the memory device further to, for each of the number of clusters, predict whether a new data point belongs to the corresponding cluster. 14. The system of claim 8, wherein the one or more processors is configured to execute the instructions stored in the memory device further to: store the model for each of the number of clusters in a memory device; and retrieve the model for the particular cluster from the memory device before visually displaying the model for the particular cluster on the display device. 15. A physical computer-readable medium comprising instructions stored thereon that, when executed by one or more processing devices, cause the one or more processing devices to: create a dataset from a data source comprising data points; identify a number of clusters based at least in part on a similarity metric between the data points; generate a model for each of the number of clusters based at least in part on identifying the number of clusters; visually display the number of clusters on a display device; and replace the visual display of the number of clusters on the display device with a visual display of the model corresponding to a particular cluster in response to receiving an indication of selection of a model icon. 16. The physical computer-readable medium of claim 15, wherein executing the instructions further causes the one or more processing devices to visually display the number of clusters in response to selection of a create cluster icon. 17. The physical computer-readable medium of claim 15, wherein executing the instructions further causes the one or more processing devices to modify the visual display of the number of clusters to ensure that none of the clusters overlaps another cluster. 18. The physical computer-readable medium of claim 15, wherein executing the instructions further causes the one or more processing devices to visually represent each cluster with a size proportional to a number of data points comprised therein. 19. The physical computer-readable medium of claim 15, wherein executing the instructions further causes the one or more processing devices to: identify the number of clusters in response to selection of a generate cluster icon; and generate the model for each of the number of clusters in response to selection of a generate model icon. 20. The physical computer-readable medium of claim 15, wherein executing the instructions further causes the one or more processing devices to, for each of the number of clusters, predict whether a new data point belongs to the corresponding cluster.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: The present disclosure pertains to a system and method for predictive modeling of data clusters. The system and method include creating a dataset from a data source comprising data points, identifying a number of clusters based at least in part on a similarity metric between the data points, generating a model for each of the number of clusters based at least in part on identifying the number of clusters, visually displaying the number of clusters, receiving an indication of selection of a particular cluster, and replacing the visual display of the identified number of clusters with a visual display of the model corresponding to the particular cluster in response to receiving an indication of selection of a model icon.
G06N504
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: The present disclosure pertains to a system and method for predictive modeling of data clusters. The system and method include creating a dataset from a data source comprising data points, identifying a number of clusters based at least in part on a similarity metric between the data points, generating a model for each of the number of clusters based at least in part on identifying the number of clusters, visually displaying the number of clusters, receiving an indication of selection of a particular cluster, and replacing the visual display of the identified number of clusters with a visual display of the model corresponding to the particular cluster in response to receiving an indication of selection of a model icon.
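The claims above leave the clustering technique and the per-cluster model unspecified; as a hedged illustration, the sketch below uses k-means to identify clusters and a logistic-regression membership model per cluster to predict whether a new data point belongs to it. The display and icon interactions in the claims are not modeled here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def build_cluster_models(X, n_clusters=3):
    """Cluster the dataset, then fit one membership model per identified cluster."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    models = {}
    for k in range(n_clusters):
        y = (labels == k).astype(int)            # 1 = point belongs to cluster k
        models[k] = LogisticRegression().fit(X, y)
    return labels, models

def predict_membership(models, new_point):
    # Probability that the new data point belongs to each cluster, per that cluster's model.
    return {k: float(m.predict_proba(new_point.reshape(1, -1))[0, 1]) for k, m in models.items()}

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(loc=c, size=(30, 2)) for c in (0.0, 4.0, 8.0)])
labels, models = build_cluster_models(X)
print(predict_membership(models, np.array([4.2, 3.9])))
```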
Training prediction models and applying machine learning prediction to data is illustrated herein. A prediction instance comprising a set of data and metadata associated with the set of data identifying a prediction type is obtained. The data and metadata are used to determine an entity to train a prediction model using the prediction type. A trained prediction model is obtained from the entity. A notification system may be configured to monitor contextual information and apply the prediction. A workflow system may automatically perform a function in a workflow based on prediction.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer system comprising: one or more processors; and one or more computer-readable media having stored thereon instructions that are executable by the one or more processors to configure the computer system to apply machine learning prediction to data, including instructions that are executable to configure the computer system to perform at least the following: obtain a prediction instance comprising a set of data and metadata associated with the set of data, the metadata including a prediction type; based on the data and metadata determine an entity to train a prediction model using the prediction type; and as a result, obtain a trained prediction model from the entity. 2. The computing system of claim 1, wherein the entity is a remote machine learning service. 3. The computing system of claim 1, wherein the one or more computer-readable media further have stored thereon instructions that are executable by the one or more processors to configure the computer system to monitor user contextual information and proactively apply the prediction model to the prediction instance when contextually relevant and provide contextually relevant suggestions based on the results of applying the prediction model to the prediction instance. 4. The computing system of claim 1, wherein the one or more computer-readable media further have stored thereon instructions that are executable by the one or more processors to configure the computer system to monitor current conditions and automatically perform a function based on application of the prediction model to the prediction instance. 5. The computing system of claim 1, wherein the prediction instance is updated to include a record for prediction, wherein the one or more computer-readable media further have stored thereon instructions that are executable by the one or more processors to configure the computer system to determine that a prediction should be performed locally for the record for prediction using the trained prediction model and the prediction instance. 6. The computing system of claim 1, wherein the entity is a local system. 7. The computing system of claim 6, wherein determining the entity to train the prediction model is performed as a result of determining that a time series in the data is white noise. 8. The computing system of claim 1, wherein the metadata is included in a table with the data or in a side structure separate from a table with the data. 9. A computer implemented method of applying machine learning prediction to data, the method comprising: obtaining a prediction instance comprising a set of data and metadata associated with the set of data, the metadata including a prediction type; based on the data and metadata determining an entity to train a prediction model using the prediction type; and as a result, obtaining a trained prediction model from the entity. 10. The method of claim 9, wherein the entity is a remote machine learning service. 11. The method of claim 9, further comprising monitoring user contextual information and proactively applying the prediction model to the prediction instance when contextually relevant and providing contextually relevant suggestions based on the results of applying the prediction model to the prediction instance. 12. The method of claim 9, further comprising monitoring current conditions and automatically performing a function based on application of the prediction model to the prediction instance. 13. 
The method of claim 9, wherein the prediction instance is updated to include a record for prediction, the method further comprising determining that a prediction should be performed locally for the record for prediction using the trained prediction model and the prediction instance. 14. The method of claim 9, wherein the entity is a local system. 15. The method of claim 14, wherein determining the entity to train the prediction model is performed as a result of determining that a time series in the data is white noise. 16. The method of claim 9, wherein the metadata is included in a table with the data or in a side structure separate from a table with the data. 17. A computer system comprising: a machine learning subsystem comprising: a machine learning optimization system configured to obtain a prediction instance comprising a set of data and metadata associated with the set of data, the metadata identifying a prediction type, and, based on the data and metadata, determine an entity to train a prediction model using the prediction type. 18. The computing system of claim 17, further comprising a machine learning notification system configured to monitor user contextual information and proactively apply the prediction model to the prediction instance when contextually relevant and provide contextually relevant suggestions based on the results of applying the prediction model to the prediction instance. 19. The computing system of claim 17, further comprising a machine learning workflow system to monitor current conditions and automatically perform a function based on application of the prediction model to the prediction instance. 20. The computing system of claim 17, further comprising a machine learning prediction subsystem configured to train the prediction model at the machine learning subsystem according to the prediction type and update the prediction instance with the trained prediction model.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Training prediction models and applying machine learning prediction to data is illustrated herein. A prediction instance comprising a set of data and metadata associated with the set of data identifying a prediction type is obtained. The data and metadata are used to determine an entity to train a prediction model using the prediction type. A trained prediction model is obtained from the entity. A notification system may be configured to monitor contextual information and apply the prediction. A workflow system may automatically perform a function in a workflow based on prediction.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Training prediction models and applying machine learning prediction to data is illustrated herein. A prediction instance comprising a set of data and metadata associated with the set of data identifying a prediction type is obtained. The data and metadata are used to determine an entity to train a prediction model using the prediction type. A trained prediction model is obtained from the entity. A notification system may be configured to monitor contextual information and apply the prediction. A workflow system may automatically perform a function in a workflow based on prediction.
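As a rough sketch of the routing decision described above (local training when a time series in the data is white noise, otherwise a remote machine learning service), the code below uses a crude lag-1 autocorrelation check as a stand-in for a proper white-noise test; the entity names, metadata keys, and threshold are hypothetical.

```python
import numpy as np

def looks_like_white_noise(series, threshold=0.2):
    # Crude stand-in for a white-noise test: small lag-1 autocorrelation.
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    r1 = np.dot(x[:-1], x[1:]) / np.dot(x, x)
    return abs(r1) < threshold

def choose_training_entity(prediction_instance):
    """Decide which entity should train the model for a prediction instance.

    prediction_instance: {"data": {...}, "metadata": {"prediction_type": ...}}
    Returns "local" or "remote_ml_service" (both names are illustrative).
    """
    data = prediction_instance["data"]
    meta = prediction_instance["metadata"]
    if meta.get("prediction_type") == "time_series_forecast":
        series = data.get("series", [])
        if looks_like_white_noise(series):
            return "local"            # little to gain from remote training; handle on-device
    return "remote_ml_service"

instance = {"data": {"series": np.random.default_rng(4).normal(size=200)},
            "metadata": {"prediction_type": "time_series_forecast"}}
print(choose_training_entity(instance))
```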
An approach for providing guidance and management of a data processing system. A processor stores at least one design pattern corresponding to a plurality of components of the data processing system. A processor generates a behavioral model of the data processing system based, at least in part, on the stored at least one design pattern. A processor monitors actual behavior of the data processing system. A processor compares the actual behavior of the data processing system to the behavioral model of the data processing system. A processor recommends a solution, based, at least in part, on the comparison.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for providing guidance and management of a data processing system, the method comprising: storing, by one or more processors, at least one design pattern corresponding to a plurality of components of the data processing system; generating, by one or more processors, a behavioral model of the data processing system based, at least in part, on the stored at least one design pattern; monitoring, by one or more processors, actual behavior of the data processing system; comparing, by one or more processors, the actual behavior of the data processing system to the behavioral model of the data processing system; and recommending, by one or more processors, a solution, based, at least in part, on the comparison. 2. The method of claim 1, wherein generating the behavioral model of the data processing system comprises: receiving, by one or more processors, system data and metadata; matching, by one or more processors, the system data and the metadata to the stored at least one design pattern; annotating, by one or more processors, the system data and the metadata to indicate an association with the at least one design pattern; and storing, by one or more processors, the annotated system data and the annotated metadata. 3. The method of claim 1, wherein monitoring actual behavior of the data processing system comprises, at least, receiving system information, wherein the system information is information describing behavior of the data processing system. 4. The method of claim 1, further comprising: performing an action, by one or more processors, on the at least one design pattern corresponding to the plurality of components of the data processing system. 5. The method of claim 4, wherein the action is selected from the group consisting of creating, modifying, merging, and deleting. 6. The method of claim 1, wherein the solution is a suggested need for a modification of the behavioral model based on a behavior of the data processing system that emerges as a consequence of one or more ways components of the plurality of components are connected. 7. The method of claim 6, wherein the behavior is selected from a group of defined emergent properties.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: An approach for providing guidance and management of a data processing system. A processor stores at least one design pattern corresponding to a plurality of components of the data processing system. A processor generates a behavioral model of the data processing system based, at least in part, on the stored at least one design pattern. A processor monitors actual behavior of the data processing system. A processor compares the actual behavior of the data processing system to the behavioral model of the data processing system. A processor recommends a solution, based, at least in part, on the comparison.
G06N5047
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: An approach for providing guidance and management of a data processing system. A processor stores at least one design pattern corresponding to a plurality of components of the data processing system. A processor generates a behavioral model of the data processing system based, at least in part, on the stored at least one design pattern. A processor monitors actual behavior of the data processing system. A processor compares the actual behavior of the data processing system to the behavioral model of the data processing system. A processor recommends a solution, based, at least in part, on the comparison.
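To make the compare-and-recommend step above concrete, here is a minimal sketch that compares per-component metrics predicted by a behavioral model against monitored values and emits suggestions when they diverge; the metric names, tolerance, and recommendation wording are illustrative assumptions, not the patent's method.

```python
def recommend_solution(behavioral_model, actual_behavior, tolerance=0.10):
    """Compare modeled vs. observed behavior per component and suggest follow-up.

    behavioral_model : {component: expected_metric} derived from stored design patterns
    actual_behavior  : {component: observed_metric} gathered by monitoring
    Returns a list of human-readable recommendations (wording is illustrative).
    """
    recommendations = []
    for component, expected in behavioral_model.items():
        observed = actual_behavior.get(component)
        if observed is None:
            recommendations.append(f"{component}: no telemetry received; add monitoring")
            continue
        drift = abs(observed - expected) / max(abs(expected), 1e-9)
        if drift > tolerance:
            recommendations.append(
                f"{component}: behavior deviates {drift:.0%} from the design-pattern model; "
                "review emergent component interactions or update the behavioral model")
    return recommendations

model = {"queue_latency_ms": 20.0, "db_connections": 50.0}
actual = {"queue_latency_ms": 35.0, "db_connections": 51.0}
print(recommend_solution(model, actual))
```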
A system and method for identifying an unknown person based on a static posture of the unknown person is described. The method includes receiving data of N skeleton joints of the unknown person from a skeleton recording device. The method further includes identifying the static posture of the unknown person. The method includes dividing a skeleton structure of the unknown person in a plurality of body parts based on joint types of the skeleton structure. In addition, the method includes extracting feature vectors for each of the joint type from each of the plurality of body parts. The method further includes identifying the unknown person based on comparison of the feature vectors for the unknown person with one of a constrained feature dataset and an unconstrained feature dataset for a plurality of known persons.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for identifying an unknown person based on a static posture of the unknown person, the method comprising: receiving data of N skeleton joints of the unknown person, wherein the data of the N skeleton joints is received from a skeleton recording device; identifying, by a processor, the static posture of the unknown person by dividing a skeleton structure of the unknown person in a plurality of body parts based on joint types of the skeleton structure; extracting, by the processor, feature vectors for each of the joint types from each of the plurality of body parts, corresponding to the static posture of the unknown person for identification of the unknown person, wherein the feature vectors are extracted based on the data of the N skeleton joints of the unknown person; and identifying, by the processor, the unknown person, based on comparison of the feature vectors for the unknown person with one of a constrained feature dataset and an unconstrained feature dataset for a plurality of known persons, wherein the constrained and the unconstrained feature datasets comprise at least one feature set for each of the plurality of known persons. 2. The method as claimed in claim 1, wherein the N skeleton joints of the unknown person comprises a head joint, a shoulder centre joint, a shoulder left joint, a shoulder right joint, a spine joint, a hand left joint, a hand right joint, an elbow right joint, an elbow left joint, a wrist right joint, a wrist left joint, a hip left joint, a hip right joint, a hip centre joint, a knee right joint, a knee left joint, a foot left joint, a foot right joint, an ankle right joint, and an ankle left joint. 3. The method as claimed in claim 1 further comprising: determining joint coordinates of the N skeleton joints of the unknown person, wherein the joint coordinates comprise Cartesian joint coordinates and spherical joint coordinates of each of the N skeleton joints, and wherein the static posture feature vectors is extracted based on the joint coordinates. 4. The method as claimed in claim 1, wherein the dividing comprises grouping the joint coordinates of the skeleton structure using density based clustering technique. 5. The method as claimed in claim 1, wherein the static posture of the unknown person is identified as a predefined static posture based on joint coordinates of predefined skeleton joints, from amongst the N skeleton joints, of the unknown person and the predefined static posture is one of a sitting posture, a standing posture, a lying posture, a bending posture, and a leaning posture and the joint types comprise static joints, dynamic joints, and noisy joints. 6. 
The method as claimed in claim 1, wherein, when the unknown person is identified to be in a sitting posture, the static posture feature vector for the unknown person is a sitting feature vector, and the training static posture feature vectors are training sitting feature vectors of the plurality of known persons, wherein the sitting feature vector comprises a first vector of static features, and wherein the first vector of static features comprises angle between a shoulder left joint, a shoulder centre joint, and a spine joint, angle between a shoulder right joint, the shoulder centre joint, and a spine joint, angle between the shoulder centre joint and the spine with respect to a vertical axis, area occupied by a polygon formed by the shoulder left joint, the shoulder centre joint, and the shoulder right joint, and a distance between two joints in each of a Cartesian co-ordinate system and a spherical co-ordinate system. 7. The method as claimed in claim 1, wherein, when the unknown person is identified to be in the standing posture, the static posture feature vector for the unknown person is a standing feature vector, and the training static posture feature vectors are training standing feature vectors of the plurality of known persons, wherein the standing feature vector comprises a second vector of static features, and wherein the second vector of static features comprises an angle between a shoulder left joint, a shoulder centre joint, and a spine joint, an angle between a shoulder right joint, the shoulder centre joint, and the spine joint, an angle between the shoulder centre joint and the spine with respect to a vertical axis, an angle between a hip left joint, a hip centre joint, and a hip right joint, an area occupied by a polygon formed by the shoulder left joint, the shoulder centre joint, and the shoulder right joint, an area occupied by a polygon formed by the hip left joint, the hip centre joint, and the hip right joint, and a distance between two joints in each of a Cartesian co-ordinate system and a spherical co-ordinate system. 8. The method as claimed in claim 1, wherein identifying the unknown person comprises evaluating person identification accuracy. 9. 
The method as claimed in claim 1, wherein the method further comprises: receiving data of N skeleton joints of each of the plurality of known persons for the predefined static posture at different positions and predefined poses in each of the positions within a field of view (FOV) of the skeleton recording device, wherein the data of the N skeleton joints is received from the skeleton recording device; determining, by the processor, joint coordinates of each of the skeleton joints of each of the plurality of known persons, wherein a static posture of each of the plurality of known persons is determined based on the joint coordinates; dividing, by the processor, a skeleton structure of each of the plurality of known persons in a plurality of body parts based on joint types of the skeleton structure; extracting, by the processor, feature vectors for each of the plurality of body parts of the known persons, wherein the feature vectors are indicative of a pose of the known person in the static posture; selecting, by the processor, an optimal set of feature vectors for each of the constrained poses and the unconstrained poses, and obtaining the optimal feature vector for the plurality of body parts for each posture and for all positions and poses; and storing, by the processor, the optimal feature vectors in a training database to identify the unknown person. 10. The method as claimed in claim 9, wherein a person identification system is trained using a classifier, wherein the classifier is a Support Vector Machine (SVM) with Radial Basis Function as kernel. 11. The method as claimed in claim 9, further comprising dividing the FOV of the skeleton recording device in a plurality of blocks to determine a position of the known persons. 12. A person identification system for identifying an unknown person based on a static posture of the unknown person, the person identification system comprising: a processor; a skeleton data processing module coupled to, and executable by, the processor to, receive data of N skeleton joints of the unknown person from a skeleton recording device; and determine joint coordinates of the N skeleton joints of the unknown person; a feature extraction module, coupled to the processor to, divide a skeleton structure of the unknown person in a plurality of body parts based on joint types, based on the static posture of the unknown person; and extract feature vectors for each of the plurality of body parts, wherein the feature vectors are indicative of the pose of the unknown person in the static posture; and an identification module coupled to the processor to, extract a feature set from a training database corresponding to the pose of the unknown person; and identify the unknown person, based on comparison of the feature vectors for the unknown person with one of a constrained feature dataset and an unconstrained feature dataset for a plurality of known persons, wherein the constrained and the unconstrained training feature datasets comprise at least one feature set for each of the plurality of known persons. 13. The person identification system as claimed in claim 12, wherein the skeleton data processing module further determines spherical joint coordinates and Cartesian joint coordinates of each of the N skeleton joints of the unknown person. 14.
The person identification system as claimed in claim 12, wherein, when the unknown person is identified to be in a sitting posture, the static posture feature vector for the unknown person is a sitting feature vector, and the training static posture feature vectors are training sitting feature vectors of the plurality of known persons, wherein the sitting feature vector comprises a first set of static features, and wherein the first set of static features comprises an angle between a shoulder left joint, a shoulder centre joint, and a spine joint, an angle between a shoulder right joint, the shoulder centre joint, and the spine joint, an angle between the shoulder centre joint and the spine with respect to a vertical axis, an area occupied by a polygon formed by the shoulder left joint, the shoulder centre joint, and the shoulder right joint, and a distance between two joints in each of a Cartesian co-ordinate system and a spherical co-ordinate system. 15. The person identification system as claimed in claim 12, wherein, when the unknown person is identified to be in the standing posture, the static posture feature vector for the unknown person is a standing feature vector, and the training static posture feature vectors are training standing feature vectors of the plurality of known persons, wherein the standing feature vector comprises a second set of static features, and wherein the second set of static features comprises an angle between a shoulder left joint, a shoulder centre joint, and a spine joint, an angle between a shoulder right joint, the shoulder centre joint, and the spine joint, an angle between the shoulder centre joint and the spine with respect to a vertical axis, an angle between a hip left joint, a hip centre joint, and a hip right joint, an area occupied by a polygon formed by the shoulder left joint, the shoulder centre joint, and the shoulder right joint, an area occupied by a polygon formed by the hip left joint, the hip centre joint, and the hip right joint, and a distance between two joints in each of a Cartesian co-ordinate system and a spherical co-ordinate system. 16. The person identification system as claimed in claim 12, wherein the skeleton data processing module further: receives data of N skeleton joints of each of the plurality of known persons for a predefined static posture, wherein the data of N skeleton joints is received from a skeleton recording device, and wherein the predefined static posture is one of a sitting posture, a standing posture, a lying posture, a bending posture, and a leaning posture; extracts a training static posture feature vector for each of the plurality of known persons based on the data of N skeleton joints of a respective known person; and stores the training static posture feature vector for each of the plurality of known persons to identify the unknown person, from amongst the plurality of known persons. 17. 
A non-transitory computer-readable medium having embodied thereon a computer program for executing a method comprising: receiving data of N skeleton joints of the unknown person, wherein the data of the N skeleton joints is received from a skeleton recording device; identifying, by a processor, the static posture of the unknown person by dividing a skeleton structure of the unknown person in a plurality of body parts based on joint types of the skeleton structure; extracting, by the processor, feature vectors for each of the joint types from each of the plurality of body parts, corresponding to the static posture of the unknown person for identification of the unknown person, wherein the feature vectors are extracted based on the data of the N skeleton joints of the unknown person; and identifying, by the processor, the unknown person, based on comparison of the feature vectors for the unknown person with one of a constrained feature dataset and an unconstrained feature dataset for a plurality of known persons, wherein the constrained and the unconstrained feature datasets comprise at least one feature set for each of the plurality of known persons.
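The static-posture features recited in claims 6, 7, 14 and 15 (joint angles, polygon areas, and joint-pair distances) and the SVM-with-RBF-kernel classifier of claim 10 can be pictured with a short sketch. The snippet below is only an illustration under assumed joint names, coordinate conventions and array layouts; it is not the patented implementation.

```python
# Hypothetical sketch of the sitting-posture features named in the claims
# (joint angles, polygon area, joint-pair distance), classified with an
# RBF-kernel SVM as in claim 10. Joint names and layouts are assumptions.
import numpy as np
from sklearn.svm import SVC

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by joints a-b-c."""
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def triangle_area(p1, p2, p3):
    """Area of the polygon (here a triangle) formed by three joints."""
    return 0.5 * np.linalg.norm(np.cross(p2 - p1, p3 - p1))

def sitting_features(j):
    """j: dict mapping joint names to 3-D Cartesian coordinates (assumed)."""
    vertical = j["spine"] + np.array([0.0, 1.0, 0.0])   # assumed vertical axis convention
    return np.array([
        joint_angle(j["shoulder_left"], j["shoulder_centre"], j["spine"]),
        joint_angle(j["shoulder_right"], j["shoulder_centre"], j["spine"]),
        joint_angle(j["shoulder_centre"], j["spine"], vertical),
        triangle_area(j["shoulder_left"], j["shoulder_centre"], j["shoulder_right"]),
        np.linalg.norm(j["shoulder_centre"] - j["spine"]),  # one joint-pair distance
    ])

# Training: one feature vector per known person (X_train) with identity labels (y_train).
clf = SVC(kernel="rbf", gamma="scale")
# clf.fit(X_train, y_train)
# identity = clf.predict([sitting_features(unknown_joints)])
```

In this sketch the standing-posture vector would simply append the hip-angle and hip-polygon-area terms listed in claims 7 and 15.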
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A system and method for identifying an unknown person based on a static posture of the unknown person is described. The method includes receiving data of N skeleton joints of the unknown person from a skeleton recording device. The method further includes identifying the static posture of the unknown person. The method includes dividing a skeleton structure of the unknown person in a plurality of body parts based on joint types of the skeleton structure. In addition, the method includes extracting feature vectors for each of the joint type from each of the plurality of body parts. The method further includes identifying the unknown person based on comparison of the feature vectors for the unknown person with one of a constrained feature dataset and an unconstrained feature dataset for a plurality of known persons.
G06N504
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A system and method for identifying an unknown person based on a static posture of the unknown person is described. The method includes receiving data of N skeleton joints of the unknown person from a skeleton recording device. The method further includes identifying the static posture of the unknown person. The method includes dividing a skeleton structure of the unknown person in a plurality of body parts based on joint types of the skeleton structure. In addition, the method includes extracting feature vectors for each of the joint type from each of the plurality of body parts. The method further includes identifying the unknown person based on comparison of the feature vectors for the unknown person with one of a constrained feature dataset and an unconstrained feature dataset for a plurality of known persons.
A method for global data flow optimization for machine learning (ML) programs. The method includes receiving, by a storage device, an initial plan for an ML program. A processor builds a nested global data flow graph representation using the initial plan. Operator directed acyclic graphs (DAGs) are connected using crossblock operators according to inter-block data dependencies. The initial plan for the ML program is re-written resulting in an optimized plan for the ML program with respect to its global data flow properties. The re-writing includes re-writes of: configuration dataflow properties, operator selection and structural changes.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method comprising: receiving, by a storage device, an initial plan for a machine learning (ML) program; generating, by a processor, a nested global data flow graph representation using the initial plan; connecting operator directed acyclic graphs (DAGs) using crossblock operators according to inter-block data dependencies; and re-writing the initial plan for the ML program resulting in an optimized plan for the ML program with respect to its global data flow properties, wherein the re-writing comprises re-writes of: configuration dataflow properties, operator selection and structural changes. 2. The method of claim 1, further comprising determining re-writes of the initial plan for data flow and control flow of the ML program. 3. The method of claim 2, further comprising: bounding the initial plan for the ML program based on estimated execution time of the optimized plan of the ML program. 4. The method of claim 3, wherein the re-writing includes: changing data block size of one or more operations for selecting efficient physical operators and taking memory and parallelism into account to prevent block size increases to an extent where memory increase is counter-productive; and changing data format of intermediate results for prevention of unnecessary format conversions. 5. The method of claim 4, wherein the re-writing includes: selecting execution type of one or more operators for either in-memory execution of individual operations or distributed ML program execution of individual operations; performing automatic data partitioning of one or more matrices into direct accessible rows and columns or blocks, wherein the performing automatic data partitioning prevents unnecessary ML program jobs and repeated scans of program data; changing a replication factor of an ML program job based on the execution type selection; performing checkpointing that includes determining distributed caching and a particular storage level; and empty block materialization to enable operations for data parallel distributed computing. 6. The method of claim 5, wherein the re-writing includes: vectorizing loops for replacing cell-wise, column-wise or row-wise operations with coarse-grained operations for reducing overhead from one or more of: instruction execution, data copying, buffer pool maintenance and reducing a number of ML program jobs. 7. The method of claim 6, further comprising: determining the optimized plan based on one or more of: performing a transformation based search for determining the optimized plan having lowest run-time memory usage and processing latency; and performing an enumeration based search of trees for determining the optimized plan using interesting properties (IP) from each re-write of the re-writing and selecting a set of plans having lowest run-time memory usage and processing latency. 8. The method of claim 7, further comprising: performing an enumeration based search of DAGs for determining the optimized plan; and performing an enumeration based search of DAGs with control flow for determining the optimized plan. 9. 
A computer program product for global data flow optimization for machine learning (ML) programs, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to: receive, by a storage device, an initial plan for an ML program; generate, by the processor, a nested global data flow graph representation using the initial plan; connect, by the processor, operator directed acyclic graphs (DAGs) using crossblock operators according to inter-block data dependencies; and re-write, by the processor, the initial plan for the ML program resulting in an optimized plan for the ML program with respect to its global data flow properties, wherein the re-write comprises re-writes of: configuration dataflow properties, operator selection and structural changes. 10. The computer program product of claim 9, further comprising program instructions executable by the processor to cause the processor to: determine, by the processor, re-writes of the initial plan for data flow and control flow of the ML program. 11. The computer program product of claim 10, further comprising program instructions executable by the processor to cause the processor to: bound, by the processor, the initial plan for the ML program based on estimated execution time of the optimized plan of the ML program. 12. The computer program product of claim 11, wherein the re-write further comprises program instructions executable by the processor to cause the processor to: change, by the processor, data block size of one or more operations for selecting efficient physical operators and taking memory and parallelism into account to prevent block size increases to an extent where memory increase is counter-productive; and change, by the processor, data format of intermediate results for prevention of unnecessary format conversions. 13. The computer program product of claim 12, wherein the re-write further comprises program instructions executable by the processor to cause the processor to: select, by the processor, execution type of one or more operators for either in-memory execution of individual operations or distributed ML program execution of individual operations; perform, by the processor, automatic data partitioning of one or more matrices into direct accessible rows and columns or blocks, wherein the performing automatic data partitioning prevents unnecessary ML program jobs and repeated scans of program data; change, by the processor, a replication factor of an ML program job based on the execution type selection; perform, by the processor, checkpointing that includes determining distributed caching and a particular storage level; and provide, by the processor, empty block materialization to enable operations for data parallel distributed computing. 14. The computer program product of claim 13, wherein the re-write further comprises program instructions executable by the processor to cause the processor to: vectorize, by the processor, loops for replacing cell-wise, column-wise or row-wise operations with coarse-grained operations for reducing overhead from one or more of: instruction execution, data copying, buffer pool maintenance and reducing a number of ML program jobs. 15. 
The computer program product of claim 14, further comprising program instructions executable by the processor to cause the processor to: determine, by the processor, the optimized plan based on one or more of: perform, by the processor, a transformation based search for determining the optimized plan having lowest run-time memory usage and processing latency; perform, by the processor, an enumeration based search of trees for determining the optimized plan using interesting properties (IP) from each re-write of the re-writing and selecting a set of plans having lowest run-time memory usage and processing latency; perform, by the processor, an enumeration based search of DAGs for determining the optimized plan; and perform, by the processor, an enumeration based search of DAGs with control flow for determining the optimized plan. 16. An apparatus comprising: a storage device configured to receive an initial plan for a machine learning (ML) program; a graph generation processor configured to generate a nested global data flow graph representation using the initial plan, and to connect operator directed acyclic graphs (DAGs) using crossblock operators according to inter-block data dependencies; and an optimizer processor configured to re-write the initial plan for the ML program resulting in an optimized plan for the ML program with respect to its global data flow properties, wherein the re-write comprises re-writes of: configuration dataflow properties, operator selection and structural changes. 17. The apparatus of claim 16, further comprising: a planning processor configured to: determine re-writes of the initial plan for data flow and control flow of the ML program; bound the initial plan for the ML program based on estimated execution time of the optimized plan of the ML program; and the optimizer processor is configured to: change data block size of one or more operations to select efficient physical operators and take memory and parallelism into account to prevent block size increases to an extent where memory increase is counter-productive; and change data format of intermediate results to prevent unnecessary format conversions. 18. The apparatus of claim 17, wherein the optimizer processor is configured to: select execution type of one or more operators for either in-memory execution of individual operations or distributed ML program execution of individual operations; perform automatic data partitioning of one or more matrices into direct accessible rows and columns or blocks, wherein the performing automatic data partitioning prevents unnecessary ML program jobs and repeated scans of program data; change a replication factor of an ML program job based on the execution type selection; perform checkpointing that includes determining distributed caching and a particular storage level; and provide empty block materialization to enable operations for data parallel distributed computing. 19. The apparatus of claim 18, wherein the optimizer processor is configured to vectorize loops to replace cell-wise, column-wise or row-wise operations with coarse-grained operations to reduce overhead from one or more of: instruction execution, data copying, buffer pool maintenance and reducing a number of ML program jobs. 20. 
The apparatus of claim 19, wherein the planning processor is configured to: determine the optimized plan by being configured to perform one or more of: a transformation based search to determine the optimized plan having lowest run-time memory usage and processing latency; an enumeration based search of trees to determine the optimized plan using interesting properties (IP) from each re-write and select a set of plans having lowest run-time memory usage and processing latency; an enumeration based search of DAGs to determine the optimized plan; and an enumeration based search of DAGs with control flow to determine the optimized plan.
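To make the claimed plan representation concrete, the sketch below models per-block operator DAGs joined by crossblock operators and applies one of the claimed re-writes (execution-type selection from a memory estimate). The class names, fields and the memory-budget rule are illustrative assumptions, not the actual optimizer.

```python
# Illustrative sketch (not the patented optimizer API) of a nested global data
# flow graph: per-block operator DAGs joined by crossblock edges, plus a simple
# re-write that picks an execution type per operator. All names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Operator:
    name: str
    inputs: list = field(default_factory=list)    # upstream Operators
    mem_estimate_mb: float = 0.0
    exec_type: str = "UNDECIDED"                   # "IN_MEMORY" or "DISTRIBUTED"

@dataclass
class Block:
    dag: list                                      # Operators in topological order

@dataclass
class GlobalPlan:
    blocks: list
    crossblock: list = field(default_factory=list)  # (producer, consumer) pairs

def connect_crossblock(plan, producer, consumer):
    """Model an inter-block data dependency as a crossblock edge."""
    plan.crossblock.append((producer, consumer))
    consumer.inputs.append(producer)

def rewrite_exec_type(plan, mem_budget_mb):
    """Re-write: run an operator in memory only if its estimate fits the budget."""
    for block in plan.blocks:
        for op in block.dag:
            op.exec_type = "IN_MEMORY" if op.mem_estimate_mb <= mem_budget_mb else "DISTRIBUTED"
    return plan

# Example: two blocks with a crossblock dependency X -> t(X) %*% X
x_read = Operator("read_X", mem_estimate_mb=8000)
xtx = Operator("t(X) %*% X", mem_estimate_mb=500)
plan = GlobalPlan(blocks=[Block([x_read]), Block([xtx])])
connect_crossblock(plan, x_read, xtx)
rewrite_exec_type(plan, mem_budget_mb=4000)
print([(op.name, op.exec_type) for b in plan.blocks for op in b.dag])
```

The same pattern would carry the other claimed re-writes (block size, format, partitioning, checkpointing, loop vectorization) as further passes over the same plan object.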
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A method for global data flow optimization for machine learning (ML) programs. The method includes receiving, by a storage device, an initial plan for an ML program. A processor builds a nested global data flow graph representation using the initial plan. Operator directed acyclic graphs (DAGs) are connected using crossblock operators according to inter-block data dependencies. The initial plan for the ML program is re-written resulting in an optimized plan for the ML program with respect to its global data flow properties. The re-writing includes re-writes of: configuration dataflow properties, operator selection and structural changes.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A method for global data flow optimization for machine learning (ML) programs. The method includes receiving, by a storage device, an initial plan for an ML program. A processor builds a nested global data flow graph representation using the initial plan. Operator directed acyclic graphs (DAGs) are connected using crossblock operators according to inter-block data dependencies. The initial plan for the ML program is re-written resulting in an optimized plan for the ML program with respect to its global data flow properties. The re-writing includes re-writes of: configuration dataflow properties, operator selection and structural changes.
Embodiments are directed towards classifying data using machine learning that may be incrementally refined based on expert input. Data is provided to a deep learning model that may be trained based on a plurality of classifiers and sets of training data and/or testing data. If the number of classification errors exceeds a defined threshold, classifiers may be modified based on data corresponding to observed classification errors. A fast learning model may be trained based on the modified classifiers, the data, and the data corresponding to the observed classification errors. Another confidence value may be generated and associated with the classification of the data by the fast learning model. Report information may be generated based on a comparison result of the confidence value associated with the fast learning model and the confidence value associated with the deep learning model.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for classifying information over a network using a computer that includes one or more hardware processors, where each action of the method is performed by the one or more hardware processors, comprising: providing data to a deep learning model, wherein the deep learning model was previously trained based on a plurality of classifiers and one or more sets of training data; classifying the data using the deep learning model and the one or more classifiers, wherein a confidence value is associated with the classification of the data; when a number of classification errors exceeds a defined threshold, performing further actions, including: modifying one or more classifiers of the plurality of classifiers based on data corresponding to one or more classification errors; training a fast learning model based on at least the one or more modified classifiers and that portion of the data that corresponds to the one or more classification errors; classifying the data based on the fast learning model and the one or more modified classifiers, wherein another confidence value is associated with the classification of the data by the fast learning model; and generating report information based on a comparison result of the confidence value associated with the fast learning model and the confidence value associated with the deep learning model. 2. The method of claim 1, further comprising: retraining the deep learning model using the one or more modified classifiers; and discarding the trained fast learning model. 3. The method of claim 1, wherein the data includes one or more of real-time network information or captured network information. 4. The method of claim 1, further comprising, when the data is classified as associated with anomalous activity, generating one or more notifications based on a type of the anomalous activity. 5. The method of claim 1, further comprising: when the data is classified as associated with a new network entity, performing further actions, including: associating historical network information with the new network entity based on a type of the new network entity; and buffering real-time network information that is associated with the new network entity. 6. The method of claim 1, further comprising, buffering the data in real-time using a sensor computer, wherein the data is network information. 7. The method of claim 1, wherein exceeding a defined threshold, further comprises, exceeding one or more different thresholds that are defined for different types classification errors, wherein classification errors related to dangerous events have a lower defined threshold than classification errors related to safe events. 8. The method of claim 1, further comprising, retraining the deep learning model based on a defined schedule. 9. 
A system for classifying information over a network, comprising: a network computer, comprising: a transceiver that communicates over the network; a memory that stores at least instructions; and a processor device that executes instructions that perform actions, including: providing data to a deep learning model, wherein the deep learning model was previously trained based on a plurality of classifiers and one or more sets of training data; classifying the data using the deep learning model and the one or more classifiers, wherein a confidence value is associated with the classification of the data; and when a number of classification errors exceeds a defined threshold, performing further actions, including: modifying one or more classifiers of the plurality of classifiers based on data corresponding to one or more classification errors; training a fast learning model based on at least the one or more modified classifiers and that portion of the data that corresponds to the one or more classification errors; classifying the data based on the fast learning model and the one or more modified classifiers, wherein another confidence value is associated with the classification of the data by the fast learning model; and generating report information based on a comparison result of the confidence value associated with the fast learning model and the confidence value associated with the deep learning model; and a client computer, comprising: a transceiver that communicates over the network; a memory that stores at least instructions; and a processor device that executes instructions that perform actions, including: providing at least a portion of the data to the deep learning model. 10. The system of claim 9, wherein the network computer processor device executes instructions that perform actions, further comprising: retraining the deep learning model using the one or more modified classifiers; and discarding the trained fast learning model. 11. The system of claim 9, wherein the data includes one or more of real-time network information or captured network information. 12. The system of claim 9, wherein the network computer processor device executes instructions that perform actions, further comprising, when the data is classified as associated with anomalous activity, generating one or more notifications based on a type of the anomalous activity. 13. The system of claim 9, wherein the network computer processor device executes instructions that perform actions, further comprising: when the data is classified as associated with a new network entity, performing further actions, including: associating historical network information with the new network entity based on a type of the new network entity; and buffering real-time network information that is associated with the new network entity. 14. The system of claim 9, wherein the network computer processor device executes instructions that perform actions, further comprising, buffering the data in real-time using a sensor computer, wherein the data is network information. 15. The system of claim 9, wherein exceeding a defined threshold, further comprises, exceeding one or more different thresholds that are defined for different types classification errors, wherein classification errors related to dangerous events have a lower defined threshold than classification errors related to safe events. 16. 
The system of claim 9, wherein the network computer processor device executes instructions that perform actions, further comprising, retraining the deep learning model based on a defined schedule. 17. A processor readable non-transitory storage media that includes instructions for classifying information, wherein execution of the instructions by a processor device performs actions, comprising: providing data to a deep learning model, wherein the deep learning model was previously trained based on a plurality of classifiers and one or more sets of training data; classifying the data using the deep learning model and the one or more classifiers, wherein a confidence value is associated with the classification of the data; when a number of classification errors exceeds a defined threshold, performing further actions, including: modifying one or more classifiers of the plurality of classifiers based on data corresponding to one or more classification errors; training a fast learning model based on at least the one or more modified classifiers and that portion of the data that corresponds to the one or more classification errors; classifying the data based on the fast learning model and the one or more modified classifiers, wherein another confidence value is associated with the classification of the data by the fast learning model; and generating report information based on a comparison result of the confidence value associated with the fast learning model and the confidence value associated with the deep learning model. 18. The media of claim 17, further comprising: retraining the deep learning model using the one or more modified classifiers; and discarding the trained fast learning model. 19. The media of claim 17, wherein the data includes one or more of real-time network information or captured network information. 20. The media of claim 17, further comprising, when the data is classified as associated with anomalous activity, generating one or more notifications based on a type of the anomalous activity. 21. The media of claim 17, further comprising: when the data is classified as associated with a new network entity, performing further actions, including: associating historical network information with the new network entity based on a type of the new network entity; and buffering real-time network information that is associated with the new network entity. 22. The media of claim 17, further comprising, buffering the data in real-time using a sensor computer, wherein the data is network information. 23. The media of claim 17, wherein exceeding a defined threshold, further comprises, exceeding one or more different thresholds that are defined for different types classification errors, wherein classification errors related to dangerous events have a lower defined threshold than classification errors related to safe events. 24. 
A network computer for classifying information, comprising: a transceiver that communicates over the network; a memory that stores at least instructions; and a processor device that executes instructions that perform actions, including: providing data to a deep learning model, wherein the deep learning model was previously trained based on a plurality of classifiers and one or more sets of training data; classifying the data using the deep learning model and the one or more classifiers, wherein a confidence value is associated with the classification of the data; when a number of classification errors exceeds a defined threshold, performing further actions, including: modifying one or more classifiers of the plurality of classifiers based on data corresponding to one or more classification errors; training a fast learning model based on at least the one or more modified classifiers and that portion of the data that corresponds to the one or more classification errors; classifying the data based on the fast learning model and the one or more modified classifiers, wherein another confidence value is associated with the classification of the data by the fast learning model; and generating report information based on a comparison result of the confidence value associated with the fast learning model and the confidence value associated with the deep learning model. 25. The network computer of claim 24, further comprising: retraining the deep learning model using the one or more modified classifiers; and discarding the trained fast learning model. 26. The network computer of claim 24, wherein the data includes one or more of real-time network information or captured network information. 27. The network computer of claim 24, further comprising, when the data is classified as associated with anomalous activity, generating one or more notifications based on a type of the anomalous activity. 28. The network computer of claim 24, further comprising: when the data is classified as associated with a new network entity, performing further actions, including: associating historical network information with the new network entity based on a type of the new network entity; and buffering real-time network information that is associated with the new network entity. 29. The network computer of claim 24, further comprising, buffering the data in real-time using a sensor computer, wherein the data is network information. 30. The network computer of claim 24, wherein exceeding a defined threshold, further comprises, exceeding one or more different thresholds that are defined for different types classification errors, wherein classification errors related to dangerous events have a lower defined threshold than classification errors related to safe events.
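A rough picture of the claimed refinement loop: classify with a previously trained deep model, accumulate expert-flagged errors, and once the defined threshold is exceeded, train a fast learning model on the error data and report the confidence comparison. Everything concrete below (the scikit-learn stand-in models, the threshold value, the report keys) is an assumption made for illustration, not the patented system.

```python
# Minimal sketch of the threshold-triggered refinement loop in the claims.
# Any probabilistic classifier can stand in for the deep learning model; an
# SGD classifier plays the "fast learning model". Threshold, error buffer and
# report format are illustrative assumptions.
import numpy as np
from sklearn.linear_model import SGDClassifier

ERROR_THRESHOLD = 50          # assumed; the claims only require "a defined threshold"

def classify_with_refinement(deep_model, X, y_expert, error_buffer):
    """Classify X with the deep model; once expert-flagged errors exceed the
    threshold, train a fast model on the error data and compare confidences."""
    deep_pred = deep_model.predict(X)
    deep_conf = deep_model.predict_proba(X).max(axis=1)

    # Accumulate observed classification errors (expert-corrected labels).
    wrong = np.flatnonzero(deep_pred != y_expert)
    error_buffer.extend(zip(X[wrong], y_expert[wrong]))

    report = None
    if len(error_buffer) > ERROR_THRESHOLD:
        Xe = np.array([x for x, _ in error_buffer])
        ye = np.array([y for _, y in error_buffer])
        fast_model = SGDClassifier(loss="log_loss").fit(Xe, ye)   # fast learning model
        fast_conf = fast_model.predict_proba(X).max(axis=1)
        report = {"deep_mean_confidence": float(deep_conf.mean()),
                  "fast_mean_confidence": float(fast_conf.mean())}
        # A full system would later retrain the deep model with the modified
        # classifiers (e.g., on a schedule) and discard the fast model.
    return deep_pred, report
```

The dependent claims' per-type thresholds would replace the single ERROR_THRESHOLD with a mapping from error type to threshold, with lower values for dangerous events.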
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: Embodiments are directed towards classifying data using machine learning that may be incrementally refined based on expert input. Data is provided to a deep learning model that may be trained based on a plurality of classifiers and sets of training data and/or testing data. If the number of classification errors exceeds a defined threshold, classifiers may be modified based on data corresponding to observed classification errors. A fast learning model may be trained based on the modified classifiers, the data, and the data corresponding to the observed classification errors. Another confidence value may be generated and associated with the classification of the data by the fast learning model. Report information may be generated based on a comparison result of the confidence value associated with the fast learning model and the confidence value associated with the deep learning model.
G06N308
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Embodiments are directed towards classifying data using machine learning that may be incrementally refined based on expert input. Data is provided to a deep learning model that may be trained based on a plurality of classifiers and sets of training data and/or testing data. If the number of classification errors exceeds a defined threshold, classifiers may be modified based on data corresponding to observed classification errors. A fast learning model may be trained based on the modified classifiers, the data, and the data corresponding to the observed classification errors. Another confidence value may be generated and associated with the classification of the data by the fast learning model. Report information may be generated based on a comparison result of the confidence value associated with the fast learning model and the confidence value associated with the deep learning model.
An electrical filter includes a dielectric substrate with inner and outer coils about a first region and inner and outer coils about a second region, a portion of cladding removed from wires that form the coils and coupled to electrically conductive traces on the dielectric substrate via a solder joint in a switching region. An apparatus to thermally couple a superconductive device to a metal carrier with a through-hole includes a first clamp and a vacuum pump. A composite magnetic shield for use at superconductive temperatures includes an inner layer with magnetic permeability of at least 50,000; and an outer layer with magnetic saturation field greater than 1.2 T, separated from the inner layer by an intermediate layer of dielectric. An apparatus to dissipate heat from a superconducting processor includes a metal carrier with a recess, a post that extends upwards from a base of the recess and a layer of adhesive on top of the post. Various cryogenic refrigeration systems are described.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of fabricating an electrical filter for use with differential signals, the method comprising: winding a first length of wire about a first region of a dielectric substrate, to form a first inner coil of wire, wherein the first length of wire comprises a first wire core and a first resistive cladding; winding the first length of wire about a second region of the dielectric substrate, to form a second inner coil of wire, wherein the first and the second regions of the dielectric substrate are separated by a switching region of the dielectric substrate; winding a second length of wire around the first inner coil of wire, to form a first outer coil of wire, wherein the second length of wire comprises a second wire core and a second resistive cladding; winding the second length of wire around the second inner coil of wire, to form a second outer coil of wire; exposing a portion of the first and the second wire core in the switching region by removing a first portion of the first and the second resistive cladding, respectively, and tapering a second portion of the first and the second resistive cladding on each side of the first portion of the first and the second wire cladding, respectively; soldering the first and the second lengths of wire to each of a first and a second conductive trace in the switching region to form a solder joint, wherein the solder joint overlies the exposed portion of the first and the second wire core, and overlies the tapered portion of the first and the second cladding, and wherein the solder joint electrically couples the first and the second wire cores to the first and the second conductive traces, and mechanically couples the first and the second lengths of wire to the switching region; and cutting each of the first and the second lengths of wire between the first and the second conductive trace, to form a first conductive signal path comprising the first inner coil of wire and the second outer coil of wire, and a second conductive signal path comprising the first outer coil of wire and the second inner coil of wire. 2. The method of claim 1 wherein winding a first length of wire about a first region of a dielectric substrate includes winding a first length of continuous superconductive wire about a first region of a dielectric substrate, the first length of continuous superconductive wire comprising a core material that is superconducting below a first critical temperature, and wherein winding a second length of wire about a first region of the dielectric substrate includes winding a second length of continuous superconductive wire about a first region of the dielectric substrate, the second length of continuous superconductive wire comprising a core material that is superconducting below a second critical temperature. 3. The method of claim 1 wherein exposing a portion of the first and the second wire core in the switching region by removing a first portion of the first and the second resistive cladding, respectively, and tapering a second portion of the first and the second resistive cladding on each side of the first portion of the first and the second wire resistive cladding, respectively, includes applying an etchant to a portion of the first and the second lengths of wire. 4. The method of claim 3, further comprising: heating the switching region to cause the etchant to corrode the first and the second resistive claddings. 5. 
The method of claim 3, further comprising: applying a protective mask to the first and the second conductive traces to protect the first and the second conductive traces from the etchant, wherein soldering the first and the second lengths of wire to each of a first and a second conductive trace in the switching region to form a solder joint includes removing the protective mask from the first and the second conductive traces. 6. The method of claim 3 wherein applying an etchant to a portion of the first and the second lengths of wire includes applying ferric chloride to a portion of the first and the second lengths of wire. 7. The method of claim 1 wherein winding a first length of wire about a first or a second region of a dielectric substrate, to form a first or a second inner coil of wire, respectively, includes winding a first number of turns in the first length of wire, and winding a second length of continuous wire about a first or a second region of a dielectric substrate, to form a first or a second outer coil of wire, respectively, includes winding a second number of turns in the second length of wire, wherein the second number of turns is less than or equal to the first number of turns. 8. The method of claim 1 wherein winding a first length of wire about a first region of a dielectric substrate, to form a first inner coil of wire includes winding a first length of wire in a first direction about the dielectric substrate, and winding a second length of wire about a first region of a dielectric substrate, to form a first outer coil of wire includes winding a second length of wire in a second direction about the dielectric substrate, wherein the first direction is the same as the second direction. 9. The method of claim 1 wherein winding a first and a second length of wire includes winding a first and a second length of wire each with respective niobium-titanium wire cores, and copper-nickel claddings. 10.-71. (canceled) 72. An apparatus, comprising: a first tubular shield comprised of a first plurality of distinct U-shaped pieces of a nanocrystalline material that form a perimeter wall which forms a first cavity, the first cavity having at least a respective lateral dimension, the first tubular shield having a first end and a second end, the second end opposed to the first end along a length of the first tubular shield, the second end closed; and a first end cap comprised of a second plurality of distinct U-shaped pieces of a nanocrystalline material, the first end cap sized and dimensioned to be removably secured to the first tubular shield proximate the first end of the first tubular shield. 73. The apparatus of claim 72 wherein the first plurality of distinct U-shaped pieces of a nanocrystalline material are each comprised of a plurality of layers of a nanocrystalline amorphous iron-based material. 74. The apparatus of claim 72 wherein the first plurality of distinct U-shaped pieces of a nanocrystalline material are each comprised of a laminate of a plurality of layers of a nanocrystalline amorphous iron-based material in a polyethylene terephthalate (PET). 75. The apparatus of claim 72 wherein the first plurality of distinct U-shaped pieces of a nanocrystalline material are each comprised of a laminate of a plurality of layers of a nanocrystalline amorphous iron-based material in a polyethylene terephthalate (PET) loaded with a copper powder. 76. 
The apparatus of claim 72 wherein the first plurality of distinct U-shaped pieces of a nanocrystalline material are each comprised of a laminate of a plurality of layers of a tape comprised of a nanocrystalline amorphous Iron-based material in a polyethylene terephthalate (PET). 77. The apparatus of claim 72 wherein the U-shaped pieces are each from a single strip or a single piece of a laminate strip, with a first bend and a second bend each of which extend laterally across a longitudinal axis of the strip. 78. The apparatus of claim 72, further comprising: a copper base, about which the first plurality of distinct U-shaped pieces of a nanocrystalline material are arranged to form the first tubular shield. 79. The apparatus of claim 72 wherein the second plurality of distinct U-shaped pieces of a nanocrystalline material are each comprised of a laminate of a plurality of layers of a tape comprised of a nanocrystalline amorphous Iron-based material in a polyethylene terephthalate (PET). 80. The apparatus of claim 72, further comprising: a second tubular shield comprised of a third plurality of distinct U-shaped pieces of a nanocrystalline material that form a perimeter wall which forms a second cavity, the second cavity having at least a respective lateral dimension, the second tubular shield having a first end and a second end, the second end opposed to the first end along a length of the second tubular shield, the second end closed, the second tubular shield positioned in the first cavity of the first tubular shield. 81. The apparatus of claim 80, further comprising: a copper base, about which the first plurality of distinct U-shaped pieces of a nanocrystalline material are arranged to form the first tubular shield. 82. The apparatus of claim 81, further comprising: a processor received within the second cavity of the second tubular shield. 83. The apparatus of claim 82, further comprising: at least one degaussing wire wrapped about a portion of the first tubular shield, the degaussing wire electrically coupled to receive a periodic waveform of gradually decreasing amplitude to reduce a residual magnetism in the first tubular shield. 84. A method of forming an apparatus, comprising: arranging a first plurality of distinct U-shaped pieces of a nanocrystalline material to form at least a perimeter wall of a first tubular shield having a first cavity, the first cavity having at least a respective lateral dimension, the first tubular shield having a first end and a second end, the second end opposed to the first end along a length of the first tubular shield, the second end closed; and arranging a second plurality of distinct U-shaped pieces of a nanocrystalline material to form at least a perimeter wall of a first end cap, the first end cap sized and dimensioned to be removably secured to the first tubular shield proximate the first end of the first tubular shield. 85. The method of claim 84, further comprising: arranging a third plurality of distinct U-shaped pieces of a nanocrystalline material to form at least a perimeter wall of a second tubular shield having a second cavity, the second cavity having at least a respective lateral dimension, the second tubular shield having a first end and a second end, the second end opposed to the first end along a length of the second tubular shield, the second end closed, the second tubular shield sized and dimensioned to be removably received in the first cavity of the first tubular shield. 86. 
The method of claim 85, further comprising: inserting a processor in the second cavity of the second tubular shield; inserting the second tubular shield in the first cavity of the first tubular shield; and securing the first end cap to the first end of the first tubular shield. 87.-214. (canceled)
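For orientation only, a standard textbook estimate (not taken from these claims) of what a high-permeability layer buys: the transverse static shielding factor of a long, thin-walled cylindrical layer, with the wall thickness and diameter below chosen purely as assumed example values.

```latex
% Hedged aside: first-order transverse shielding factor of a long thin-walled
% cylinder of relative permeability \mu_r, wall thickness t and diameter D.
\[
  S_\perp \;\approx\; 1 + \frac{\mu_r\, t}{D}
\]
% Example (assumed geometry): \mu_r \approx 50{,}000, t = 0.1\,\mathrm{mm},
% D = 100\,\mathrm{mm} gives S_\perp \approx 51 per layer; nested layers
% multiply their factors, up to geometry- and saturation-dependent corrections.
```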
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: An electrical filter includes a dielectric substrate with inner and outer coils about a first region and inner and outer coils about a second region, a portion of cladding removed from wires that form the coils and coupled to electrically conductive traces on the dielectric substrate via a solder joint in a switching region. An apparatus to thermally couple a superconductive device to a metal carrier with a through-hole includes a first clamp and a vacuum pump. A composite magnetic shield for use at superconductive temperatures includes an inner layer with magnetic permeability of at least 50,000; and an outer layer with magnetic saturation field greater than 1.2 T, separated from the inner layer by an intermediate layer of dielectric. An apparatus to dissipate heat from a superconducting processor includes a metal carrier with a recess, a post that extends upwards from a base of the recess and a layer of adhesive on top of the post. Various cryogenic refrigeration systems are described.
G06N99002
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: An electrical filter includes a dielectric substrate with inner and outer coils about a first region and inner and outer coils about a second region, a portion of cladding removed from wires that form the coils and coupled to electrically conductive traces on the dielectric substrate via a solder joint in a switching region. An apparatus to thermally couple a superconductive device to a metal carrier with a through-hole includes a first clamp and a vacuum pump. A composite magnetic shield for use at superconductive temperatures includes an inner layer with magnetic permeability of at least 50,000; and an outer layer with magnetic saturation field greater than 1.2 T, separated from the inner layer by an intermediate layer of dielectric. An apparatus to dissipate heat from a superconducting processor includes a metal carrier with a recess, a post that extends upwards from a base of the recess and a layer of adhesive on top of the post. Various cryogenic refrigeration systems are described.
According to an aspect, learning parameters in a feed forward probabilistic graphical model includes creating an inference model via a computer processor. The creation of the inference model includes receiving a training set that includes multiple scenarios, each scenario comprised of one or more natural language statements, and each scenario corresponding to a plurality of candidate answers. The creation also includes constructing evidence graphs for each of the multiple scenarios based on the training set, and calculating weights for common features across the evidence graphs that will maximize a probability of the inference model locating correct answers from corresponding candidate answers across all of the multiple scenarios. In response to an inquiry from a user that includes a scenario, the inference model constructs an evidence graph and recursively constructs formulas to express a confidence of each node in the evidence graph in terms of its parents in the evidence graph.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method comprising: creating an inference model via a computer processor, the creating comprising: receiving a training set that includes multiple scenarios, each scenario comprised of one or more natural language statements, and each scenario corresponding to a plurality of candidate answers; constructing evidence graphs for each of the multiple scenarios based on the training set; and calculating weights for common features across the evidence graphs that will maximize a probability of the inference model locating correct answers from corresponding candidate answers across all of the multiple scenarios; in response to an inquiry from a user via the computer processor, the inquiry comprising a scenario, the inference model constructs an evidence graph and recursively constructs formulas to express a confidence of each node in the evidence graph in terms of its parents in the evidence graph. 2. The method of claim 1, wherein the constructing evidence graphs includes for each scenario: extracting factors from the scenario; and generating intermediate nodes based on the extracted factors, wherein the factors are root nodes in the evidence graph and the candidate answers are terminal nodes in the evidence graph. 3. The method of claim 2, wherein the constructing evidence graphs further includes generating questions for the factors and the intermediate nodes represent the generated questions. 4. The method of claim 2, wherein edges of the intermediate nodes are determined using a question answering (QA) system that assigns a confidence value and a feature vector to each edge. 5. The method of claim 1, wherein the candidate answers are expressed as a parameterized mathematical expression that matches semantics of the inference model. 6. The method of claim 1, wherein the inference model applies different weights to the factors. 7. The method of claim 1, wherein the scenario corresponds to a medical environment and the candidate answers indicate corresponding diagnoses.
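The recursive confidence construction of claim 1 can be sketched as follows, assuming a logistic score per edge (computed from the QA-assigned feature vector with weights shared across edges, as in the claims) and a noisy-OR combination at each node. The noisy-OR choice, the toy graph and all numbers are assumptions for illustration, not the patented formulas.

```python
# Minimal sketch: express each node's confidence recursively in terms of its
# parents in the evidence graph. Edge scores use weights shared across edges;
# the noisy-OR combination rule is an assumption.
import math

# Evidence graph: node -> list of (parent, edge_feature_vector)
graph = {
    "diagnosis_A": [("finding_1", [0.8, 0.1]), ("finding_2", [0.3, 0.6])],
    "finding_1":   [("factor_1", [0.9, 0.2])],
    "finding_2":   [("factor_1", [0.4, 0.4]), ("factor_2", [0.7, 0.1])],
}
root_confidence = {"factor_1": 0.9, "factor_2": 0.6}  # factors extracted from the scenario
weights = [1.5, 0.7]                                   # shared feature weights, learned from training scenarios

def edge_strength(features):
    """Logistic score of an edge from its feature vector."""
    z = sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def confidence(node):
    """Recursively express a node's confidence in terms of its parents."""
    if node in root_confidence:                        # factor (root) node
        return root_confidence[node]
    fail = 1.0
    for parent, feats in graph.get(node, []):
        # noisy-OR: the node fails only if every supporting parent edge fails
        fail *= 1.0 - edge_strength(feats) * confidence(parent)
    return 1.0 - fail

print(round(confidence("diagnosis_A"), 3))   # candidate answer (terminal node) confidence
```

Learning the shared weights across all training scenarios, rather than per edge, is what lets the model maximize the probability of ranking the correct candidate answer highest in every scenario.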
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: According to an aspect, learning parameters in a feed forward probabilistic graphical model includes creating an inference model via a computer processor. The creation of the inference model includes receiving a training set that includes multiple scenarios, each scenario comprised of one or more natural language statements, and each scenario corresponding to a plurality of candidate answers. The creation also includes constructing evidence graphs for each of the multiple scenarios based on the training set, and calculating weights for common features across the evidence graphs that will maximize a probability of the inference model locating correct answers from corresponding candidate answers across all of the multiple scenarios. In response to an inquiry from a user that includes a scenario, the inference model constructs an evidence graph and recursively constructs formulas to express a confidence of each node in the evidence graph in terms of its parents in the evidence graph.
G06N5046
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: According to an aspect, learning parameters in a feed forward probabilistic graphical model includes creating an inference model via a computer processor. The creation of the inference model includes receiving a training set that includes multiple scenarios, each scenario comprised of one or more natural language statements, and each scenario corresponding to a plurality of candidate answers. The creation also includes constructing evidence graphs for each of the multiple scenarios based on the training set, and calculating weights for common features across the evidence graphs that will maximize a probability of the inference model locating correct answers from corresponding candidate answers across all of the multiple scenarios. In response to an inquiry from a user that includes a scenario, the inference model constructs an evidence graph and recursively constructs formulas to express a confidence of each node in the evidence graph in terms of its parents in the evidence graph.
An information conversion method includes: first moving positions of a plurality of particles on a unit sphere according to a value of a probability density function, defining a positional vector of a particle on the unit sphere in a multidimensional space, as a normal vector of a hyperplane configured to divide a feature vector space, defining a predetermined evaluation function configured to evaluate the hyperplane, as the probability density function configured to indicate a probability of existence of a particle on the unit sphere, by a processor; and converting the feature vector to a binary string, considering a positional vector of the moved particle as a normal vector of the hyperplane, by the processor.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. An information conversion method comprising: first moving positions of a plurality of particles on a unit sphere according to a value of a probability density function, defining a positional vector of a particle on the unit sphere in a multidimensional space, as a normal vector of a hyperplane configured to divide a feature vector space, defining a predetermined evaluation function configured to evaluate the hyperplane, as the probability density function configured to indicate a probability of existence of a particle on the unit sphere, by a processor; and converting the feature vector to a binary string, considering a positional vector of the moved particle as a normal vector of the hyperplane, by the processor. 2. The information conversion method according to claim 1, wherein the first moving includes extracting a movement destination candidate of the particle, by the processor, calculating a value obtained by dividing a value of the evaluation function calculated by defining a positional vector of the particle moved to the movement destination candidate, as a normal vector of the hyperplane, by a value of the evaluation function calculated by defining a current positional vector of the particle, as a normal vector of the hyperplane, by the processor, and second moving the particle to the extracted movement destination candidate, upon obtaining the calculated value larger than a random number of not less than 0 and not more than 1, or preventing movement of the particle, upon obtaining the calculated value not more than the random number, by the processor. 3. The information conversion method according to claim 1, wherein the first moving is repetitively performed a predetermined number of times, by the processor. 4. The information conversion method according to claim 1, wherein, as the probability density function, an evaluation function is used to move the plurality of particles on the unit sphere, the evaluation function selecting a pair of feature vectors to which the same label is applied, as a positive example pair, selecting a pair of feature vectors to which different labels are applied, as a negative example pair, and having a value reduced upon dividing two feature vectors included in the positive example pair into different areas by the hyperplane, and the value increased upon dividing two feature vectors included in the negative example pair into different areas by the hyperplane. 5. The information conversion method according to claim 4, wherein, as the probability density function, an evaluation function is used, the evaluation function selecting a plurality of the positive example pairs and a plurality of the negative example pairs, defining, as an index, the sum of the number of positive example pairs including both feature vectors in one of the areas divided by the hyperplane, of the selected positive example pairs, and the number of negative example pairs including two feature vectors divided into different areas by the hyperplane, of the selected negative example pairs, and defining a Napier's constant as a base. 6. 
The information conversion method according to claim 4, wherein, as the probability density function, an evaluation function is used, the evaluation function selecting a plurality of the positive example pairs and a plurality of the negative example pairs, and defining, as an index, the sum of a ratio of positive example pairs including both feature vectors in one of the areas divided by the hyperplane, of the selected positive example pairs, and a ratio of negative example pairs including two feature vectors divided into different areas by the hyperplane, of the selected negative example pairs, and defining a Napier's constant as a base. 7. The information conversion method according to claim 4, wherein, as the probability density function, an evaluation function is used, the evaluation function selecting a plurality of the positive example pairs and a plurality of the negative example pairs, adding, to all the positive example pairs, an absolute value obtained by adding a cosine value of an angle between one feature vector included in the positive example pair and a normal vector of the hyperplane, and a cosine value of an angle between the other feature vector and the normal vector of the hyperplane, adding, to all the negative example pairs, an absolute value of a difference between a cosine value of an angle between one feature vector included in the negative example pair and a normal vector of the hyperplane, and a cosine value of an angle between the other feature vector and the normal vector of the hyperplane, and defining, as an index, the sum of a value added to the positive example pair and a value added to the negative example pair, and defining a Napier's constant as a base. 8. The information conversion method according to claim 4, wherein, as the probability density function, an evaluation function is used, the evaluation function selecting a plurality of the positive example pairs and a plurality of the negative example pairs, adding, to all the positive example pairs, an absolute value obtained by adding a cosine value of an angle between one feature vector included in the positive example pair and a normal vector of the hyperplane, and a cosine value of an angle between the other feature vector and the normal vector of the hyperplane, adding, to all the negative example pairs, an absolute value of a difference between a cosine value of an angle between one feature vector included in the negative example pair and a normal vector of the hyperplane, and a cosine value of an angle between the other feature vector and the normal vector of the hyperplane, and defining, as an index, the sum of a value obtained by dividing a value added to the positive example pair by the number of the positive example pairs, and a value obtained by dividing a value added to the negative example pair by a value of the negative example pair, and defining a Napier's constant as a base. 9. 
An information conversion device comprising: a processor configured to execute a process including: calculating positions to which a plurality of particles on a unit sphere are moved, defining a positional vector of a particle on the unit sphere in a multidimensional space, as a normal vector of a hyperplane configured to divide a feature vector space, considering a predetermined evaluation function configured to evaluate the hyperplane, as the probability density function configured to indicate a probability of existence of a particle on the unit sphere; and converting the feature vector to a binary string, considering a positional vector indicating a position of each particle calculated at the calculating, as the normal vector of the hyperplane. 10. A non-transitory computer-readable recording medium storing an information conversion program that causes a computer to execute a process comprising: first moving positions of a plurality of particles on a unit sphere according to a value of a probability density function, defining a positional vector of a particle on the unit sphere in a multidimensional space, as a normal vector of a hyperplane configured to divide a feature vector space, defining a predetermined evaluation function configured to evaluate the hyperplane, as the probability density function configured to indicate a probability of existence of a particle on the unit sphere; and converting the feature vector to a binary string, considering a positional vector of the moved particle as a normal vector of the hyperplane.
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: An information conversion method includes: first moving positions of a plurality of particles on a unit sphere according to a value of a probability density function, defining a positional vector of a particle on the unit sphere in a multidimensional space, as a normal vector of a hyperplane configured to divide a feature vector space, defining a predetermined evaluation function configured to evaluate the hyperplane, as the probability density function configured to indicate a probability of existence of a particle on the unit sphere, by a processor; and converting the feature vector to a binary string, considering a positional vector of the moved particle as a normal vector of the hyperplane, by the processor.
G06N7005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: An information conversion method includes: first moving positions of a plurality of particles on a unit sphere according to a value of a probability density function, defining a positional vector of a particle on the unit sphere in a multidimensional space, as a normal vector of a hyperplane configured to divide a feature vector space, defining a predetermined evaluation function configured to evaluate the hyperplane, as the probability density function configured to indicate a probability of existence of a particle on the unit sphere, by a processor; and converting the feature vector to a binary string, considering a positional vector of the moved particle as a normal vector of the hyperplane, by the processor.
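The claims above describe, in effect, a Metropolis-style search: particles on a unit sphere are moved according to an evaluation function treated as an (unnormalised) probability density, and the final positional vectors serve as hyperplane normals that convert feature vectors into binary strings. The sketch below is one possible reading of claims 1, 2, 4 and 5, not the patented implementation; the function names (`pair_evaluation`, `metropolis_move`, `to_binary`), the proposal step size, and the pair format are assumptions introduced here for illustration.

```python
import numpy as np

def pair_evaluation(w, pos_pairs, neg_pairs):
    """One reading of claims 4-5: e raised to (number of positive pairs kept on the same
    side of the hyperplane + number of negative pairs split by the hyperplane)."""
    same_side = sum(np.sign(x @ w) == np.sign(y @ w) for x, y in pos_pairs)
    split = sum(np.sign(x @ w) != np.sign(y @ w) for x, y in neg_pairs)
    return np.exp(same_side + split)

def metropolis_move(particles, evaluate, n_steps=50, step_size=0.1, rng=None):
    """Claims 1-2: propose a movement destination candidate for each particle and accept it
    when the ratio of evaluation values exceeds a uniform random number in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    particles = particles / np.linalg.norm(particles, axis=1, keepdims=True)
    for _ in range(n_steps):
        for i, w in enumerate(particles):
            candidate = w + step_size * rng.normal(size=w.shape)
            candidate /= np.linalg.norm(candidate)          # stay on the unit sphere
            if evaluate(candidate) / evaluate(w) > rng.uniform(0.0, 1.0):
                particles[i] = candidate
    return particles

def to_binary(features, particles):
    """Convert feature vectors to binary strings: one bit per hyperplane normal."""
    return (np.asarray(features) @ particles.T > 0).astype(np.uint8)
```

With, say, `particles = rng.normal(size=(64, d))` and `evaluate = lambda w: pair_evaluation(w, pos_pairs, neg_pairs)`, the moved positions yield 64-bit codes via `to_binary(features, moved_particles)`.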
Disclosed are systems and methods that implement efficient engines for computation-intensive tasks such as neural network deployment. Various embodiments of the invention provide for high-throughput batching that increases throughput of streaming data in high-traffic applications, such as real-time speech transcription. In embodiments, throughput is increased by dynamically assembling into batches and processing together user requests that randomly arrive at unknown timing such that not all the data is present at once at the time of batching. Some embodiments allow for performing streaming classification using pre-processing. The gains in performance allow for more efficient use of a compute engine and drastically reduce the cost of deploying large neural networks at scale, while meeting strict application requirements and adding relatively little computational latency so as to maintain a satisfactory application experience.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A batching method for increasing throughput of data processing requests, the method comprising: receiving, with at least partially unknown timing, data associated with requests to be processed by using a neural network model, the requests being subject to one or more constraints; dynamically assembling at least some of the data into a batch using at least one of the one or more constraints; and processing the batch using a single thread that orchestrates a plurality of threads to share a burden of loading the neural network model from memory to increase data throughput. 2. The method according to claim 1, wherein the one or more constraints comprise a latency requirement. 3. The method according to claim 2, wherein the latency requirement comprises at least one of a requirement to process a request within a predetermined amount of time after a last packet in the request arrives and a requirement to not add data into a batch that already contains data from that request. 4. The method according to claim 2, further comprising: assembling data from two or more requests that are latency sensitive into a latency-sensitive batch; and assembling data from two or more requests that are less latency sensitive into a throughput-oriented batch for processing, the latency-sensitive batch having a higher priority for processing than the throughput-oriented batch. 5. The method according to claim 1, wherein the batch comprises at least one stateful request. 6. The method according to claim 1, further comprising the steps of: pre-processing the data, the data comprising a packet; assembling pre-processed data into a batch matrix that is shared by at least two of the plurality of users; and providing the batch matrix to a compute engine. 7. The method according to claim 6, further comprising maintaining a batch list and, for each of a plurality of users: an input buffer and a pre-processed buffer. 8. The method according to claim 7, further comprising performing the steps of: copying data from the packet to the input buffer associated with the one of the plurality of users; discarding the packet; pre-processing the input buffer to obtain a first set of results; and placing the first set of results in the pre-processed buffer associated with the one of the plurality of users. 9. The method according to claim 8, wherein the step of pre-processing comprises transferring a predetermined amount of data that represents one of an image and a length of a spectrogram from the pre-processed buffer associated with the one of the plurality of users to an eligible batch in the batch list. 10. The method according to claim 8, further comprising, in response to looping over active users to fill up the batch list, deciding, based on a status of the compute engine, whether to provide one or more batches to the compute engine. 11. The method according to claim 10, wherein the step of deciding is based on a determination of at least one of a time needed for an additional iteration exceeding a delay constraint and an effect of a status of the batch list on a latency requirement. 12. 
A batch processing system for processing requests regarding a neural network model, the system comprising: one or more computing devices, in which each computing device comprises: at least one processor and a memory device; a batch producer component that receives data associated with different requests and dynamically assembles chunks of data from at least two different requests into a batch according to one or more constraints; and a compute engine component communicatively coupled to the batch producer, the compute engine component processes the batch in a single thread that orchestrates a plurality of threads to share a burden of loading the neural network model from memory to increase data throughput. 13. The batch processing system according to claim 12 wherein an input size of a neural network model determines a size for the chunks of data. 14. The batch processing system according to claim 12 further comprising a load balancer that receives, at unknown timings, a plurality of requests and load balances the plurality of requests across the one or more computing devices such that data associated with a same request are sent to a same computing device. 15. The batch producer according to claim 12, wherein the compute engine separates the processed batch into a plurality of responses that each are associated with one user. 16. A batch producer comprising: non-transitory computer-readable medium or media comprising one or more sequences of instructions which, when executed by at least one processor, causes steps to be performed comprising: receiving, with at least partially unknown timing, data associated with requests to be processed by using a neural network model, the requests being subject to one or more constraints; dynamically assembling at least some of the data into a batch using at least one of the one or more constraints; and processing the batch using a single thread that orchestrates a plurality of threads to share a burden of loading the neural network model from memory to increase data throughput. 17. The batch producer according to claim 16, wherein the batch producer comprises an input buffer and a pre-processed buffer for each of a plurality of users, each user being associated with a request to be processed. 18. The batch producer according to claim 16, wherein the batch producer receives the data processing requests asynchronously. 19. The batch producer according to claim 16, wherein the one or more constraints comprise at least one of a requirement to process a request within a predetermined amount of time after a last packet in the request arrives and a requirement to not add data to a batch that already contains data from that request. 20. The batch producer according to claim 16, wherein the steps to be performed further comprise: assembling data from two or more requests that are latency sensitive into a latency-sensitive batch; and assembling data from two or more requests that are less latency sensitive into a throughput-oriented batch for processing, the latency-sensitive batch having a higher priority for processing than the throughput-oriented batch.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Disclosed are systems and methods that implement efficient engines for computation-intensive tasks such as neural network deployment. Various embodiments of the invention provide for high-throughput batching that increases throughput of streaming data in high-traffic applications, such as real-time speech transcription. In embodiments, throughput is increased by dynamically assembling into batches and processing together user requests that randomly arrive at unknown timing such that not all the data is present at once at the time of batching. Some embodiments allow for performing streaming classification using pre-processing. The gains in performance allow for more efficient use of a compute engine and drastically reduce the cost of deploying large neural networks at scale, while meeting strict application requirements and adding relatively little computational latency so as to maintain a satisfactory application experience.
G06N310
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Disclosed are systems and methods that implement efficient engines for computation-intensive tasks such as neural network deployment. Various embodiments of the invention provide for high-throughput batching that increases throughput of streaming data in high-traffic applications, such as real-time speech transcription. In embodiments, throughput is increased by dynamically assembling into batches and processing together user requests that randomly arrive at unknown timing such that not all the data is present at once at the time of batching. Some embodiments allow for performing streaming classification using pre-processing. The gains in performance allow for more efficient use of a compute engine and drastically reduce the cost of deploying large neural networks at scale, while meeting strict application requirements and adding relatively little computational latency so as to maintain a satisfactory application experience.
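The batching abstract and claims above lend themselves to a small sketch of a batch producer that keeps per-request buffers, puts at most one chunk of any request into a batch, and dispatches either when the batch is full or when a latency deadline approaches. This is only an illustrative reduction of the idea; the class and method names (`BatchProducer`, `add_packet`, `should_dispatch`, `next_batch`) and the fixed latency budget are assumptions, not the patent's implementation.

```python
import time
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Request:
    user_id: str
    chunks: deque = field(default_factory=deque)   # pre-processed, fixed-size chunks
    deadline: float = 0.0                          # latest time a waiting chunk should ship

class BatchProducer:
    """Assembles chunks from different requests into one batch; never places two
    chunks of the same request into the same batch."""

    def __init__(self, max_batch_size=8):
        self.max_batch_size = max_batch_size
        self.active = {}                           # user_id -> Request

    def add_packet(self, user_id, chunk, latency_budget=0.1):
        req = self.active.setdefault(user_id, Request(user_id))
        req.chunks.append(chunk)
        req.deadline = time.monotonic() + latency_budget

    def should_dispatch(self):
        now = time.monotonic()
        ready = [r for r in self.active.values() if r.chunks]
        return len(ready) >= self.max_batch_size or any(r.deadline <= now for r in ready)

    def next_batch(self):
        batch, users = [], []
        for user_id, req in self.active.items():
            if req.chunks and len(batch) < self.max_batch_size:
                batch.append(req.chunks.popleft())  # one chunk per request per batch
                users.append(user_id)
        return batch, users
```

A compute engine would then run the assembled batch in one pass and scatter the outputs back to the listed users.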
The described technology can provide semantic translations of a selected language snippet. This can be accomplished by mapping snippets for output languages into a vector space; creating predicates that can map new snippets into that vector space; and, when a new snippet is received, generating and matching a vector representing that new snippet to the closest vector for a snippet of a desired output language, which is used as the translation of the new snippet. The procedure for mapping new snippets into the vector space can include creating a dependency structure for the new snippet and computing a vector for each dependency structure node. The vector computed for the root node of the dependency structure is the vector representing the new snippet. A similar process is used to train a transformation function for each possible node type, using language snippets already associated with a dependency structure and corresponding vectors.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for generating vector space predicate training data comprising: receiving multiple snippets in a particular language wherein at least one of the multiple snippets comprises at least two words; for each selected snippet of one or more of the multiple snippets: building a dependency structure for the selected snippet, wherein the dependency structure for the selected snippet comprises multiple nodes having corresponding node types; for each selected node of one or more of the multiple nodes of the dependency structure for the selected snippet: obtaining a vector corresponding to the selected node; and storing, as a part of the vector space predicate training data, a grouping corresponding to the selected node, the grouping comprising: the obtained vector for the selected node, the node-type associated with the selected node, and parameter vectors representing word groupings associated with lower level nodes and used to obtain a vector; and providing an indication of the vector space predicate training data. 2. The method of claim 1, wherein the multiple nodes of the dependency structure for the selected snippet comprise: one or more leaf nodes each corresponding to one of the one or more words of the selected snippet; one or more intermediate nodes based on one or more of: the one or more leaf nodes of the dependency structure for the selected snippet or one or more other intermediate nodes of the dependency structure for the selected snippet; and a root node based on at least one of the one or more intermediate nodes of the dependency structure for the selected snippet. 3. The method of claim 2, wherein building the dependency structure for the selected snippet comprises: dividing the selected snippet into word groups; creating a leaf node corresponding to each word group; and until the root node is added corresponding to a word group comprising all words of the selected snippet: selecting two or more nodes from the dependency structure as combine nodes wherein the combine nodes are nodes that have not been combined with any higher level node and that have a determined relationship; and creating a new node at a level one level higher than the selected combine node with a highest level, wherein the new node corresponds to a combination of the word groups corresponding to the selected combine nodes, and wherein the new node is connected by edges to the selected combine nodes. 4. The method of claim 2, wherein the parameter vectors for each intermediate node are obtained from a pair of lower level vectors or a tuple of lower level vectors obtained from nodes at a level closer to a lowest level than that intermediate node; and wherein the parameter vectors for the root node are obtained from a pair of lower level vectors or a tuple of lower level vectors obtained from nodes at a level closer to the lowest level than the root node. 5. The method of claim 2, wherein obtaining the vector corresponding to the selected node is performed by: where the selected node is a leaf node, obtaining a pre-defined vector for the word group corresponding to the selected node; and where the selected node is not a leaf node, combining vectors corresponding to two or more lower level nodes connected to the selected node by edges, wherein the combining is based on a type determined between the word groups associated with the two or more lower level nodes. 6. 
The method of claim 1, wherein node types correspond to at least two of: modifier, noun phrase, determiner, verb phrase, having, doing, affection, position phrase, time phrase, quality phrase, or quantity phrase. 7. The method of claim 1, wherein the dependency structure is an inverted tree structure. 8. A computer-readable storage medium storing instructions that, when executed by a computing system, cause the computing system to perform operations for training a vector space predicate, the operations comprising: receiving vector space predicate training data comprising one or more groupings, wherein: each of the one or more groupings is associated with a type that is the same as other types in the one or more groupings; and each of the one or more groupings includes: an output vector and one or more parameter vectors used to create the output vector; obtaining the vector space predicate; for each selected grouping of at least one of the one or more groupings: applying the obtained vector space predicate to the one or more parameter vectors of the selected grouping to obtain a predicate vector; comparing the predicate vector to the output vector of the selected grouping to obtain a difference; and modifying the vector space predicate based on the difference; and providing an indication of the modified vector space predicate. 9. The computer-readable storage medium of claim 8, wherein comparing the predicate vector to the output vector comprises determining a cosine distance between the predicate vector to the output vector. 10. The computer-readable storage medium of claim 8, wherein the vector space predicate comprises a vector transformation function. 11. The computer-readable storage medium of claim 8, wherein the vector space predicate comprises a neural network. 12. The computer-readable storage medium of claim 8, wherein multiple iterations of the operations for training the vector space predicate are performed, each iteration performed with different sets of groupings, each set of groupings for a particular iteration associated with a different type than types associated with other sets of groupings for other iterations; and wherein each iteration creates a different vector space predicate associated with the type associated with the set of groupings used in that iteration. 13. The computer-readable storage medium of claim 8, wherein the obtained vector space predicate has been partially trained prior to the modifying of the vector space predicate. 14. 
A system for semantically transforming a snippet into an alternate domain comprising: a memory; one or more processors; an interface configured to receive the snippet; a dependency structure building module configured to build a dependency structure for the snippet comprising multiple nodes, the multiple nodes comprising at least one leaf node, at least one intermediate node, and a root node; a predicate applying module configured to, for each selected non-leaf node of one or more of the multiple nodes including at least the root node, compute a vector based on one or more nodes at a level lower than the selected non-leaf node; and a vector space computing module configured to: map the computed vector for the root node into a vector space; determine a matching vector previously mapped into the vector space that is in the alternate domain and that is closest to the computed vector for the root node mapped into the vector space; and select an output snippet in the alternate domain that corresponds to the matching vector; wherein the interface is further configured to provide an indication of the output snippet. 15. The system of claim 14, wherein each selected non-leaf node of one or more of the multiple nodes of the dependency structure is associated with a type; and wherein the type determined is based on a relationship between word groups associated with two or more parent nodes of the selected non-leaf node. 16. The system of claim 15, wherein the predicate applying module is configured to, for each selected non-leaf node of one or more of the multiple nodes, compute the vector based on the one or more nodes the level lower than the selected non-leaf node by: selecting a vector space predicate with a vector space predicate type corresponding to the type determined for the selected non-leaf node; and applying the selected vector space predicate to the vectors corresponding to the two or more parent nodes of the selected non-leaf node. 17. The system of claim 14, further comprising a vector space building module configured to: compute vectors representing snippets in the alternate domain; and map the vectors representing snippets in the alternate domain into the vector space; wherein the matching vector previously mapped into the vector space is one of the vectors representing snippets in the alternate domain. 18. The system of claim 14, wherein: the interface is further configured to receive a second snippet; the dependency structure building module is further configured to build a second dependency structure for the second snippet; the predicate applying module is further configured to compute a second vector for a second root node of the second dependency structure; and the vector space computing module is further configured to: map the computed second vector into the vector space; determine a second matching vector previously mapped into the vector space that is in the alternate domain and that is closest to the computed second vector; compare the second matching vector to the computed second vector to determine a difference; determining that the difference is above a threshold value; based on the determining that the difference is above the threshold value, using output from an alternate form of machine translation on the second snippet to obtain a second output snippet; and the interface is further configured to provide an indication of the second output snippet. 19. 
The system of claim 14, wherein the at least one leaf node is at a lowest level, the at least one intermediate node is at a level above the lowest level, and the root node is at a highest level. 20. The system of claim 14, wherein the snippet is in a domain of a particular natural language and the alternate domain is a domain of a natural language other than the particular natural language.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: The described technology can provide semantic translations of a selected language snippet. This can be accomplished by mapping snippets for output languages into a vector space; creating predicates that can map new snippets into that vector space; and, when a new snippet is received, generating and matching a vector representing that new snippet to the closest vector for a snippet of a desired output language, which is used as the translation of the new snippet. The procedure for mapping new snippets into the vector space can include creating a dependency structure for the new snippet and computing a vector for each dependency structure node. The vector computed for the root node of the dependency structure is the vector representing the new snippet. A similar process is used to train a transformation function for each possible node type, using language snippets already associated with a dependency structure and corresponding vectors.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: The described technology can provide semantic translations of a selected language snippet. This can be accomplished by mapping snippets for output languages into a vector space; creating predicates that can map new snippets into that vector space; and, when a new snippet is received, generating and matching a vector representing that new snippet to the closest vector for a snippet of a desired output language, which is used as the translation of the new snippet. The procedure for mapping new snippets into the vector space can include creating a dependency structure for the new snippet and computing a vector for each dependency structure node. The vector computed for the root node of the dependency structure is the vector representing the new snippet. A similar process is used to train a transformation function for each possible node type, using language snippets already associated with a dependency structure and corresponding vectors.
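The translation abstract above composes a vector bottom-up over a dependency structure and then looks up the nearest pre-mapped vector in the output language. Below is a minimal sketch of that flow, assuming node dictionaries with hypothetical `words`, `type` and `children` keys and a dictionary of per-type predicate functions; none of these names come from the patent.

```python
import numpy as np

def compose(node, embeddings, predicates):
    """Compute a vector for a dependency-structure node: leaf nodes use a pre-defined
    word-group embedding; higher nodes apply the predicate registered for their node
    type to the concatenated child vectors."""
    if not node.get("children"):
        return embeddings[node["words"]]
    child_vecs = [compose(child, embeddings, predicates) for child in node["children"]]
    return predicates[node["type"]](np.concatenate(child_vecs))

def translate(root_vector, output_vectors, output_snippets):
    """Return the output-domain snippet whose pre-mapped vector is closest
    (by cosine similarity) to the root vector of the new snippet."""
    normed = output_vectors / np.linalg.norm(output_vectors, axis=1, keepdims=True)
    query = root_vector / np.linalg.norm(root_vector)
    return output_snippets[int(np.argmax(normed @ query))]

# Toy predicate for one node type: a fixed linear map (learned in practice) plus tanh.
rng = np.random.default_rng(0)
W_np = rng.standard_normal((8, 16))   # maps two concatenated 8-d child vectors to 8-d
predicates = {"noun_phrase": lambda v: np.tanh(W_np @ v)}
```

In the claimed training procedure the per-type predicates would be fitted against stored (output vector, parameter vectors) groupings rather than drawn at random.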
Systems and methods for automated mathematical chatting. The systems and methods convert any identified non-numerical inputs into vectors and then perform the mathematical equation utilizing the vectors instead of the nonnumeric inputs along with any other identified numeric inputs to obtain a numerical vector result. The systems and methods decode the numerical vector result into a result feature and then search one or more databases for output based on the result feature. The systems and methods provide the selected output from the one or more databases in response to the mathematical query.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system for a mathematical chat bot, the system comprising: at least one processor; and a memory for storing and encoding computer executable instructions that, when executed by the at least one processor is operative to: collect a mathematical query; identify a variable that corresponds to each image of images in an equation in the mathematical query utilizing a mathematical knowledge graph to form corresponding variables; identify one or more mathematical operators in the equation in the mathematical query utilizing the mathematical knowledge graph; extract potential features for each image based on at least one of world knowledge and a natural language understanding system; filter the potential features; assign confidence scores to the potential features based on the filtering of the potential features; select a feature from the potential features for each image based on the confidence scores; encode the feature for each of the images into a vector; substitute the vector for each of the images into the corresponding variables; execute the equation to determine a result; decode the result into a result feature; search an image database for result images that correspond to the result feature; filter the result images; assign probability scores to the result images based on the filtering of the result images; select an answer image from the result images based on the probability scores; and provide the answer image to a user in reply to the mathematical query. 2. The system of claim 1, wherein the feature is a keyword. 3. The system of claim 1, wherein the feature is a sentence. 4. The system of claim 1, wherein the feature is a knowledge graph of keywords. 5. The system of claim 1, wherein the potential features are keywords. 6. The system of claim 1, wherein the potential features are sentences. 7. The system of claim 1, wherein the potential features are knowledge graphs of keywords. 8. The system of claim 1, wherein filter the potential features further comprises: determining whether each of the potential features is in the foreground or background; determining a specificity level for each of the potential features; and determining a pixel percentage occupied by each of the potential features. 9. The system of claim 1, wherein filter the result images further comprises: determining whether the result feature is in the foreground or background in each of the result images; determining a pixel percentage occupied by the result feature in each of the result images; and determining a popularity of each of the result images. 10. The system of claim 1, wherein extract the potential features for each of the images based on at least one of the world knowledge and the natural language understanding system is performed utilizing a deep learning algorithm, and wherein encode the feature for each of the images into the vector is performed utilizing deep learning techniques. 11. The system of claim 1, wherein the images and the answer image are products. 12. 
A method for automated mathematical chatting, the method comprising: collecting a mathematical query; identifying a variable that corresponds to each nonnumeric input of inputs in an equation in the mathematical query utilizing a mathematical knowledge graph to form corresponding variables; identifying a mathematical operator in the equation in the mathematical query utilizing the mathematical knowledge graph; extracting potential features for each nonnumeric input utilizing at least one of world knowledge and a natural language understanding system; filtering the potential features; assigning confidence scores to the potential features based on the filtering of the potential features; selecting a feature from the potential features for each nonnumeric input based on the confidence scores; converting the feature for each nonnumeric input into a vector; substituting the vector for nonnumeric input into the corresponding variables; executing the equation to determine a result; decoding the result into a result feature; searching a database for outputs that correspond to the result feature; filtering the outputs; assigning probability scores to the outputs based on the filtering of the outputs; selecting an answer from the outputs based on the probability scores; and providing the answer to a user in response to the mathematical query. 13. The method of claim 12, wherein any nonnumeric inputs in the equation are images and text. 14. The method of claim 12, wherein any nonnumeric inputs in the equation are text. 15. The method of claim 12, wherein any nonnumeric inputs in the equation are at least one of a uniform resource locator, an audio file, an application, a video, and a website. 16. The method of claim 12, wherein any nonnumeric inputs in the equation and the answer are products. 17. The method of claim 12, wherein the feature is a keyword, a sentence, or a knowledge graph of keywords. 18. The method of claim 12, further comprising: determining that the confidence scores for a first nonnumeric input do not meet a predetermined threshold; in response to the determining that the confidence scores do not meet the predetermined threshold: extracting new potential features for each nonnumeric input utilizing at least one of the world knowledge and the natural language understanding system; filtering the new potential features; assigning new confidence scores to the new potential features based on the filtering of the new potential features; wherein selecting the feature from the potential features for each nonnumeric input based on the confidence scores comprises selecting the feature from the new potential features based on the filtering of the new potential features. 19. The method of claim 12, wherein the mathematical operator is a square root operator, a cubed operator, a squared operator, a multiplication operator, or a division operator. 20. 
A system for a mathematical chat bot, the system comprising: a computing device including a processing unit and a memory, the processing unit implementing a search engine and a conversation layer, the computing device is operable to: collect a mathematical query with inputs, wherein at least one of the inputs is a nonnumeric input; identify a variable that corresponds to each nonnumeric input of the inputs in an equation in the mathematical query utilizing a mathematical knowledge graph to form corresponding variables; identify one or more mathematical operators in the equation in the mathematical query utilizing the mathematical knowledge graph; select a feature from potential features for each nonnumeric input utilizing a deep learning algorithm; convert the feature for each nonnumeric input into a vector; substitute the vector for each nonnumeric input into the corresponding variables; execute the equation to determine a result; decode the result into a result feature; search a database for outputs that correspond to the result feature; select an answer from the outputs; and provide the answer in response to the mathematical query.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Systems and methods for automated mathematical chatting. The systems and methods convert any identified non-numerical inputs into vectors and then perform the mathematical equation utilizing the vectors instead of the nonnumeric inputs along with any other identified numeric inputs to obtain a numerical vector result. The systems and methods decode the numerical vector result into a result feature and then search one or more databases for output based on the result feature. The systems and methods provide the selected output from the one or more databases in response to the mathematical query.
G06N3006
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Systems and methods for automated mathematical chatting. The systems and methods convert any identified non-numerical inputs into vectors and then perform the mathematical equation utilizing the vectors instead of the nonnumeric inputs along with any other identified numeric inputs to obtain a numerical vector result. The systems and methods decode the numerical vector result into a result feature and then search one or more databases for output based on the result feature. The systems and methods provide the selected output from the one or more databases in response to the mathematical query.
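One way to picture the pipeline in the abstract above: encode each nonnumeric operand into a vector, apply the arithmetic operator to the vectors (and to any plain numbers), then decode the resulting numerical vector by nearest-neighbour search over a catalogue of candidate outputs. Everything below — the `encode` callable, the catalogue arrays, the operator table — is an assumption for illustration, not the claimed system.

```python
import numpy as np

def solve_math_query(operands, operator, encode, catalog_vectors, catalog_items):
    """Encode nonnumeric operands to vectors, apply the operator element-wise, then
    decode the numerical vector result by nearest-neighbour search over candidates."""
    ops = {"+": np.add, "-": np.subtract, "*": np.multiply, "/": np.divide}
    values = [x if isinstance(x, (int, float)) else encode(x) for x in operands]
    result = values[0]
    for v in values[1:]:
        result = ops[operator](result, v)
    # Decode: pick the catalogue item whose vector lies closest to the result.
    distances = np.linalg.norm(np.asarray(catalog_vectors) - result, axis=1)
    return catalog_items[int(np.argmin(distances))]
```

For example, `solve_math_query(["cat photo", 2], "*", encode, vecs, items)` would scale the encoded image vector by two and return the catalogue item nearest to that scaled vector.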
In an approach to topic-based team analytics, a computing device extracts a list of topics based on a thread. The computing device identifies one or more participants with a relationship to one or more topics of the list of topics. The computing device generates a graph of the list of topics, the one or more participants, and relationships of the one or more participants to the one or more topics, wherein the one or more participants are represented as participant nodes of the graph and the one or more topics are represented as topic nodes of the graph, and wherein the relationships of the one or more participants to the one or more topics are represented as one or more edges connecting participant nodes with topic nodes.
Please help me write a proper abstract based on the patent claims. CLAIM: 1-7. (canceled) 8. A computer program product for topic-based team analytics, the computer program product comprising: one or more computer readable storage media and program instructions stored on the one or more computer readable storage media, the program instructions comprising: program instructions to extract a list of topics based on a thread; program instructions to identify one or more participants with a relationship to one or more topics of the list of topics; and program instructions to generate a graph of the list of topics, the one or more participants, and relationships of the one or more participants to the one or more topics, wherein the one or more participants are represented as participant nodes of the graph and the one or more topics are represented as topic nodes of the graph, and wherein the relationships of the one or more participants to the one or more topics are represented as one or more edges connecting participant nodes with topic nodes. 9. The computer program product of claim 8, further comprising: program instructions to generate summary information demonstrating a relationship of a participant to a topic; and program instructions to display, responsive to a user interaction with a participant node representing the participant or a topic node representing the topic, the summary information. 10. The computer program product of claim 8, further comprising: program instructions to generate attitude information associated with a participant; and program instructions to display a visual representation of the attitude information. 11. The computer program product of claim 8, further comprising: program instructions to generate a list of related messages in additional threads; and program instructions to display the list of related messages. 12. The computer program product of claim 8, further comprising: program instructions to receive a user instruction to filter the graph; program instructions to filter the graph to generate a filtered graph comprising one topic node; and program instructions to display the filtered graph. 13. The computer program product of claim 8, wherein the one or more edges vary in thickness based on relationship strength. 14. The computer program product of claim 8, wherein the one or more topic nodes vary in size based on topic prevalence. 15. A computer system for topic-based team analytics, the computer system comprising: one or more processors; one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media for execution by at least one of the one or more processors, the program instructions comprising: program instructions to extract a list of topics based on a thread; program instructions to identify one or more participants with a relationship to one or more topics of the list of topics; and program instructions to generate a graph of the list of topics, the one or more participants, and relationships of the one or more participants to the one or more topics, wherein the one or more participants are represented as participant nodes of the graph and the one or more topics are represented as topic nodes of the graph, and wherein the relationships of the one or more participants to the one or more topics are represented as one or more edges connecting participant nodes with topic nodes. 16. 
The computer system of claim 15, further comprising: program instructions to generate summary information demonstrating a relationship of a participant to a topic; and program instructions to display, responsive to a user interaction with a participant node representing the participant or a topic node representing the topic, the summary information. 17. The computer system of claim 15, further comprising: program instructions to generate attitude information associated with a participant; and program instructions to display a visual representation of the attitude information. 18. The computer system of claim 15, further comprising: program instructions to generate a list of related messages in additional threads; and program instructions to display the list of related messages. 19. The computer system of claim 15, further comprising: program instructions to receive a user instruction to filter the graph; program instructions to filter the graph to generate a filtered graph comprising one topic node; and program instructions to display the filtered graph. 20. The computer system of claim 15, wherein the one or more edges vary in thickness based on relationship strength.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: In an approach to topic-based team analytics, a computing device extracts a list of topics based on a thread. The computing device identifies one or more participants with a relationship to one or more topics of the list of topics. The computing device generates a graph of the list of topics, the one or more participants, and relationships of the one or more participants to the one or more topics, wherein the one or more participants are represented as participant nodes of the graph and the one or more topics are represented as topic nodes of the graph, and wherein the relationships of the one or more participants to the one or more topics are represented as one or more edges connecting participant nodes with topic nodes.
G06N504
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: In an approach to topic-based team analytics, a computing device extracts a list of topics based on a thread. The computing device identifies one or more participants with a relationship to one or more topics of the list of topics. The computing device generates a graph of the list of topics, the one or more participants, and relationships of the one or more participants to the one or more topics, wherein the one or more participants are represented as participant nodes of the graph and the one or more topics are represented as topic nodes of the graph, and wherein the relationships of the one or more participants to the one or more topics are represented as one or more edges connecting participant nodes with topic nodes.
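The team-analytics abstract above reduces naturally to a bipartite graph of topic nodes and participant nodes. A small sketch using networkx, where the `weight` attribute could drive drawn edge thickness and `prevalence` could drive topic-node size; the example data are hypothetical placeholders for values extracted from a real thread.

```python
import networkx as nx

def build_topic_graph(topics, participants, relationships):
    """Bipartite graph: topic nodes carry a prevalence attribute, participant nodes
    carry a kind marker, and edges carry a relationship-strength weight."""
    g = nx.Graph()
    for topic, prevalence in topics.items():
        g.add_node(topic, kind="topic", prevalence=prevalence)
    for person in participants:
        g.add_node(person, kind="participant")
    for person, topic, strength in relationships:
        g.add_edge(person, topic, weight=strength)
    return g

# Hypothetical example data extracted from one thread.
graph = build_topic_graph(
    topics={"release planning": 5, "test coverage": 2},
    participants=["alice", "bob"],
    relationships=[("alice", "release planning", 3), ("bob", "test coverage", 1)],
)
```

Filtering the graph to a single topic node, as in the filtering claims, is then a matter of taking the subgraph induced by that node and its neighbours.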
Various implementations described herein are directed to a method for generating an instance of a finite state machine. The method may receive a configuration comprising a value for each of one or more configuration points of a finite state machine. The method may determine a configuration point of the finite state machine that is not defined with a value in the configuration. The method may determine a default value for the determined configuration point of the finite state machine that is not defined with a value in the configuration. The method may also generate, based on the configuration and the default value, an instance of the finite state machine. The instance of the finite state machine collects data using a dialog system.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method, comprising: receiving, by a processor, a configuration comprising a value for each of one or more configuration points of a finite state machine; determining, by the processor, a configuration point of the finite state machine that is not defined with a value in the configuration; determining, automatically by the processor, a default value for the determined configuration point of the finite state machine that is not defined with a value in the configuration; and generating, by the processor and based on the configuration and the default value, an instance of the finite state machine, wherein the instance of the finite state machine is usable to collect data using a dialog system. 2. The method of claim 1, wherein the configuration comprises instructions to perform a grounding behavior, instructions to perform a backend read behavior, instructions to perform an evaluate dependency behavior, a description of mandatory attributes, a maximum number of candidates to be accepted, a minimum number of candidates to be accepted, a threshold for a refinement on candidates, a description of concepts to update, or combinations thereof. 3. The method of claim 1, wherein the configuration comprises a description of one or more mandatory attributes for data collected by the instance of the finite state machine. 4. The method of claim 3, further comprising: receiving, by the instance of the finite state machine, a first input; determining, by the instance of the finite state machine, that the first input does not comprise a mandatory attribute of the one or more mandatory attributes; and requesting, from a user, and responsive to determining that the first input does not comprise the mandatory attribute, a second input. 5. The method of claim 1, wherein the configuration comprises a maximum number of scalar values for the finite state machine to return. 6. The method of claim 5, further comprising: receiving, by the instance of the finite state machine and from a user, an input; retrieving, by the instance of the finite state machine and based on the input, a plurality of candidates based on the input; determining, by the instance of the finite state machine, that the plurality of candidates comprises a greater number of candidates than the maximum number of scalar values for the finite state machine to return; and receiving, from the user, a selection of one or more candidates of the plurality of candidates. 7. The method of claim 1, wherein the instance of the finite state machine comprises a ground data state, a check missing data state, a perform backend read state, a dependency evaluation state, and a check data update state. 8. The method of claim 1, wherein the configuration comprises an Extensible Markup Language data file corresponding to a document type definition of the finite state machine. 9. The method of claim 1, wherein the configuration comprises instructions for accessing a database and a maximum number of candidates to be kept from accessing the database. 10. The method of claim 9, further comprising: receiving, by the instance of the finite state machine, an input; and retrieving, by the instance of the finite state machine and based on the input, data from the database. 11. The method of claim 1, wherein generating the instance of the finite state machine comprises generating the instance of the finite state machine with a ground data state to receive input. 12. 
The method of claim 11, wherein generating the instance of the finite state machine comprises generating the instance of the finite state machine with a check missing data state to compare the input to one or more predefined mandatory attributes. 13. A method, comprising: generating, by a processor and based on a configuration, a plurality of instances of a finite state machine for a dialog system, wherein each instance of the plurality of instances is configured to collect a scalar data; receiving, by the processor, an input via the dialog system; determining, by the processor and based on the input, an instance of the plurality of instances of the finite state machine to process the input; and determining, by the processor and by using the instance of the plurality of instances of the finite state machine, a scalar data corresponding to the input. 14. The method of claim 13, wherein the configuration comprises a description of one or more mandatory attributes for data collected by each of the plurality of instances of the finite state machine. 15. The method of claim 13, wherein the configuration comprises a description of a maximum number of scalar values for each of the plurality of instances of the finite state machine to return. 16. The method of claim 13, wherein determining the instance of the plurality of instances of the finite state machine to process the input comprises comparing the input to a value in the configuration. 17. A method, comprising: generating, by a processor and based on a configuration, a first instance of a finite state machine to collect one or more scalar data via a dialog system; generating, by the processor and based on the configuration, a second instance of the finite state machine that corresponds to a complex data comprising the one or more scalar data; determining, by the processor and by using the first instance of the finite state machine, the one or more scalar data; and transmitting, by the processor and to the second instance of the finite state machine, the one or more scalar data. 18. The method of claim 17, further comprising, performing, by the second instance of the finite state machine and based on the one or more scalar data, a backend access. 19. The method of claim 17, wherein the configuration comprises a maximum number of scalar values for the first instance of the finite state machine to return and a maximum number of scalar values for the second instance of the finite state machine to return. 20. The method of claim 17, wherein the configuration comprises: instructions for the second instance of the finite state machine to access a database; and a maximum number of candidates for the second instance of the finite state machine to retrieve from the database.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Various implementations described herein are directed to a method for generating an instance of a finite state machine. The method may receive a configuration comprising a value for each of one or more configuration points of a finite state machine. The method may determine a configuration point of the finite state machine that is not defined with a value in the configuration. The method may determine a default value for the determined configuration point of the finite state machine that is not defined with a value in the configuration. The method may also generate, based on the configuration and the default value, an instance of the finite state machine. The instance of the finite state machine collects data using a dialog system.
G06N7005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Various implementations described herein are directed to a method for generating an instance of a finite state machine. The method may receive a configuration comprising a value for each of one or more configuration points of a finite state machine. The method may determine a configuration point of the finite state machine that is not defined with a value in the configuration. The method may determine a default value for the determined configuration point of the finite state machine that is not defined with a value in the configuration. The method may also generate, based on the configuration and the default value, an instance of the finite state machine. The instance of the finite state machine collects data using a dialog system.
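A minimal sketch of the generation step described in the abstract above: any configuration point missing from the supplied configuration is filled with a default value before the finite state machine instance is built. The configuration points, default values and class names below are assumptions chosen for illustration, not the patent's schema.

```python
from dataclasses import dataclass

# Hypothetical default values for configuration points not defined in a configuration.
DEFAULTS = {
    "mandatory_attributes": [],
    "max_candidates": 1,
    "min_candidates": 1,
    "grounding": "implicit",
}

@dataclass
class FsmInstance:
    config: dict
    state: str = "ground_data"   # ground data -> check missing data -> backend read -> ...

def generate_instance(configuration: dict) -> FsmInstance:
    """Merge the supplied configuration with defaults for undefined points,
    then build the finite state machine instance used by the dialog system."""
    merged = {point: configuration.get(point, default)
              for point, default in DEFAULTS.items()}
    return FsmInstance(config=merged)

# Only one configuration point is supplied; the rest fall back to defaults.
instance = generate_instance({"mandatory_attributes": ["date", "city"]})
```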
One or more techniques and/or systems are provided for training and/or utilizing a traffic obstruction identification model for identifying traffic obstructions based upon vehicle location point data. For example, a training dataset, comprising sample vehicle location points (e.g., global positioning system location points of vehicles) and traffic obstruction identification labels (e.g., locations of known traffic obstructions such as stop signs, crosswalks, stop lights, etc.), may be evaluated to extract a set of training features indicative of traffic flow patterns. The set of training features and the traffic obstruction identification labels may be used to train a traffic obstruction identification model to create a trained traffic obstruction identification model. The trained traffic obstruction identification model may be used to determine whether a road segment has a traffic obstruction or not.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for training a traffic obstruction identification model, comprising: obtaining a training dataset comprising sample vehicle location points and traffic obstruction identification labels; extracting a set of training features from the training dataset based upon the sample vehicle location points, the set of training features indicative of traffic flow patterns; and training a traffic obstruction identification model using the set of training features and the traffic obstruction identification labels to create a trained traffic obstruction identification model for identifying traffic obstructions based upon vehicle location point data. 2. The method of claim 1, comprising: obtaining a dataset comprising vehicle location points; extracting a set of features from the dataset based upon the vehicle location points, the set of features indicative of traffic flow patterns; and evaluating the set of features using the trained traffic obstruction identification model to determine whether a road segment has a traffic obstruction. 3. The method of claim 2, comprising: determining whether a current traffic flow pattern is a result of congestion or the traffic obstruction based upon whether the road segment has the traffic obstruction. 4. The method of claim 1, the traffic obstruction comprising at least one of a stop light, a stop sign, a crosswalk, a railroad crossing, a traffic flow impediment, a temporary obstruction, or a permanent obstruction. 5. The method of claim 1, the extracting a set of training features comprising: evaluating the sample vehicle location points to identify a count of vehicles having speeds below a speed threshold; and comparing the count of vehicles to a total count of vehicles to determine a vehicle speed feature for inclusion within the set of training features. 6. The method of claim 1, the extracting a set of training features comprising: evaluating the sample vehicle location points to determine a median speed; and identifying a standard deviation from the median speed to determine a median average deviation feature for inclusion within the set of training features. 7. The method of claim 1, the extracting a set of training features comprising: identifying a first count of vehicle location points within a first road segment; and comparing the first count of vehicle location points to counts of vehicle location points within one or more neighboring road segments to determine a relative point density feature for inclusion within the set of training features. 8. The method of claim 2, the obtaining a dataset comprising vehicle location points comprising: receiving a first set of global positioning system (GPS) location points from a first vehicle; receiving a second set of GPS location points from a second vehicle; and including the first set of GPS location points and the second set of GPS location points within the dataset. 9. The method of claim 1, the training a traffic obstruction identification model comprising: identifying one or more parameters for use by the trained traffic obstruction identification model based upon the training dataset and the set of training features. 10. The method of claim 1, the extracting a set of training features comprising: extracting a first set of training features for a first road segment; and extracting a second set of training features for a second road segment. 11. 
The method of claim 8, the first set of GPS location points collected at a first time period and the second set of GPS location points collected at a second time period. 12. The method of claim 2, the evaluating the set of features using the trained traffic obstruction identification model comprising: classifying the road segment as having or not having the traffic obstruction based upon values of features within the set of features and one or more parameters of the trained traffic obstruction identification model. 13. The method of claim 2, the set of features independent of a sampling rate of the dataset. 14. A computer readable medium comprising instructions which when executed perform a method for determining whether a road segment has a traffic obstruction, comprising: obtaining a dataset comprising vehicle location points; extracting a set of features from the dataset based upon the vehicle location points, the set of features indicative of traffic flow patterns; and evaluating the set of features using a trained traffic obstruction identification model to determine whether a road segment has a traffic obstruction. 15. The method of claim 14, comprising: determining whether a current traffic flow pattern is a result of congestion or the traffic obstruction based upon whether the road segment has the traffic obstruction. 16. The method of claim 14, the extracting a set of features comprising: evaluating the vehicle location points to identify a count of vehicles having speeds below a speed threshold; and comparing the count of vehicles to a total count of vehicles to determine a vehicle speed feature for inclusion within the set of features. 17. The method of claim 14, the extracting a set of features comprising: evaluating the vehicle location points to determine a median speed; and identifying a standard deviation from the median speed to determine a median average deviation feature for inclusion within the set of features. 18. The method of claim 14, the extracting a set of features comprising: identifying a first count of vehicle location points within a first road segment; and comparing the first count of vehicle location points to counts of vehicle location points within one or more neighboring road segments to determine a relative point density feature for inclusion within the set of features. 19. A system for training a traffic obstruction identification model, comprising: a model training component configured to: obtain a training dataset comprising sample vehicle location points and traffic obstruction identification labels; extract a set of training features from the training dataset based upon the sample vehicle location points, the set of training features indicative of traffic flow patterns; and train a traffic obstruction identification model using the set of training features and the traffic obstruction identification labels to create a trained traffic obstruction identification model for identifying traffic obstructions based upon vehicle location point data. 20. The system of claim 19, comprising: a traffic obstruction identification component configured to: obtain a dataset comprising vehicle location points; extract a set of features from the dataset based upon the vehicle location points, the set of features indicative of traffic flow patterns; and evaluate the set of features using the trained traffic obstruction identification model to determine whether a road segment has a traffic obstruction.
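To make the feature descriptions in these claims concrete, here is a sketch of per-segment feature extraction (fraction of slow vehicles, median absolute deviation of speed, and point density relative to neighbouring segments) feeding a scikit-learn classifier. The function names and the tiny training arrays are hypothetical placeholders for real GPS-derived data, and logistic regression stands in for whatever model the patent actually trains.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def segment_features(speeds, point_count, neighbour_counts, speed_threshold=5.0):
    """Features for one road segment, following the claims: fraction of vehicles below
    a speed threshold, median absolute deviation of speed, and relative point density."""
    speeds = np.asarray(speeds, dtype=float)
    slow_fraction = np.mean(speeds < speed_threshold)
    mad = np.median(np.abs(speeds - np.median(speeds)))
    relative_density = point_count / max(np.mean(neighbour_counts), 1e-9)
    return [slow_fraction, mad, relative_density]

# Hypothetical training data: one feature row per segment, label 1 = obstruction present.
X = np.array([segment_features([2, 3, 1, 8], 40, [20, 25]),
              segment_features([50, 48, 55, 60], 12, [15, 14])])
y = np.array([1, 0])
model = LogisticRegression().fit(X, y)
print(model.predict(X))   # classify segments as having or not having an obstruction
```

Because the features are ratios and deviations rather than raw point counts, they stay largely independent of the GPS sampling rate, which is what claim 13 calls for.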