Dataset schema: output — string (7 to 3.46k characters); input — string (1 distinct value); instruction — string (129 to 114k characters).
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Performance information and configuration information is received for the plurality of computer systems. The computer systems are grouped into a plurality of clusters based at least in part on the performance information, where the plurality of clusters includes a first cluster and a second cluster. A system configuration associated with the first cluster is automatically identified from the configuration information and is automatically sent to the second cluster.
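The row above concerns clustering computer systems by performance and propagating a configuration from one cluster to another. As a rough illustration only (the metric names, `threads` config key, and use of k-means are assumptions, not the patent's implementation), a minimal Python sketch:

```python
# Hypothetical sketch: cluster systems on performance metrics, then copy a
# configuration associated with the best-performing cluster to the other cluster.
import numpy as np
from sklearn.cluster import KMeans

perf = np.array([[0.92, 110], [0.95, 105], [0.41, 380], [0.38, 395]])  # e.g. throughput, latency
configs = [{"threads": 16}, {"threads": 16}, {"threads": 4}, {"threads": 4}]

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(perf)
best_idx = int(np.argmax(perf[:, 0]))       # highest-throughput system
best_cluster = labels[best_idx]             # the "first" cluster
donor_config = configs[best_idx]            # configuration identified from that cluster
for i, lab in enumerate(labels):
    if lab != best_cluster:
        configs[i] = dict(donor_config)     # "send" it to the second cluster
print(configs)
```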
A system for training a neural network. A switch is linked to feature detectors in at least some of the layers of the neural network. For each training case, the switch randomly selectively disables each of the feature detectors in accordance with a preconfigured probability. The weights from each training case are then normalized for applying the neural network to test data.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. (canceled) 2. A computer-implemented method comprising: obtaining a plurality of training cases; and training a neural network having a plurality of layers on the plurality of training cases, each of the layers including one or more feature detectors, each of the feature detectors having a corresponding set of weights, wherein training the neural network on the plurality of training cases comprises, for a first training case of the plurality of training cases: determining a first set of one or more feature detectors to disable during processing of the first training case, disabling the first set of one or more feature detectors in accordance with the determining, processing the first training case using the neural network with the first set of one or more feature detectors disabled to generate a predicted output for the first training case, and enabling the first set of one or more feature detectors after processing the first training case using the neural network and prior to processing a second training case of the plurality of training cases using the neural network. 3. The method of claim 2, wherein training the neural network on the plurality of training cases further comprises: for the second training case: determining a second, different set of one or more feature detectors to disable during processing of the second training case, disabling the second set of one or more feature detectors in accordance with the determining, and processing the second training case using the neural network with the second set of one or more feature detectors disabled to generate a predicted output for the second training case. 4. The method of claim 2, wherein a subset of the feature detectors are associated with respective probabilities of being disabled during processing of each of the training cases, and wherein determining the first set of one or more feature detectors to disable during processing of the first training case comprises: determining whether to disable each of the feature detectors in the subset based on the respective probability associated with the feature detector. 5. The method of claim 4, wherein training the neural network further comprises: adjusting the weights of each of the feature detectors in the neural network to generate trained values for each weight in the set of weights corresponding to the feature detector. 6. The method of claim 5, further comprising: normalizing the trained weights for each of the feature detectors in the subset, wherein normalizing the trained weights comprises multiplying the trained values of the weights for each of the one or more feature detectors in the subset by a respective probability of the feature detector not being disabled during processing of each of the training cases. 7. The method of claim 4, wherein the subset includes feature detectors in a first layer of the plurality of layers and feature detectors in one or more second layers of the plurality of layers, wherein the feature detectors in the first layer are associated with a first probability and wherein the feature detectors in the one or more second layers are associated with a second, different probability. 8. The method of claim 7, wherein the first layer is an input layer of the neural network and the one or more second layers are hidden layers of the neural network. 9. The method of claim 7, wherein the first layer and the one or more second layers are hidden layers of the neural network. 10. 
The method of claim 2, wherein determining the first set of one or more feature detectors to disable during processing of the first training case comprises: determining to disable the same feature detectors that were disabled during processing of a preceding training case. 11. A system comprising one or more computers and one or more storage devices storing instructions that, when executed by the one or more computers, cause the one or more computers to perform operations comprising: obtaining a plurality of training cases; and training a neural network having a plurality of layers on the plurality of training cases, each of the layers including one or more feature detectors, each of the feature detectors having a corresponding set of weights, wherein training the neural network on the plurality of training cases comprises, for a first training case of the plurality of training cases: determining a first set of one or more feature detectors to disable during processing of the first training case, disabling the first set of one or more feature detectors in accordance with the determining, processing the first training case using the neural network with the first set of one or more feature detectors disabled to generate a predicted output for the first training case, and enabling the first set of one or more feature detectors after processing the first training case using the neural network and prior to processing a second training case of the plurality of training cases using the neural network. 12. The system of claim 11, wherein training the neural network on the plurality of training cases further comprises: for the second training case: determining a second, different set of one or more feature detectors to disable during processing of the second training case, disabling the second set of one or more feature detectors in accordance with the determining, and processing the second training case using the neural network with the second set of one or more feature detectors disabled to generate a predicted output for the second training case. 13. The system of claim 11, wherein a subset of the feature detectors are associated with respective probabilities of being disabled during processing of each of the training cases, and wherein determining the first set of one or more feature detectors to disable during processing of the first training case comprises: determining whether to disable each of the feature detectors in the subset based on the respective probability associated with the feature detector. 14. The system of claim 13, wherein training the neural network further comprises: adjusting the weights of each of the feature detectors in the neural network to generate trained values for each weight in the set of weights corresponding to the feature detector. 15. The system of claim 14, the operations further comprising: normalizing the trained weights for each of the feature detectors in the subset, wherein normalizing the trained weights comprises multiplying the trained values of the weights for each of the one or more feature detectors in the subset by a respective probability of the feature detector not being disabled during processing of each of the training cases. 16. 
The system of claim 13, wherein the subset includes feature detectors in a first layer of the plurality of layers and feature detectors in one or more second layers of the plurality of layers, wherein the feature detectors in the first layer are associated with a first probability and wherein the feature detectors in the one or more second layers are associated with a second, different probability. 17. The system of claim 16, wherein the first layer is an input layer of the neural network and the one or more second layers are hidden layers of the neural network. 18. The system of claim 16, wherein the first layer and the one or more second layers are hidden layers of the neural network. 19. The system of claim 11, wherein determining the first set of one or more feature detectors to disable during processing of the first training case comprises: determining to disable the same feature detectors that were disabled during processing of a preceding training case. 20. A non-transitory computer storage medium encoded with a computer program, the program comprising instructions that when executed by one or more computers cause the one or more computers to perform operations comprising: obtaining a plurality of training cases; and training a neural network having a plurality of layers on the plurality of training cases, each of the layers including one or more feature detectors, each of the feature detectors having a corresponding set of weights, wherein training the neural network on the plurality of training cases comprises, for a first training case of the plurality of training cases: determining a first set of one or more feature detectors to disable during processing of the first training case, disabling the first set of one or more feature detectors in accordance with the determining, processing the first training case using the neural network with the first set of one or more feature detectors disabled to generate a predicted output for the first training case, and enabling the first set of one or more feature detectors after processing the first training case using the neural network and prior to processing a second training case of the plurality of training cases using the neural network. 21. The computer storage medium of claim 20, wherein training the neural network on the plurality of training cases further comprises: for the second training case: determining a second, different set of one or more feature detectors to disable during processing of the second training case, disabling the second set of one or more feature detectors in accordance with the determining, and processing the second training case using the neural network with the second set of one or more feature detectors disabled to generate a predicted output for the second training case.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A system for training a neural network. A switch is linked to feature detectors in at least some of the layers of the neural network. For each training case, the switch randomly selectively disables each of the feature detectors in accordance with a preconfigured probability. The weights from each training case are then normalized for applying the neural network to test data.
G06N3084
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A system for training a neural network. A switch is linked to feature detectors in at least some of the layers of the neural network. For each training case, the switch randomly selectively disables each of the feature detectors in accordance with a preconfigured probability. The weights from each training case are then normalized for applying the neural network to test data.
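The abstract and claims above describe dropout-style training: each feature detector is disabled per training case with a preconfigured probability, and trained weights are scaled by the keep-probability before test-time use. A minimal numpy sketch of that scheme, with illustrative layer sizes and names (not the patent's actual code):

```python
# Sketch of per-training-case random disabling of feature detectors (dropout)
# and test-time weight normalization by the probability of not being disabled.
import numpy as np

rng = np.random.default_rng(0)
p_drop = 0.5                       # preconfigured probability of disabling a unit
W1 = rng.normal(size=(20, 64))     # input -> hidden weights
W2 = rng.normal(size=(64, 1))      # hidden -> output weights

def forward_train(x):
    h = np.maximum(x @ W1, 0.0)
    mask = rng.random(h.shape) >= p_drop   # fresh random mask for each training case
    return (h * mask) @ W2                 # disabled detectors contribute nothing

def forward_test(x):
    h = np.maximum(x @ W1, 0.0)
    return h @ (W2 * (1.0 - p_drop))       # normalize weights by the keep-probability

x = rng.normal(size=(1, 20))
print(forward_train(x).shape, forward_test(x).shape)
```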
Aspects of the present disclosure provide methods and apparatus for allocating memory in an artificial nervous system simulator implemented in hardware. According to certain aspects, memory resource requirements for one or more components of an artificial nervous system being simulated may be determined and portions of a shared memory pool (which may include on-chip and/or off-chip RAM) may be allocated to the components based on the determination.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for allocating memory in an artificial nervous system simulator implemented in hardware, comprising: determining memory resource requirements for one or more components of an artificial nervous system being simulated; and allocating portions of a shared memory pool to the components based on the determination. 2. The method of claim 1, wherein the allocating is performed when compiling the artificial nervous system being simulated. 3. The method of claim 1, wherein the allocating is performed dynamically as memory resource requirements change. 4. The method of claim 1, wherein at least a portion of the shared memory pool comprises a memory located on a separate chip than a processor of the artificial nervous system simulator. 5. The method of claim 1, wherein: the components comprise artificial neurons; and determining memory resource requirements comprises determining resources based on at least one of a state or type of the artificial neurons. 6. The method of claim 1, wherein the shared memory pool is implemented as a distributed architecture comprising memory banks, write clients, read clients and a router interfacing the memory banks with the write clients and the read clients. 7. The method of claim 1, wherein the allocating comprises varying an amount of the shared memory pool allocated to the components based on the determination. 8. An apparatus for allocating memory in an artificial nervous system simulator implemented in hardware, comprising a processing system configured to: determine memory resource requirements for one or more components of an artificial nervous system being simulated; and allocate portions of a shared memory pool to the components based on the determination. 9. The apparatus of claim 8, wherein the processing system is configured to perform the allocation when compiling the artificial nervous system being simulated. 10. The apparatus of claim 8, wherein the processing system is configured to perform the allocation dynamically as memory resource requirements change. 11. The apparatus of claim 8, wherein at least a portion of the shared memory pool comprises a memory located on a separate chip than a processor of the artificial nervous system simulator. 12. The apparatus of claim 8, wherein: the components comprise artificial neurons; and the processing system is configured to determine resources based on at least one of a state or type of the artificial neurons. 13. The apparatus of claim 8, wherein the shared memory pool is implemented as a distributed architecture comprising memory banks, write clients, read clients and a router interfacing the memory banks with the write clients and the read clients. 14. The apparatus of claim 8, wherein the processing system is also configured to vary an amount of the shared memory pool allocated to the components based on the determination. 15. An apparatus for allocating memory in an artificial nervous system simulator implemented in hardware, comprising: means for determining memory resource requirements for one or more components of an artificial nervous system being simulated; and means for allocating portions of a shared memory pool to the components based on the determination. 16. The apparatus of claim 15, wherein the allocating is performed when compiling the artificial nervous system being simulated. 17. The apparatus of claim 15, wherein the allocating is performed dynamically as memory resource requirements change. 18. 
The apparatus of claim 15, wherein at least a portion of the shared memory pool comprises a memory located on a separate chip than a processor of the artificial nervous system simulator. 19. The apparatus of claim 15, wherein: the components comprise artificial neurons; and the means for determining memory resource requirements comprises means for determining resources based on at least one of a state or type of the artificial neurons. 20. The apparatus of claim 15, wherein the shared memory pool is implemented as a distributed architecture comprising memory banks, write clients, read clients and a router interfacing the memory banks with the write clients and the read clients. 21. The apparatus of claim 15, wherein the allocating comprises varying an amount of the shared memory pool allocated to the components based on the determination. 22. A computer-readable medium having instructions executable by a computer stored thereon for allocating memory in an artificial nervous system simulator implemented in hardware, comprising: instructions for determining memory resource requirements for one or more components of an artificial nervous system being simulated; and instructions for allocating portions of a shared memory pool to the components based on the determination.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Aspects of the present disclosure provide methods and apparatus for allocating memory in an artificial nervous system simulator implemented in hardware. According to certain aspects, memory resource requirements for one or more components of an artificial nervous system being simulated may be determined and portions of a shared memory pool (which may include on-chip and/or off-chip RAM) may be allocated to the components based on the determination.
G06N310
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Aspects of the present disclosure provide methods and apparatus for allocating memory in an artificial nervous system simulator implemented in hardware. According to certain aspects, memory resource requirements for one or more components of an artificial nervous system being simulated may be determined and portions of a shared memory pool (which may include on-chip and/or off-chip RAM) may be allocated to the components based on the determination.
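The claims above describe sizing memory for simulated components (e.g. by neuron state/type) and allocating portions of a shared on-chip/off-chip pool. A small Python sketch of that idea; the byte counts, component tuples, and spill policy are assumptions for illustration:

```python
# Hypothetical allocator: estimate a requirement per component, hand out offsets
# from an on-chip region, and spill to off-chip RAM when the on-chip region fills.
from dataclasses import dataclass

@dataclass
class Allocation:
    component: str
    offset: int
    size: int
    region: str

BYTES_PER_NEURON_STATE = {"simple": 16, "adaptive": 64}   # assumed per-type state sizes

def allocate(components, on_chip_bytes, off_chip_bytes):
    allocations, cursor, region, limit = [], 0, "on_chip", on_chip_bytes
    for name, kind, count in components:
        size = count * BYTES_PER_NEURON_STATE[kind]        # requirement from state/type
        if region == "on_chip" and cursor + size > limit:   # fall back to off-chip RAM
            cursor, region, limit = 0, "off_chip", off_chip_bytes
        allocations.append(Allocation(name, cursor, size, region))
        cursor += size
    return allocations

for a in allocate([("layer0", "simple", 1000), ("layer1", "adaptive", 500)],
                  on_chip_bytes=32_768, off_chip_bytes=1 << 20):
    print(a)
```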
Scalable-effort machine learning may automatically and dynamically adjust the amount of computational effort applied to input data based on the complexity of the input data. This is in contrast to fixed-effort machine learning, which uses a one-size-fits-all approach to applying a single classifier algorithm to both simple data and complex data. Scalable-effort machine learning involves, among other things, classifiers that may be arranged as a series of multiple classifier stages having increasing complexity (and accuracy). A first classifier stage may involve relatively simple machine learning models able to classify data that is relatively simple. Subsequent classifier stages have increasingly complex machine learning models and are able to classify more complex data. Scalable-effort machine learning includes algorithms that can differentiate among data based on complexity of the data.
Please help me write a proper abstract based on the patent claims. CLAIM: 1-20. (canceled) 21. A system comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, configure the one or more processors to perform operations comprising: receiving an input value; iteratively performing the following steps until a predetermined threshold is exceed: applying a level of complexity of a machine learning model to the input value; determining whether a current level of complexity of the level of complexity is able to classify the input value; determining whether the predetermined threshold has been exceeded when the current level of complexity is not able to classify the input value; and increasing the level of complexity of the machine learning model when the current level of complexity is able to classify the input value. 22. The system of claim 21, wherein an amount of computing time used to classify the input value depends, at least in part, on a first level of complexity of the current level of complexity of the machine learning model. 23. The system of claim 21, the operations further comprising: if a first level of complexity of the level of complexity of the machine learning model is able to classify the input value, classifying the input value into one of two or more categories. 24. The system of claim 21, wherein applying the level of complexity of the machine learning model to the input value includes: applying at least one biased first level of complexity to the input value to generate at least one class label. 25. The system of claim 23, wherein applying at least one biased first level of complexity to the input value to generate at least one class label includes: applying a negatively biased first level of complexity to the input value to generate a first class label; and applying a positively biased first level of complexity to the input value to generate a second class label. 26. The system of claim 25, wherein determining whether the current level of complexity is able to classify the input value comprises: comparing the first class label to the second class label; and determining whether a consensus exists between the negatively biased first level of complexity and the positively biased first level of complexity based, at least in part, on the comparing. 27. The system of claim 25, the operations further comprising: adjusting one or both of i) the negatively biased first level of complexity and ii) the positively biased first level of complexity to modify a likelihood that the current level of complexity is able to classify the input value with a predetermined confidence level. 28. The system of claim 21, wherein the input value is based, at least in part, on collected information from one or more of the following: capturing an image, capturing an audio sample, or receiving a search query. 29. A computing device comprising: an input port to receive an input value having an initial level of complexity; a memory device storing a plurality of machine learning models, wherein abilities of the machine learning models to classify the input value are different from one another; and a processor to apply one or more of the plurality of the machine learning models based, at least in part, on the initial level of complexity of the input value, wherein the processor is configured to iteratively apply an increasing level of complexity of the machine learning models to the input value. 30. 
The computing device of claim 29, wherein the abilities of the machine learning models to classify the input value comprise: the abilities of the machine learning models to classify the input value into one of two or more categories. 31. The computing device of claim 29, wherein the configuration of the processor to iteratively apply the increasing level of complexity of the machine learning models to the input value includes the processor being configured to: iteratively perform the following steps until a predetermined threshold is exceed: apply a level of complexity of a machine learning model to the input value; determine whether a current level of complexity of the level of complexity is able to classify the input value; determine whether the predetermined threshold has been exceeded when the current level of complexity is not able to classify the input value; and increase the level of complexity of the machine learning model when the current level of complexity is able to classify the input value. 32. The computing device of claim 31, wherein the configuration of the processor to apply the level of complexity of the machine learning model to the input value includes the processor being configured to: apply at least one biased first level of complexity to the input value to generate at least one class label. 33. The computing device of claim 32, wherein the configuration of the processor to apply at least one biased first level of complexity to the input value to generate at least one class label includes the processor being configured to: apply a negatively biased level of complexity to the input value to generate a first class label; and apply a positively biased level of complexity to the input value to generate a second class label. 34. The computing device of claim 33, wherein the processor is configured to: compare the first class label to the second class label; and determine whether a consensus exists between the negatively biased level of complexity and the positively biased level of complexity based, at least in part, on the comparing. 35. The computing device of claim 33, wherein the processor is configured to: adjust one or both of i) the negatively biased level of complexity and ii) the positively biased level of complexity to modify a likelihood that the level of complexity is able to classify the input value. 36. The computing device of claim 29, wherein the processor is configured to apply the plurality of the machine learning models on the input value sequentially in order of increasing ability of the machine learning models to classify the input value. 37. The computing device of claim 29, wherein a computing cost of classifying the input value is proportional to the initial level of complexity of the input value. 38. 
One or more computer-readable storage media of a client device storing computer-executable instructions that, when executed by one or more processors of the client device, configure the one or more processors to perform operations comprising: receiving an input value; iteratively performing the following steps until a predetermined threshold is exceed: applying a level of complexity of a machine learning model to the input value; determining whether a current level of complexity of the level of complexity is able to classify the input value; determining whether the predetermined threshold has been exceeded when the current level of complexity is not able to classify the input value; and increasing the level of complexity of the machine learning model when the current level of complexity is able to classify the input value. 39. The one or more computer-readable storage media of claim 38, wherein applying the level of complexity of the machine learning model to the input value includes: applying at least one biased first level of complexity to the input value to generate at least one class label. 40. The one or more computer-readable storage media of claim 39, wherein applying at least one biased first level of complexity to the input value to generate at least one class label includes: applying a negatively biased first level of complexity to the input value to generate a first class label; and applying a positively biased first level of complexity to the input value to generate a second class label.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Scalable-effort machine learning may automatically and dynamically adjust the amount of computational effort applied to input data based on the complexity of the input data. This is in contrast to fixed-effort machine learning, which uses a one-size-fits-all approach to applying a single classifier algorithm to both simple data and complex data. Scalable-effort machine learning involves, among other things, classifiers that may be arranged as a series of multiple classifier stages having increasing complexity (and accuracy). A first classifier stage may involve relatively simple machine learning models able to classify data that is relatively simple. Subsequent classifier stages have increasingly complex machine learning models and are able to classify more complex data. Scalable-effort machine learning includes algorithms that can differentiate among data based on complexity of the data.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Scalable-effort machine learning may automatically and dynamically adjust the amount of computational effort applied to input data based on the complexity of the input data. This is in contrast to fixed-effort machine learning, which uses a one-size-fits-all approach to applying a single classifier algorithm to both simple data and complex data. Scalable-effort machine learning involves, among other things, classifiers that may be arranged as a series of multiple classifier stages having increasing complexity (and accuracy). A first classifier stage may involve relatively simple machine learning models able to classify data that is relatively simple. Subsequent classifier stages have increasingly complex machine learning models and are able to classify more complex data. Scalable-effort machine learning includes algorithms that can differentiate among data based on complexity of the data.
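The abstract and claims above describe a cascade of classifier stages of increasing complexity, where a stage's negatively and positively biased versions must agree before classification stops. A minimal sketch of that control flow; the threshold-based stage models are stand-ins, not the patent's classifiers:

```python
# Sketch of scalable-effort classification: try stages from simple to complex,
# stopping as soon as the biased pair at a stage reaches consensus.
def cascade_predict(x, stages):
    """stages: list of (predict_negatively_biased, predict_positively_biased) callables."""
    for predict_low, predict_high in stages:
        label_neg = predict_low(x)
        label_pos = predict_high(x)
        if label_neg == label_pos:    # consensus: this effort level suffices
            return label_neg
    return label_pos                  # fall back to the most complex stage's answer

stage1 = (lambda x: int(x > 0.7), lambda x: int(x > 0.3))    # cheap, widely biased pair
stage2 = (lambda x: int(x > 0.55), lambda x: int(x > 0.45))  # costlier, tighter pair
print(cascade_predict(0.9, [stage1, stage2]))  # simple input resolved at stage 1
print(cascade_predict(0.5, [stage1, stage2]))  # harder input escalated to stage 2
```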
A generalized autoregressive integrated moving average (ARIMA) model for use in predictive analytics of time series is based upon creating all possible ARIMA models (by knowing a priori the largest possible values of the p, d and q parameters forming the model), and utilizing the results of at least two different performance measures to ultimately choose the ARIMA(p,d,q) model that is most appropriate for the time series under study. The method of the present invention allows each parameter to range over all possible values, and then evaluates the complete universe of all possible ARIMA models based on these combinations of p, d and q to find the specific p, d and q parameters that yield the “best” (i.e., lowest value) performance measure results. This generalized ARIMA model is particularly useful in predicting future operating hours of power plants and scheduling maintenance events on the gas turbines at these plants.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of scheduling events for industrial equipment using an autoregressive integrated moving average (ARIMA) model for predicting future operating hours based on a times series of past operating hours of the industrial equipment, comprising defining a maximum possible value for each parameter p, d, q of an ARIMA(p,d,q) model, p defining a number of autoregressive terms to include in the ARIMA model, d defining a number of differencing operations to perform in the ARIMA model, and q defining a number of moving average terms to include in the ARIMA model, the maximum possible values identified as pr, dr, and qr; for all possible combinations of p from 0 to pr, d from 0 to dr, and q from 0 to qr, performing the following steps: determining a set of ARIMA coefficients associated with a training interval of the time series; predicting a set of N future hours based on the determined coefficients; computing at least one performance measure of the predicted set of N future operating hours with respect to actual time series data; and ranking all possible combinations of ARIMA(p,d,q) based on the computed performance measures; selecting a preferred set of (p, d, q) parameters based on the ranking; generating a predicted future time series of industrial operating hours using the selected ARIMA(p,d,q) model; and scheduling events based on the predicted future operating hours. 2. The method of claim 1, wherein the at least one performance measure is selected from the group consisting of: MAPE, SMAPE, MAE, and SSE. 3. The method of claim 1, wherein at least two different performance measures are computed for each possible ARIMA model. 4. The method of claim 1, wherein the ranking step creates a listing of all possible ARIMA models from the smallest value to the largest value of performance measure, for each computed performance measure. 5. The method of claim 1, wherein the ARIMA(p,d,q) model used in the analysis is the wavelet-ARIMA(p,d,q) model. 6. The method of claim 5, wherein the wavelet-ARIMA(p,d,q) module used in the analysis is the Daubechies wavelet transform. 7. 
A computer program product comprising a non-transitory computer readable recording medium having recorded thereon a computer program comprising instructions for, when executed on a computer, instructing said computer to perform a method for scheduling events associated with industrial equipment, using an autoregressive integrated moving average (ARIMA) model for predicting future operating hours based on past operating hours of the industrial equipment, comprising defining a maximum possible value for each parameter p, d, q of an ARIMA(p,d,q) model, p defining a number of autoregressive terms to include in the ARIMA model, d defining a number of differencing operations to perform in the ARIMA model, and q defining a number of moving average terms to include in the ARIMA model, the maximum possible values identified as pr, dr, and qr; for all possible combinations of p from 0 to pr, d from 0 to dr, and q from 0 to qr, performing the following steps: determining a set of ARIMA coefficients associated with a training interval of the time series; predicting a set of N future hours based on the determined coefficients; computing at least one performance measure of the predicted set of N future operating hours with respect to actual time series data; and ranking all possible combinations of ARIMA(p,d,q) based on the computed performance measures; selecting a preferred set of (p, d, q) parameters based on the ranking; generating a predicted future time series of industrial operating hours using the selected ARIMA(p,d,q) model; and scheduling events based on the predicted future operating hours. 8. The computer program product of claim 7, wherein the at least one performance measure is selected from the group consisting of: MAPE, SMAPE, MAE, and SSE. 9. The computer program product of claim 7, wherein at least two different performance measures are computed for each possible ARIMA model. 10. The computer program product of claim 7, wherein the ranking step creates a listing of all possible ARIMA models from the smallest value to the largest value of performance measure, for each computed performance measure. 11. The computer program product of claim 7, wherein the ARIMA(p,d,q) model used in the analysis is the wavelet-ARIMA(p,d,q) model. 12. The computer program product of claim 11, wherein the wavelet-ARIMA(p,d,q) module used in the analysis is the Daubechies wavelet transform. 13. 
A method of scheduling gas turbine maintenance events using an autoregressive integrated moving average (ARIMA) model for predicting future gas turbine operating hours based on a times series of past operating hours of the gas turbine, comprising defining a maximum possible value for each parameter p, d, q of an ARIMA(p,d,q) model, p defining a number of autoregressive terms to include in the ARIMA model, d defining a number of differencing operations to perform in the ARIMA model, and q defining a number of moving average terms to include in the ARIMA model, the maximum possible values identified as pr, dr, and qr; for all possible combinations of p from 0 to pr, d from 0 to dr, and q from 0 to qr, performing the following steps: determining a set of ARIMA coefficients associated with a training interval of the time series; predicting a set of N future operating hours based on the determined coefficients; computing at least one performance measure of the predicted set of N future operating hours with respect to actual time series data; and ranking all possible combinations of ARIMA(p,d,q) based on the computed performance measures; selecting a preferred set of (p, d, q) parameters based on the ranking; generating a predicted future time series of gas turbine operating hours using the selected ARIMA(p,d,q) model; and scheduling gas turbine maintenance events based on the predicted future time series of gas turbine operating hours. 14. The method as defined in claim 13 wherein the scheduled maintenance inventions comprises a set of disassembly maintenance events, each disassembly maintenance event to be scheduled after a predetermined number of operating hours. 15. The method as defined in claim 14 where the set of disassembly maintenance events includes a combustion inspection, hot gas path inspection and a major inspection. 16. The method as defined in claim 15 wherein the combustion inspection is scheduled more frequently than the hot gas path inspection, which is scheduled more frequently than the major inspection. 17. The method as defined in claim 16 wherein the combustion inspection is scheduled about every 8000 operating hours of the gas turbine, the hot gas path inspection is scheduled ever 16,000 operating hours, and the major inspection is scheduled every 32,000 operating hours. 18. The method of claim 13, wherein the at least one performance measure is selected from the group consisting of: MAPE, SMAPE, MAE, and SSE. 19. The method of claim 13, wherein at least two different performance measures are computed for each possible ARIMA model. 20. The method of claim 13, wherein the ARIMA(p,d,q) model used in the analysis is the wavelet-ARIMA(p,d,q) model. 21. The method of claim 20, wherein the wavelet-ARIMA(p,d,q) module used in the analysis is the Daubechies wavelet transform.
REJECTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: A generalized autoregressive integrated moving average (ARIMA) model for use in predictive analytics of time series is based upon creating all possible ARIMA models (by knowing a priori the largest possible values of the p, d and q parameters forming the model), and utilizing the results of at least two different performance measures to ultimately choose the ARIMA(p,d,q) model that is most appropriate for the time series under study. The method of the present invention allows each parameter to range over all possible values, and then evaluates the complete universe of all possible ARIMA models based on these combinations of p, d and q to find the specific p, d and q parameters that yield the “best” (i.e., lowest value) performance measure results. This generalized ARIMA model is particularly useful in predicting future operating hours of power plants and scheduling maintenance events on the gas turbines at these plants.
G06N7005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A generalized autoregressive integrated moving average (ARIMA) model for use in predictive analytics of time series is based upon creating all possible ARIMA models (by knowing a priori the largest possible values of the p, d and q parameters forming the model), and utilizing the results of at least two different performance measures to ultimately choose the ARIMA(p,d,q) model that is most appropriate for the time series under study. The method of the present invention allows each parameter to range over all possible values, and then evaluates the complete universe of all possible ARIMA models based on these combinations of p, d and q to find the specific p, d and q parameters that yield the “best” (i.e., lowest value) performance measure results. This generalized ARIMA model is particularly useful in predicting future operating hours of power plants and scheduling maintenance events on the gas turbines at these plants.
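The claims above enumerate every ARIMA(p, d, q) up to bounds pr, dr, qr, score the forecasts against held-out data, and rank the orders. A condensed sketch of that loop using statsmodels; the toy operating-hours series, the bounds, and the single MAPE measure are assumptions (the claims call for at least one, and preferably two, performance measures):

```python
# Sketch of exhaustive ARIMA(p, d, q) selection ranked by a performance measure.
import itertools
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

hours = np.cumsum(np.random.default_rng(0).uniform(20, 24, size=60))  # toy operating hours
train, test = hours[:48], hours[48:]
pr, dr, qr, N = 2, 1, 2, len(test)

def mape(actual, predicted):
    return float(np.mean(np.abs((actual - predicted) / actual)))

results = []
for p, d, q in itertools.product(range(pr + 1), range(dr + 1), range(qr + 1)):
    try:
        forecast = ARIMA(train, order=(p, d, q)).fit().forecast(steps=N)
        results.append(((p, d, q), mape(test, np.asarray(forecast))))
    except Exception:
        continue   # some orders fail to fit; skip them
results.sort(key=lambda item: item[1])   # rank: lowest performance-measure value first
print("best (p, d, q):", results[0])
```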
Embodiments of the invention are directed to systems, methods and computer program products for utilizing a shadow ridge rescaling technique to model incremental treatment effect at the individual level, based on a randomized test and control data. A shadow dependent variable is introduced with its mathematical expectation being exactly the incremental effect. Ridge regression is utilized to regress the shadow dependent variable on a set of variables generated from the test model score and the control model score. A tuning parameter in the ridge regression is selected so that the score can best rank order the incremental effect of the treatment. The final score is a nonlinear function of the test model score plus a nonlinear function of a control model score, and outperforms the traditional differencing score method.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system for modeling incremental effect, the system comprising: a memory device; and a processing device operatively coupled to the memory device, wherein the processing device is configured to execute computer-readable program code to: split data for observations into development data and validation data; create a test group model from the development data based on test group observations that are subject to a treatment; create a control group model from the development data based on control group observations that are not subject to the treatment; create a shadow dependent variable for the development data, wherein the shadow dependent variable is dependent on the test group observations, the control group observations, and a measurement performance variable; score the development data by applying the test group model and the control group model to the development data; create cubic spline basis functions for the test group model and the control group model; standardize the shadow dependent variable and the cubic spline basis functions using the development data; create a design matrix of the standardized shadow dependent variable and the cubic spline basis functions; conduct a singular value decomposition on the design matrix; utilize a binary search algorithm to determine tuning parameters for a set of degree of freedoms from the singular value decomposition; calculate a parameter vector for each of the tuning parameters; create a scoring formula based on the standardized cubic spline basis functions and the parameter vector for each of the tuning parameters; calculate scores for each of the tuning parameters using the scoring formula and the validation data; calculate an incremental effect area index of the scores for the tuning parameters values using the validation data; identify a tuning parameter from the tuning parameters corresponding to a score from the scores that has a highest incremental effect area index; and wherein the tuning parameter with the score having the highest incremental effect area index is used to rank order an incremental effect of the treatment. 2. The system of claim 1, wherein the observations are further split into holding data that is used to determine the accuracy of the incremental effect model score. 3. The system of claim 1, wherein the shadow dependent variable is defined by the following equation: Z = { n n t  Y if   the   individual   is   in   test - n n c  Y if   the   individual   is   in   control ; and wherein nt is a number of test group observation, nc is a number of control group observations, n is a total number of observations, and Y is the measurement performance variable. 4. The system of claim 1, wherein the cubic spline basis functions of the test group are U 1 = P 1 ,  U 2 = P 1 2 ,  U 3 = P 1 3 ,  U 4 = ( P 1 - a 1 ) 3 · 1  ( P 1 ≤ a 1 ) ,  U 5 = ( P 1 - a 2 ) 3 · 1  ( P 1 ≤ a 2 ) ,  …   …   …   … U k + 3 = ( P 1 - a k ) 3 · 1  ( P 1 ≤ a k ) ; the cubic spline basis functions of the control group are V 1 = P 2 ,  V 2 = P 2 2 ,  V 3 = P 2 3 ,  V 4 = ( P 2 - b 1 ) 3 · 1  ( P 2 ≤ b 1 ) ,  V 5 = ( P 2 - b 2 ) 3 · 1  ( P 2 ≤ b 2 ) ,  …   …   …   … V k + 3 = ( P 2 - b k ) 3 · 1  ( P 2 ≤ b k ) . 5. 
The system of claim 1, wherein standardizing the shadow dependent variable and the cubic spline basis functions using the development data comprises subtracting the variable's mean and dividing the difference by the variable's standard deviation, wherein the mean the standard deviation are calculated from the development data. 6. The system of claim 1, wherein conducting the value decomposition for the design matrix (X) comprises using a formula X=Q1DQ2T; and wherein Q1 and Q2 are n×(2k+6) and (2k+6)×(2k+6) orthogonal matrices, D is a (2k+6)×(2k+6) diagonal matrix, with diagonal entries d1≧d2≧ . . . ≧d2k+6≧0 called the singular values of matrix X. 7. The system of claim 1, wherein utilizing the binary search algorithm to determine the tuning parameters for the set of degree of freedoms from the singular value decomposition comprises: set δ as an estimation error allowed; identify the tuning parameters for each dfj; initialize end points of the searching interval by letting x1=0 and x2=u; calculate x = x 1 + x 2 2 and df = ∑ i = 1 2  k + 6   d i 2 d i 2 + x ; when |df−dfj|≦δ then x is the value of the turning parameter corresponding to dfj; when |df−dfj|>δ then update the end points such that if df<dfj then let x2=x, otherwise let x1=x, recalculate x = x 1 + x 2 2 and df = ∑ i = 1 2  k + 6   d i 2 d i 2 + x , and iterate until |df−drj|≦δ is met. 8. The system of claim 1, wherein the parameter vector is calculated for each of the tuning parameters λj using the following formula: β ^ ridge  ( λ j ) = Q 2  Diag  ( d 1 d 1 2 + λ j , d 2 d 2 2 + λ j , …  , d 2  k + 6 d 2  k + 6 2 + λ j )  Q 1 T  z * . 9. The system of claim 1, wherein the scoring formula is S(λj)=(U1*, U2*, . . . , Uk+3*, V1*, V2*, . . . , Vk+3*){circumflex over (β)}ridge(λj). 10. The system of claim 1, wherein calculating the incremental effect area index of the scores for the tuning parameters values using the validation data comprises: ranking the observations in the validation data based on the scores from low to high; determining an average response (Y) value for the test group and the an average response variable (Y) value for the control group for increasing percentages of observations of the scores from lowest to highest; determining a cumulative incremental effect value that is equal to the difference between the average response (Y) value for the test group and the average response (Y) value for the control group for the increasing percentages of observations of the scores from lowest to highest; assuming the cumulative incremental effect value is C(p) when the percentage of observations is p; and calculating the incremental effect area index using formula: 1 - 1 C  ( 1 )  { p 1 + p 2 2  C  ( p 1 ) + ∑ i = 2 s   p i + 1 - p i - 1 2  C  ( p i ) + p s - p s - 1 2  C  ( p s ) } . 11. 
A computer program product for modeling incremental effect, the computer program product comprising at least one non-transitory computer-readable medium having computer-readable program code portions embodied therein, the computer-readable program code portions comprising: an executable portion configured to split data for observations into development data and validation data; an executable portion configured to create a test group model from the development data based on test group observations that are subject to a treatment; an executable portion configured to create a control group model from the development data based on test group observations that fail to be subject to the treatment; an executable portion configured to create a shadow dependent variable for the development data, wherein the shadow dependent variable is dependent on the test group observations, the control group observations, and a measurement performance variable; an executable portion configured to score the development data by applying the test group model and the control group model to the development data; an executable portion configured to create cubic spline basis functions for the test group model and the control group model; an executable portion configured to standardize the shadow dependent variable and the cubic spline basis functions using the development data; an executable portion configured to create a design matrix of the standardized shadow dependent variable and the cubic spline basis functions; an executable portion configured to conduct a singular value decomposition on the design matrix; an executable portion configured to utilize a binary search algorithm to determine tuning parameters for a set of degree of freedoms from the singular value decomposition; an executable portion configured to calculate a parameter vector for each of the tuning parameters; an executable portion configured to create a scoring formula based on the standardized cubic spline basis functions and the parameter vector for each of the tuning parameters; an executable portion configured to calculate scores for each of the tuning parameters using the scoring formula and the validation data; an executable portion configured to calculate an incremental effect area index of the scores for the tuning parameters values using the validation data; an executable portion configured to identify a tuning parameter from the tuning parameters that has a highest incremental effect area index; and wherein the tuning parameter with the score having the highest incremental effect area index is used to rank order an incremental effect of the treatment. 12. The computer program product of claim 11, wherein the observations are further split into holding data that is used to determine the accuracy of the incremental effect model score. 13. The computer program product of claim 11, wherein the shadow dependent variable is defined by the following equation: Z = { n n t  Y if   the   individual   is   in   test - n n c  Y if   the   individual   is   in   control ; and wherein nt is a number of test group observation, nc is a number of control group observations, n is a total number of observations, and Y is the measurement performance variable. 14. 
The computer program product of claim 11, wherein the cubic spline basis functions of the test group are U 1 = P 1 ,  U 2 = P 1 2 ,  U 3 = P 1 3 ,  U 4 = ( P 1 - a 1 ) 3 · 1  ( P 1 ≤ a 1 ) ,  U 5 = ( P 1 - a 2 ) 3 · 1  ( P 1 ≤ a 2 ) ,  …   …   …   … U k + 3 = ( P 1 - a k ) 3 · 1  ( P 1 ≤ a k ) ; the cubic spline basis functions of the control group are V 1 = P 2 ,  V 2 = P 2 2 ,  V 3 = P 2 3 ,  V 4 = ( P 2 - b 1 ) 3 · 1  ( P 2 ≤ b 1 ) ,  V 5 = ( P 2 - b 2 ) 3 · 1  ( P 2 ≤ b 2 ) ,  …   …   …   … V k + 3 = ( P 2 - b k ) 3 · 1  ( P 2 ≤ b k ) . 15. The computer program product of claim 11, wherein standardizing the shadow dependent variable and the cubic spline basis functions using the development data comprises subtracting the variable's mean and dividing the difference by the variable's standard deviation, wherein the mean the standard deviation are calculated from the development data. 16. The computer program product of claim 11, wherein conducting the value decomposition for the design matrix (X) comprises using a formula X=Q1DQ2T; and wherein Q1 and Q2 are n×(2k+6) and (2k+6)×(2k+6) orthogonal matrices, D is a (2k+6)×(2k+6) diagonal matrix, with diagonal entries d1≧d2≧ . . . ≧d2k+6≧0 called the singular values of matrix X. 17. The computer program product of claim 11, wherein utilizing the binary search algorithm to determine the tuning parameters for the set of degree of freedoms from the singular value decomposition comprises: set δ as an estimation error allowed; identify the tuning parameters for each dfj; initialize end points of the searching interval by letting x1=0 and x2=u; calculate x = x 1 + x 2 2 and df = ∑ i = 1 2  k + 6   d i 2 d i 2 + x ; when |df−df|≦δ then×is the value of the turning parameter corresponding to dfj; when |df−df|>δ then update the end points such that if df<dfj then let x2=x, otherwise let x1=x, recalculate x = x 1 + x 2 2 and df = ∑ i = 1 2  k + 6   d i 2 d i 2 + x , and iterate until |df−dfj|δ is met. 18. The computer program product of claim 11, wherein the parameter vector is calculated for each of the tuning parameters λj using the following formula: β ^ ridge  ( λ j ) = Q 2  Diag  ( d 1 d 1 2 + λ j , d 2 d 2 2 + λ j , …  , d 2  k + 6 d 2  k + 6 2 + λ j )  Q 1 T  z * . 19. The computer program product of claim 11, wherein the scoring formula is S(λj)=(U1*, U2*, . . . , V1*, V2*, . . . , Vk+3*){circumflex over (β)}ridge(λj). 20. The computer program product of claim 11, wherein calculating the incremental effect area index of the scores for the tuning parameters values using the validation data comprises: ranking the observations in the validation data based on the scores from low to high; determining an average response (Y) value for the test group and the an average response variable (Y) value for the control group for increasing percentages of observations of the scores from lowest to highest; determining a cumulative incremental effect value that is equal to the difference between the average response (Y) value for the test group and the average response (Y) value for the control group for the increasing percentages of observations of the scores from lowest to highest; assuming the cumulative incremental effect value is C(p) when the percentage of observations is p; and calculating the incremental effect area index using formula: 1 - 1 C  ( 1 )  { p 1 + p 2 2  C  ( p 1 ) + ∑ i = 2 s   p i + 1 - p i - 1 2  C  ( p i ) + p s - p s - 1 2  C  ( p s ) } . 21. 
A method for modeling incremental effect, the method comprising: splitting, by a processor, data for observations into development data and validation data; creating, by a processor, a test group model from the development data based on test group observations that are subject to a treatment; creating, by a processor, a control group model from the development data based on test group observations that fail to be subject to the treatment; creating, by a processor, a shadow dependent variable for the development data, wherein the shadow dependent variable is dependent on the test group observations, the control group observations, and a measurement performance variable; scoring, by a processor, the development data by applying the test group model and the control group model to the development data; creating, by a processor, cubic spline basis functions for the test group model and the control group model; standardizing, by a processor, the shadow dependent variable and the cubic spline basis functions using the development data; creating, by a processor, a design matrix of the standardized shadow dependent variable and the cubic spline basis functions; conducting, by a processor, a singular value decomposition on the design matrix; utilizing, by a processor, a binary search algorithm to determine tuning parameters for a set of degree of freedoms from the singular value decomposition; calculating, by a processor, a parameter vector for each of the tuning parameters; creating, by a processor, a scoring formula based on the standardized cubic spline basis functions and the parameter vector for each of the tuning parameters; calculating, by a processor, scores for each of the tuning parameters using the scoring formula and the validation data; calculating, by a processor, an incremental effect area index of the scores for the tuning parameters values using the validation data; identifying, by a processor, a tuning parameter from the tuning parameters that has a highest incremental effect area index; and wherein the tuning parameter with the score having the highest incremental effect area index is used to rank order an incremental effect of the treatment.
REJECTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: Embodiments of the invention are directed to systems, methods and computer program products for utilizing a shadow ridge rescaling technique to model incremental treatment effect at the individual level, based on a randomized test and control data. A shadow dependent variable is introduced with its mathematical expectation being exactly the incremental effect. Ridge regression is utilized to regress the shadow dependent variable on a set of variables generated from the test model score and the control model score. A tuning parameter in the ridge regression is selected so that the score can best rank order the incremental effect of the treatment. The final score is a nonlinear function of the test model score plus a nonlinear function of a control model score, and outperforms the traditional differencing score method.
G06N700
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Embodiments of the invention are directed to systems, methods and computer program products for utilizing a shadow ridge rescaling technique to model incremental treatment effect at the individual level, based on a randomized test and control data. A shadow dependent variable is introduced with its mathematical expectation being exactly the incremental effect. Ridge regression is utilized to regress the shadow dependent variable on a set of variables generated from the test model score and the control model score. A tuning parameter in the ridge regression is selected so that the score can best rank order the incremental effect of the treatment. The final score is a nonlinear function of the test model score plus a nonlinear function of a control model score, and outperforms the traditional differencing score method.
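The claims above spell out two concrete pieces: the shadow dependent variable Z (scaled +n/n_t·Y for test observations, −n/n_c·Y for control) and the SVD-based ridge solution β(λ) = Q2·diag(d_i/(d_i²+λ))·Q1ᵀz. A small numpy sketch of just those pieces; the random design matrix stands in for the standardized spline basis the patent constructs:

```python
# Sketch: build the shadow dependent variable, then solve ridge regression
# through the singular value decomposition of the design matrix.
import numpy as np

rng = np.random.default_rng(0)
n = 200
is_test = rng.random(n) < 0.5
y = rng.normal(size=n)                     # measurement performance variable Y
n_t, n_c = is_test.sum(), (~is_test).sum()

# shadow dependent variable: its expectation is the incremental treatment effect
z = np.where(is_test, (n / n_t) * y, -(n / n_c) * y)

X = rng.normal(size=(n, 12))               # stand-in for the standardized spline basis
Q1, d, Q2t = np.linalg.svd(X, full_matrices=False)   # X = Q1 diag(d) Q2^T

def ridge_beta(lam):
    return Q2t.T @ np.diag(d / (d**2 + lam)) @ Q1.T @ z

scores = X @ ridge_beta(lam=5.0)           # score used to rank-order incremental effect
print(scores[:5])
```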
Technologies are described herein for modifying the modality of a computing device based upon a user's brain activity. A machine learning classifier is trained using data that identifies a modality for operating a computing device and data identifying brain activity of a user of the computing device. Once trained, the machine learning classifier can select a mode of operation for the computing device based upon a user's current brain activity and, potentially, other biological data. The computing device can then be operated in accordance with the selected modality. An application programming interface can also expose an interface through which an operating system and application programs executing on the computing device can obtain data identifying the modality selected by the machine learning classifier. Through the use of this data, the operating system and application programs can modify their mode of operation to be most suitable for the user's current mental state.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-implemented method, comprising: training a machine learning model using data identifying a modality for operating a computing device and data identifying first brain activity of a user of the computing device while the computing device is operating in the modality; receiving data identifying second brain activity of the user while operating the computing device; utilizing the machine learning model and the data identifying the second brain activity of the user to select one of a plurality of modalities for operating the computing device; and causing the computing device to operate in accordance with the selected modality. 2. The computer-implemented method of claim 1, further comprising exposing data identifying the selected one of the plurality of modalities by way of an application programming interface (API). 3. The computer-implemented method of claim 1, wherein the plurality of modalities comprise: a first modality in which a first virtual machine is executed on the computing device; and a second modality in which a second virtual machine is executed on the computing device. 4. The computer-implemented method of claim 1, wherein the plurality of modalities comprise: a first modality in which a first virtual desktop is displayed by the computing device; and a second modality in which a second virtual desktop is displayed by the computing device. 5. The computer-implemented method of claim 1, wherein the plurality of modalities comprise: a first modality in which messages directed to the user received at the computing device are suppressed; and a second modality in which messages directed to the user received at the computing device are not suppressed. 6. The computer-implemented method of claim 1, wherein the plurality of modalities comprise: a first modality in which a first plurality of user interface windows are presented by the computing device; and a second modality in which a second plurality of user interface windows are presented by the computing device. 7. The computer-implemented method of claim 1, wherein the plurality of modalities comprise: a first modality in which a user interface element corresponding to a first application that can be selected to execute the first application on the computing device is emphasized; and a second modality in which a user interface element corresponding to a second application that can be selected to execute the second application on the computing device is emphasized. 8. The computer-implemented method of claim 1, wherein the plurality of modalities comprise: a first modality in which a hardware component of the computing device is enabled; and a second modality in which the hardware component of the computing device is not enabled. 9. The computer-implemented method of claim 1, wherein the plurality of modalities comprise: a first modality in which an application executing on the computing device is presented in a full screen mode of operation; and a second modality in which the application executing on the computing device is not presented in the full screen mode of operation. 10. 
An apparatus, comprising: one or more processors; and at least one computer storage medium having computer executable instructions stored thereon which, when executed by the one or more processors, cause the apparatus to expose an application programming interface (API) for providing data identifying a modality for operating the apparatus, receive a request at the API, utilize a machine learning model to select one of a plurality of modalities for operating the apparatus, the one of the plurality of modalities for operating the apparatus being selected based, at least in part, upon data identifying brain activity of a user of the apparatus, and provide data identifying the selected one of the plurality of modalities for operating the apparatus responsive to the request. 11. The apparatus of claim 10, wherein the plurality of modalities comprise: a first modality in which a first virtual machine is executed by the one or more processors; and a second modality in which a second virtual machine is executed by the one or more processors. 12. The apparatus of claim 10, wherein the plurality of modalities comprise: a first modality in which a first virtual desktop is presented by the apparatus on a display device; and a second modality in which a second virtual desktop is presented by the apparatus on a display device. 13. The apparatus of claim 10, wherein the plurality of modalities comprise: a first modality in which messages directed to the user received at the apparatus device are suppressed; and a second modality in which messages directed to the user received at the apparatus are not suppressed. 14. The apparatus of claim 10, wherein the plurality of modalities comprise: a first modality in which a first plurality of user interface windows are presented by the apparatus on a display device; and a second modality in which a second plurality of user interface windows are presented by the apparatus on a display device. 15. The apparatus of claim 10, wherein the plurality of modalities comprise: a first modality in which a user interface element corresponding to a first application that can be selected to execute the first application on the one or more processors is emphasized; and a second modality in which a user interface element corresponding to a second application that can be selected to execute the second application on the one or more processors is emphasized. 16. A computer storage medium having computer executable instructions stored thereon which, when executed by one or more processors, cause the processors to: expose an application programming interface (API) for providing data identifying a modality for operating a computing device; receive a request at the API; utilize a machine learning model to select one of a plurality of modalities for operating the computing device, the one of the plurality of modalities for operating the computing device being selected based, at least in part, upon data identifying brain activity of a user of the computing device; and provide data identifying the selected one of the plurality of modalities for operating the computing device responsive to the request. 17. The computer storage medium of claim 16, wherein the plurality of modalities comprise: a first modality in which an application executing on the computing device is presented in a full screen mode of operation; and a second modality in which the application executing on the computing device is not presented in the full screen mode of operation. 18. 
The computer storage medium of claim 16, wherein the plurality of modalities comprise: a first modality in which a hardware component of the computing device is enabled; and a second modality in which the hardware component of the computing device is not enabled. 19. The computer storage medium of claim 16, wherein the plurality of modalities comprise: a first modality in which a user interface element corresponding to a first application that can be selected to execute the first application on the computing device is emphasized; and a second modality in which a user interface element corresponding to a second application that can be selected to execute the second application on the computing device is emphasized. 20. The computer storage medium of claim 16, wherein the plurality of modalities comprise: a first modality in which a first plurality of user interface windows are presented by the computing device; and a second modality in which a second plurality of user interface windows are presented by the computing device.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Technologies are described herein for modifying the modality of a computing device based upon a user's brain activity. A machine learning classifier is trained using data that identifies a modality for operating a computing device and data identifying brain activity of a user of the computing device. Once trained, the machine learning classifier can select a mode of operation for the computing device based upon a user's current brain activity and, potentially, other biological data. The computing device can then be operated in accordance with the selected modality. An application programming interface can also expose an interface through which an operating system and application programs executing on the computing device can obtain data identifying the modality selected by the machine learning classifier. Through the use of this data, the operating system and application programs can modify their mode of operation to be most suitable for the user's current mental state.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Technologies are described herein for modifying the modality of a computing device based upon a user's brain activity. A machine learning classifier is trained using data that identifies a modality for operating a computing device and data identifying brain activity of a user of the computing device. Once trained, the machine learning classifier can select a mode of operation for the computing device based upon a user's current brain activity and, potentially, other biological data. The computing device can then be operated in accordance with the selected modality. An application programming interface can also expose an interface through which an operating system and application programs executing on the computing device can obtain data identifying the modality selected by the machine learning classifier. Through the use of this data, the operating system and application programs can modify their mode of operation to be most suitable for the user's current mental state.
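The brain-activity records above amount to a supervised classifier that maps a feature vector of biological signals to one of a fixed set of device modalities, plus an API-style entry point that returns the selected modality. A minimal sketch follows; the feature representation, the modality names, and the choice of logistic regression are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

MODALITIES = ["focus", "ambient"]   # illustrative modality names

def train_modality_model(brain_features, modality_labels):
    """brain_features: (n_samples, n_features) array of EEG-style features;
    modality_labels: integer index into MODALITIES recorded while each
    sample was collected."""
    model = LogisticRegression(max_iter=1000)
    model.fit(brain_features, modality_labels)
    return model

def select_modality(model, current_features):
    """API-style entry point: return the modality name for the current reading."""
    idx = int(model.predict(np.asarray(current_features).reshape(1, -1))[0])
    return MODALITIES[idx]
```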
Technologies for analyzing temporal components of multimodal data to detect short-term multimodal events, determine relationships between short-term multimodal events, and recognize long-term multimodal events, using a deep learning architecture, are disclosed.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A multimodal data analyzer comprising instructions embodied in one or more non-transitory machine accessible storage media, the multimodal data analyzer configured to cause a computing system comprising one or more computing devices to: access a set of time-varying instances of multimodal data having at least two different modalities, each instance of the multimodal data having a temporal component; and algorithmically learn a feature representation of the temporal component of the multimodal data using a deep learning architecture. 2. The multimodal data analyzer of claim 1, configured to classify the set of multimodal data by applying a temporal discriminative model to the feature representation of the temporal component of the multimodal data. 3. The multimodal data analyzer of claim 1, configured to, using the deep learning architecture, identify short-term temporal features in the multimodal data. 4. The multimodal data analyzer of claim 1, wherein the multimodal data comprises recorded speech and the multimodal data analyzer is configured to identify an intra-utterance dynamic feature of the recorded speech. 5. The multimodal data analyzer of claim 1, configured to, using the deep learning architecture, identify a long-term temporal feature in the multimodal data. 6. The multimodal data analyzer of claim 1, wherein the multimodal data comprises recorded speech and the multimodal data analyzer is configured to identify an inter-utterance dynamic feature in the recorded speech. 7. The multimodal data analyzer of claim 1, wherein the multimodal data comprises audio and video, and the multimodal data analyzer is configured to (i) identify short-term dynamic features in the audio and video data and (ii) infer a long-term dynamic feature based on a combination of temporally-spaced audio and video short-term dynamic features. 8. The multimodal data analyzer of claim 1, wherein the temporal deep learning architecture comprises a hybrid model having a generative component and a discriminative component, and wherein the multimodal data analyzer uses output of the generative component as input to the discriminative component. 9. The multimodal data analyzer of claim 1, wherein the multimodal data analyzer is configured to identify at least two different temporally-spaced events in the multimodal data and infer a correlation between the at least two different temporally-spaced multimodal events. 10. The multimodal data analyzer of claim 1, configured to algorithmically learn the feature representation of the temporal component of the multimodal data using an unsupervised machine learning technique. 11. The multimodal data analyzer of claim 1, configured to algorithmically infer missing data both within a modality and across modalities. 12. A method for classifying multimodal data, the multimodal data comprising data having at least two different modalities, the method comprising, with a computing system comprising one or more computing devices: accessing a set of time-varying instances of multimodal data, each instance of the multimodal data having a temporal component; and algorithmically classifying the set of time-varying instances of multimodal data using a discriminative temporal model, the discriminative temporal model trained using a feature representation generated by a deep temporal generative model based on the temporal component of the multimodal data. 13. 
The method of claim 12, comprising identifying, within each modality of the multimodal data, a plurality of short-term features having different time scales. 14. The method of claim 13, comprising, for each modality within the multimodal data, inferring a long-term dynamic feature based on the short-term dynamic features identified within the modality. 15. The method of claim 13, comprising fusing short-term features across the different modalities of the multimodal data, and inferring a long-term dynamic feature based on the short-term features fused across the different modalities of the multimodal data. 16. A system for algorithmically recognizing a multimodal event in data, the system comprising: a data access module to access a set of time-varying instances of multimodal data, each instance of the multimodal data having a temporal component; a classifier module to classify different instances in the set of time-varying instances of multimodal data as indicative of different short-term events; and an event recognizer module to (i) recognize a longer-term multimodal event based on a plurality of multimodal short-term events identified by the classifier module and (ii) generate a semantic label for the recognized multimodal event. 17. The system of claim 16, wherein the classifier module is to apply a deep temporal generative model to the temporal component of the audio-visual data. 18. The system of claim 17, wherein the event recognizer module is to use a discriminative temporal model to recognize the longer-term multimodal event. 19. The system of claim 18, wherein the system is to train the discriminative temporal model using a feature representation generated by the deep temporal generative model. 20. The system of claim 16, wherein the event recognizer module is to recognize the longer-term multimodal event by correlating a plurality of different short-term multimodal events having different time scales.
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: Technologies for analyzing temporal components of multimodal data to detect short-term multimodal events, determine relationships between short-term multimodal events, and recognize long-term multimodal events, using a deep learning architecture, are disclosed.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Technologies for analyzing temporal components of multimodal data to detect short-term multimodal events, determine relationships between short-term multimodal events, and recognize long-term multimodal events, using a deep learning architecture, are disclosed.
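The multimodal claims above repeatedly describe fusing short-term features across modalities and inferring longer-term features from them. The sketch below shows only that fusion/pooling idea, assuming pre-computed, time-aligned short-term feature arrays for two modalities; it does not implement the deep temporal generative or discriminative models themselves.

```python
import numpy as np

def fuse_short_term_features(audio_feats, video_feats):
    """Concatenate per-window short-term features from two modalities.
    audio_feats, video_feats: (n_windows, d_a) and (n_windows, d_v) arrays
    aligned on the same time windows."""
    return np.concatenate([audio_feats, video_feats], axis=1)

def pool_long_term(fused, window=10):
    """Infer a coarse long-term representation by mean-pooling fused
    short-term features over a horizon of `window` windows."""
    n = (fused.shape[0] // window) * window
    return fused[:n].reshape(-1, window, fused.shape[1]).mean(axis=1)
```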
A method of training a neural network model includes determining a specificity of multiple filters after a predetermined number of training iterations. The method also includes training each of the filters based on the specificity.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of training a neural network model, comprising: determining a specificity of a plurality of filters after a predetermined number of training iterations; and training each filter of the plurality of filters based at least in part on the specificity. 2. The method of claim 1, further comprising determining whether to continue the training of each filter based at least in part on the specificity. 3. The method of claim 2, further comprising stopping training for a specific filter of the plurality of filters when the specificity of the specific filter is greater than a threshold. 4. The method of claim 2, further comprising stopping training of a specific filter when a change in the specificity of the specific filter is less than a threshold after the predetermined number of training iterations. 5. The method of claim 2, further comprising eliminating a specific filter from the neural network model when the specificity of the specific filter is less than a threshold after the predetermined number of training iterations. 6. The method of claim 5, further comprising continuing training of the neural network model after eliminating the specific filter. 7. The method of claim 1, in which the specificity is based at least in part on entropy, change from original values, variance weight values, difference from other filters, cross correlation with other filters, or a combination thereof. 8. The method of claim 1, in which the neural network model is trained while an error function is augmented with a pooled measure of the specificity of the plurality of filters. 9. The method of claim 1, further comprising determining a target complexity of the neural network model, based at least in part on memory specification, power specifications, or a combination thereof. 10. The method of claim 9, in which filters are selectively trained based at least in part on the determined target complexity, prioritizing filters to train based at least in part on the determined target complexity, or a combination thereof. 11. The method of claim 1, further comprising: prioritizing filters to apply to an input based at least in part on the specificity of each of the plurality of filters; and selecting a number of prioritized filters based at least in part on a target complexity of the neural network model. 12. The method of claim 11, in which the target complexity is based at least in part on memory specification, power specifications, or a combination thereof. 13. An apparatus for training a neural network model, comprising: a memory unit; and at least one processor coupled to the memory unit, the at least one processor configured: to determine a specificity of a plurality of filters after a predetermined number of training iterations; and to train each filter of the plurality of filters based at least in part on the specificity. 14. The apparatus of claim 13, in which the at least one processor is further configured to determine whether to continue the training of each filter based at least in part on the specificity. 15. The apparatus of claim 14, in which the at least one processor is further configured to stop training for a specific filter of the plurality of filters when the specificity of the specific filter is greater than a threshold. 16. 
The apparatus of claim 14, in which the at least one processor is further configured to stop training of a specific filter when a change in the specificity of the specific filter is less than a threshold after the predetermined number of training iterations. 17. The apparatus of claim 14, in which the at least one processor is further configured to eliminate a specific filter from the neural network model when the specificity of the specific filter is less than a threshold after the predetermined number of training iterations. 18. The apparatus of claim 17, in which the at least one processor is further configured to continue training of the neural network model after eliminating the specific filter. 19. The apparatus of claim 13, in which the specificity is based at least in part on entropy, change from original values, variance weight values, difference from other filters, cross correlation with other filters, or a combination thereof. 20. The apparatus of claim 13, in which the at least one processor is further configured to train the neural network model while augmenting an error function with a pooled measure of the specificity of the plurality of filters. 21. The apparatus of claim 13, in which the at least one processor is further configured to determine a target complexity of the neural network model, based at least in part on memory specification, power specifications, or a combination thereof. 22. The apparatus of claim 21, in which the at least one processor is further configured to selectively train filters based at least in part on the determined target complexity, prioritizing filters to train based at least in part on the determined target complexity, or a combination thereof. 23. The apparatus of claim 13, in which the at least one processor is further configured: to prioritize filters to apply to an input based at least in part on the specificity of each of the plurality of filters; and to select a number of prioritized filters based at least in part on a target complexity of the neural network model. 24. The apparatus of claim 23, in which the target complexity is based at least in part on memory specification, power specifications, or a combination thereof. 25. A apparatus of training a neural network model, comprising: means for determining a specificity of a plurality of filters after a predetermined number of training iterations; and means for training each filter of the plurality of filters based at least in part on the specificity. 26. The apparatus of claim 25, further comprising means for determining whether to continue the training of each filter based at least in part on the specificity. 27. The apparatus of claim 26, further comprising means for stopping training for a specific filter of the plurality of filters when the specificity of the specific filter is greater than a threshold. 28. The apparatus of claim 26, further comprising means for stopping training of a specific filter when a change in the specificity of the specific filter is less than a threshold after the predetermined number of training iterations. 29. The apparatus of claim 26, further comprising means for eliminating a specific filter from the neural network model when the specificity of the specific filter is less than a threshold after the predetermined number of training iterations. 30. The apparatus of claim 29, further comprising means for continuing training of the neural network model after eliminating the specific filter. 31. 
The apparatus of claim 25, in which the specificity is based at least in part on entropy, change from original values, variance weight values, difference from other filters, cross correlation with other filters, or a combination thereof. 32. A non-transitory computer-readable medium for training a neural network model, the computer-readable medium having program code recorded thereon, the program code being executed by a processor and comprising: program code to determine a specificity of a plurality of filters after a predetermined number of training iterations; and program code train each filter of the plurality of filters based at least in part on the specificity.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A method of training a neural network model includes determining a specificity of multiple filters after a predetermined number of training iterations. The method also includes training each of the filters based on the specificity.
G06N308
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A method of training a neural network model includes determining a specificity of multiple filters after a predetermined number of training iterations. The method also includes training each of the filters based on the specificity.
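The filter-training claims above tie the decision to keep training, freeze, or eliminate a filter to a per-filter "specificity" computed from quantities such as entropy or weight variance. A minimal sketch assuming an entropy-based definition is shown below; the entropy-to-specificity mapping and the two thresholds are illustrative assumptions, not values from the patent.

```python
import numpy as np

def filter_specificity(weights, bins=32):
    """Entropy-based specificity of one filter's weights: a low-entropy
    weight-value histogram is treated here as high specificity."""
    hist, _ = np.histogram(weights.ravel(), bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    entropy = -(p * np.log(p)).sum()
    return 1.0 / (1.0 + entropy)          # higher value = more specific

def plan_filter_training(filter_weights, freeze_thr=0.6, prune_thr=0.2):
    """Return a per-filter action after a block of training iterations."""
    actions = []
    for w in filter_weights:
        s = filter_specificity(w)
        if s >= freeze_thr:
            actions.append("freeze")      # specific enough: stop training this filter
        elif s <= prune_thr:
            actions.append("prune")       # too unspecific: eliminate from the model
        else:
            actions.append("train")       # keep training
    return actions
```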
Feature extraction includes determining a reference model for feature extraction and fine-tuning the reference model for different tasks. The method also includes storing a set of weight differences calculated during the fine-tuning. Each set may correspond to a different task.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of feature extraction, comprising: determining a reference model for feature extraction; fine-tuning the reference model for a plurality of different tasks; and storing a set of weight differences calculated during the fine-tuning, each set corresponding to a different task. 2. The method of claim 1, in which the reference model comprises a localization model. 3. The method of claim 1, in which the reference model comprises a feature learning model. 4. The method of claim 1, in which the storing comprises storing only non-zero weight differences. 5. The method of claim 1, in which the fine-tuning comprises applying a task specific classifier. 6. An apparatus for feature extraction, comprising: a memory; and at least one processor coupled to the memory, the at least one processor configured: to determine a reference model for feature extraction; to fine-tune the reference model for a plurality of different tasks; and to store a set of weight differences calculated during fine-tuning, each set corresponding to a different task. 7. The apparatus of claim 6, in which the reference model comprises a localization model. 8. The apparatus of claim 6, in which the reference model comprises a feature learning model. 9. The apparatus of claim 6, in which the at least one processor is further configured to store only non-zero weight differences. 10. The apparatus of claim 6, in which the at least one processor is further configured to apply a task specific classifier. 11. An apparatus for feature extraction, comprising: means for determining a reference model for feature extraction; means for fine-tuning the reference model for a plurality of different tasks; and means for storing a set of weight differences calculated during fine-tuning, each set corresponding to a different task. 12. The apparatus of claim 11, in which the reference model comprises a localization model. 13. The apparatus of claim 11, in which the reference model comprises a feature learning model. 14. The apparatus of claim 11, in which the means for storing stores only non-zero weight differences. 15. The apparatus of claim 11, further including means for applying a task specific classifier. 16. A non-transitory computer-readable medium having encoded thereon program code to be executed by a processor, the program code comprising: program code to determine a reference model for feature extraction; program code to fine-tune the reference model for a plurality of different tasks; and program code to store a set of weight differences calculated during fine-tuning, each set corresponding to a different task. 17. The computer-readable medium of claim 16, in which the reference model comprises a localization model. 18. The computer-readable medium of claim 16, in which the reference model comprises a feature learning model. 19. The computer-readable medium of claim 16, further comprising program code to store only non-zero weight differences. 20. The computer-readable medium of claim 16, further comprising program code to apply a task specific classifier.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Feature extraction includes determining a reference model for feature extraction and fine-tuning the reference model for different tasks. The method also includes storing a set of weight differences calculated during the fine-tuning. Each set may correspond to a different task.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Feature extraction includes determining a reference model for feature extraction and fine-tuning the reference model for different tasks. The method also includes storing a set of weight differences calculated during the fine-tuning. Each set may correspond to a different task.
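The feature-extraction claims above fine-tune a shared reference model per task and store only the non-zero weight differences for each task. The sketch below assumes both models are dictionaries of NumPy arrays keyed by layer name; the tolerance and the reconstruction helper are assumptions added for illustration.

```python
import numpy as np

def weight_differences(reference, fine_tuned, tol=1e-8):
    """Store only the non-zero differences between the fine-tuned weights for
    one task and the shared reference model, as (index, delta) pairs per tensor."""
    deltas = {}
    for name, ref_w in reference.items():
        diff = fine_tuned[name] - ref_w
        idx = np.flatnonzero(np.abs(diff) > tol)
        if idx.size:
            deltas[name] = (idx, diff.ravel()[idx])
    return deltas

def apply_task(reference, deltas):
    """Reconstruct the task-specific model from the reference weights plus deltas."""
    weights = {name: w.copy() for name, w in reference.items()}
    for name, (idx, vals) in deltas.items():
        flat = weights[name].ravel()   # view into the copied array
        flat[idx] += vals              # in-place update propagates to weights[name]
    return weights
```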
Disclosed is a multi-objective semiconductor product capacity planning system and method thereof. The system comprises a data input module, a capacity planning module and a computing module. The machine information of the production stations, the product information and the order information are input by the data input module. According to the order demand quantity, capacity information and product information, the capacity planning module plans a capacity allocation to determine the satisfied quantity of orders. The capacity allocation information is used to form a gene combination by a chromosome encoding method. The computing module calculates the gene combination several times to generate numerous candidate solutions by a multi-objective genetic algorithm. The candidate solutions are sorted to generate a new gene combination, and the calculation is repeated to form a candidate solution set until a stop condition is satisfied. The candidate solution set is transformed into numerous suggestive plans as options.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A multi-objective semiconductor product capacity planning system, comprising: a data input module, accepting inputs of machine information, product information and order information, the machine information defining a plurality of production stations and a capacity limit of the plurality of production stations, the product information defining a plurality of product categories and a production cost of the plurality of product categories, the order information defining a demand quantity of order for a plurality of customer orders and a product price; a capacity planning module, receiving input data from the data input module, coordinating the demand quantity of order with the machine information and the product information to plan a satisfied number of order to satisfy the capacity limit, determining the satisfied number of order by a capacity allocation proportion of order to decide a capacity utilization rate of each of the orders and a satisfied priority of order to arrange a production sequence of each of the orders, combining the capacity allocation proportion of order and the satisfied priority of order as a resource allocation and transforming the resource allocation into a gene combination by a chromosome encoding method; and a computing module, receiving the gene combination from the capacity planning module, calculating the gene combination several times to generate a plurality of new candidate solutions by using a multi-objective genetic algorithm, sorting the plurality of new candidate solutions by using a plurality of planning objectives as evaluation criteria to generate a new gene combination, and repeating the calculation to form a candidate solution set until a stop condition is satisfied, transforming the candidate solution set into a plurality of suggestive plans and selecting one of the plurality of suggestive plans to arrange the production stations for manufacturing the product categories. 2. The multi-objective semiconductor product capacity planning system of claim 1, wherein, the plurality of planning objectives of the computing module comprise a financial index related to a revenue, a profit or a gross margin, or a production index related to a production quantity or a capacity utilization. 3. The multi-objective semiconductor product capacity planning system of claim 1, wherein, the plurality of planning objectives are a revenue maximization, a profit maximization and a gross margin maximization. 4. The multi-objective semiconductor product capacity planning system of claim 1, wherein, the computing module sorts out and generates the new gene combination by a Pareto front method. 5. The multi-objective semiconductor product capacity planning system of claim 1, further comprising a report module for presenting the plurality of suggestive plans. 6. 
A multi-objective semiconductor product capacity planning method, applicable to a multi-objective semiconductor product capacity planning system comprising a data input module, a capacity planning module and a computing module, the method comprising: receiving machine information from a production machine of each production stations, and product information and order information input by the data input module; planning a satisfied number of order by the capacity planning module, deciding a capacity utilization rate of each of the orders as a capacity allocation proportion of order and arranging a production sequence of each of the orders as a satisfied priority of order, combining the capacity allocation proportion of order and the satisfied priority of order as a resource allocation to form a gene combination by a chromosome encoding method; using a multi-objective genetic algorithm for the evolution of the gene combination for generating a plurality of new candidate solutions by the computing module; using a plurality of planning objectives as the evaluation criteria to sort the plurality of new candidate solutions for generating a new gene combination by the computing module; repeating the calculation to form a candidate solution set by the computing module until a stop condition is satisfied; and transforming the candidate solution set into a plurality of suggestive plans and selecting one of the plurality of suggestive plans to arrange the production stations for manufacturing a product. 7. The method of claim 6, further comprising following step: using a revenue, a profit, a gross margin, a production quantity or a capacity utilization as an index of the plurality of planning objectives. 8. The method of claim 6, further comprising following step: serving a revenue maximization, a profit maximization and a gross margin maximization as the plurality of planning objectives through the planning module. 9. The method of claim 6, further comprising following step: sorting by a Pareto front method and generating the new gene combination. 10. The method of claim 6, further comprising following step: presenting the plurality of suggestive plans by using a report module.
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: Disclosed is a multi-objective semiconductor product capacity planning system and method thereof. The system comprises a data input module, a capacity planning module and a computing module. The machine information of the production stations, the product information and the order information are input by the data input module. According to the order demand quantity, capacity information and product information, the capacity planning module plans a capacity allocation to determine the satisfied quantity of orders. The capacity allocation information is used to form a gene combination by a chromosome encoding method. The computing module calculates the gene combination several times to generate numerous candidate solutions by a multi-objective genetic algorithm. The candidate solutions are sorted to generate a new gene combination, and the calculation is repeated to form a candidate solution set until a stop condition is satisfied. The candidate solution set is transformed into numerous suggestive plans as options.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Disclosed is a multi-objective semiconductor product capacity planning system and method thereof. The system comprises a data input module, a capacity planning module and a computing module. The machine information of the production stations, the product information and the order information are input by the data input module. According to the order demand quantity, capacity information and product information, the capacity planning module plans a capacity allocation to determine the satisfied quantity of orders. The capacity allocation information is used to form a gene combination by a chromosome encoding method. The computing module calculates the gene combination several times to generate numerous candidate solutions by a multi-objective genetic algorithm. The candidate solutions are sorted to generate a new gene combination, and the calculation is repeated to form a candidate solution set until a stop condition is satisfied. The candidate solution set is transformed into numerous suggestive plans as options.
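The capacity-planning claims above sort candidate solutions with a Pareto front over planning objectives such as revenue, profit and gross margin maximization. A minimal non-dominated-sorting sketch follows, assuming a candidate-by-objective score matrix in which every objective is to be maximized; it is not the full multi-objective genetic algorithm, only the selection criterion named in the claims.

```python
import numpy as np

def pareto_front(scores):
    """Return indices of non-dominated candidate plans.
    scores: (n_candidates, n_objectives) array, all objectives maximized
    (e.g. revenue, profit, gross margin)."""
    n = scores.shape[0]
    keep = []
    for i in range(n):
        dominated = False
        for j in range(n):
            if j != i and np.all(scores[j] >= scores[i]) and np.any(scores[j] > scores[i]):
                dominated = True   # candidate j is at least as good everywhere and better somewhere
                break
        if not dominated:
            keep.append(i)
    return keep
```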
The present invention provides a method and system for providing a platform for building a dynamic social knowledgebase. The method includes importing a traditional knowledgebase from an entity at regular intervals, receiving login credentials from a user via a web portal or mobile application, retrieving a social knowledgebase from a social network associated with the user, providing recommendations to the user based on articles that are more clickable in the traditional knowledgebase and the social knowledgebase, sending a notification to the user based on the latest happenings in the web portal or mobile application, monitoring a list of top contributors and influencers of the social knowledgebase in the social network associated with the user, and sending the list of top contributors and influencers to a backend system for analysis and use.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for providing a platform for building a dynamic knowledgebase, the method comprising: a. importing a traditional knowledgebase from an entity, wherein the traditional knowledgebase is imported at regular intervals; b. receiving login credentials from a user, wherein the login credentials are received via at least a web portal or a mobile application; c. retrieving social knowledgebase from a social network associated with the user, wherein the social knowledgebase comprises at least one of one or more articles created by the user, comments on the one or more articles, wherein the social knowledgebase information is retrieved upon receiving an approval from the user; d. providing recommendations to the user, wherein the recommendations are based on articles that are more clickable in the traditional knowledgebase and the social knowledgebase; e. sending a notification to the user, wherein the notification comprises the latest happenings in at least the web portal or the mobile application; f. monitoring a list of top contributors and influencers of the social knowledgebase in the social network associated with the user; and g. sending the list of top contributors and influencers to a backend system, wherein the backend system provides targeted campaigns and call center services to the user. h. allowing users to submit articles for the growth of the social knowledgebase. 2. The method as claimed in claim 1, wherein the traditional knowledgebase comprises at least one of product catalogues, frequently asked questions, usage instructions, trouble shooting tips, customer reviews, visual tours, customer information, dealer information, videos and URLs. 3. The method as claimed in claim 1, wherein the login credentials are received upon the user inputting the login credentials into the web portal or the mobile application on a communication device associated with the user. 4. The method as claimed in claim 1, wherein the login credentials are fetched from a social network profile associated with the user, wherein the login credentials are fetched upon receiving approval from the user. 5. The method as claimed in claim 1, wherein the recommendations are at least one of text, image, video. 6. The method as claimed in claim 1, wherein the articles that are more clickable are based on one or more parameters. 7. The method as claimed in claim 6, wherein the one or more parameters are at least one of keywords, context, trends, (user's personal) social network reading preferences/feedback (View/soft/hard recommendation/comments). 8. The method as claimed as in claim 1, wherein the notifications are sent to the user by at least one of in the web portal, mobile application, E-mail, message on the social network associated with the user. 9. A system for providing a platform for building a dynamic knowledgebase, the system comprising: a. an import module, wherein the import module is configured to import a traditional knowledgebase from an entity, wherein the import module imports the traditional knowledgebase at regular intervals; b. a receiving module, wherein the receiving module is configured to receive login credentials from a user, wherein the receiving module receives the login credentials via a web portal or mobile application; c. 
a retrieval module, wherein the retrieval module is configured to retrieve a social knowledgebase from a social network associated with the user, wherein the social knowledgebase comprises at least one of one or more articles created by the user, comments on the one or more articles, wherein the social knowledge is retrieved upon receiving an approval from the entity; d. a recommendation module, wherein the recommendation module is configured to provide recommendations to the user, wherein the recommendations are based on articles that are more clickable in the traditional knowledgebase and the social knowledgebase; e. a notification module, wherein the notification module is configured to send a notification to the user, wherein the notification comprises the latest happenings in the web portal or mobile application; f. a monitoring module, wherein the monitoring module is configured to monitor a list of top contributors and influencers of the social knowledgebase in the social network associated with the user; and g. a transmission module, wherein the transmission module is configured to send the list of top contributors and influencers to a backend system, wherein the backend system provides targeted campaigns and call center services to the user. h. a submission module, wherein the submission module is configured to allow the user to write articles and contribute to the growth of the social knowledgebase. 10. The system as claimed in claim 9, wherein the recommendations sent by the recommendation module are at least one of text, image, video. 11. The system as claimed as in claim 9, wherein the notification module sends the notifications to the user by at least one of in the web portal, mobile application, E-mail, message on the social network associated with the user.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: The present invention provides a method and system for providing a platform for building a dynamic social knowledgebase. The method includes importing a traditional knowledgebase from an entity at regular intervals, receiving login credentials from a user via a web portal or mobile application, retrieving a social knowledgebase from a social network associated with the user, providing recommendations to the user based on articles that are more clickable in the traditional knowledgebase and the social knowledgebase, sending a notification to the user based on the latest happenings in the web portal or mobile application, monitoring a list of top contributors and influencers of the social knowledgebase in the social network associated with the user, and sending the list of top contributors and influencers to a backend system for analysis and use.
G06N504
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: The present invention provides a method and system for providing a platform for building a dynamic social knowledgebase. The method includes importing a traditional knowledgebase from an entity at regular intervals, receiving login credentials from a user via a web portal or mobile application, retrieving a social knowledgebase from a social network associated with the user, providing recommendations to the user based on articles that are more clickable in the traditional knowledgebase and the social knowledgebase, sending a notification to the user based on the latest happenings in the web portal or mobile application, monitoring a list of top contributors and influencers of the social knowledgebase in the social network associated with the user, and sending the list of top contributors and influencers to a backend system for analysis and use.
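The knowledgebase records above rank recommendations by how "clickable" articles are across the traditional and social knowledgebases. A minimal sketch of that ranking step follows, assuming each knowledgebase is a list of article records carrying a click count; the 'clicks' field and the top_n cut-off are assumptions, not details from the patent.

```python
def recommend_articles(traditional_kb, social_kb, top_n=5):
    """Rank articles from both knowledgebases by click count and return the top ones.
    Each knowledgebase is assumed to be a list of dicts with 'title' and 'clicks'."""
    combined = list(traditional_kb) + list(social_kb)
    combined.sort(key=lambda a: a.get("clicks", 0), reverse=True)
    return combined[:top_n]
```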
Provided is an information processing apparatus including a course setting unit that sets a course containing at least one place associated with positional information, a course information generation unit that generates first course information regarding the course on the basis of first user behavior information generated from a behavior of a first user having visited the course, and a course information provision unit that provides the first course information to a second user different from the first user.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. An information processing apparatus comprising: a course setting unit that sets a course containing at least one place associated with positional information; a course information generation unit that generates first course information regarding the course on the basis of first user behavior information generated from a behavior of a first user having visited the course; and a course information provision unit that provides the first course information to a second user different from the first user. 2. The information processing apparatus according to claim 1, further comprising: a place identification unit that identifies the place the first user has visited, wherein the course information generation unit generates the first course information which displays, in association with the place, the first user behavior information generated at the place in a case where the place the first user has visited is contained in the course. 3. The information processing apparatus according to claim 2, wherein the place identification unit further identifies the place the second user has visited, and the information processing apparatus further includes a course information update unit that updates the first course information by additionally associating second user behavior information generated from a behavior of the second user at the place with the place in a case where the place the second user has visited is contained in the course. 4. The information processing apparatus according to claim 3, wherein the course information update unit updates the first course information by adding a new place to the first course information and associating the second user behavior information with the place in a case where the place the second user has visited is not contained in the course. 5. The information processing apparatus according to claim 4, wherein the course information update unit adds a new place to the first course information and also adds the place to the course. 6. The information processing apparatus according to claim 2, wherein the course setting unit sets the course containing the place the first user has visited. 7. The information processing apparatus according to claim 2, wherein the place identification unit further identifies a place the second user has visited, and in a case where the place the second user has visited is contained in the course, the course information generation unit generates second course information regarding the course, the second course information displaying, in association with the place, second user behavior information generated from a behavior of the second user at the place. 8. The information processing apparatus according to claim 2, wherein the place identification unit calculates a moving speed of the first user on the basis of a history of positional information of the first user, distinguishes between a staying period and a moving period of the first user on the basis of the moving speed, and identifies a location of the first user in the staying period as the place the first user has visited. 9. The information processing apparatus according to claim 8, wherein the place identification unit distinguishes a period in which the moving speed is smaller than a first threshold as the staying period and distinguishes a period in which the moving speed is larger than the first threshold as the moving period. 10. 
The information processing apparatus according to claim 9, wherein in a case where a difference between the first threshold and a local maximum value or a local minimum value of the moving speed in a first staying period or a first moving period is smaller than or equal to a predetermined value, the place identification unit combines the first staying period or the first moving period with a second staying period or a second moving period before or after the first staying period or the first moving period. 11. The information processing apparatus according to claim 10, wherein the place identification unit combines the first staying period or the first moving period with one of the second staying period and the second moving period which has a larger difference between the first threshold and the local maximum value or the local minimum value of the moving speed in the period than the other has. 12. The information processing apparatus according to claim 11, wherein the first threshold is set on the basis of a frequency of staying and a frequency of moving for each moving speed in a behavior recognition result for the first user or a behavior recognition result for an average user. 13. The information processing apparatus according to claim 8, wherein the place identification unit calculates a moving acceleration of the first user on the basis of the history of the positional information, removes noise data from the history of the positional information on the basis of the moving acceleration, and then, calculates the moving speed of the first user on the basis of the history of the positional information. 14. The information processing apparatus according to claim 2, further comprising: a route identification unit that identifies a moving route of the first user, wherein the course information generation unit generates the first course information which displays, on the moving route, the place the first user has visited. 15. The information processing apparatus according to claim 14, wherein the route identification unit calculates a moving acceleration of the first user on the basis of a history of the positional information, removes noise data from the history of the positional information on the basis of the moving acceleration, and then, traces the history of the positional information to identify the moving route. 16. The information processing apparatus according to claim 15, wherein in a case where the moving acceleration when the first user moves from a first point to a second point is larger than a positive threshold, the route identification unit removes data corresponding to the second point as the noise data. 17. The information processing apparatus according to claim 15, wherein in a case where the moving acceleration when the first user moves from a first point to a second point is smaller than a negative threshold, the route identification unit refers to a history of a moving distance of the last three sections having, as the latest section, a section from the first point to the second point, and removes, as the noise data, data corresponding to a point sandwiched by sections of the last three sections the moving distance of which is not the smallest. 18. 
An information processing method comprising: setting a course containing at least one place associated with positional information; generating first course information regarding the course on the basis of first user behavior information generated from a behavior of a first user having visited the course; and providing the first course information to a second user different from the first user. 19. A program causing a computer to execute: a function of setting a course containing at least one place associated with positional information; a function of generating first course information regarding the course on the basis of first user behavior information generated from a behavior of a first user having visited the course; and a function of providing the first course information to a second user different from the first user.
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: Provided is an information processing apparatus including a course setting unit that sets a course containing at least one place associated with positional information, a course information generation unit that generates first course information regarding the course on the basis of first user behavior information generated from a behavior of a first user having visited the course, and a course information provision unit that provides the first course information to a second user different from the first user.
G06N504
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Provided is an information processing apparatus including a course setting unit that sets a course containing at least one place associated with positional information, a course information generation unit that generates first course information regarding the course on the basis of first user behavior information generated from a behavior of a first user having visited the course, and a course information provision unit that provides the first course information to a second user different from the first user.
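The course-information claims above compute a moving speed from the user's positional history and split it into staying and moving periods with a speed threshold. A minimal segmentation sketch follows, assuming timestamped planar coordinates and at least two samples; the threshold value and segment representation are assumptions, and the acceleration-based noise removal and period-merging steps described in the claims are omitted.

```python
import numpy as np

def segment_stays(timestamps, positions, speed_thr=0.5):
    """Split a positional history into staying/moving periods via a speed threshold.
    timestamps: (n,) seconds; positions: (n, 2) planar coordinates in metres.
    Returns a list of (start_index, end_index, 'stay' | 'move') segments."""
    dt = np.diff(timestamps)
    dist = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    speed = dist / np.maximum(dt, 1e-9)
    staying = speed < speed_thr                 # True where the user is staying
    segments, start = [], 0
    for i in range(1, len(staying)):
        if staying[i] != staying[i - 1]:
            segments.append((start, i, "stay" if staying[i - 1] else "move"))
            start = i
    segments.append((start, len(staying), "stay" if staying[-1] else "move"))
    return segments
```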
The present invention relates to a method and apparatus for tailoring the output of an intelligent automated assistant. One embodiment of a method for conducting an interaction with a human user includes collecting data about the user using a multimodal set of sensors positioned in a vicinity of the user, making a set of inferences about the user in accordance with the data, and tailoring an output to be delivered to the user in accordance with the set of inferences.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for conducting an interaction with a human user, the method comprising: collecting data about the user using a multimodal set of sensors positioned in a vicinity of the user; making a set of inferences about the user in accordance with the data; and tailoring an output to be delivered to the user in accordance with the set of inferences. 2. The method of claim 1, wherein the set of inferences relates to one or more personal characteristics of the user. 3. The method of claim 2, wherein the one or more personal characteristics include an age of the user. 4. The method of claim 2, wherein the one or more personal characteristics include a gender of the user. 5. The method of claim 2, wherein the one or more personal characteristics include a socioeconomic group to which the user belongs. 6. The method of claim 1, wherein the set of inferences relates to a current affective state of the user. 7. The method of claim 1, wherein the making the set of inferences comprises: extracting at least one feature from the data; and classifying the at least one feature in accordance with at least one model that defines a potential characteristic of the user. 8. The method of claim 7, wherein the plurality of features include at least one of: a lexical content of an utterance made by the user or a linguistic content of an utterance made by the user. 9. The method of claim 7, wherein the plurality of features include at least one of: one or more pauses within an utterance made by the user, one or more increments in a duration of phones uttered by the user relative to a pre-computed average, a latency of the user to produce a response to a prompt, a probability distribution of unit durations, or timing Information related to one or more user interruptions to a previous output. 10. The method of claim 7, wherein the plurality of features include at least one of: a fundamental frequency range within an utterance made by the user, a fundamental frequency slope along one or more words, a probability distribution of a slope, or a probability distribution of a plurality of fundamental frequency values. 11. The method of claim 7, wherein the plurality of features include at least one of: a range of energy excursions within an utterance made by the user, a slope of energy within an utterance made by the user, a probability distribution of normalized energy, or a probability distribution of energy slopes. 12. The method of claim 7, wherein the plurality of features include at least one of: a color of at least a portion of a face of the user, a shape of at least a portion of a face of the user, a texture of at least a portion of a face of the user, an orientation of at least a portion of a face of the user, or a movement of at least a portion of a face of the user. 13. The method of claim 7, wherein the plurality of features include at least one of: whether the user is looking at a display on which the output is to be presented, a percentage of time spent by the user looking at a display on which the output is to be presented, a part of a display on which the output is to be presented on which the user is focused, how close a focus of the user is to a desired part of a display on which the output is to be presented, or a percentage of time spent by the user looking at a desired part of a display on which the output is to be presented. 14. 
The method of claim 7, wherein the plurality of features include at least one of: a shape of an area below a face of the user, a color of an area below a face of the user, or a texture of an area below a face of the user. 15. The method of claim 7, wherein the plurality of features include at least one of: a pose of a portion of a body of the user as a function of time or a motion of a portion of a body of the user as a function of time. 16. The method of claim 7, wherein the plurality of features include at least one of: a shape of footwear worn by the user, a color of footwear worn by the user, or a texture of footwear worn by the user. 17. The method of claim 7, wherein the classifying is performed using a statistical classifier. 18. The method of claim 7, wherein the classifying is performed using a training-based classifier. 19. A non-transitory computer readable medium containing an executable program for conducting an interaction with a human user, where the program performs steps comprising: collecting data about the user using a multimodal set of sensors positioned in a vicinity of the user; making a set of inferences about the user in accordance with the data; and tailoring an output to be delivered to the user in accordance with the set of inferences. 20. A system for conducting an interaction with a human user, the system comprising: a plurality of multimodal sensors positioned in a vicinity of the user for collecting data about the user; a plurality of classifiers for making a set of inferences about the user in accordance with the data; and an output selection module for tailoring an output to be delivered to the user in accordance with the set of inferences.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: The present invention relates to a method and apparatus for tailoring the output of an intelligent automated assistant. One embodiment of a method for conducting an interaction with a human user includes collecting data about the user using a multimodal set of sensors positioned in a vicinity of the user, making a set of inferences about the user in accordance with the data, and tailoring an output to be delivered to the user in accordance with the set of inferences.
G06N7005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: The present invention relates to a method and apparatus for tailoring the output of an intelligent automated assistant. One embodiment of a method for conducting an interaction with a human user includes collecting data about the user using a multimodal set of sensors positioned in a vicinity of the user, making a set of inferences about the user in accordance with the data, and tailoring an output to be delivered to the user in accordance with the set of inferences.
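For readers unfamiliar with the pipeline the above claims describe (multimodal features, inference about user characteristics, tailored output), here is a minimal editorial sketch in Python. It is not the patented implementation: the feature names, the tiny training set, and the logistic-regression classifier are illustrative stand-ins for the statistical/training-based classifiers of claims 17-18.

```python
# Illustrative sketch only: infer a user characteristic (e.g. age group) from
# multimodal features, then tailor the assistant's output to that inference.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature vector per interaction: [pause_count, mean_f0_hz,
# energy_slope, gaze_on_display_pct] -- stand-ins for the features in claims 8-13.
X_train = np.array([
    [2, 210.0, -0.1, 0.9],
    [1, 190.0,  0.0, 0.8],
    [5, 120.0, -0.4, 0.3],
    [6, 115.0, -0.5, 0.2],
])
y_train = np.array(["child", "child", "adult", "adult"])  # inferred characteristic

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def tailor_output(features):
    """Classify the user, then pick an output style for the inferred class."""
    label = clf.predict(np.array([features]))[0]
    return "simple wording, slower speech" if label == "child" else "standard wording"

print(tailor_output([3, 200.0, -0.2, 0.7]))
```

The point of the sketch is the shape of the flow rather than the specific model: classify the collected data first, then select the output variant from the inferred characteristic.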
A method comprises building a forecast for an autonomous agent. The building at least comprises assigning a selected parameter of the autonomous agent to a state value, adding a new policy to a set of policies where the new policy maps actions of the autonomous agent for optimizing the state value, and adding a new forecast to a set of forecasts where the forecast at least comprises a prediction of a next state of the autonomous agent following execution of the new policy. The mapped actions are initiated according to the new policy. A state of the autonomous agent is evaluated following completion of the mapped actions. The evaluation at least comprises comparing the state with the forecast. Whether to build an additional forecast is determined. The determining is at least in part based on the evaluation.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method comprising the steps of: building a forecast for an autonomous agent, said building at least comprising: assigning a selected parameter of said autonomous agent to a state value; adding a new policy to a set of policies, said new policy mapping actions of said autonomous agent for optimizing said state value; and adding a new forecast to a set of forecasts, said forecast at least comprising a prediction of a next state of said autonomous agent following execution of said new policy; initiating said mapped actions according to said new policy; evaluating a state of said autonomous agent following completion of said mapped actions, said evaluation at least comprising comparing said state with said forecast; and determining whether to build an additional forecast, said determining at least in part based on said evaluation. 2. The method as recited in claim 1, further comprising the steps of: determining if said forecast is ineffective; and pruning said forecast from said set of forecasts upon said determination. 3. The method as recited in claim 1, further comprising the step of determining whether to terminate the method, said determining at least in part based on said evaluation. 4. The method as recited in claim 1, in which said set of forecasts comprises a hierarchical structure. 5. The method as recited in claim 1, in which said new policy further comprises starting and stopping criteria. 6. The method as recited in claim 1, in which a state predicted by any forecast in said set of forecasts is associated with at least one of said policies in said set of policies. 7. The method as recited in claim 1, in which said selected parameter comprises at least one of an observation signal, a forecast of interest, a function of a combination of observation signals, and a function of forecast values in said set of forecasts. 8. The method as recited in claim 3, in which said step of determining whether to terminate the method is further based on a threshold value. 9. The method as recited in claim 1, in which said forecast comprises General Value Functions. 10. A method comprising: steps for building a forecast for an autonomous agent; steps for initiating mapped actions; steps for evaluating a state of said autonomous agent following completion of said mapped actions; and steps for determining whether to build an additional forecast. 11. The method as recited in claim 10, further comprising: steps for determining if said forecast is ineffective; steps for pruning said forecast from said set of forecasts upon said determination; and steps for determining whether to terminate the method. 12. 
A non-transitory computer-readable storage medium with an executable program stored thereon, wherein the program instructs one or more processors to perform the following steps: building a forecast for an autonomous agent, said building at least comprising: assigning a selected parameter of said autonomous agent to a state value; adding a new policy to a set of policies, said new policy mapping actions of said autonomous agent for optimizing said state value; and adding a new forecast to a set of forecasts, said forecast at least comprising a prediction of a next state of said autonomous agent following execution of said new policy; initiating said mapped actions according to said new policy; evaluating a state of said autonomous agent following completion of said mapped actions, said evaluation at least comprising comparing said state with said forecast; and determining whether to build an additional forecast, said determining at least in part based on said evaluation. 13. The program instructing the one or more processors as recited in claim 12, further comprising the steps of: determining if said forecast is ineffective; and pruning said forecast from said set of forecasts upon said determination. 14. The program instructing the one or more processors as recited in claim 12, further comprising the step of determining whether to terminate the method, said determining at least in part based on said evaluation. 15. The program instructing the one or more processors as recited in claim 12, in which said set of forecasts comprises a hierarchical structure. 16. The program instructing the one or more processors as recited in claim 12, in which said new policy further comprises starting and stopping criteria. 17. The program instructing the one or more processors as recited in claim 12, in which a state predicted by any forecast in said set of forecasts is associated with at least one of said policies in said set of policies. 18. The program instructing the one or more processors as recited in claim 12, in which said selected parameter comprises at least one of an observation signal, a forecast of interest, a function of a combination of observation signals, and a function of forecast values in said set of forecasts. 19. The program instructing the one or more processors as recited in claim 14, in which said step of determining whether to terminate the method is further based on a threshold value. 20. The program instructing the one or more processors as recited in claim 12, in which said forecast comprises General Value Functions.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A method comprises building a forecast for an autonomous agent. The building at least comprises assigning a selected parameter of the autonomous agent to a state value, adding a new policy to a set of policies where the new policy maps actions of the autonomous agent for optimizing the state value, and adding a new forecast to a set of forecasts where the forecast at least comprises a prediction of a next state of the autonomous agent following execution of the new policy. The mapped actions are initiated according to the new policy. A state of the autonomous agent is evaluated following completion of the mapped actions. The evaluation at least comprises comparing the state with the forecast. Whether to build an additional forecast is determined. The determining is at least in part based on the evaluation.
G06N504
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A method comprises building a forecast for an autonomous agent. The building at least comprises assigning a selected parameter of the autonomous agent to a state value, adding a new policy to a set of policies where the new policy maps actions of the autonomous agent for optimizing the state value, and adding a new forecast to a set of forecasts where the forecast at least comprises a prediction of a next state of the autonomous agent following execution of the new policy. The mapped actions are initiated according to the new policy. A state of the autonomous agent is evaluated following completion of the mapped actions. The evaluation at least comprises comparing the state with the forecast. Whether to build an additional forecast is determined. The determining is at least in part based on the evaluation.
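The claims above describe a loop of building forecasts and policies, executing the mapped actions, and evaluating the resulting state against the forecast. Below is a toy sketch of that loop under simplifying assumptions (scalar state, random action effects, hand-picked thresholds); it is not the patented method and does not implement General Value Functions.

```python
# Toy sketch of the forecast-building loop: assign a state value, add a policy
# and a forecast, execute the mapped actions, compare the resulting state with
# the forecast, then decide whether to prune or to stop building.
import random

policies, forecasts = [], []

def build_forecast(state_value):
    policy = {"act": lambda s: s + random.uniform(0.0, 2.0)}     # mapped actions
    forecast = {"policy": policy, "predicted_next_state": state_value + 1.0}
    policies.append(policy)
    forecasts.append(forecast)
    return forecast

state = 0.0
for _ in range(10):
    forecast = build_forecast(state)
    state = forecast["policy"]["act"](state)                     # initiate mapped actions
    error = abs(state - forecast["predicted_next_state"])        # evaluate vs. forecast
    if error > 0.8:                  # ineffective forecast: prune it (cf. claim 2)
        forecasts.remove(forecast)
    if error < 0.1:                  # accurate enough: stop building (cf. claim 3)
        break

print(f"{len(forecasts)} forecasts kept, final state {state:.2f}")
```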
In one embodiment, a character engine models a character that interacts with users. The character engine receives user input data from a user device, and analyzes the user input data to determine a user intent and an assessment domain. Subsequently, the character engine selects inference algorithm(s) that include machine learning capabilities based on the intent and the assessment domain. The character engine computes a response to the user input data based on the selected inference algorithm(s) and a set of personality characteristics that are associated with the character. Finally, the character engine causes the user device to output the response to the user. In this fashion, the character engine includes sensing functionality, thinking and learning functionality, and expressing functionality. By aggregating advanced sensing techniques, inference algorithms, character-specific personality characteristics, and expressing algorithms, the character engine provides a realistic illusion that users are interacting with the character.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-implemented method for generating a character response during an interaction with a user, the method comprising: evaluating user input data that is associated with a user device to determine a user intent and an assessment domain; selecting at least one inference algorithm from a plurality of inference algorithms based on at least one of the user intent and the assessment domain, wherein the at least one inference algorithm implements machine learning functionality; computing a character response to the user input data based on the at least one inference algorithm, the user input data, a set of personality characteristics associated with a character, and data representing knowledge associated with the character; and causing the user device to output the character response to the user. 2. The computer-implemented method of claim 1, wherein the at least one inference algorithm comprises at least a first inference algorithm and a second inference algorithm, wherein the first inference algorithm implements machine learning functionality. 3. The computer-implemented method of claim 1, wherein computing the character response to the user input data comprises: generating an inference based on the at least one inference algorithm, the user input data, and the data representing knowledge associated with the character; selecting the set of personality characteristics from a plurality of sets of personality characteristics based on at least one of the inference, the user intent, and the assessment domain; and generating the character response based on the inference and the set of personality characteristics. 4. The computer-implemented method of claim 1, wherein the data representing knowledge associated with the character includes information obtained from at least one of a World Wide Web, a script, a book, and a user-specific history. 5. The computer-implemented method of claim 1, further comprising updating the data representing knowledge associated with the character based on at least one of the user input data and the character response. 6. The computer-implemented method of claim 1, further comprising, in a batch mode: generating training data based on at least one of the user input data, the character response, the data representing knowledge associated with the character, and one or more data sources; and performing one or more operations that train the at least one inference algorithm based on the training data. 7. The computer-implemented method of claim 5, wherein the one or more data sources include a gamification platform that includes at least one of software and hardware that implement game mechanics to entice the user to provide input that can be used to train the at least one inference algorithm. 8. The computer-implemented method of claim 1, wherein causing the user device to output the character response comprises generating at least one of a physical action, a sound, and an image. 9. The computer-implemented method of claim 1, wherein the user device comprises a robot, a walk around character, a toy, or a computing device. 10. The computer-implemented method of claim 1, wherein the set of personality characteristics comprises a plurality of parameters, wherein each parameter is associated with a different personality dimension. 11. 
The computer-implemented method of claim 1, wherein the at least one inference algorithm comprises a Markov model, a computer vision system, a theory of mind system, a neural network, or a support vector machine. 12. A character engine that executes on one or more processors, the character engine comprising: a user intent engine that, when executed by the one or more processors, evaluates user input data that is associated with a user device to determine a user intent; a domain engine that, when executed by the one or more processors, evaluates at least one of the user input data and the user intent to determine an assessment domain; and an inference engine that, when executed by the one or more processors: selects at least one inference algorithm from a plurality of inference algorithms based on at least one of the user intent and the assessment domain, wherein the at least one inference algorithm implements machine learning functionality; and computes a character response to the user input data based on the at least one inference algorithm, the user input data, a set of personality characteristics associated with a character, and data representing knowledge associated with the character; and an output device abstraction infrastructure that, when executed by the one or more processors, causes the user device to output the character response to the user. 13. The character engine of claim 12, wherein the at least one inference algorithm comprises at least a first inference algorithm and a second inference algorithm, wherein the first inference algorithm implements machine learning functionality. 14. The character engine of claim 12, wherein the inference engine computes the character response to the user input data by: generating an inference based on the at least one inference algorithm, the user input data, and the data representing knowledge associated with the character; selecting the set of personality characteristics from a plurality of sets of personality characteristics based on at least one of the inference, the user intent, and the assessment domain; and generating the character response based on the inference and the set of personality characteristics. 15. The character engine of claim 12, wherein the data representing knowledge associated with the character includes information obtained from at least one of a World Wide Web, a movie script, a book, and a user-specific history. 16. The character engine of claim 12, wherein causing the user device to output the character response comprises generating at least one of a physical action, a sound, and an image. 17. The character engine of claim 12, wherein the user device comprises a robot, a walk around character, a toy, or a computing device. 18. The character engine of claim 12, wherein the set of personality characteristics comprises a plurality of parameters, wherein each parameter is associated with a different personality dimension. 19. The character engine of claim 12, wherein the at least one inference algorithm comprises a Markov model, a computer vision system, a theory of mind system, a neural network, or a support vector machine. 20. 
A computer-readable storage medium including instructions that, when executed by a processor, cause the processor to generate a character response during an interaction with a user by performing the steps of: selecting at least one inference algorithm from a plurality of inference algorithms based on at least one of a user intent and an assessment domain, wherein the at least one inference algorithm implements machine learning functionality; causing the at least one inference algorithm to compute an inference based on user input data and data representing knowledge associated with a character; and causing a personality engine associated with the character to compute a character response to the user input data based on the inference.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: In one embodiment, a character engine models a character that interacts with users. The character engine receives user input data from a user device, and analyzes the user input data to determine a user intent and an assessment domain. Subsequently, the character engine selects inference algorithm(s) that include machine learning capabilities based on the intent and the assessment domain. The character engine computes a response to the user input data based on the selected inference algorithm(s) and a set of personality characteristics that are associated with the character. Finally, the character engine causes the user device to output the response to the user. In this fashion, the character engine includes sensing functionality, thinking and learning functionality, and expressing functionality. By aggregating advanced sensing techniques, inference algorithms, character-specific personality characteristics, and expressing algorithms, the character engine provides a realistic illusion that users are interacting with the character.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: In one embodiment, a character engine models a character that interacts with users. The character engine receives user input data from a user device, and analyzes the user input data to determine a user intent and an assessment domain. Subsequently, the character engine selects inference algorithm(s) that include machine learning capabilities based on the intent and the assessment domain. The character engine computes a response to the user input data based on the selected inference algorithm(s) and a set of personality characteristics that are associated with the character. Finally, the character engine causes the user device to output the response to the user. In this fashion, the character engine includes sensing functionality, thinking and learning functionality, and expressing functionality. By aggregating advanced sensing techniques, inference algorithms, character-specific personality characteristics, and expressing algorithms, the character engine provides a realistic illusion that users are interacting with the character.
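A hedged sketch of the dispatch step the claims describe: the (user intent, assessment domain) pair selects an inference routine, and a set of personality parameters shapes the final reply. All routine names, the knowledge dictionary, and the personality values are invented for illustration; a real system would plug in the machine-learning inference algorithms listed in claim 11.

```python
# Illustrative dispatch: select an inference routine by (intent, domain), compute
# an inference from character knowledge, then shape the reply with personality.
def trivia_inference(text, knowledge):
    return knowledge.get(text.lower(), "I am not sure.")

def smalltalk_inference(text, knowledge):
    return "Nice to chat with you!"

INFERENCE_ALGORITHMS = {
    ("question", "trivia"): trivia_inference,
    ("greeting", "smalltalk"): smalltalk_inference,
}

PERSONALITY = {"enthusiasm": 0.9}   # one parameter per personality dimension (claim 10)
KNOWLEDGE = {"who wrote hamlet?": "Shakespeare wrote Hamlet."}

def character_response(user_text, intent, domain):
    infer = INFERENCE_ALGORITHMS[(intent, domain)]   # select the inference algorithm
    inference = infer(user_text, KNOWLEDGE)
    return inference + (" :)" if PERSONALITY["enthusiasm"] > 0.7 else "")

print(character_response("Who wrote Hamlet?", "question", "trivia"))
```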
Methods, mediums, and systems are described for providing a platform coupled to one or more rules engines. The platform may provide runtime rule services to one or more applications. Different rules engines may be used for different types of rules, such as calculations, decisions, process control, transformation, and validation. Rules engines can be added, removed, and reassigned to the platform. When the platform receives a request for services from an application, the platform selects one of the rules engines to handle the request and instructs the selected rules engine to execute the rule. The rules engine may be selected automatically. The platform may be implemented through a service-oriented architecture.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system for facilitating communication between one or more applications and one or more rules engines, the system comprising: a design tool configured to provide a platform syntax for defining rules for the one or more applications and to associate each rule with a respective predetermined one of the one or more rules engines, wherein the association of each rule with the respective predetermined one of the rules engines is based on predetermined criteria; a rule processor configured to translate each rule defined using the design tool from the platform syntax into a rules engine syntax of the respective predetermined one of the rules engines; a repository configured to store each rule, each translated rule defined using the design tool, an association of each rule defined using the design tool to its respective translated rule, and the association of each rule defined using the design tool with its respective predetermined one of the rules engines; and a rules execution platform configured to receive a request from one of the applications to execute one of the rules defined using the design tool, identify the respective predetermined one of the rules engines associated with the requested rule, and transmit the requested rule to the respective predetermined one of the rules engines associated with the requested rule. 2. The system of claim 1 wherein the predetermined criteria comprises an environment in which the design tool is initiated. 3. The system of claim 1 wherein the predetermined criteria comprises information corresponding to a user that defined the requested rule using the design tool. 4. The system of claim 3 wherein the information comprises a role of the user. 5. The system of claim 4 wherein the role comprises a functional role of the user. 6. The system of claim 1 wherein the design tool is configured to present various views depending on a role of a user accessing the design tool. 7. The system of claim 6 wherein the role comprises a system role.
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: Methods, mediums, and systems are described for providing a platform coupled to one or more rules engines. The platform may provide runtime rule services to one or more applications. Different rules engines may be used for different types of rules, such as calculations, decisions, process control, transformation, and validation. Rules engines can be added, removed, and reassigned to the platform. When the platform receives a request for services from an application, the platform selects one of the rules engines to handle the request and instructs the selected rules engine to execute the rule. The rules engine may be selected automatically. The platform may be implemented through a service-oriented architecture.
G06N5027
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Methods, mediums, and systems are described for providing a platform coupled to one or more rules engines. The platform may provide runtime rule services to one or more applications. Different rules engines may be used for different types of rules, such as calculations, decisions, process control, transformation, and validation. Rules engines can be added, removed, and reassigned to the platform. When the platform receives a request for services from an application, the platform selects one of the rules engines to handle the request and instructs the selected rules engine to execute the rule. The rules engine may be selected automatically. The platform may be implemented through a service-oriented architecture.
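As a rough illustration (not the patented platform), the following sketch shows the core routing idea: each rule is stored with a type, and the platform automatically selects the rules engine registered for that type when an application requests execution. The two toy engines simply evaluate Python expressions to keep the example self-contained; real engines would be separate calculation, validation, or decision services.

```python
# Illustrative platform routing: repository maps rule name -> (type, body),
# the platform picks the engine registered for that type and executes the rule.
class CalcEngine:
    def execute(self, rule, data):
        return eval(rule, {}, data)          # e.g. "price * qty" (toy only)

class ValidationEngine:
    def execute(self, rule, data):
        return bool(eval(rule, {}, data))    # e.g. "age >= 18" (toy only)

ENGINES = {"calculation": CalcEngine(), "validation": ValidationEngine()}
REPOSITORY = {
    "order_total": ("calculation", "price * qty"),
    "adult_check": ("validation", "age >= 18"),
}

def platform_execute(rule_name, data):
    rule_type, rule_body = REPOSITORY[rule_name]
    return ENGINES[rule_type].execute(rule_body, data)   # engine selected automatically

print(platform_execute("order_total", {"price": 3.5, "qty": 4}))
print(platform_execute("adult_check", {"age": 20}))
```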
Systems and methods for classifying test data based on maximum margin classifier are described. In one implementation, the method includes obtaining training data having a predefined sample size, wherein the training data is composed of separable data-sets. For the training data, a Vapnik-Chervonenkis (VC) dimension for the training data is determined. For the VC dimension, an exact bound is subsequently determined. The exact bound may be minimized for obtaining the minimum VC classifier for predicting at least one class to which samples of the training data belong.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for classifying binary data, the method comprising: obtaining training data having a predefined sample size, wherein the training data is composed of separable datasets; determining an exact bound on Vapnik-Chervonenkis (VC) dimension of a hyperplane classifier for the training data, wherein the exact bound is based on one or more variables defining the hyperplane; and minimizing the exact bound on the VC dimension; and based on the minimizing of the exact bound, determining the optimal values of the one or more variables defining the hyperplane; generating a binary classifier for predicting one class to which a given data sample belongs. 2. The method as claimed in claim 1, wherein the exact bound is a function of the distances of closest and furthest from amongst the training data from the hyperplane, wherein the hyperplane classifies a plurality of points within the training data with zero error. 3. The method as claimed in claim 1, wherein the datasets are one of linearly separable datasets and non-linearly separable datasets. 4. The method as claimed in claim 2, wherein for the notional hyperplane depicted by the following relation: u^T x + v = 0, the exact bound on the VC dimension for the hyperplane classifier is a function of h, being defined by: h = max_{i=1,2,...,M} |u^T x_i + v| / min_{i=1,2,...,M} |u^T x_i + v|, wherein x_i, i=1,2,...,M depict data points within the training data. 5. The method as claimed in claim 2, wherein the function to be minimized is another function of h added to a misclassification error parameter. 6. The method as claimed in claim 4, wherein the minimizing the exact bound further comprises: reducing the linear fractional programming problem of minimizing the h to obtain a linear programming problem; by solving the linear programming problem so obtained, obtaining a decision function for classifying the test data. 7. The method as claimed in claim 6, wherein the decision function has a low VC dimension. 8. The method as claimed in claim 6, wherein the objective of the linear programming problem includes a function of the misclassification error. 9. A system for classifying test data, the system comprising: a processor; a data classification module, wherein the data classification module is to, obtaining training data having a predefined sample size, wherein the training data is composed of separable datasets having; determining an exact bound on the Vapnik-Chervonenkis (VC) dimension of a hyperplane classifier for the training data, wherein the exact bound depends on the variables defining the said hyperplane minimizing the exact bound on the VC dimension; and based on the minimizing of the exact bound, determining the optimal values of the variables defining the hyperplane, thus generating a binary classifier for predicting one class to which a given data sample belongs. 10. The system as claimed in claim 8, wherein the data classification module for nonlinearly separable datasets in a first dimension, is to map samples of training data from the first dimension to a higher dimension using a mapping function φ. 11. 
The system as claimed in claim 9, wherein for a notional hyperplane depicted by the relation u^T φ(x) + v = 0, the data classification module is to: minimize an exact bound on the VC dimension of a hyperplane classifier wherein the said classifier separates samples that have been transformed from the input dimension to a higher dimension by means of the mapping function (φ); wherein the minimization task is achieved by solving a fractional programming problem that has been reduced to a linear programming problem. 12. The system as claimed in claim 9, where the data classification module utilizes a Kernel function K, wherein K is a function of two input vectors ‘a’ and ‘b’ with K being positive definite; and K(a,b) = φ(a)^T φ(b), with K(a,b) being an inner product of the vectors obtained by transforming vectors ‘a’ and ‘b’ into a higher dimensional space by using the mapping function φ. 13. The system as claimed in claim 8, wherein alternatively the data classification module is to further: obtain a tolerance regression parameter, for a plurality of points within the training data; obtain the value of a hypothetical function or measurement at each of said training samples; derive a classification problem in which the samples of each of the two classes are determined by using the given data and the tolerance parameter; define a notional hyperplane, wherein the notional hyperplane classifies the plurality of points within the derived classification problem with minimal error; and based on the notional hyperplane, generates a regressor corresponding to the plurality of points. 14. The system as claimed in claim 13, wherein, for the notional hyperplane defined by w^T x + η y + v = 0, the data classification module generates the regressor defined by y = -(1/η)(w^T x + b). 15. The system as claimed in claim 14, wherein for the points forming a linearly separable dataset, the regressor is a linear regressor. 16. The system as claimed in claim 14, wherein for the points forming a nonlinearly separable dataset, the regressor is a kernel regressor. 17. The system as claimed in claim 14, wherein the regressor further includes an error parameter. 18. The method as claimed in claim 8, in which the solution of the linear programming problem yields a set of weights or co-efficients, with each weight corresponding to an input feature, attribute, or co-ordinate, and wherein the set of input features with non-zero weights constitutes a set of selected features to allow feature selection. 19. The method as claimed in claim 18, in which only the selected features are used to next compute a classifier, thus eliminating the noise or confusion introduced by features that are less discriminative. 20. The method as claimed in claim 5, in which the constraints are modified so that one of the terms of the objective function is non-essential and can be removed. 21. The method as claimed in claim 20, in which the removal of a term in the objective function removes the need to choose a hyper-parameter weighting the mis-classification error, thus simplifying the use of the said method. 22. The method as claimed in claim 17, in which the constraints are modified so that one of the terms of the objective function is non-essential and can be removed. 23. The method as claimed in claim 4, in which the Max function is replaced by a "soft Max" function in which distance is measured as a weighted function of distances from a plurality of hyperplanes, and in which the Min function is replaced by a "soft Min" function. 24. 
The system as claimed in claim 14, in which the Max function is replaced by a “soft Max” function in which distance is measured as a weighted function of distances from a plurality of hyperplanes, and in which the Min function is replaced by a “soft Min” function.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Systems and methods for classifying test data based on maximum margin classifier are described. In one implementation, the method includes obtaining training data having a predefined sample size, wherein the training data is composed of separable data-sets. For the training data, a Vapnik-Chervonenkis (VC) dimension for the training data is determined. For the VC dimension, an exact bound is subsequently determined. The exact bound may be minimized for obtaining the minimum VC classifier for predicting at least one class to which samples of the training data belong.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Systems and methods for classifying test data based on maximum margin classifier are described. In one implementation, the method includes obtaining training data having a predefined sample size, wherein the training data is composed of separable data-sets. For the training data, a Vapnik-Chervonenkis (VC) dimension for the training data is determined. For the VC dimension, an exact bound is subsequently determined. The exact bound may be minimized for obtaining the minimum VC classifier for predicting at least one class to which samples of the training data belong.
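The quantity at the heart of the claims is the exact bound built from h, the ratio of the largest to the smallest distance-like term |u^T x_i + v| over the training points (claim 4). The snippet below only evaluates h for a hand-picked separating hyperplane on made-up 2-D data; the patented method would go further and minimize a function of h (plus a misclassification term) by reducing the fractional program to a linear program.

```python
# Illustrative computation of h = max_i |u^T x_i + v| / min_i |u^T x_i + v|
# for a hyperplane that separates the toy data with zero error (cf. claim 2).
import numpy as np

X = np.array([[1.0, 1.0], [2.0, 1.5], [-1.0, -1.0], [-2.0, -0.5]])  # training points
u, v = np.array([1.0, 1.0]), 0.0                                     # hyperplane u^T x + v = 0

margins = np.abs(X @ u + v)
h = margins.max() / margins.min()
print("h =", round(h, 3))   # smaller h implies a smaller bound on the VC dimension
```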
The inventive concepts herein relate to performing block retrieval on a block to be processed of a urine sediment image. The method comprises: using a plurality of decision trees to perform block retrieval on the block to be processed, wherein each of the plurality of decision trees comprises a judgment node and a leaf node, and the judgment node judges the block to be processed to make it reach the leaf node by using a block retrieval feature in a block retrieval feature set to form a block retrieval result at the leaf node, and at least two decision trees in the plurality of decision trees are different in structures thereof and/or judgments performed by the judgment nodes thereof by using the block retrieval feature; and integrating the block retrieval results of the plurality of decision trees so as to form a final block retrieval result.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for performing block retrieval on a block to be processed of a urine sediment image, comprising: using a plurality of decision trees to perform block retrieval on the block to be processed, wherein each of the plurality of decision trees comprises a judgment node and a leaf node, and the judgment node judges the block to be processed to make it reach the leaf node by using a block retrieval feature in a block retrieval feature set to form a block retrieval result at the leaf node, wherein the block retrieval result comprises a retrieved block, and at least two decision trees in the plurality of decision trees are different in structures thereof and/or judgments performed by the judgment nodes thereof by using the block retrieval feature; and integrating the block retrieval results of the plurality of decision trees so as to form a final block retrieval result. 2. The method according to claim 1, characterized in that the step of integrating the block retrieval results of the plurality of decision trees comprises: voting for the blocks retrieved by the plurality of decision trees, wherein if there are m decision trees in the plurality of decision trees altogether which retrieve a specific block, the ballot of the specific block is m, with m being a positive integer; and arranging the blocks retrieved by the plurality of decision trees in a descending order of the ballot. 3. The method according to claim 2, characterized in that only the retrieved blocks with ballots greater than a preset threshold value are listed. 4. The method according to claim 1, characterized in that the step of using a plurality of decision trees to perform block retrieval on the block to be processed comprises: on each decision tree, in response to the block to be processed being judged by the judgment node and reaching the leaf node, acquiring a block belonging to the leaf node as a block retrieval result, wherein the block belonging to the leaf node is set in a manner as follows: training the plurality of decision trees by using a training sample block in a training sample block set so that on each decision tree, the training sample block is judged by the judgment node and reaches a corresponding leaf node, and becomes a block belonging to the corresponding leaf node. 5. The method according to claim 4, characterized in that a classification tag is preset for the training sample block in the training sample block set so that the retrieved blocks comprised in the block retrieval result also carry classification tags. 6. 
An apparatus for performing block retrieval on a block to be processed of a urine sediment image, comprising: a block retrieval unit configured to use a plurality of decision trees to perform block retrieval on the block to be processed, wherein each of the plurality of decision trees comprises a judgment node and a leaf node, and the judgment node judges the block to be processed to make it reach the leaf node by using a block retrieval feature in a block retrieval feature set to form a block retrieval result at the leaf node, wherein the block retrieval result comprises a retrieved block, and at least two decision trees in the plurality of decision trees are different in structures thereof and/or judgments performed by the judgment nodes thereof by using the block retrieval feature; and an integration unit configured to integrate the block retrieval results of the plurality of decision trees so as to form a final block retrieval result. 7. The apparatus according to claim 6, characterized in that the integration unit is further configured to: vote for the blocks retrieved by the plurality of decision trees, wherein if there are m decision trees in the plurality of decision trees altogether which retrieve a specific block, the ballot of the specific block is m, with m being a positive integer; and arrange the blocks retrieved by the plurality of decision trees in a descending order of the ballot. 8. The apparatus according to claim 7, characterized in that the integration unit is further configured to only list the retrieved blocks with ballots greater than a preset threshold value. 9. The apparatus according to claim 6, characterized in that the block retrieval unit is configured to, on each decision tree, in response to the block to be processed being judged by the judgment node and reaching the leaf node, acquire a block belonging to the leaf node as a block retrieval result, wherein the block belonging to the leaf node is set in a manner as follows: training the plurality of decision trees by using a training sample block in a training sample block set so that on each decision tree, the training sample block is judged by the judgment node and reaches a corresponding leaf node, and becomes a block belonging to the corresponding leaf node. 10. The apparatus according to claim 9, characterized in that a classification tag is preset for the training sample block in the training sample block set so that the retrieved blocks comprised in the block retrieval result also carry classification tags. 11. A device for performing block retrieval on a block to be processed of a urine sediment image, comprising: a memory for storing executable instructions, the executable instructions, when executed, implementing the method of claim 1; and a processor for executing the executable instructions. 12. A machine-readable medium on which an executable instruction is stored, wherein when the executable instruction is executed, a machine is caused to perform the method of claim 1.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: The inventive concepts herein relate to performing block retrieval on a block to be processed of a urine sediment image. The method comprises: using a plurality of decision trees to perform block retrieval on the block to be processed, wherein each of the plurality of decision trees comprises a judgment node and a leaf node, and the judgment node judges the block to be processed to make it reach the leaf node by using a block retrieval feature in a block retrieval feature set to form a block retrieval result at the leaf node, and at least two decision trees in the plurality of decision trees are different in structures thereof and/or judgments performed by the judgment nodes thereof by using the block retrieval feature; and integrating the block retrieval results of the plurality of decision trees so as to form a final block retrieval result.
G06N5045
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: The inventive concepts herein relate to performing block retrieval on a block to be processed of a urine sediment image. The method comprises: using a plurality of decision trees to perform block retrieval on the block to be processed, wherein each of the plurality of decision trees comprises a judgment node and a leaf node, and the judgment node judges the block to be processed to make it reach the leaf node by using a block retrieval feature in a block retrieval feature set to form a block retrieval result at the leaf node, and at least two decision trees in the plurality of decision trees are different in structures thereof and/or judgments performed by the judgment nodes thereof by using the block retrieval feature; and integrating the block retrieval results of the plurality of decision trees so as to form a final block retrieval result.
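A small sketch of the integration step in claims 2-3: each decision tree contributes a set of retrieved blocks, a block's ballot is the number of trees that retrieved it, and the final result lists blocks in descending ballot order above a threshold. The tree outputs and the threshold are made up; the trees themselves and their block-retrieval features are not modeled here.

```python
# Illustrative ballot counting over the block retrieval results of several trees.
from collections import Counter

tree_results = [               # blocks retrieved by each decision tree
    {"block_3", "block_7", "block_9"},
    {"block_3", "block_9"},
    {"block_3", "block_5"},
]
THRESHOLD = 1                  # list only blocks with ballots greater than this

ballots = Counter(block for result in tree_results for block in result)
final_result = [(b, n) for b, n in ballots.most_common() if n > THRESHOLD]
print(final_result)            # e.g. [('block_3', 3), ('block_9', 2)]
```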
A method of updating a set of classifiers includes applying a first set of classifiers to a first set of data. The method further includes requesting, from a remote device, a classifier update based on an output of the first set of classifiers or a performance measure of the application of the first set of classifiers.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of updating a set of classifiers comprising: applying a first set of classifiers to a first set of data; and requesting, from a remote device, a classifier update based at least in part on at least one of an output of the first set of classifiers or a performance measure of the application of the first set of classifiers. 2. The method of claim 1, in which the requesting is based at least in part on context information. 3. The method of claim 1, in which the performance measure comprises an accuracy of the classifiers, a level of agreement of multiple classifiers, or a speed of computation of the classifiers. 4. The method of claim 1, in which the first set of classifiers and the classifier update are built on a same feature generator. 5. The method of claim 1, in which the first set of classifiers comprises a general classifier and the classifier update comprises a specific classifier. 6. The method of claim 5, further comprising applying the specific classifier to an object to identify a specific class of the object. 7. The method of claim 1, in which the remote device is configured to apply the first set of classifiers. 8. The method of claim 7, further comprising: computing features and transmitting the computed features to the remote device, the remote device applying the first set of classifiers to the computed features to compute a classification. 9. An apparatus for updating a set of classifiers comprising: a memory; and at least one processor coupled to the memory, the at least one processor being configured: to apply a first set of classifiers to a first set of data; and to request, from a remote device, a classifier update based at least in part on at least one of an output of the first set of classifiers or a performance measure of the application of the first set of classifiers. 10. The apparatus of claim 9, in which the at least one processor is further configured to request the classifier update based at least in part on context information. 11. The apparatus of claim 9, in which the performance measure comprises an accuracy of the classifiers, a level of agreement of multiple classifiers, or a speed of computation of the classifiers. 12. The apparatus of claim 9, in which the first set of classifiers and the classifier update are built on a same feature generator. 13. The apparatus of claim 9, in which the first set of classifiers comprises a general classifier and the classifier update comprises a specific classifier. 14. The apparatus of claim 13, in which the at least one processor is further configured to apply the specific classifier to an object to identify a specific class of the object. 15. The apparatus of claim 9, in which the remote device is configured to apply the first set of classifiers. 16. The apparatus of claim 15, in which the at least one processor is further configured: to compute features and transmit the computed features to the remote device, the remote device applying the first set of classifiers to the computed features to compute a classification. 17. An apparatus for updating a set of classifiers comprising: means for applying a first set of classifiers to a first set of data; and means for requesting, from a remote device, a classifier update based at least in part on at least one of an output of the first set of classifiers or a performance measure of the application of the first set of classifiers. 18. 
A computer program product for updating a set of classifier comprising: a non-transitory computer readable medium having encoded thereon program code, the program code comprising: program code to apply a first set of classifiers to a first set of data; and program code to request, from a remote device, a classifier update based at least in part on at least one of an output of the first set of classifiers or a performance measure of the application of the first set of classifiers.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A method of updating a set of classifiers includes applying a first set of classifiers to a first set of data. The method further includes requesting, from a remote device, a classifier update based on an output of the first set of classifiers or a performance measure of the application of the first set of classifiers.
G06N308
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A method of updating a set of classifiers includes applying a first set of classifiers to a first set of data. The method further includes requesting, from a remote device, a classifier update based on an output of the first set of classifiers or a performance measure of the application of the first set of classifiers.
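A minimal sketch of the update trigger in claim 1: a local "first set of classifiers" is applied, a performance measure (here a confidence score) is checked, and a classifier update is requested from a remote device when the measure is poor. The classifier functions, the threshold, and the stubbed remote request are all hypothetical.

```python
# Illustrative trigger: apply the general classifier, and if its confidence is low,
# request a more specific classifier from a (stubbed) remote device.
def general_classifier(sample):
    # stand-in for the first set of classifiers; returns (label, confidence)
    return ("vehicle", 0.55) if sample["wheels"] >= 2 else ("other", 0.9)

def request_classifier_update(context):
    print("requesting specific classifier for context:", context)
    return lambda s: ("car", 0.95) if s["wheels"] == 4 else ("bicycle", 0.9)

sample = {"wheels": 4}
label, confidence = general_classifier(sample)
if confidence < 0.7:                          # performance measure triggers the request
    specific_classifier = request_classifier_update({"location": "road"})
    label, confidence = specific_classifier(sample)
print(label, confidence)
```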
A method for implementing a convolutional neural network (CNN) accelerator on a target includes identifying characteristics and parameters for the CNN accelerator. Resources on the target are identified. A design for the CNN accelerator is generated in response to the characteristics and parameters of the CNN accelerator and the resources on the target.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for implementing a convolutional neural network (CNN) accelerator on a target, comprising: identifying characteristics and parameters for the CNN accelerator; identifying resources on the target; and generating a design for the CNN accelerator in response to the characteristics and parameters of the CNN accelerator and the resources on the target. 2. The method of claim 1, wherein identifying characteristics and parameters for the CNN accelerator comprises receiving the characteristics from a user. 3. The method of claim 2, wherein the characteristics for the CNN accelerator comprises: a number and sequence of stages of layers; sizes and coefficients of filters; and sizes, strides, and padding of images. 4. The method of claim 1, wherein the characteristics for the CNN accelerator comprises a range of characteristics that allows the CNN accelerator to execute a plurality of CNN algorithms. 5. The method of claim 4, wherein generating the design for the CNN accelerator comprises implementing configurable status registers (CSR), programmable by a user at runtime, to configure the target to support characteristics required for executing one of the plurality of CNN algorithms. 6. The method of claim 1, wherein generating the design for the CNN accelerator comprises assigning an appropriate size for buffers in response to sizes of images to be processed by the CNN accelerator. 7. The method of claim 1, wherein generating the design for the CNN accelerator comprises generating computation units in response to available resources on the target. 8. The method of claim 1, wherein generating the computation units comprises generating processing elements utilizing digital signal processor blocks, memory blocks, and adders on the target. 9. The method of claim 1, wherein generating the design for the CNN accelerator comprises generating a sequencer unit that coordinates transmission of data to appropriate processing elements on the CNN accelerator at appropriate times in order to time multiplex computations on the processing elements. 10. The method of claim 1 further comprising: identifying a CNN algorithm to execute on the CNN accelerator; identifying a variation of the CNN accelerator that supports execution of the CNN algorithm; and setting configurable status registers on the target to support the variation of the CNN accelerator. 11. A method for implementing a convolutional neural network (CNN) accelerator on a target, comprising: identifying a CNN algorithm to execute on the CNN accelerator; identifying a variation of the CNN accelerator that supports execution of the CNN algorithm; and setting configurable status registers on the target to support the variation of the CNN accelerator. 12. The method of claim 11 further comprising: determining whether a different CNN algorithm is to be executed on the CNN accelerator; identifying a different variation of the CNN accelerator that supports execution of the different CNN algorithm; and setting the configurable status registers on the target to support the different variation of the CNN accelerator. 13. The method of claim 11, wherein setting the configurable status registers adds or subtracts convolution layers on the CNN accelerator. 14. The method of claim 11, wherein setting the configurable status registers sets filter coefficients. 15. The method of claim 11, wherein setting the configurable status registers removes one or more pooling layers. 16. 
The method of claim 11, wherein setting the configurable status registers reduces a size of a filter. 17. The method of claim 11 further comprising programming the target to implement the CNN accelerator with a configuration file. 18. A non-transitory computer readable medium including a sequence of instructions stored thereon for causing a computer to execute a method for implementing a convolutional neural network (CNN) accelerator on a target, comprising: identifying characteristics and parameters for the CNN accelerator; identifying resources on the target; and generating a design for the CNN accelerator in response to the characteristics and parameters of the CNN accelerator and the resources on target. 19. The non-transitory computer readable medium of claim 18, wherein the characteristics for the CNN accelerator comprises a range of characteristics that allows the CNN accelerator to execute a plurality of CNN algorithms. 20. The non-transitory computer readable medium of claim 19, wherein generating the design for the CNN accelerator comprises implementing configurable status registers (CSR), programmable by a user at runtime, to configure the target to support characteristics required for executing one of the plurality of CNN algorithms.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A method for implementing a convolutional neural network (CNN) accelerator on a target includes identifying characteristics and parameters for the CNN accelerator. Resources on the target are identified. A design for the CNN accelerator is generated in response to the characteristics and parameters of the CNN accelerator and the resources on the target.
G06N304
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A method for implementing a convolutional neural network (CNN) accelerator on a target includes identifying characteristics and parameters for the CNN accelerator. Resources on the target are identified. A design for the CNN accelerator is generated in response to the characteristics and parameters of the CNN accelerator and the resources on the target.
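A rough sketch of the design-generation step in claim 1: characteristics of the CNN (image size, filter size) and the resources reported for the target (DSP blocks, memory blocks) drive the buffer sizing and the number of processing elements, echoing claims 6-8. The sizing formulas and resource counts below are invented for illustration and are not taken from the patent.

```python
# Illustrative design generation: derive buffer sizes and a processing-element
# count from CNN characteristics and target resources (toy heuristics only).
def generate_design(cnn, target):
    image_pixels = cnn["image_h"] * cnn["image_w"] * cnn["channels"]
    buffer_bytes = image_pixels * 2                      # size buffers to the images
    pe_count = min(target["dsp_blocks"] // cnn["filter_size"] ** 2,
                   target["memory_blocks"])              # PEs limited by DSPs/memory
    return {"buffer_bytes": buffer_bytes, "processing_elements": pe_count}

cnn_params = {"image_h": 224, "image_w": 224, "channels": 3, "filter_size": 3}
fpga_resources = {"dsp_blocks": 1518, "memory_blocks": 2713}
print(generate_design(cnn_params, fpga_resources))
```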
A hybrid ultracapacitor-battery energy storage system is integrated with a photovoltaic system to help mitigate output power fluctuations. A fuzzy-logic-based adaptive power management system enables optimization of the power/energy distributions and comprises a filter-based power coordination layer, which serves as a rudimentary step for power coordination among the hybrid storage devices, and a fuzzy-logic-based control adjustment layer, which keeps monitoring the operation status of all the energy storage devices, takes into account their dynamic characteristics, and fine-tunes the control settings adaptively.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system comprising; a fuzzy logic-based adaptive power management system; a photovoltaic system; a first capacitor based energy storage system; a second battery energy based storage system; and a storage of knowledge of system operation and operation of energy storage related devices within the system, wherein the management storage system communicates with the photovoltaic system, first capacitor and battery based energy system and storage of knowledge to influence energy fluctuations ahead of detailed control loops in power electronic devices, the fuzzy logic based adaptive system includes a filter based power coordination layer for power conditioning among the energy based storage system and a fuzzy logic based control adjustment for monitoring operational status of all energy storage devices taking into account their dynamic characteristics to fine tune control settings with the system adaptively and influence optimal power or energy distributions within the system. 2. The system of claim 1, wherein the filter based power coordination ensures that a supercapacitor storage device compensates sudden changes in rapidly fluctuating PV output power while the battery covers a smoothing power profile, and during normal operation periods, the references for different energy storage devices work sufficiently with the references being modifiable under certain conditions in order to improve the overall system performance. 3. The system of claim 2, wherein the fuzzy logic based control adjustment comprises that during the system operation, the energy storage device will keep switching among different operation modes and present different dynamics, achieving smooth changes over various operation modes and maintain consistent system performance includes tuning the control along with those changes, and the fuzzy logic adjustment providing advantages in non-linear system control without requiring a precise mathematical modeling or sophisticated computations in certain situations. 4. The system of claim 1, wherein the fuzzy logic based control adjustment comprises fuzzy logic based smoothing control, fuzzy logic based battery power control and fuzzy logic based ultracapacitor power control. 5. The system of claim 1, wherein the fuzzy logic based smoothing control comprises low pass filtering that influences a smoothing power profile, a difference between smoothing power and actual power being covered by discharging or charging of a hybrid electrical energy system, a parameter in the smoothing control determining a smoothing performance in that as the parameter becomes larger the more fluctuating power needs to be compensated by the hybrid ESS which means more energy and power will be requested out of energy storages. 6. The system of claim 1, wherein the fuzzy logic based smoothing control comprises preventing the saturation or depletion of energy capacity, and ensuring sustainable system operation with states of charge for capacitance of the battery energy and ultracapacitor being updatable when different unit sizes are applied in the PV system, with a power-intensive storage the ultracapacitor may presents a relatively fluctuating state of charge profiles is prone to energy depletion and saturation so positive big and negative big range take up a larger range than a battery in the system 7. 
The system of claim 1, wherein the fuzzy logic based control adjustment comprises fuzzy logic based ultracapacitor power control that adjusts a simulated ultracapacitor reference current by adding a deviating value, output of the ultracapacitor reference current being directly applicable on a converter current control loop, and including fuzzy rules for preventing the ultracapacitor from energy depletion or saturation. 8. The system of claim 1, wherein the fuzzy logic based control adjustment comprises fuzzy logic based battery power control that adjusts a simulated battery reference current by adding a deviating value, and an output of the battery reference current can be directly applied on a converter current control loop. 9. The system of claim 1, wherein fuzzy control in the fuzzy logic-based adaptive power management system is configured from the heuristic knowledge of the system operation in the storage of knowledge, the fuzzy control being tuneable through system simulation studies, and configuration of the adaptive power management system keeping the system in a sustainable operation status and preserving acceptable life cycles for the energy storage devices. 10. A method comprising: employing a fuzzy logic-based adaptive power management system in an electrical energy system; coupling a photovoltaic system to the power management system; coupling a first capacitor based energy storage system to the power management system; coupling a second battery energy based storage system to the power management system; and coupling a storage of knowledge of system operation and operation of energy storage related devices within an electric energy system to the power management system, the power management system communicating with the photovoltaic system, first capacitor and battery based energy system and storage of knowledge for influencing energy fluctuations ahead of detailed control loops in power electronic devices, the fuzzy logic based adaptive system including a filter based power coordination layer for power conditioning among the energy based storage system and a fuzzy logic based control adjustment for monitoring operational status of all energy storage devices taking into account their dynamic characteristics for fine tuning control settings with the system adaptively and influencing optimal power or energy distributions within the system. 11. The method of claim 10, wherein the filter based power coordination ensures that a supercapacitor storage device provides for compensating sudden changes in rapidly fluctuating PV output power while the battery covers a smoothing power profile, and during normal operation periods, the references for different energy storage devices work sufficiently, with the references being modifiable under certain conditions in order to improve the overall system performance. 12. The method of claim 11, wherein the fuzzy logic based control adjustment provides that, during system operation, the energy storage devices will keep switching among different operation modes and present different dynamics, achieving smooth changes over the various operation modes and maintaining consistent system performance including tuning the control along with those changes, and also providing advantages in non-linear system control without requiring precise mathematical modeling or sophisticated computations in certain situations. 13. 
The method of claim 10, wherein the fuzzy logic based control adjustment comprises fuzzy logic based smoothing control, fuzzy logic based battery power control and fuzzy logic based ultracapacitor power control. 14. The method of claim 10, wherein the fuzzy logic based smoothing control comprises low pass filtering that influences a smoothing power profile, a difference between smoothing power and actual power being covered by discharging or charging of a hybrid electrical energy system, a parameter in the smoothing control determining a smoothing performance in that, as the parameter becomes larger, the more fluctuating power needs to be compensated by the hybrid electric storage system, which means more energy and power will be requested from the energy storages. 15. The method of claim 10, wherein the fuzzy logic based smoothing control comprises preventing the saturation or depletion of energy capacity, and ensuring sustainable system operation with states of charge for capacitance of the battery energy and ultracapacitor being updatable when different unit sizes are applied in the PV system; as a power-intensive storage, the ultracapacitor may present a relatively fluctuating state of charge profile and is prone to energy depletion and saturation, so the positive big and negative big ranges take up a larger range than those of the battery in the system. 16. The method of claim 10, wherein the fuzzy logic based control adjustment comprises fuzzy logic based ultracapacitor power control adjusting a simulated ultracapacitor reference current by adding a deviating value, output of the ultracapacitor reference current being directly applicable on a converter current control loop, and including fuzzy rules for preventing the ultracapacitor from energy depletion or saturation. 17. The method of claim 10, wherein the fuzzy logic based control adjustment comprises fuzzy logic based battery power control adjusting a simulated battery reference current by adding a deviating value, and an output of the battery reference current can be directly applied on a converter current control loop. 18. The method of claim 10, wherein fuzzy control in the fuzzy logic-based adaptive power management system comprises configuring from a heuristic knowledge of the system operation in the storage of knowledge, tuning the fuzzy control through system simulation, and configuration of the adaptive power management system while keeping the system in a sustainable operation status and preserving acceptable life cycles for the energy storage devices. 19. The method of claim 10, wherein the fuzzy logic based control adjustment comprises fuzzy logic based smoothing control, fuzzy logic based battery power control and fuzzy logic based ultracapacitor power control.
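To make the filter-based power coordination described above concrete, here is a minimal Python sketch, assuming first-order low-pass filters: one filter defines the smoothed profile delivered to the grid, and a second splits the residual between the battery (slower share) and the ultracapacitor (sudden changes). The time constants, sample rate, and PV signal below are illustrative placeholders, not values from the claims, and the fuzzy-logic adjustment layer that fine-tunes these references is omitted.

```python
import numpy as np

def low_pass(signal, dt, tau):
    """First-order low-pass filter, y' = (x - y) / tau, integrated with forward Euler."""
    y = np.empty_like(signal)
    y[0] = signal[0]
    alpha = dt / (tau + dt)
    for k in range(1, len(signal)):
        y[k] = y[k - 1] + alpha * (signal[k] - y[k - 1])
    return y

def coordinate(pv_power, dt, tau_grid=300.0, tau_batt=30.0):
    """Split fluctuating PV power into grid / battery / ultracapacitor references."""
    p_grid = low_pass(pv_power, dt, tau_grid)   # smoothed power profile sent to the grid
    p_ess = pv_power - p_grid                   # fluctuation absorbed by the hybrid ESS
    p_batt = low_pass(p_ess, dt, tau_batt)      # battery covers the slower share
    p_ucap = p_ess - p_batt                     # ultracapacitor covers sudden changes
    return p_grid, p_batt, p_ucap

# Example: one hour of noisy PV output sampled every second.
t = np.arange(0, 3600.0, 1.0)
pv = 500.0 + 100.0 * np.sin(2 * np.pi * t / 900.0) + 40.0 * np.random.randn(len(t))
p_grid, p_batt, p_ucap = coordinate(pv, dt=1.0)
```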
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: A hybrid ultracapacitor-battery energy storage system is integrated with a photovoltaic system to help solve fluctuations. A fuzzy-logic-based adaptive power management system enables optimization of the power/energy distributions and a filter-based power coordination layer serving as a rudimentary step for power coordination among the hybrid storage system and a fuzzy-logic-based control adjustment layer that keeps monitoring the operation status of all the energy storage devices, taking into account their dynamic characteristics, and fine-tuning the control settings adaptively.
G06N702
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A hybrid ultracapacitor-battery energy storage system is integrated with a photovoltaic system to help solve fluctuations. A fuzzy-logic-based adaptive power management system enables optimization of the power/energy distributions and a filter-based power coordination layer serving as a rudimentary step for power coordination among the hybrid storage system and a fuzzy-logic-based control adjustment layer that keeps monitoring the operation status of all the energy storage devices, taking into account their dynamic characteristics, and fine-tuning the control settings adaptively.
Systems and methods are disclosed for determining if an account identifier is computer-generated. One method includes receiving the account identifier, dividing the account identifier into a plurality of fragments, and determining one or more features of at least one of the fragments. The method further includes determining the commonness of at least one of the fragments, and determining if the account identifier is computer-generated based on the features of at least one of the fragments, and the commonness of at least one of the fragments.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-implemented method for determining if an account identifier is computer-generated, comprising: receiving the account identifier; dividing the account identifier into a plurality of fragments; determining one or more features of at least one of the fragments; determining the commonness of at least one of the fragments; and determining if the account identifier is computer-generated based on the features of at least one of the fragments, and the commonness of at least one of the fragments. 2. The method of claim 1, wherein determining if the account identifier is computer-generated comprises providing the features of at least one of the fragments, and the commonness of at least one of the fragments, to a probabilistic classifier model. 3. The method of claim 2, wherein the probabilistic classifier model is trained with a training set of a plurality of account identifiers that are known to be or known to not be computer-generated, and wherein the training set allows the probabilistic classifier model to independently weigh the one or more features in order to more accurately determine if the account identifier is computer-generated. 4. The method of claim 1, further comprising: determining one or more features of the account identifier by counting characters by character type. 5. The method of claim 1, wherein determining the commonness of at least one of the fragments comprises determining the frequency of occurrence of at least one of the fragments in data in a data store. 6. The method of claim 1, wherein at least one of the fragments is truncated to contain only consonants. 7. The method of claim 1, wherein each character of at least one of the fragments is hashed according to character type. 8. The method of claim 7, wherein each character type is selected from a group including consonant, vowel, number, and punctuation mark. 9. A system for determining if an account identifier is computer-generated, the system including: a data storage device storing instructions determining if an account identifier is computer-generated; and a processor configured to execute the instructions to perform a method including: receiving the account identifier; dividing the account identifier into a plurality of fragments; determining one or more features of at least one of the fragments; determining the commonness of at least one of the fragments; and determining if the account identifier is computer-generated based on the features of at least one of the fragments, and the commonness of at least one of the fragments. 10. The system of claim 9, wherein determining if the account identifier is computer- generated comprises providing the features of at least one of the fragments, and the commonness of at least one of the fragments, to a probabilistic classifier model. 11. The system of claim 10, wherein the probabilistic classifier model is trained with a training set of a plurality of account identifiers that are known to be or known to not be computer-generated, and wherein the training set allows the probabilistic classifier model to independently weigh the one or more features in order to more accurately determine if the account identifier is computer-generated. 12. The system of claim 9, wherein the processor is further configured for: determining one or more features of the account identifier by counting characters by character type. 13. 
The system of claim 9, wherein determining the commonness of at least one of the fragments comprises determining the frequency of occurrence of at least one of the fragments in data in a data store. 14. The system of claim 9, wherein at least one of the fragments is truncated to contain only consonants. 15. The system of claim 9, wherein each character of at least one of the fragments is hashed according to character type. 16. A non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to perform a method for determining whether an account identifier is computer-generated, the method including: receiving the account identifier; dividing the account identifier into a plurality of fragments; determining one or more features of at least one of the fragments; determining the commonness of at least one of the fragments; and determining if the account identifier is computer-generated based on the features of at least one of the fragments, and the commonness of at least one of the fragments. 17. The non-transitory computer-readable medium of claim 16, wherein determining if the account identifier is computer-generated comprises providing the features of at least one of the fragments, and the commonness of at least one of the fragments, to a probabilistic classifier model. 18. The non-transitory computer-readable medium of claim 17, wherein the probabilistic classifier model is trained with a training set of a plurality of account identifiers that are known to be or known to not be computer-generated, and wherein the training set allows the probabilistic classifier model to independently weigh the one or more features in order to more accurately determine if the account identifier is computer-generated. 19. The non-transitory computer-readable medium of claim 16, wherein determining the commonness of at least one of the fragments comprises determining the frequency of occurrence of at least one of the fragments in data in a data store. 20. The non-transitory computer-readable medium of claim 16, wherein each character of at least one of the fragments is hashed according to character type.
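The fragment-and-classify idea in the claims above can be sketched in a few lines of Python. The fragmenting rule, the character-type features, and the use of scikit-learn's LogisticRegression as the "probabilistic classifier model" are illustrative assumptions rather than details taken from the patent, and the toy fragment counts stand in for frequencies drawn from a data store.

```python
import re
from collections import Counter
from sklearn.linear_model import LogisticRegression

VOWELS = set("aeiou")

def char_type(c):
    if c.isdigit():
        return "number"
    if c.isalpha():
        return "vowel" if c.lower() in VOWELS else "consonant"
    return "punctuation"

def fragments(identifier):
    """Divide an account identifier into fragments at non-letter boundaries."""
    return [f for f in re.split(r"[^a-z]+", identifier.lower()) if f]

def features(identifier, fragment_counts):
    """Character-type counts plus the commonness of the rarest fragment."""
    type_counts = Counter(char_type(c) for c in identifier)
    commonness = min((fragment_counts.get(f, 0) for f in fragments(identifier)), default=0)
    return [
        type_counts["consonant"],
        type_counts["vowel"],
        type_counts["number"],
        type_counts["punctuation"],
        commonness,
    ]

# Toy commonness table; in practice this would come from a data store of identifiers.
fragment_counts = {"john": 120, "smith": 95, "mail": 300}
X = [features(i, fragment_counts) for i in ["john.smith42", "xk7qzp9w", "mail_admin", "qwrtzz88x"]]
y = [0, 1, 0, 1]  # 1 = known computer-generated, 0 = known human-chosen
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([features("zqx9kp", fragment_counts)]))
```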
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: Systems and methods are disclosed for determining if an account identifier is computer-generated. One method includes receiving the account identifier, dividing the account identifier into a plurality of fragments, and determining one or more features of at least one of the fragments. The method further includes determining the commonness of at least one of the fragments, and determining if the account identifier is computer-generated based on the features of at least one of the fragments, and the commonness of at least one of the fragments.
G06N5048
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Systems and methods are disclosed for determining if an account identifier is computer-generated. One method includes receiving the account identifier, dividing the account identifier into a plurality of fragments, and determining one or more features of at least one of the fragments. The method further includes determining the commonness of at least one of the fragments, and determining if the account identifier is computer-generated based on the features of at least one of the fragments, and the commonness of at least one of the fragments.
The present invention provides an event-driven universal neural network circuit. The circuit comprises a plurality of neural modules. Each neural module comprises multiple digital neurons such that each neuron in a neural module has a corresponding neuron in another neural module. An interconnection network comprising a plurality of digital synapses interconnects the neural modules. Each synapse interconnects a first neural module to a second neural module by interconnecting a neuron in the first neural module to a corresponding neuron in the second neural module. Corresponding neurons in the first neural module and the second neural module communicate via the synapses. Each synapse comprises a learning rule associating a neuron in the first neural module with a corresponding neuron in the second neural module. A control module generates signals which define a set of time steps for event-driven operation of the neurons and event communication via the interconnection network.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method comprising: for a port of a neural module comprising a plurality of neurons and a plurality of ports: classifying the port into one of a plurality of port types, wherein the port is classified into a different port type that another port of the plurality of ports; interconnecting the port to at least one synapse classified into the same port type as the port; maintaining synaptic connectivity information indicative of the at least one synapse, a total sum of synaptic weights of the at least one synapse, and a target sum; and selectively updating a set of learning rules for the at least one synapse based on the total sum and the target sum. 2. The method of claim 1, wherein the synaptic connectivity information comprises a list of all synapses the port is connected to. 3. The method of claim 1, wherein a first port and a second port of the plurality of ports are classified into a first port type and a second port type, respectively, and the first port type and the second port type are different port types. 4. The method of claim 1, wherein the selectively updating comprises: determining whether the total sum exceeds or is less than the target sum; and updating the set of learning rules in response to determining the total sum exceeds or is less than the target sum. 5. The method of claim 1, further comprising: for a neuron of the neural module: maintaining neural information indicative of a membrane potential of the neuron, a first set of ports of the plurality of ports the neuron sends output to, and a second set of ports of the plurality of ports the neuron receives input from. 6. A system comprising a computer processor, a computer-readable hardware storage medium, and program code embodied with the computer-readable hardware storage medium for execution by the computer processor to implement a method comprising: for a port of a neural module comprising a plurality of neurons and a plurality of ports: classifying the port into one of a plurality of port types, wherein the port is classified into a different port type that another port of the plurality of ports; interconnecting the port to at least one synapse classified into the same port type as the port; maintaining synaptic connectivity information indicative of the at least one synapse, a total sum of synaptic weights of the at least one synapse, and a target sum; and selectively updating a set of learning rules for the at least one synapse based on the total sum and the target sum. 7. The system of claim 6, wherein the synaptic connectivity information comprises a list of all synapses the port is connected to. 8. The system of claim 6, wherein a first port and a second port of the plurality of ports are classified into a first port type and a second port type, respectively, and the first port type and the second port type are different port types. 9. The system of claim 6, wherein the selectively updating comprises: determining whether the total sum exceeds or is less than the target sum; and updating the set of learning rules in response to determining the total sum exceeds or is less than the target sum. 10. The system of claim 6, the method further comprising: for a neuron of the neural module: maintaining neural information indicative of a membrane potential of the neuron, a first set of ports of the plurality of ports the neuron sends output to, and a second set of ports of the plurality of ports the neuron receives input from. 11. 
A computer program product comprising a computer-readable hardware storage medium having program code embodied therewith, the program code being executable by a computer to implement a method comprising: for a port of a neural module comprising a plurality of neurons and a plurality of ports: classifying the port into one of a plurality of port types, wherein the port is classified into a different port type that another port of the plurality of ports; interconnecting the port to at least one synapse classified into the same port type as the port; maintaining synaptic connectivity information indicative of the at least one synapse, a total sum of synaptic weights of the at least one synapse, and a target sum; and selectively updating a set of learning rules for the at least one synapse based on the total sum and the target sum. 12. The computer program product of claim 11, wherein the synaptic connectivity information comprises a list of all synapses the port is connected to. 13. The computer program product of claim 11, wherein a first port and a second port of the plurality of ports are classified into a first port type and a second port type, respectively, and the first port type and the second port type are different port types. 14. The computer program product of claim 11, wherein the selectively updating comprises: determining whether the total sum exceeds or is less than the target sum; and updating the set of learning rules in response to determining the total sum exceeds or is less than the target sum. 15. The computer program product of claim 11, the method further comprising: for a neuron of the neural module: maintaining neural information indicative of a membrane potential of the neuron, a first set of ports of the plurality of ports the neuron sends output to, and a second set of ports of the plurality of ports the neuron receives input from.
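A rough Python sketch of the per-port bookkeeping in these claims follows: each port tracks the synapses of its type, the running total of their weights, and a target sum, and the learning rule is adjusted only when the total drifts away from the target. The scaling rule used for the adjustment is an illustrative choice, not the patent's learning rule, and the membrane-potential bookkeeping for neurons is omitted.

```python
class Synapse:
    def __init__(self, port_type, weight, learning_rate=0.01):
        self.port_type = port_type
        self.weight = weight
        self.learning_rate = learning_rate

class Port:
    def __init__(self, port_type, target_sum):
        self.port_type = port_type
        self.synapses = []          # synaptic connectivity information
        self.target_sum = target_sum

    def connect(self, synapse):
        # Interconnect the port only to synapses classified into the same port type.
        if synapse.port_type == self.port_type:
            self.synapses.append(synapse)

    def total_weight(self):
        return sum(s.weight for s in self.synapses)

    def maybe_update_learning_rules(self, tolerance=1e-3):
        """Selectively update learning rules when the total weight misses the target sum."""
        total = self.total_weight()
        if total == 0 or abs(total - self.target_sum) <= tolerance:
            return
        scale = self.target_sum / total
        for s in self.synapses:
            s.learning_rate *= scale   # placeholder "learning rule" adjustment

port = Port("excitatory", target_sum=1.0)
for w in (0.4, 0.5, 0.3):
    port.connect(Synapse("excitatory", w))
port.maybe_update_learning_rules()
```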
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: The present invention provides an event-driven universal neural network circuit. The circuit comprises a plurality of neural modules. Each neural module comprises multiple digital neurons such that each neuron in a neural module has a corresponding neuron in another neural module. An interconnection network comprising a plurality of digital synapses interconnects the neural modules. Each synapse interconnects a first neural module to a second neural module by interconnecting a neuron in the first neural module to a corresponding neuron in the second neural module. Corresponding neurons in the first neural module and the second neural module communicate via the synapses. Each synapse comprises a learning rule associating a neuron in the first neural module with a corresponding neuron in the second neural module. A control module generates signals which define a set of time steps for event-driven operation of the neurons and event communication via the interconnection network.
G06N308
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: The present invention provides an event-driven universal neural network circuit. The circuit comprises a plurality of neural modules. Each neural module comprises multiple digital neurons such that each neuron in a neural module has a corresponding neuron in another neural module. An interconnection network comprising a plurality of digital synapses interconnects the neural modules. Each synapse interconnects a first neural module to a second neural module by interconnecting a neuron in the first neural module to a corresponding neuron in the second neural module. Corresponding neurons in the first neural module and the second neural module communicate via the synapses. Each synapse comprises a learning rule associating a neuron in the first neural module with a corresponding neuron in the second neural module. A control module generates signals which define a set of time steps for event-driven operation of the neurons and event communication via the interconnection network.
The subset encoding method and related automata designs for improving the space efficiency for many applications on the Automata Processor (AP) are presented. The method is a general method that can take advantage of the character-or ability of STEs (State Transition Elements) on the AP, and can relieve the problems of limited hardware capacity and inefficient routing. Experimental results show that after applying the subset encoding method on Hamming distance automata, up to 3.2× more patterns can be placed on the AP if a sliding window is required. If a sliding window is not required, up to 192× more patterns can be placed on the AP. For a Levenshtein distance, the subset encoding can split the Levenshtein automata into small chunks and make them routable on the AP. The impact of the subset encoding method depends on the character size of the AP.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. An encoding method of a processor for a pattern matching application comprising the steps of: encoding a plurality of input data before streaming them into the processor, encoding and placing a plurality of patterns on loop structures on the processor, and matching the encoded input data on the processor using the loop structures, wherein the plurality of input data are application data, and the loop structures contain the encoded patterns. 2. The encoding method according to claim 1, wherein both the plurality of input data and the plurality of patterns are encoded into subsets of characters. 3. The encoding method according to claim 1, wherein the encoded input data are put in a single self-loop state transition element (STE) or a loop structure with multiple STEs on the processor. 4. The encoding method according to claim 3, wherein the single self-loop STE or the loop structure with multiple STEs contains a set of characters. 5. The encoding method according to claim 4, wherein the looped STE structure remains activated when the set of characters in the looped STE structure and another set of characters streamed in serial are identical, and the looped STE structure is turned off when the set of characters in the looped STE structure and the another set of characters streamed in serial are not identical. 6. The encoding method according to claim 3, wherein an STE comprises an array of memory, and a value in the memory cell indicates whether the encoded input data matches with the encoded patterns on the processor. 7. The encoding method according to claim 1, wherein the processor is a non-von Neumann processor based on the architecture of a dynamic random-access memory (DRAM). 8. The encoding method according to claim 6, wherein the processor further comprises a routing matrix for implementing connections among STEs, Boolean logic gates, and counters on the processor. 9. The encoding method according to claim 1, wherein the encoding and the matching are performed in parallel. 10. An automata design method of the processor for applying the encoding method according to claim 1 comprises: exact matching automata, Hamming distance automata, Levenshtein automata, and Damerau-Levenshtein automata. 11. The automata design method according to claim 10, wherein in the exact matching automata, whether the plurality of input data and the plurality of patterns are exactly identical is determined. 12. The automata design method according to claim 10, wherein in the Hamming distance automata, an one-to-one encoding method, an one-to-many encoding method, a many-to-one encoding method, and a many-to-many encoding method are used. 13. The automata design method according to claim 10, wherein in the Hamming distance automata, a ladder structure with a predetermined level is constructed to match the plurality of input data with the plurality of patterns within a predetermined distance. 14. The automata design method according to claim 10, wherein the Hamming distance automata is used to match the plurality of input data and the plurality of patterns with sliding windows, and a size of the plurality of input data is larger than that of the plurality of patterns. 15. The automata design method according to claim 10, wherein in the Levenshtein automata, left-shifted and right-shifted encoding are used for capturing insertions and deletions. 16. 
The automata design method according to claim 10, wherein in the Damerau-Levenshtein automata, AND logic gates are used for capturing transpositions of adjacent characters.
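A small software sketch can illustrate the character-OR ability exploited by the encoding method: each element of a pattern is stored as a set of accepted symbols, so one state can match several characters at once, and a sliding-window scan counts the positions where the streamed symbol falls outside the set. This is a deliberately simplified stand-in; the subset-encoding scheme itself, the ladder structure, and the Automata Processor's routing are not modelled here.

```python
def compile_pattern(pattern, encode=None):
    """Turn a pattern into a list of accepted-symbol sets (one per STE-like state)."""
    encode = encode or (lambda ch: {ch})
    return [encode(ch) for ch in pattern]

def hamming_match(stes, text, max_distance=0):
    """Report every window of `text` whose mismatch count against the state chain
    stays within `max_distance`."""
    hits = []
    m = len(stes)
    for start in range(len(text) - m + 1):
        mismatches = sum(text[start + i] not in stes[i] for i in range(m))
        if mismatches <= max_distance:
            hits.append(start)
    return hits

# Example: let one state accept either case of a letter (a tiny per-position subset).
stes = compile_pattern("gene", encode=lambda ch: {ch.lower(), ch.upper()})
print(hamming_match(stes, "the Gene gEnome gone", max_distance=1))
```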
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: The subset encoding method and related automata designs for improving the space efficiency for many applications on the Automata Processor (AP) are presented. The method is a general method that can take advantage of the character-or ability of STEs (State Transition Elements) on the AP, and can relieve the problems of limited hardware capacity and inefficient routing. Experimental results show that after applying the subset encoding method on Hamming distance automata, up to 3.2× more patterns can be placed on the AP if a sliding window is required. If a sliding window is not required, up to 192× more patterns can be placed on the AP. For a Levenshtein distance, the subset encoding can split the Levenshtein automata into small chunks and make them routable on the AP. The impact of the subset encoding method depends on the character size of the AP.
G06N5047
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: The subset encoding method and related automata designs for improving the space efficiency for many applications on the Automata Processor (AP) are presented. The method is a general method that can take advantage of the character-or ability of STEs (State Transition Elements) on the AP, and can relieve the problems of limited hardware capacity and inefficient routing. Experimental results show that after applying the subset encoding method on Hamming distance automata, up to 3.2× more patterns can be placed on the AP if a sliding window is required. If a sliding window is not required, up to 192× more patterns can be placed on the AP. For a Levenshtein distance, the subset encoding can split the Levenshtein automata into small chunks and make them routable on the AP. The impact of the subset encoding method depends on the character size of the AP.
A machine learning system for evaluating at least one characteristic of a heart valve, an inflow tract, an outflow tract or a combination thereof may include a training mode and a production mode. The training mode may be configured to train a computer and construct a transformation function to predict an unknown anatomical characteristic and/or an unknown physiological characteristic of a heart valve, inflow tract and/or outflow tract, using a known anatomical characteristic and/or a known physiological characteristic of the heart valve, inflow tract and/or outflow tract. The production mode may be configured to use the transformation function to predict the unknown anatomical characteristic and/or the unknown physiological characteristic of the heart valve, inflow tract and/or outflow tract, based on the known anatomical characteristic and/or the known physiological characteristic of the heart valve, inflow tract and/or outflow tract.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A machine learning system including a computer for evaluating at least one characteristic of a heart valve, an inflow tract, an outflow tract or a combination thereof, the machine learning system comprising: a transformation function configured to predict at least one of an unknown anatomical characteristic or an unknown physiological characteristic of at least one of a heart valve, an inflow tract or an outflow tract, using at least one of a known anatomical characteristic or a known physiological characteristic of the at least one heart valve, inflow tract or outflow tract; and a production mode configured to use the transformation function to predict at least one of the unknown anatomical characteristic or the unknown physiological characteristic of the at least one heart valve, inflow tract or outflow tract, based on at least one of the known anatomical characteristic or the known physiological characteristic of the at least one heart valve, inflow tract or outflow tract, wherein the production mode is further configured to receive one or more feature vectors. 2. A machine learning system as in claim 1, wherein the machine learning system is configured to compute and store in a feature vector the at least one known anatomical characteristic or known physiological characteristic of the at least one heart valve, inflow tract or outflow tract. 3. A machine learning system as in claim 2, wherein the machine learning system is configured to calculate an approximate blood flow through the at least one heart valve, inflow tract or outflow tract. 4. A machine learning system as in claim 3, wherein the machine learning system is further configured to store quantities associated with the approximate blood flow through the at least one heart valve, inflow tract or outflow tract. 5. A machine learning system as in claim 4, wherein the machine learning system is further configured to perturb the at least one known anatomical characteristic or known physiological characteristic of the at least one heart valve, inflow tract or outflow tract stored in the feature vector. 6. A machine learning system as in claim 5, wherein the machine learning system is further configured to calculate a new approximate blood flow through the at least one heart valve, inflow tract or outflow tract with the perturbed at least one known anatomical characteristic or known physiological characteristic. 7. A machine learning system as in claim 6, wherein the machine learning system is further configured to store quantities associated with the new approximate blood flow through the perturbed at least one heart valve, inflow tract or outflow tract. 8. A machine learning system as in claim 7, wherein the machine learning system is further configured to repeat the perturbing, calculating and storing steps to create a set of feature vectors and quantity vectors and to generate the transformation function. 9. A machine learning system as in claim 1, wherein the production mode is configured to apply the transformation function to the feature vectors. 10. A machine learning system as in claim 9, wherein the production mode is configured to generate one or more quantities of interest. 11. A machine learning system as in claim 10, wherein the production mode is configured to store the quantities of interest. 12. 
A machine learning system as in claim 11, wherein the production mode is configured to process the quantities of interest to provide data for use in at least one of evaluation, diagnosis, prognosis, treatment or treatment planning related to a heart in which the heart valve resides. 13. A computer-implemented machine learning method for evaluating at least one characteristic of a heart valve, an inflow tract, an outflow tract or a combination thereof the method comprising: predicting, with a transformation function on a computer, at least one of an unknown anatomical characteristic or an unknown physiological characteristic of at least one of a heart valve, an inflow tract or an outflow tract, using at least one of a known anatomical characteristic or a known physiological characteristic of the at least one heart valve, inflow tract or outflow tract; maintaining, in a feature vector on the computer, the at least one known anatomical characteristic or known physiological characteristic of the at least one heart valve, inflow tract or outflow tract; and using a production mode of a machine learning system on the computer to direct the transformation function to predict at least one of the unknown anatomical characteristic or the unknown physiological characteristic of the at least one heart valve, inflow tract or outflow tract, based on at least one of the known anatomical characteristic or the known physiological characteristic of the at least one heart valve, inflow tract or outflow tract. 14. A method as in claim 13, further comprising using the computer to calculate an approximate blood flow through the at least one heart valve, inflow tract or outflow tract. 15. A method as in claim 14, further comprising using the computer to store quantities associated with the approximate blood flow through the at least one heart valve, inflow tract or outflow tract. 16. A method as in claim 15, further comprising using the computer to perturb the at least one known anatomical characteristic or known physiological characteristic of the at least one heart valve, inflow tract or outflow tract stored in the feature vector. 17. A method as in claim 16, further comprising using the computer to calculate a new approximate blood flow through the at least one heart valve, inflow tract or outflow tract with the perturbed at least one known anatomical characteristic or known physiological characteristic. 18. A method as in claim 17, further comprising using the computer to store quantities associated with the new approximate blood flow through the perturbed at least one heart valve, inflow tract or outflow tract. 19. A method as in claim 18, further comprising using the computer to repeat the perturbing, calculating and storing steps to create a set of feature vectors and quantity vectors and to generate the transformation function. 20. A method as in claim 13, further comprising using the computer to perform the following steps: receiving patient-specific data selected from the group consisting of anatomic data, physiologic data, and hemodynamic data; generating a digital model of the at least one heart valve, inflow tract or outflow tract, based on the received data; discretizing the digital model; applying boundary conditions to at least one inflow portion and at least one outflow portion of the digital model; and initializing and solving mathematical equations of blood flow through the digital model. 21. 
A method as in claim 20, further comprising storing quantities and parameters that characterize at least one of an anatomic state or a physiologic state of the digital model and the blood flow. 22. A method as in claim 21, further comprising perturbing at least one of an anatomic parameter or a physiologic parameter that characterizes the digital model. 23. A method as in claim 22, further comprising at least one of re-discretizing or re-solving the mathematical equations with the at least one anatomic parameter or physiologic parameter. 24. A method as in claim 23, further comprising storing quantities and parameters that characterize at least one of the anatomic state or the physiologic state of the perturbed model and blood flow. 25. A method as in claim 13, further comprising receiving one or more feature vectors with the production mode. 26. A method as in claim 25, further comprising using the production mode to apply the transformation function to the feature vectors. 27. A method as in claim 26, further comprising using the production mode to generate one or more quantities of interest. 28. A method as in claim 27, further comprising using the production mode to process the quantities of interest to provide data for use in at least one of evaluation, diagnosis, prognosis, treatment or treatment planning related to a heart in which the at least one heart valve, inflow tract or outflow tract resides.
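The training/production split in the method claims can be sketched as follows: perturb the known anatomical and physiological parameters, compute a flow quantity for each perturbed model, store the resulting feature/quantity pairs, fit a regression as the "transformation function", and then use it in production on a new feature vector. The `approximate_blood_flow` function below is a stand-in for the patent's discretized model and flow solver, the parameter names and perturbation scale are assumptions, and RandomForestRegressor is only one possible choice of learned transformation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def approximate_blood_flow(features):
    # Placeholder for solving the blood-flow equations on the digital model.
    valve_area, inflow_pressure, outflow_pressure = features
    return valve_area * max(inflow_pressure - outflow_pressure, 0.0) ** 0.5

def build_training_set(baseline, n_perturbations=200, scale=0.05, rng=None):
    rng = rng or np.random.default_rng(0)
    X, y = [], []
    for _ in range(n_perturbations):
        perturbed = baseline * (1.0 + scale * rng.standard_normal(baseline.shape))
        X.append(perturbed)                          # feature vector
        y.append(approximate_blood_flow(perturbed))  # quantity of interest
    return np.array(X), np.array(y)

# Training mode: build the transformation function from perturbed models.
baseline = np.array([3.5, 120.0, 10.0])  # illustrative valve area and pressures
X, y = build_training_set(baseline)
transformation = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Production mode: predict the unknown quantity from a new set of known characteristics.
print(transformation.predict([[3.1, 115.0, 12.0]]))
```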
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: A machine learning system for evaluating at least one characteristic of a heart valve, an inflow tract, an outflow tract or a combination thereof may include a training mode and a production mode. The training mode may be configured to train a computer and construct a transformation function to predict an unknown anatomical characteristic and/or an unknown physiological characteristic of a heart valve, inflow tract and/or outflow tract, using a known anatomical characteristic and/or a known physiological characteristic the heart valve, inflow tract and/or outflow tract. The production mode may be configured to use the transformation function to predict the unknown anatomical characteristic and/or the unknown physiological characteristic of the heart valve, inflow tract and/or outflow tract, based on the known anatomical characteristic and/or the known physiological characteristic of the heart valve, inflow tract and/or outflow tract.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A machine learning system for evaluating at least one characteristic of a heart valve, an inflow tract, an outflow tract or a combination thereof may include a training mode and a production mode. The training mode may be configured to train a computer and construct a transformation function to predict an unknown anatomical characteristic and/or an unknown physiological characteristic of a heart valve, inflow tract and/or outflow tract, using a known anatomical characteristic and/or a known physiological characteristic the heart valve, inflow tract and/or outflow tract. The production mode may be configured to use the transformation function to predict the unknown anatomical characteristic and/or the unknown physiological characteristic of the heart valve, inflow tract and/or outflow tract, based on the known anatomical characteristic and/or the known physiological characteristic of the heart valve, inflow tract and/or outflow tract.
Disclosed herein are system, method, and computer program product embodiments for performing a regression analysis on lawfully collected personal data records. The analysis enables discovery of individuals likely to perform certain actions based on their personal data records and the personal data records and actions of others. The disclosed system, method, and computer program product may process vast quantities of data, including personal data records with thousands of categories and lawfully stored databases with millions of personal data records. Through the regression analysis, the disclosed system, method, and computer program product learn the most relevant categories for predicting an individual's actions based on input data provided by a user. The analysis then analyzes the categories of personal data records stored in a lawfully stored database to predict actions of individuals associated with those records and outputs results to the user.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system, comprising: a memory; and one or more processors electronically coupled to the memory and configured to: access a training set stored in the memory, the training set having a plurality of personal data training records and a plurality of categories associated with each personal data training record, wherein the plurality of categories comprises an action taken by an individual corresponding to the associated personal data training record; determine a subset of the plurality of categories based on at least a first plurality of personal data training records in the training set; determine a prediction function that outputs an outcome score based on values of the subset of the plurality of categories; test the accuracy of the prediction function based at least on a second plurality of personal data training records in the training set different from the first plurality of personal data training records in the training set; access a data set stored in memory, the data set having a number of personal data records greater than the number of personal data training records in the training set, the data set having the plurality of categories associated with each personal data record; and process a subset of the personal data records from the data set based on the prediction function to determine an outcome score for each personal data record in the subset of personal data records. 2. The system of claim 1, wherein determining the subset of the plurality of categories comprises determining a weight for each category. 3. The system of claim 2, wherein determining a weight for each category comprises: initializing a coefficient; and computing a weight vector that optimizes a function based on the subset of the plurality of categories, the function including the coefficient multiplied by the sum of the absolute values of each element in the weight vector; wherein each element in the weight vector corresponds to a weight for a category. 4. The system of claim 3, wherein the one or more processors are further configured to iterate the steps of determining a prediction function and testing the accuracy of the prediction function using successively smaller coefficients until the difference between successive accuracies is less than a threshold. 5. The system of claim 3, wherein the size of the subset of the plurality of categories is related to the magnitude of the coefficient. 6. The system of claim 1, wherein the prediction function substantially satisfies the equation P = e^(θ^T x) / (1 + e^(θ^T x)); wherein P is the prediction function, wherein e is Euler's number, wherein θ is a column vector of parameters, and wherein x is a column vector of values corresponding to categories of a personal data record. 7. The system of claim 6, wherein determining the subset of the plurality of categories comprises determining a value for θ that substantially minimizes the equation Y(θ, α1) = −Σ_{m∈M} [A_m log P_m(θ) + (1 − A_m) log(1 − P_m(θ))] − α1(α2‖θ‖_1 + ½(1 − α2)‖θ‖_2^2), wherein M is the first plurality of personal data training records in the training set, wherein A_m is the action taken by an individual corresponding to the personal data record m ∈ M, wherein P_m(θ) = e^(θ^T x_m) / (1 + e^(θ^T x_m)) is the outcome score for personal data record m, wherein α1 is a coefficient, and wherein α2 is a constant coefficient. 8. 
A computer implemented method, comprising: accessing a training set stored in memory, the training set having a plurality of personal data training records and a plurality of categories associated with each personal data training record, and the plurality of categories comprises an action taken by an individual corresponding to the associated personal data training record; determining a subset of the plurality of categories based on the action of at least a first personal data training record in the training set; determining a prediction function that outputs an outcome score based on the subset of the plurality of categories; testing the accuracy of the prediction function based on at least a second personal data training record in the training set different from the first personal data training record in the training set; accessing a data set stored in memory, the data set having a number of personal data records greater than the number of personal data training records in the training set, the data set having a plurality of categories associated with each personal data record; and processing a subset of the personal data records from the data set based on the subset of categories to determine an outcome score for each personal data record in the subset of personal data records. 9. The method of claim 8, further comprising: receiving an action taken by an individual corresponding to at least one personal data record in the subset of personal data records from the data set; replacing the outcome score for the at least one personal data record with the received outcome; and moving the at least one personal data record from the data set to the training set. 10. The method of claim 8, wherein determining the subset of the plurality of categories comprises determining a weight for each category. 11. The method of claim 10, wherein determining a weight for each category comprises: initializing a coefficient; and computing a weight vector that optimizes a function based on the subset of the plurality of categories, the function including the coefficient multiplied by the sum of the absolute values of each element in the weight vector; wherein each element in the weight vector corresponds to a weight for a category. 12. The method of claim 11, wherein the one or more processors are further configured to iterate the steps of determining a prediction function and testing the accuracy of the prediction function using successively smaller coefficients until the difference between successive accuracies is less than a threshold. 13. The method of claim 11, wherein the size of the subset of the plurality of categories is related to the magnitude of the coefficient. 14. The method of claim 8, wherein the prediction function substantially satisfies the equation P = e^(θ^T x) / (1 + e^(θ^T x)); wherein P is the prediction function, wherein e is Euler's number, wherein θ is a column vector of parameters, and wherein x is a column vector of values corresponding to categories of a personal data record. 15. 
A tangible computer-readable device having instructions stored thereon that, when executed by at least one computing device, causes the at least one computing device to perform operations comprising: accessing a training set stored in memory, the training set having a plurality of personal data training records and a plurality of categories associated with each personal data training record, and the plurality of categories comprises an action taken by an individual corresponding to the associated personal data training record; determining a subset of the plurality of categories based on the outcome of at least a first personal data training record in the training set; determining a prediction function based on the subset of the plurality of categories; testing the accuracy of the prediction function based on at least a second personal data training record in the training set different from the first personal data training record in the training set; accessing a data set stored in memory, the data set having a number of personal data records greater than the number of personal data training records in the training set, the data set having a plurality of categories associated with each personal data record; and processing a subset of the personal data records from the data set based on the subset of categories to determine an outcome score for each personal data record in the subset of personal data records. 16. The computer-readable device of claim 15, wherein the operation of determining the subset of the plurality of categories comprises determining a weight for each category. 17. The computer-readable device of claim 16, wherein the operation of determining a weight for each category comprises: initializing a coefficient; and computing a weight vector that optimizes a function based on the subset of the plurality of categories, the function including the coefficient multiplied by the sum of the absolute values of each element in the weight vector; wherein each element in the weight vector corresponds to a weight for a category. 18. The computer-readable device of claim 17, wherein the one or more processors are further configured to iterate the steps of determining a prediction function and testing the accuracy of the prediction function using successively smaller coefficients until the difference between successive accuracies is less than a threshold. 19. The computer-readable device of claim 17, wherein the size of the subset of the plurality of categories is related to the magnitude of the coefficient. 20. The computer-readable device of claim 15, wherein the prediction function substantially satisfies the equation P = e^(θ^T x) / (1 + e^(θ^T x)); wherein P is the prediction function, wherein e is Euler's number, wherein θ is a column vector of parameters, and wherein x is a column vector of values corresponding to categories of a personal data record.
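The prediction function and penalized objective recited in the claims above correspond to logistic regression with an elastic-net style penalty. Below is a minimal NumPy sketch of that objective: P(θ) = e^(θ^T x) / (1 + e^(θ^T x)) and a negative log-likelihood with the penalty α1(α2‖θ‖_1 + ½(1 − α2)‖θ‖_2^2) added in the usual sign convention, which is an assumption about the sign intended in the claim. The gradient-descent loop, step size, and toy data are illustrative; the claims specify the objective, not the optimizer.

```python
import numpy as np

def predict(theta, X):
    """Prediction function P = e^(θᵀx) / (1 + e^(θᵀx)) for each row x of X."""
    return 1.0 / (1.0 + np.exp(-(X @ theta)))

def objective(theta, X, A, alpha1, alpha2):
    """Negative log-likelihood plus elastic-net penalty (penalty added, the usual convention)."""
    P = predict(theta, X)
    eps = 1e-12
    log_lik = np.sum(A * np.log(P + eps) + (1 - A) * np.log(1 - P + eps))
    penalty = alpha1 * (alpha2 * np.sum(np.abs(theta))
                        + 0.5 * (1 - alpha2) * np.sum(theta ** 2))
    return -log_lik + penalty

def fit(X, A, alpha1=0.1, alpha2=0.5, lr=0.001, steps=3000):
    """Plain (sub)gradient descent on the objective; not the patent's optimizer."""
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        P = predict(theta, X)
        grad = X.T @ (P - A) + alpha1 * (alpha2 * np.sign(theta)
                                         + (1 - alpha2) * theta)
        theta -= lr * grad
    return theta

# Toy data: 200 records, 5 category columns, binary "action taken" labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
A = (X[:, 0] - 0.5 * X[:, 2] + 0.2 * rng.standard_normal(200) > 0).astype(float)
theta = fit(X, A)
print(objective(theta, X, A, 0.1, 0.5), theta.round(3))
```

With the L1 term driving small weights toward zero, the surviving non-zero entries of θ play the role of the "most relevant categories" selected by the analysis.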
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Disclosed herein are system, method, and computer program product embodiments for performing a regression analysis on lawfully collected personal data records. The analysis enables discovery of individuals likely to perform certain actions based on their personal data records and the personal data records and actions of others. The disclosed system, method, and computer program product may process vast quantities of data, including personal data records with thousands of categories and lawfully stored databases with millions of personal data records. Through the regression analysis, the disclosed system, method, and computer program product learn the most relevant categories for predicting an individual's actions based on input data provided by a user. The analysis then analyzes the categories of personal data records stored in a lawfully stored database to predict actions of individuals associated with those records and outputs results to the user.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Disclosed herein are system, method, and computer program product embodiments for performing a regression analysis on lawfully collected personal data records. The analysis enables discovery of individuals likely to perform certain actions based on their personal data records and the personal data records and actions of others. The disclosed system, method, and computer program product may process vast quantities of data, including personal data records with thousands of categories and lawfully stored databases with millions of personal data records. Through the regression analysis, the disclosed system, method, and computer program product learn the most relevant categories for predicting an individual's actions based on input data provided by a user. The analysis then analyzes the categories of personal data records stored in a lawfully stored database to predict actions of individuals associated with those records and outputs results to the user.
Systems and methods are disclosed for determining complex interactions among system inputs by using semi-Restricted Boltzmann Machines (RBMs) with factorized gated interactions of different orders to model complex interactions among system inputs; applying semi-RBMs to train a deep neural network with high-order within-layer interactions for learning a distance metric and a feature mapping; and tuning the deep neural network by minimizing margin violations between positive query document pairs and corresponding negative pairs.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for determining complex interactions among system inputs, comprising: using semi-Restricted Boltzmann Machines (RBMs) with factorized gated interactions of different orders to model complex interactions among system inputs, applying semi-RBMs to train a deep neural network with high-order within-layer interactions for learning a distance metric and a feature mapping; and tuning the deep neural network by minimizing margin violations between positive query document pairs and corresponding negative pairs. 2. The method of claim 1, comprising identifying complex nonlinear system input interactions for data denoising and data visualization. 3. The method of claim 1, wherein the semi-RBMs have gated interactions with a combination of orders ranging from 1 to m to approximate an arbitrary-order combinatorial input feature interactions in words and in Transcription Factors (TFs). 4. The method of claim 1, wherein hidden units of the semi-RBMs act as binary switches controlling interactions between input features. 5. The method of claim 1, comprising using factorization to reduce the number of parameters. The method of claim 1, comprising sampling from the semi-RBMs by using either fast deterministic damped mean-field updates or prolonged Gibbs sampling. 6. The method of claim 1, wherein parameters of semi-RBMs are learned using Contrastive Divergence. 7. The method of claim 1, wherein after a semi-RBM is learned, comprising treating inferred hidden activities of input data as new data to learn another semi-RBM and forming a deep belief net with gated high order interactions. 8. The method of claim 1, wherein with pairs of discrete representations of a query and a document, using semi-RBMs with gated arbitrary-order interactions to pre-train a deep neural network and generating a similarity score between a query and a document, in which a penultimate layer corresponds to a non-linear feature embedding of the original system input features. 9. The method of claim 8, further comprising using back-propagation to fine-tune parameters of the deep gated high-order neural network to make positive pairs of query, wherein document always have larger similarity scores than negative pairs based on margin maximization. 10. The method of claim 1, comprising modeling complex interactions between different words in documents and queries and predicting the bindings of TFs given some other TFs for understanding deep semantic information for information retrieval and TF binding redundancy and TF interactions for gene regulation. 11. The method of claim 1, comprising applying high-order semi-RBMs for modeling feature interactions including word interactions in documents or protein interactions in biology. 12. The method of claim 1, wherein the deep neural network has multiple layers. 13. The method of claim 1, comprising providing a given discretized query and document representation as input to a non-linear SSI system, and applying the semi-RBMs to pre-train the SSI system. 14. The method of claim 13, comprising fine-tuning the non-linear SSI system using back-propagation to minimize a margin-based rank loss. 15. The method of claim 13, wherein the discrete document representation includes a Bag of Word representation or a discretized term frequency—inverse document frequency(TF-IDF) representation. 16. 
The method of claim 1, comprising training by minimizing a margin ranking loss on a tuple (q, d+, d−): $\sum_{(q, d^{+}, d^{-})} \max\left(0,\ 1 - f(q, d^{+}) + f(q, d^{-})\right)$, where q is the query, d+ is a relevant document, d− is an irrelevant document, and f(·,·) is a similarity score.
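For illustration, the margin ranking loss in the final claim can be evaluated directly from per-tuple similarity scores. The following is a minimal NumPy sketch, assuming the scores f(q, d+) and f(q, d−) have already been produced by the trained network; the function name and the default margin of 1 are illustrative and not taken from the patent.

    import numpy as np

    def margin_ranking_loss(scores_pos, scores_neg, margin=1.0):
        """Sum over (q, d+, d-) tuples of max(0, margin - f(q, d+) + f(q, d-))."""
        scores_pos = np.asarray(scores_pos, dtype=float)
        scores_neg = np.asarray(scores_neg, dtype=float)
        per_tuple = np.maximum(0.0, margin - scores_pos + scores_neg)
        return per_tuple.sum()

    # Example: three query/document tuples.
    print(margin_ranking_loss([2.3, 0.4, 1.1], [1.0, 0.9, 1.2]))  # 0.0 + 1.5 + 1.1 = 2.6

A tuple contributes zero loss once the relevant document outscores the irrelevant one by at least the margin, which is the condition the back-propagation fine-tuning described in the claims tries to enforce.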
REJECTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: Systems and methods are disclosed for determining complex interactions among system inputs by using semi-Restricted Boltzmann Machines (RBMs) with factorized gated interactions of different orders to model complex interactions among system inputs; applying semi-RBMs to train a deep neural network with high-order within-layer interactions for learning a distance metric and a feature mapping; and tuning the deep neural network by minimizing margin violations between positive query document pairs and corresponding negative pairs.
G06N308
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Systems and methods are disclosed for determining complex interactions among system inputs by using semi-Restricted Boltzmann Machines (RBMs) with factorized gated interactions of different orders to model complex interactions among system inputs; applying semi-RBMs to train a deep neural network with high-order within-layer interactions for learning a distance metric and a feature mapping; and tuning the deep neural network by minimizing margin violations between positive query document pairs and corresponding negative pairs.
A method for determining a policy that considers observations delayed at runtime is disclosed. The method includes constructing a model of a stochastic decision process that receives delayed observations at run time, wherein the stochastic decision process is executed by an agent, finding an agent policy according to a measure of an expected total reward of a plurality of agent actions within the stochastic decision process over a given time horizon, and bounding an error of the agent policy according to an observation delay of the received delayed observations.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method comprising: constructing a model of a stochastic decision process that receives delayed observations at run time, wherein the stochastic decision process is executed by an agent; finding an agent policy according to a measure of an expected total reward of a plurality of agent actions within the stochastic decision process over a given time horizon; bounding an error of the agent policy according to an observation delay of the received delayed observations; and offering a reward to the agent using the agent policy having the error bounded according to the observation delay of the received delayed observations. 2. The method of claim 1, wherein finding the agent policy comprises: updating an agent belief state upon receiving each of the delayed observation; and determining a next agent action according to the expected total reward of a remaining decision epoch given an updated agent belief state. 3. The method of claim 2, wherein the agent belief state is updated using the delayed observation, a history of observations at runtime and a history of agent actions at runtime. 4. The method of claim 2, wherein the agent executes the next agent action in a next decision epoch. 5. The method of claim 1, further comprising: storing a history of observations at runtime; storing a history of agent actions at runtime; and recalling the history of observations at runtime and the history of agent actions at runtime to find the agent policy. 6. The method of claim 1, wherein the expected total reward comprises all rewards that the agent receives when a given agent action is executed in a current agent belief state. 7. The method of claim 1, wherein the observation delay of the received delayed observations is a maximum observation delay among the received delayed observations that is considered by the model. 8. A computer program product for planning in uncertain environments, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method comprising: receiving a model of a stochastic decision process that receives delayed observations at run time, wherein the stochastic decision process is executed by an agent; finding an agent policy according to a measure of an expected total reward of a plurality of agent actions within the stochastic decision process over a given time horizon; and bounding an error of the agent policy according to an observation delay of the received delayed observations. 9. The computer program product of claim 8, wherein finding the agent policy comprises: updating an agent belief state upon receiving each of the delayed observation; and determining a next agent action according to the expected total reward of a remaining decision epoch given an updated agent belief state. 10. The computer program product of claim 9, wherein the agent belief state is updated using the delayed observation, a history of observations at runtime and a history of agent actions at runtime. 11. The computer program product of claim 8, further comprising: storing a history of observations at runtime; storing a history of agent actions at runtime; and recalling the history of observations at runtime and the history of agent actions at runtime to find the agent policy. 12. 
The computer program product of claim 8, wherein the expected total reward comprises all rewards that the agent receives when a given agent action is executed in a current agent belief state. 13. The computer program product of claim 8, wherein the observation delay of the received delayed observations is a maximum observation delay among the received delayed observations that is considered by the model. 14. A decision engine configured to execute a stochastic decision process receiving delayed observations using an agent policy comprising: a computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the decision engine to: receive a model of the stochastic decision process that receives a plurality of delayed observations at run time, wherein the stochastic decision process is executed by an agent; find an agent policy according to a measure of an expected total reward of a plurality of agent actions within the stochastic decision process over a given time horizon; and bound an error of the agent policy according to an observation delay of the received delayed observations. 15. The decision engine of claim 14, wherein the agent policy comprises: an agent belief state updated upon receiving each of the delayed observation; and a next agent action extracted according to the expected total reward of a remaining decision epoch given the agent belief state. 16. The decision engine of claim 15, wherein the agent belief state is updated using the delayed observation, a history of observations at runtime and a history of agent actions at runtime. 17. The decision engine of claim 14, wherein the program instructions are executable by the processor to cause the decision engine to: store a history of observations at runtime; store a history of agent actions at runtime; and recall the history of observations at runtime and the history of agent actions at runtime to find the agent policy. 18. The decision engine of claim 14, wherein the expected total reward comprises all rewards that the agent receives when a given agent action is executed in a current agent belief state. 19. The decision engine of claim 14, wherein the observation delay of the received delayed observations is a maximum observation delay among the received delayed observations that is considered by the model.
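As background for the belief-update claims, the sketch below shows one simplified way a discrete belief state can absorb an observation that arrives late, by re-applying the stored action history. It is a generic POMDP-style illustration under assumed transition and observation matrices, not the patent's bounded-error algorithm, and it ignores any other observations received in the meantime.

    import numpy as np

    def update_belief(belief, action, observation, T, O):
        # belief: P(s) over current states, shape (S,)
        # T[a][s, s']: P(s' | s, a);  O[a][s', o]: P(o | s', a)
        predicted = belief @ T[action]                 # predict the next-state distribution
        weighted = predicted * O[action][:, observation]
        return weighted / weighted.sum()               # condition on the observation

    def absorb_delayed_observation(cached_belief, actions_since, late_obs, T, O):
        # cached_belief: belief at the step the late observation refers to
        # actions_since: actions taken from that step up to the current epoch
        belief = update_belief(cached_belief, actions_since[0], late_obs, T, O)
        for a in actions_since[1:]:                    # predict-only steps (no observation yet)
            belief = belief @ T[a]
        return belief

The next action would then be chosen to maximize the expected total reward of the remaining decision epochs given the updated belief, as the claims describe.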
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A method for determining a policy that considers observations delayed at runtime is disclosed. The method includes constructing a model of a stochastic decision process that receives delayed observations at run time, wherein the stochastic decision process is executed by an agent, finding an agent policy according to a measure of an expected total reward of a plurality of agent actions within the stochastic decision process over a given time horizon, and bounding an error of the agent policy according to an observation delay of the received delayed observations.
G06N7005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A method for determining a policy that considers observations delayed at runtime is disclosed. The method includes constructing a model of a stochastic decision process that receives delayed observations at run time, wherein the stochastic decision process is executed by an agent, finding an agent policy according to a measure of an expected total reward of a plurality of agent actions within the stochastic decision process over a given time horizon, and bounding an error of the agent policy according to an observation delay of the received delayed observations.
A virtual assistant platform (“VAP”) provides a self-supporting and expandable architectural framework for virtual assistants (“VAs”) to communicate with a user via an electronic device. VAs may communicate with other devices, software programs, and other VAs. VAs may include intelligent agents configured to perform particular tasks. The VAP may include an execution environment that provides an interface between the VA and the electronic device and a framework of services for the intelligent agents. A VA may participate in or coordinate a group of VAs in which knowledge and tasks can be shared and cooperatively executed. The execution environment may include an agent store for registering agents for use on the VAP, storing agent code and data, and distributing agents to requesting users. Through the agent store, new VAs and agents may be distributed to users to expand their use of the VAP.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for providing electronic assistance to a user, the method comprising: providing a virtual assistant platform configured to share data across a plurality of virtual assistants; activating a first agent in one of the virtual assistants, the first agent located on a device client installed on a device of the user, and the first agent being configured to perform one or more tasks; and activating a second agent in one of the virtual assistants, the second agent located on the device of the user or on another device and facilitating communication with the first agent. 2. The method of claim 1, further comprising configuring the first agent and second agent to access one or more shared data stores, the shared data stores providing the virtual assistants with shared capabilities. 3. The method of claim 2, wherein one or more of the shared data stores comprises a world ontology understood by all of the virtual assistants. 4. The method of claim 1, wherein the first agent comprises a main agent configured to manage tasks of one or more other agents on the device, the method further comprising activating at least one of the other agents on the device. 5. The method of claim 4, wherein at least one of the other agents on the device is an adapter agent configured to communicate with an object. 6. The method of claim 4, further comprising providing an agent bus configured to deliver only communications between the main agent and the other agents on the device. 7. The method of claim 1, further comprising providing on the virtual assistant platform an agent store from which the user may obtain at least one additional agent. 8. The method of claim 7, further comprising registering each of the additional agents for use on the user's device. 9. A virtual assistant platform (VAP) operating on one or more computer servers and on one or more devices, the VAP comprising: a plurality of virtual assistants, each of the virtual assistants comprising at least one agent; and one or more shared data stores accessible by each of the virtual assistants, the shared data stores providing the virtual assistants with shared capabilities. 10. The VAP of claim 9, wherein one or more of the shared data stores comprises a world ontology understood by all of the virtual assistants. 11. The VAP of claim 10, wherein the world ontology is included in an ontology hierarchy that further includes one or more domain ontologies within one or more of the data stores. 12. The VAP of claim 9, further comprising a group virtual assistant to which one or more of the virtual assistants subscribes, the group virtual assistant being configured to distribute information to each of the subscribing virtual assistants according to a status of the virtual assistant. 13. The VAP of claim 9, wherein one of the virtual assistants is an administrator virtual assistant configured to communicate with all of the other virtual assistants. 14. The VAP of claim 13, further comprising a virtual assistant bus configured to deliver only communications between the virtual assistants. 15. The VAP of claim 9, further comprising a device client installed on each of the devices on which the VAP operates, the device client modifying operations of the device so that one or more of the virtual assistants operate on the device. 16. 
The VAP of claim 15, wherein in each device, the virtual assistant operating on the device includes a main agent and a plurality of other agents, wherein the main agent communicates with the other agents and each of the other agents performs one or more tasks. 17. The VAP of claim 16, further comprising an agent bus on each device, the agent bus configured to deliver only communications between the main agent and the other agents on the device. 18. The VAP of claim 16, further comprising a device bus configured to deliver only communications between main agents of the devices on which one of the virtual assistants is operating. 19. The VAP of claim 9, further comprising an execution environment including a plurality of VAP-implementation services for configuring one or more of the VAs and one or more of the agents. 20. The VAP of claim 19, wherein the execution environment further includes an application programming interface for agents to access the VAP-implementation services.
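To make the agent and bus relationships in the claims concrete, here is a small, hypothetical Python sketch of a main agent routing a task to an adapter agent over a device-local agent bus. All class and method names are invented for illustration and do not correspond to any actual VAP implementation.

    class AgentBus:
        """Delivers messages only between the main agent and the other agents on the device."""
        def __init__(self):
            self._agents = {}

        def register(self, name, agent):
            self._agents[name] = agent

        def send(self, target, message):
            return self._agents[target].handle(message)

    class AdapterAgent:
        """Wraps an external object (e.g. a thermostat) behind a task interface."""
        def __init__(self, device):
            self.device = device

        def handle(self, message):
            if message["task"] == "set_temperature":
                return self.device.set_temperature(message["value"])

    class MainAgent:
        """Manages the tasks of the other agents on the device."""
        def __init__(self, bus):
            self.bus = bus

        def perform(self, task, target, **kwargs):
            return self.bus.send(target, {"task": task, **kwargs})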
REJECTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: A virtual assistant platform (“VAP”) provides a self-supporting and expandable architectural framework for virtual assistants (“VAs”) to communicate with a user via an electronic device. VAs may communicate with other devices, software programs, and other VAs. VAs may include intelligent agents configured to perform particular tasks. The VAP may include an execution environment that provides an interface between the VA and the electronic device and a framework of services for the intelligent agents. A VA may participate in or coordinate a group of VAs in which knowledge and tasks can be shared and cooperatively executed. The execution environment may include an agent store for registering agents for use on the VAP, storing agent code and data, and distributing agents to requesting users. Through the agent store, new VAs and agents may be distributed to users to expand their use of the VAP.
G06N504
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A virtual assistant platform (“VAP”) provides a self-supporting and expandable architectural framework for virtual assistants (“VAs”) to communicate with a user via an electronic device. VAs may communicate with other devices, software programs, and other VAs. VAs may include intelligent agents configured to perform particular tasks. The VAP may include an execution environment that provides an interface between the VA and the electronic device and a framework of services for the intelligent agents. A VA may participate in or coordinate a group of VAs in which knowledge and tasks can be shared and cooperatively executed. The execution environment may include an agent store for registering agents for use on the VAP, storing agent code and data, and distributing agents to requesting users. Through the agent store, new VAs and agents may be distributed to users to expand their use of the VAP.
There is provided an apparatus for forecasting water demand of a waste system using an automation system. The apparatus for estimating water demand includes a water demand estimation setting unit configured to collect user input data, a control unit configured to collect the record data and the user input data from the SCADA system, perform a learning process on each of a plurality of algorithm combination groups including at least one algorithm to select any one algorithm combination group, and input the record data and the user input data to the selected algorithm combination group to calculate water demand estimation data, a storage unit configured to store the record data collected from the SCADA system, store the user input data, and store the water demand estimation data, and a water demand estimation output unit configured to output the calculated water demand estimation data.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. An apparatus for estimating water demand, the apparatus comprising: a water demand estimation setting unit configured to collect user input data; a control unit configured to collect the record data and the user input data from the SCADA system, set an upper limit value and a lower limit value among the data collected, compare a value with the upper limit value and the lower limit value to extract a value (normal data) present within the limit value range, set the extracted normal data as water demand estimation data, perform a learning process on each of a plurality of algorithm combination groups including at least one algorithm to select any one algorithm combination group, and input the record data and the user input data to the selected algorithm combination group to calculate water demand estimation data; a storage unit configured to store the record data collected from the SCADA system, store the user input data, and store the water demand estimation data; and a water demand estimation output unit configured to output the calculated water demand estimation data. 2. The apparatus of claim 1, wherein the control unit includes: a record data collecting unit configured to collect the record data from the SCADA system; a record data processing unit configured to set an upper limit value and a lower limit value among the data collected, compare a value with the upper limit value and the lower limit value to extract a value (normal data) present within the limit value range, and set the extracted normal data as water demand estimation data by the record data collecting unit; an algorithm combining unit configured to generate a plurality of algorithm combination groups including at least one algorithm and select any one algorithm combination group among the generated algorithm combination groups; and a water demand estimation result calculating unit configured to calculate water demand estimation data by inputting the record data and the user data to the selected algorithm combination group. 3. The apparatus of claim 2, wherein the algorithm combining unit sets the number of algorithms to be included in at least one algorithm combination group, performs a learning process on each of the algorithm combination groups including algorithms combined according to the number, and extracts any one algorithm combination group. 4. The apparatus of claim 3, wherein the algorithm combining unit gives a weighted value of each algorithm according to the learning process performed on each of the combined algorithm groups, and extracts an algorithm combination having the uppermost weight value or an algorithm combination having the smallest error value with respect to reference estimation result data. 5. The apparatus of claim 2, wherein the control unit further includes a calculation result verifying unit configured to verify the result calculated by the water demand estimation result calculating unit. 6. The apparatus of claim 2, wherein the control unit further includes: an error compensating unit configured to give a weighted value to water demand estimation data within a threshold range period from the current time, among the water demand estimation data calculated by the water demand estimation result calculating unit, and compensate an error with respect to hourly estimation result data. 7. 
The apparatus of claim 1, wherein the storage unit includes: a record storage unit configured to store record data collected from the SCADA server; a setting storage unit configured to store user input data; and an estimation data storage unit configured to store calculated water demand estimation data under the control of the control unit. 8. The apparatus of claim 7, wherein the record storage unit collects record data periodically from the SCADA server or outputs data collected in real time to the control unit periodically.
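The control-unit claims combine two steps that are easy to illustrate: filtering record data to values within configured limits, and choosing one algorithm combination group by its error against reference estimation data. The sketch below assumes scikit-learn-style estimator objects with fit/predict methods and blends each combination's forecasts with a plain average, whereas the claims describe per-algorithm weights; it is an illustration, not the patented apparatus.

    import numpy as np

    def filter_normal(records, lower, upper):
        """Keep only readings inside [lower, upper]; out-of-range values are dropped."""
        records = np.asarray(records, dtype=float)
        return records[(records >= lower) & (records <= upper)]

    def select_combination(combinations, train_x, train_y, valid_x, reference_y):
        """Return the algorithm combination group with the smallest error
        against the reference estimation data."""
        best, best_err = None, np.inf
        for combo in combinations:                    # each combo: list of estimators
            forecasts = []
            for model in combo:
                model.fit(train_x, train_y)
                forecasts.append(model.predict(valid_x))
            blended = np.mean(forecasts, axis=0)      # unweighted blend for simplicity
            err = np.mean((blended - reference_y) ** 2)
            if err < best_err:
                best, best_err = combo, err
        return best, best_err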
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: There is provided an apparatus for forecasting water demand of a waste system using an automation system. The apparatus for estimating water demand includes a water demand estimation setting unit configured to collect user input data, a control unit configured to collect the record data and the user input data from the SCADA system, perform a learning process on each of a plurality of algorithm combination groups including at least one algorithm to select any one algorithm combination group, and input the record data and the user input data to the selected algorithm combination group to calculate water demand estimation data, a storage unit configured to store the record data collected from the SCADA system, store the user input data, and store the water demand estimation data, and a water demand estimation output unit configured to output the calculated water demand estimation data.
G06N5048
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: There is provided an apparatus for forecasting water demand of a waste system using an automation system. The apparatus for estimating water demand includes a water demand estimation setting unit configured to collect user input data, a control unit configured to collect the record data and the user input data from the SCADA system, perform a learning process on each of a plurality of algorithm combination groups including at least one algorithm to select any one algorithm combination group, and input the record data and the user input data to the selected algorithm combination group to calculate water demand estimation data, a storage unit configured to store the record data collected from the SCADA system, store the user input data, and store the water demand estimation data, and a water demand estimation output unit configured to output the calculated water demand estimation data.
A mechanism is provided in a data processing system for tailoring question answering system output based on user expertise. The mechanism receives an input question from a questioning user and determines a set of features associated with text of the input question. The mechanism determines an expertise level of the questioning user based on the set of features associated with the text of the input question using a trained expertise model. The mechanism generates one or more candidate answers for the input question and tailors output of the one or more candidate answers based on the expertise level of the questioning user.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method, in a data processing system, for tailoring question answering system output based on user expertise, the method comprising: receiving, by the data processing system, an input question from a questioning user; determining, by the data processing system, a set of features associated with text of the input question; determining, by the data processing system, an expertise level of the questioning user based on the set of features associated with the text of the input question using a trained expertise model; generating, by the data processing system, one or more candidate answers for the input question; and tailoring, by the data processing system, output of the one or more candidate answers based on the expertise level of the questioning user. 2. The method of claim 1, wherein determining the set of features associated with the text of the input question comprises: extracting a plurality of features from the text of the input question using an annotation engine pipeline in the data processing system. 3. The method of claim 2, wherein the plurality of features comprises at least one of content words formed into unigram/ngram lexical features, social hedges, specificity of words, specific experience level indicators, or references to external expertise. 4. The method of claim 2, wherein determining the set of features associated with the text of the input question further comprises: obtaining features from the questioning user's posting history within a collection of question and answer postings. 5. The method of claim 4, wherein determining the set of features associated with the text of the input question further comprises: obtaining features from responses by other users within the collection of question and answer postings. 6. The method of claim 1, wherein the trained expertise model comprises a question partition trained using questions in a collection of question and answer postings and an answer partition trained using answers in the collection of question and answer postings and wherein determining the expertise level of the questioning user comprises determining the expertise level of the questioning user using the question partition of the trained expertise model. 7. The method of claim 1, wherein generating the one or more candidate answers for the input question comprises generating the one or more candidate answers from a collection of question and answer postings. 8. The method of claim 7, wherein tailoring output of the one or more candidate answers comprises: determining an expertise level of a contributing user providing evidence for a given candidate answer, comprising: obtaining features from the contributing user's posting history within the collection of question and answer postings; and obtaining features from responses by other users within the collection of question and answer postings. 9. The method of claim 1, wherein tailoring output of the one or more candidate answers comprises: determining an expertise level of each of the one or more candidate answers using the trained expertise model; and ranking the one or more candidate answers based on the expertise levels of the one or more candidate answers. 10. The method of claim 1, wherein tailoring output of the one or more candidate answers comprises: selecting only candidate answers that have a high confidence score and match the expertise level of the questioning user. 11. 
The method of claim 1, wherein training the trained expertise model comprises: harvesting a collection of question and answer postings; labeling questions and answers in the collection with predetermined expertise levels; determining a set of features associated with text of each question and answer; and training a machine learning model based on the predetermined expertise levels and the sets of features associated with the text of the questions and answers to form the trained expertise model. 12. The method of claim 11, wherein determining the set of features associated with text of a given question or answer comprises: extracting a plurality of features from the text of the given question or answer using an annotation engine pipeline; obtaining features from posting history of a contributing user associated with the given question or answer; and obtaining features from responses by other users within the collection of question and answer postings. 13. A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing device, causes the computing device to: receive an input question from a questioning user; determine a set of features associated with text of the input question; determine an expertise level of the questioning user based on the set of features associated with the text of the input question using a trained expertise model; generate one or more candidate answers for the input question; and tailor output of the one or more candidate answers based on the expertise level of the questioning user. 14. The computer program product of claim 13, wherein determining the set of features associated with the text of the input question comprises: extracting a plurality of features from the text of the input question using an annotation engine pipeline. 15. The computer program product of claim 13, wherein tailoring output of the one or more candidate answers comprises: determining an expertise level of each of the one or more candidate answers using the trained expertise model; and ranking the one or more candidate answers based on the expertise levels of the one or more candidate answers. 16. The computer program product of claim 13, wherein tailoring output of the one or more candidate answers comprises: selecting only candidate answers that have a high confidence score and match the expertise level of the questioning user. 17. The computer program product of claim 13, wherein training the trained expertise model comprises: harvesting a collection of question and answer postings; labeling questions and answers in the collection with predetermined expertise levels; determining a set of features associated with text of each question and answer; and training a machine learning model based on the predetermined expertise levels and the sets of features associated with the text of the questions and answers to form the trained expertise model. 18. 
An apparatus comprising: a processor; and a memory coupled to the processor, wherein the memory comprises instructions which, when executed by the processor, cause the processor to: receive an input question from a questioning user; determine a set of features associated with text of the input question; determine an expertise level of the questioning user based on the set of features associated with the text of the input question using a trained expertise model; generate one or more candidate answers for the input question; and tailor output of the one or more candidate answers based on the expertise level of the questioning user. 19. The apparatus of claim 18, wherein tailoring output of the one or more candidate answers comprises: determining an expertise level of each of the one or more candidate answers using the trained expertise model; and ranking the one or more candidate answers based on the expertise levels of the one or more candidate answers. 20. The apparatus of claim 18, wherein training the trained expertise model comprises: harvesting a collection of question and answer postings; labeling questions and answers in the collection with predetermined expertise levels; determining a set of features associated with text of each question and answer; and training a machine learning model based on the predetermined expertise levels and the sets of features associated with the text of the questions and answers to form the trained expertise model.
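A toy version of the expertise model can be trained from labeled question text alone. The sketch below uses scikit-learn and only unigram/bigram lexical features as a stand-in for the much richer feature set in the claims (annotation-pipeline features, posting history, responses by other users); the two example questions and their labels are fabricated.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    questions = [
        "how do i plug in my router",
        "what is the impact of an mtu mismatch on ospf adjacency flapping",
    ]
    levels = ["novice", "expert"]

    # Lexical n-grams stand in for the full annotation-pipeline feature set.
    expertise_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    expertise_model.fit(questions, levels)

    def tailor(candidates, input_question):
        """Keep only candidate answers whose expertise label matches the asker's level."""
        user_level = expertise_model.predict([input_question])[0]
        return [answer for answer, level in candidates if level == user_level]

Ranking candidates by how closely their expertise level matches the questioner's, rather than filtering them outright, would be an equally valid way to tailor the output described in the claims.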
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A mechanism is provided in a data processing system for tailoring question answering system output based on user expertise. The mechanism receives an input question from a questioning user and determines a set of features associated with text of the input question. The mechanism determines an expertise level of the questioning user based on the set of features associated with the text of the input question using a trained expertise model. The mechanism generates one or more candidate answers for the input question and tailors output of the one or more candidate answers based on the expertise level of the questioning user.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A mechanism is provided in a data processing system for tailoring question answering system output based on user expertise. The mechanism receives an input question from a questioning user and determines a set of features associated with text of the input question. The mechanism determines an expertise level of the questioning user based on the set of features associated with the text of the input question using a trained expertise model. The mechanism generates one or more candidate answers for the input question and tailors output of the one or more candidate answers based on the expertise level of the questioning user.
The technology disclosed relates to evolving a deep neural network based solution to a provided problem. In particular, it relates to providing an improved cooperative evolution technique for deep neural network structures. It includes creating blueprint structures that include a plurality of supermodule structures. The supermodule structures include a plurality of modules. The modules are neural networks. A first loop of evolution executes at the blueprint level. A second loop of evolution executes at the supermodule level. Further, multiple mini-loops of evolution execute at each of the subpopulations of the supermodules. The first loop, the second loop, and the mini-loops execute in parallel.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-implemented system for cooperatively evolving a deep neural network structure that solves a provided problem when trained on a source of training data containing labeled examples of data sets for the problem, the system comprising: a memory storing a candidate supermodule genome database having a pool of candidate supermodules, each of the candidate supermodules identifying respective values for a plurality of supermodule hyperparameters of the supermodule, the supermodule hyperparameters including supermodule global topology hyperparameters identifying a plurality of modules in the candidate supermodule and module interconnects among the modules in the candidate supermodule, at least one of the modules in each candidate supermodule including a neural network, each candidate supermodule having associated therewith storage for an indication of a respective supermodule fitness value; the memory further storing a blueprint genome database having a pool of candidate blueprints for solving the provided problem, each of the candidate blueprints identifying respective values for a plurality of blueprint topology hyperparameters of the blueprint, the blueprint topology hyperparameters including a number of included supermodules, and interconnects among the included supermodules, each candidate blueprint having associated therewith storage for an indication of a respective blueprint fitness value; an instantiation module which instantiates each of at least a training subset of the blueprints in the pool of candidate blueprints, at least one of the blueprints being instantiated more than once, each instantiation of a candidate blueprint including identifying for the instantiation a supermodule from the pool of candidate supermodules for each of the supermodules identified in the blueprint; a training module which trains neural networks on training data from the source of training data, the neural networks are modules which are identified by supermodules in each of the blueprint instantiations, the training includes modifying submodules of the neural network modules in dependence upon back-propagation algorithms; an evaluation module which, for each given one of the blueprints in the training subset of blueprints: evaluates each instantiation of the given blueprint on validation data, to develop a blueprint instantiation fitness value associated with each of the blueprint instantiations, updates fitness values of all supermodules identified for inclusion in each instantiation of the given blueprint in dependence upon the fitness value of the blueprint instantiation, and updates a blueprint fitness value for the given blueprint in dependence upon the fitness values for the instantiations of the blueprint; a competition module which: selects blueprints for discarding from the pool of candidate blueprints in dependence upon their updated fitness values, and selects supermodules from the candidate supermodule pool for discarding in dependence upon their updated fitness values; a procreation module which: forms new supermodules in dependence upon a respective set of at least one parent supermodule from the pool of candidate supermodules, and forms new blueprints in dependence upon a respective set of at least one parent blueprint from the pool of candidate blueprints; and a solution harvesting module providing for deployment a selected one of the blueprints remaining in the candidate blueprint pool, instantiated with 
supermodules selected from the candidate supermodule pool. 2. The system of claim 1, wherein each supermodule in the pool of candidate supermodules further belongs to a subpopulation of the supermodules, wherein the blueprint topology hyperparameters of blueprints in the pool of candidate blueprints also identify a supermodule subpopulation for each included supermodule, wherein the instantiation module selects, for each supermodule identified in the blueprint, a supermodule from the subpopulation of supermodules which is identified by the blueprint, wherein the competition module, in selecting supermodules from the candidate supermodule pool for discarding in dependence upon their updated fitness values, does so in further dependence upon the subpopulation to which the supermodules belong, wherein the procreation module, in forming new supermodules in dependence upon a respective set of at least one parent supermodule from the pool of candidate supermodules, forms the new supermodules only in dependence upon parent supermodules which belong to the same subpopulation, and wherein the system is further configured to comprise a re-speciation module which re-speciates the supermodules in the pool of candidate supermodules into updated subpopulations. 3. The system of claim 2, wherein the competition module selects supermodules for discarding from the subpopulation with a same subpopulation identifier (ID). 4. The system of claim 1, further configured to comprise a control module which repeatedly invokes, for each of a plurality of generations, the training module, the evaluation module, the competition module, and the procreation module. 5. The system of claim 1, wherein the validation data is data previously unseen during training of a particular supermodule. 6. The system of claim 1, wherein a particular supermodule is identified in a plurality of blueprint instantiations, and wherein the evaluation module updates a supermodule fitness value associated with the particular supermodule in dependence of respective blueprint instantiation fitness values associated with each of the blueprint instantiations in the plurality. 7. The system of claim 6, wherein the supermodule fitness value is an average of the respective blueprint instantiation fitness values. 8. The system of claim 1, wherein the evaluation module assigns a supermodule fitness value to a particular supermodule if the supermodule fitness value is previously undetermined. 9. The system of claim 1, wherein the evaluation module, for a particular supermodule, merges a current supermodule fitness value with a previously determined supermodule fitness. 10. The system of claim 9, wherein the merging includes averaging. 11. The system of claim 1, wherein the evaluation module updates the blueprint fitness value for the given blueprint by averaging the fitness values for the instantiations of the blueprint. 12. The system of claim 1, wherein the supermodule hyperparameters further comprise module topology hyperparameters that identify a plurality of submodules of the neural network and interconnections among the submodules. 13. The system of claim 12, wherein crossover and mutation of the module topology hyperparameters during procreation includes modifying a number of submodules and/or interconnections among them. 14. 
A computer-implemented method of cooperatively evolving a deep neural network structure that solves a provided problem when trained on a source of training data containing labeled examples of data sets for the problem, the method including: storing a candidate supermodule genome database having a pool of candidate supermodules, each of the candidate supermodules identifying respective values for a plurality of supermodule hyperparameters of the supermodule, the supermodule hyperparameters including supermodule global topology hyperparameters identifying a plurality of modules in the candidate supermodule and module interconnects among the modules in the candidate supermodule, at least one of the modules in each candidate supermodule including a neural network, each candidate supermodule having associated therewith storage for an indication of a respective supermodule fitness value; storing a blueprint genome database having a pool of candidate blueprints for solving the provided problem, each of the candidate blueprints identifying respective values for a plurality of blueprint topology hyperparameters of the blueprint, the blueprint topology hyperparameters including a number of included supermodules, and interconnects among the included supermodules, each candidate blueprint having associated therewith storage for an indication of a respective blueprint fitness value; instantiating each of at least a training subset of the blueprints in the pool of candidate blueprints, at least one of the blueprints being instantiated more than once, each instantiation of a candidate blueprint including identifying for the instantiation a supermodule from the pool of candidate supermodules for each of the supermodules identified in the blueprint; training neural networks on training data from the source of training data, the neural networks are modules which are identified by supermodules in each of the blueprint instantiations, the training further includes modifying submodules of the neural networks in dependence upon back-propagation algorithms; for each given one of the blueprints in the training subset of blueprints, evaluating each instantiation of the given blueprint on validation data, to develop a blueprint instantiation fitness value associated with each of the blueprint instantiations, updating fitness values of all supermodules identified for inclusion in each instantiation of the given blueprint in dependence upon the fitness value of the blueprint instantiation, and updating a blueprint fitness value for the given blueprint in dependence upon the fitness values for the instantiations of the blueprint; selecting blueprints for discarding from the pool of candidate blueprints in dependence upon their updated fitness values; selecting supermodules from the candidate supermodule pool for discarding in dependence upon their updated fitness values; forming new supermodules in dependence upon a respective set of at least one parent supermodule from the pool of candidate supermodules; forming new blueprints in dependence upon a respective set of at least one parent blueprint from the pool of candidate blueprints; and deploying a selected one of the blueprints remaining in the candidate blueprint pool, instantiated with supermodules selected from the candidate supermodule pool. 15. 
The computer-implemented method of claim 14, wherein each supermodule in the pool of candidate supermodules further belongs to a subpopulation of the supermodules, wherein the blueprint topology hyperparameters of blueprints in the pool of candidate blueprints also identify a supermodule subpopulation for each included supermodule, wherein the instantiating includes selecting, for each supermodule identified in the blueprint, a supermodule from the subpopulation of supermodules which is identified by the blueprint, wherein, in selecting supermodules from the candidate supermodule pool for discarding in dependence upon their updated fitness values, doing so in further dependence upon the subpopulation to which the supermodules belong, wherein, in forming new supermodules in dependence upon a respective set of at least one parent supermodule from the pool of candidate supermodules, forming the new supermodules only in dependence upon parent supermodules which belong to the same subpopulation, and wherein re-speciating the supermodules in the pool of candidate supermodules into updated subpopulations. 16. The computer-implemented method of claim 14, further including repeatedly performing, for each of a plurality of generations, the training, the evaluating, the updating, the selecting, the forming, and the deploying. 17. The computer-implemented method of claim 14, wherein a particular supermodule is identified in a plurality of blueprint instantiations, and further including updating a supermodule fitness value associated with the particular supermodule in dependence of respective blueprint instantiation fitness values associated with each of the blueprint instantiations in the plurality. 18. A non-transitory computer readable storage medium impressed with computer program instructions to cooperatively evolve a deep neural network structure that solves a provided problem when trained on a source of training data containing labeled examples of data sets for the problem, the instructions, when executed on a processor, implement a method comprising: storing a candidate supermodule genome database having a pool of candidate supermodules, each of the candidate supermodules identifying respective values for a plurality of supermodule hyperparameters of the supermodule, the supermodule hyperparameters including supermodule global topology hyperparameters identifying a plurality of modules in the candidate supermodule and module interconnects among the modules in the candidate supermodule, at least one of the modules in each candidate supermodule including a neural network, each candidate supermodule having associated therewith storage for an indication of a respective supermodule fitness value; storing a blueprint genome database having a pool of candidate blueprints for solving the provided problem, each of the candidate blueprints identifying respective values for a plurality of blueprint topology hyperparameters of the blueprint, the blueprint topology hyperparameters including a number of included supermodules, and interconnects among the included supermodules, each candidate blueprint having associated therewith storage for an indication of a respective blueprint fitness value; instantiating each of at least a training subset of the blueprints in the pool of candidate blueprints, at least one of the blueprints being instantiated more than once, each instantiation of a candidate blueprint including identifying for the instantiation a supermodule from the pool of candidate supermodules for each 
of the supermodules identified in the blueprint; training neural networks on training data from the source of training data, the neural networks are modules which are identified by supermodules in each of the blueprint instantiations, the training further includes modifying submodules of the neural networks in dependence upon back-propagation algorithms; for each given one of the blueprints in the training subset of blueprints, evaluating each instantiation of the given blueprint on validation data, to develop a blueprint instantiation fitness value associated with each of the blueprint instantiations, updating fitness values of all supermodules identified for inclusion in each instantiation of the given blueprint in dependence upon the fitness value of the blueprint instantiation, and updating a blueprint fitness value for the given blueprint in dependence upon the fitness values for the instantiations of the blueprint; selecting blueprints for discarding from the pool of candidate blueprints in dependence upon their updated fitness values; selecting supermodules from the candidate supermodule pool for discarding in dependence upon their updated fitness values; forming new supermodules in dependence upon a respective set of at least one parent supermodule from the pool of candidate supermodules; forming new blueprints in dependence upon a respective set of at least one parent blueprint from the pool of candidate blueprints; and deploying a selected one of the blueprints remaining in the candidate blueprint pool, instantiated with supermodules selected from the candidate supermodule pool. 19. The non-transitory computer readable storage medium of claim 18, wherein each supermodule in the pool of candidate supermodules further belongs to a subpopulation of the supermodules, wherein the blueprint topology hyperparameters of blueprints in the pool of candidate blueprints also identify a supermodule subpopulation for each included supermodule, wherein the instantiating includes selecting, for each supermodule identified in the blueprint, a supermodule from the subpopulation of supermodules which is identified by the blueprint, wherein, in selecting supermodules from the candidate supermodule pool for discarding in dependence upon their updated fitness values, doing so in further dependence upon the subpopulation to which the supermodules belong, wherein, in forming new supermodules in dependence upon a respective set of at least one parent supermodule from the pool of candidate supermodules, forming the new supermodules only in dependence upon parent supermodules which belong to the same subpopulation, and wherein re-speciating the supermodules in the pool of candidate supermodules into updated subpopulations. 20. The non-transitory computer readable storage medium of claim 18, implementing the method further comprising repeatedly performing, for each of a plurality of generations, the training, the evaluating, the updating, the selecting, the forming, and the deploying.
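The bookkeeping at the heart of the evaluation and competition modules, averaging blueprint-instantiation fitness back onto blueprints and onto every supermodule each instantiation used and then discarding the weakest candidates, can be sketched briefly. The code below is a simplified illustration under assumed dictionary layouts; procreation (crossover and mutation), subpopulations, and re-speciation are omitted.

    from collections import defaultdict

    def assign_fitness(blueprint_results, supermodule_usage):
        """blueprint_results: {blueprint_id: [fitness of each instantiation]}
        supermodule_usage: {blueprint_id: [[supermodule ids used by each instantiation], ...]}"""
        blueprint_fitness, super_scores = {}, defaultdict(list)
        for bp, fits in blueprint_results.items():
            blueprint_fitness[bp] = sum(fits) / len(fits)        # average over instantiations
            for fit, used in zip(fits, supermodule_usage[bp]):
                for sm in used:
                    super_scores[sm].append(fit)                 # credit every supermodule used
        supermodule_fitness = {sm: sum(v) / len(v) for sm, v in super_scores.items()}
        return blueprint_fitness, supermodule_fitness

    def cull(pool, fitness, survival_rate=0.5):
        """Discard the lowest-fitness fraction of a pool (blueprints or supermodules)."""
        ranked = sorted(pool, key=lambda x: fitness[x], reverse=True)
        return ranked[: max(1, int(len(ranked) * survival_rate))]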
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: The technology disclosed relates to evolving a deep neural network based solution to a provided problem. In particular, it relates to providing an improved cooperative evolution technique for deep neural network structures. It includes creating blueprint structures that include a plurality of supermodule structures. The supermodule structures include a plurality of modules. The modules are neural networks. A first loop of evolution executes at the blueprint level. A second loop of evolution executes at the supermodule level. Further, multiple mini-loops of evolution execute at each of the subpopulations of the supermodules. The first loop, the second loop, and the mini-loops execute in parallel.
G06N3086
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: The technology disclosed relates to evolving a deep neural network based solution to a provided problem. In particular, it relates to providing an improved cooperative evolution technique for deep neural network structures. It includes creating blueprint structures that include a plurality of supermodule structures. The supermodule structures include a plurality of modules. The modules are neural networks. A first loop of evolution executes at the blueprint level. A second loop of evolution executes at the supermodule level. Further, multiple mini-loops of evolution execute at each of the subpopulations of the supermodules. The first loop, the second loop, and the mini-loops execute in parallel.
Data is received that includes values that correspond to a plurality of variables. A score is then generated based on the received data and using a boosted ensemble of segmented scorecard models. The boosted ensemble of segmented scorecard models includes two or more segmented scorecard models. Subsequently, data including the score can be provided (e.g., displayed, transmitted, loaded, stored, etc.). Related apparatus, systems, techniques and articles are also described.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method comprising: receiving data comprising values corresponding to a plurality of variables; generating a score based on the received data and using a boosted ensemble of segmented scorecard models, the boosted ensemble of segmented scorecard models comprising two or more segmented scorecard models; and providing data comprising the score. 2. A method as in claim 1, wherein the score f is calculated by the boosted ensemble of segmented scorecard models based on: $f = \beta \cdot s + w_0 + \sum_{i=1}^{I} f_i(x_i)$, wherein s is a score derived from a previous ensemble, β is a shrinking parameter; and wherein the enhanced scorecard model optimizes β and predictor bin weights so that the score f is better than the score s. 3. A method as in claim 1, wherein the providing data comprises at least one of: displaying the score, transmitting data comprising the score to a remote computing system, loading data comprising the score into memory, or storing data comprising the score. 4. A method as in claim 1, wherein at least one of the receiving, generating, and providing is implemented by at least one data processor forming part of at least one computing system. 5. A method as in claim 1, wherein the boosted ensemble of segmented scorecard models is generated by: training a first segmented scorecard model; identifying or generating a second segmented scorecard model that provides an enhanced score relative to the first segmented scorecard model; enumerating split variables and split values in the second segmented scorecard model using a segmentation search algorithm; and forming the boosted ensemble of segmented scorecard models based on both the first segmented scorecard model and the second segmented scorecard model. 6. A non-transitory computer program product storing instructions which, when executed by at least one data processor forming part of at least one computing system, results in operations comprising: receiving data comprising values corresponding to a plurality of variables; generating a score based on the received data and using a boosted ensemble of segmented scorecard models, the boosted ensemble of segmented scorecard models comprising two or more segmented scorecard models; and providing data comprising the score. 7. A computer program product as in claim 6, wherein the score f is calculated by the boosted ensemble of segmented scorecard models based on: $f = \beta \cdot s + w_0 + \sum_{i=1}^{I} f_i(x_i)$, wherein s is a score derived from a previous ensemble, β is a shrinking parameter; and wherein the enhanced scorecard model optimizes β and predictor bin weights so that the score f is better than the score s. 8. A computer program product as in claim 6, wherein the providing data comprises at least one of: displaying the score, transmitting data comprising the score to a remote computing system, loading data comprising the score into memory, or storing data comprising the score. 9. 
A computer program product as in claim 6, wherein the boosted ensemble of segmented scorecard models is generated by: training a first segmented scorecard model; identifying or generating a second segmented scorecard model that provides an enhanced score relative to the first segmented scorecard model; enumerating split variables and split values in the second segmented scorecard model using a segmentation search algorithm; and forming the boosted ensemble of segmented scorecard models based on both the first segmented scorecard model and the second segmented scorecard model. 10. A method comprising: training a first segmented scorecard model; identifying or generating a second segmented scorecard model that provides an enhanced score relative to the first segmented scorecard model; enumerating split variables and split values in the second segmented scorecard model using a segmentation search algorithm; and forming a boosted ensemble of scorecard models based on both the first model and the second segmented scorecard model. 11. A method as in claim 10, wherein the first segmented scorecard is trained using at least one of a scorecard model, a regression, or a neural network. 12. A method as in claim 11, wherein at least one of the training, identifying, enumerating, and forming is implemented by at least one data processor forming part of at least one computing system. 13. A method as in claim 10, further comprising: enabling local or remote access to the boosted ensemble of segmented scorecard models to enable scores to be generated.
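To show how the claimed score composes stage by stage, the sketch below evaluates f = β·s + w0 + Σ f_i(x_i) with each f_i implemented as a lookup of the bin weight for variable i. It assumes pre-trained bin edges and weights and folds all stages of the ensemble into one loop; segmentation (routing a record to a segment-specific scorecard) is omitted for brevity, so this is an illustration rather than the patented system.

    def scorecard_score(x, bins, weights, intercept=0.0):
        """w0 + sum_i f_i(x_i): each f_i returns the weight of the bin x_i falls in."""
        total = intercept
        for i, value in enumerate(x):
            for (low, high), w in zip(bins[i], weights[i]):
                if low <= value < high:
                    total += w
                    break
        return total

    def boosted_score(x, stages, betas):
        """Apply f = beta * s + w0 + sum_i f_i(x_i) stage by stage, where s is
        the score of the previous ensemble (0 for the first stage)."""
        s = 0.0
        for (bins, weights, intercept), beta in zip(stages, betas):
            s = beta * s + scorecard_score(x, bins, weights, intercept)
        return s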
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Data is received that includes values that correspond to a plurality of variables. A score is then generated based on the received data and using a boosted ensemble of segmented scorecard models. The boosted ensemble of segmented scorecard models includes two or more segmented scorecard models. Subsequently, data including the score can be provided (e.g., displayed, transmitted, loaded, stored, etc.). Related apparatus, systems, techniques and articles are also described.
G06N5048
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Data is received that includes values that correspond to a plurality of variables. A score is then generated based on the received data and using a boosted ensemble of segmented scorecard models. The boosted ensemble of segmented scorecard models includes two or more segmented scorecard models. Subsequently, data including the score can be provided (e.g., displayed, transmitted, loaded, stored, etc.). Related apparatus, systems, techniques and articles are also described.