Provided in the present invention are a method and apparatus for labeling training samples. In the embodiments of the present invention, two mutually independent classifiers, i.e. a first classifier and a second classifier, are used to perform collaborative forecasting on M unlabeled first training samples to obtain some of the labeled first training samples, without the need for the participation of operators; the operation is simple and the accuracy is high, thereby improving the efficiency and reliability of labeling training samples.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. An apparatus for labeling training samples, comprising: one or more processors; and a memory having one or more programs stored thereon to be executed by said one or more processors, the programs including instruction for: inputting M unlabeled first training samples into a first classifier to obtain a first forecasting result of each first training sample in the M first training samples, M being an integer greater than or equal to 1; selecting N first training samples as second training samples from the M first training samples according to the first forecasting result of each first training sample, N being an integer greater than or equal to 1 and less than or equal to M; inputting the N second training samples into a second classifier to obtain a second forecasting result of each second training sample in the N second training samples, the first classifier and the second classifier being independent of each other; selecting P second training samples from said N second training samples according to the second forecasting result of each second training sample, P being an integer greater than or equal to 1 and less than or equal to N; selecting Q first training samples from the other first training samples according to first forecasting results of other first training samples in the M first training samples apart from the N second training samples and the value of P, Q being an integer greater than or equal to 1 and less than or equal to a difference of M−N; and generating P labeled second training samples according to second forecasting results of the P second training samples and each of the second training samples; and generating Q labeled first training samples according to first forecasting results of the Q first training samples and each of the first training samples therein. 2. 
The apparatus of claim 1, wherein the programs include instruction for: obtaining a first probability that said first training samples indicated by the first forecasting result are of a designated type; and selecting, from the M first training samples, the N first training samples of which the first probability satisfies a pre-set first training condition as the second training samples; or wherein the programs include instruction for: obtaining a second probability that the second training samples indicated by the second forecasting result are of the designated type; and selecting, from the N second training samples, the P second training samples of which the second probability satisfies a pre-set second training condition. 3. The apparatus of claim 2, wherein the designated type comprises a positive-example type, a counter-example type, or a combination thereof. 4. The apparatus of claim 2, wherein the first training condition comprises a probability that the first training samples indicated by the first forecasting result are of the designated type is greater than or equal to a first threshold value and is less than or equal to a second threshold value; or wherein the second training condition comprises a designated number with a minimum probability that the second training samples indicated by the second forecasting result are of the designated type. 5. The apparatus of claim 1, wherein the programs include instruction for: selecting, from the other first training samples, P first training samples of which a third probability that the first training samples indicated by the first forecasting result are of a designated type satisfies a pre-set third training condition; and selecting, from the other first training samples, Q−P first training samples of which the third probability satisfies a pre-set fourth training condition. 6. 
The apparatus of claim 5, wherein the third training condition comprises a designated number with a minimum probability that the first training samples indicated by the first forecasting result are of a designated type; or wherein the fourth training condition comprises a designated number with a maximum probability that the first training samples indicated by the first forecasting result are of a designated type. 7. The apparatus of claim 1, wherein a ratio of Q−P to 2P is a golden ratio. 8. A method for labeling training samples, comprising: selecting P second training samples from N second training samples selected from M first training samples based upon a first forecasting result of each of the M first training samples, the P second training samples being selected based upon a second forecasting result of each of the N second training samples; selecting Q first training samples from other first training samples based upon the first forecasting results of other first training samples in the M first training samples apart from the N second training samples and the value of P; generating P labeled second training samples based upon second forecasting results of the P second training samples and each of the second training samples therein; and generating Q labeled first training samples based upon first forecasting results of the Q first training samples and each of the first training samples therein. 9. The method of claim 8, wherein M is an integer greater than or equal to 1, wherein N is an integer between 1 and M inclusive, wherein P is an integer between 1 and N inclusive, and wherein Q is an integer between 1 and a difference between M and N inclusive. 10. The method of claim 8, further comprising inputting the M first training samples into a first classifier to obtain the first forecasting result of the each of the M first training samples. 11. 
The method of claim 10, wherein said inputting the M first training samples comprises inputting the M first training samples as unlabeled first training samples. 12. The method of claim 8, further comprising selecting N first training samples as the N second training samples from the M first training samples based upon the first forecasting result of the each of the M first training samples. 13. The method of claim 8, further comprising inputting the N second training samples into a second classifier to obtain the second forecasting result of the each of the N second training samples. 14. The method of claim 13, wherein the second classifier is independent from a first classifier used for obtaining the first forecasting result of the each of the M first training samples. 15. An apparatus for labeling training samples, comprising: one or more processors; and a memory having one or more programs stored thereon to be executed by said one or more processors, the programs including instruction for a labeling process including: instruction for selecting P second training samples from N second training samples selected from M first training samples based upon a first forecasting result of each of the M first training samples, the P second training samples being selected based upon a second forecasting result of each of the N second training samples; instruction for selecting Q first training samples from other first training samples based upon the first forecasting results of other first training samples in the M first training samples apart from the N second training samples and the value of P; instruction for generating P labeled second training samples based upon second forecasting results of the P second training samples and each of the second training samples therein; and instruction for generating Q labeled first training samples based upon first forecasting results of the Q first training samples and each of the first training samples therein. 16. 
The apparatus of claim 15, wherein the instruction for the labeling process includes instruction for inputting the M first training samples into a first classifier to obtain the first forecasting result of the each of the M first training samples, wherein the M first training samples are unlabeled. 17. The apparatus of claim 16, wherein the first classifier is based on a training sample set, wherein the instruction for the labeling process includes instruction for adding the P labeled second training samples and the Q labeled first training samples to the training sample set, and wherein the programs include instruction for repeatedly executing the instruction for the labeling process until a classification accuracy rate based on the first classifier is greater than or equal to a pre-set accuracy rate threshold value. 18. The apparatus of claim 16, wherein the first classifier is based on a training sample set, wherein the instruction for the labeling process includes instruction for adding the P labeled second training samples and the Q labeled first training samples to the training sample set, and wherein the programs include instruction for repeatedly executing the instruction for the labeling process until a number of the first training samples contained in the training sample set is greater than or equal to a pre-set number threshold value. 19. The apparatus of claim 15, wherein the instruction for the labeling process includes instruction for selecting N first training samples as the N second training samples from the M first training samples based upon the first forecasting result of the each of the M first training samples. 20. 
The apparatus of claim 15, wherein the instruction for the labeling process includes instruction for inputting the N second training samples into a second classifier to obtain the second forecasting result of the each of the N second training samples, wherein the second classifier is independent from a first classifier used for obtaining the first forecasting result of the each of the M first training samples.
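The two-classifier labeling procedure in the claims above follows a co-training pattern: classifier A scores all M unlabeled samples, N candidates go to classifier B, B confirms P of them, and Q more are taken from the remainder by A's own confidence. A minimal sketch of that flow, with all function names, the uncertainty-based selection rule, and thresholds being illustrative assumptions rather than anything specified in the patent:

```python
def collaborative_label(samples, clf_a, clf_b, n, p, q):
    """Label samples with two independent classifiers (co-training style sketch).

    clf_a and clf_b each map a sample to a probability of the positive class.
    """
    # First pass: score every unlabeled sample with classifier A.
    first_scores = [(s, clf_a(s)) for s in samples]
    # Send the N samples A is least certain about to the second classifier.
    ranked = sorted(first_scores, key=lambda t: abs(t[1] - 0.5))
    second, others = ranked[:n], ranked[n:]
    # Second pass: classifier B re-scores the N candidates; keep the P it is
    # most confident about and label them by B's prediction.
    second_scores = sorted(((s, clf_b(s)) for s, _ in second),
                           key=lambda t: abs(t[1] - 0.5), reverse=True)
    labeled_p = [(s, prob >= 0.5) for s, prob in second_scores[:p]]
    # From the remaining samples, label the Q that A scored most confidently.
    others = sorted(others, key=lambda t: abs(t[1] - 0.5), reverse=True)
    labeled_q = [(s, prob >= 0.5) for s, prob in others[:q]]
    return labeled_p + labeled_q
```

The claims only fix the counts M, N, P, Q and the independence of the two classifiers; the confidence-distance ranking here is one plausible choice of "training condition".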
PENDING
Please predict whether this patent is acceptable. PATENT ABSTRACT: Provided in the present invention are a method and apparatus for labeling training samples. In the embodiments of the present invention, two mutually independent classifiers, i.e. a first classifier and a second classifier, are used to perform collaborative forecasting on M unlabeled first training samples to obtain some of the labeled first training samples, without the need for the participation of operators; the operation is simple and the accuracy is high, thereby improving the efficiency and reliability of labeling training samples.
G06N99005
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: Provided in the present invention are a method and apparatus for labeling training samples. In the embodiments of the present invention, two mutually independent classifiers, i.e. a first classifier and a second classifier, are used to perform collaborative forecasting on M unlabeled first training samples to obtain some of the labeled first training samples, without the need for the participation of operators; the operation is simple and the accuracy is high, thereby improving the efficiency and reliability of labeling training samples.
An interoperable platform that provides a way to automatically compose and execute even complex workflows without writing code is described. A set of pre-built functional building blocks can be provided. The building blocks perform data transformation and machine learning functions. The functional blocks have few well known plug types. The building blocks can be composed to build complex compositions. Interoperability between data formats, metadata schema and interfaces to machine learning (ML) functions and trained machine learning models can be provided with no loss of information. A cloud runtime environment can be provided in which the composed workflows can be hosted as REST API to run in production.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system comprising: at least one processor: a memory connected to the at least one processor; and at least one program module providing interoperability between a first machine learning execution environment executing in a first programming language and a second execution environment executing in a second programming language wherein the interoperability enables existing tools written in the second programming language to be used in the first machine learning execution environment without reprogramming. 2. The system of claim 1, further comprising: at least one program module that translates a machine learning-specific schema associated with the first machine learning execution environment to a schema associated with the second execution environment without loss of information. 3. The system of claim 1, further comprising: at least one program module that translates a machine learning-specific schema associated with the second execution environment to a schema associated with the first machine learning execution environment without loss of information. 4. The system of claim 1, wherein the second programming language is one of R, JAVA or Python. 5. The system of claim 4, wherein an R factor data type associated with the second programming language is converted to a categorical data type associated with the first programming language. 6. The system of claim 4, wherein an R missing value type associated with the second programming language is converted to a missing value associated with the first programming language. 7. The system of claim 3, wherein machine-learning specific metadata is not lost when execution passes from one execution environment to a second execution environment. 8. The system of claim 3, wherein machine-learning schema comprises metadata about feature columns, labels, scores and weights. 9. 
A method comprising: providing interoperability between a first machine learning execution environment executing in a first programming language and a second execution environment executing in a second programming language wherein the interoperability enables existing tools written in the second programming language to be used in the first machine learning execution environment without reprogramming. 10. The method of claim 8, wherein the programming language of the second execution environment is R. 11. The method of claim 8, wherein the programming language of the second execution environment is Python. 12. The method of claim 8, wherein the programming language of the second execution environment is JAVA. 13. The method of claim 8, wherein the data types of the programming language of the second execution environment are converted into .NET data types in accordance with an extensible data table. 14. A computer-readable storage medium comprising computer-readable instructions which when executed cause at least one processor of a computing device to: enable existing tools in a plurality of programming languages to be used automatically without conversion coding in a machine learning execution environment. 15. The computer-readable storage medium of claim 14, comprising further computer-readable instructions which when executed cause the at least one processor to: map a scripting language schema of a first programming language to a schema in a data table in a second programming language. 16. The computer-readable storage medium of claim 14, comprising further computer-readable instructions which when executed cause the at least one processor to: map a scripting language schema to a schema in a data table in a machine learning execution environment, wherein the scripting language is R. 17. 
The computer-readable storage medium of claim 14, comprising further computer-readable instructions which when executed cause the at least one processor to: map a scripting language schema to a schema in a data table in a machine learning execution environment, wherein the scripting language is Python. 18. The computer-readable storage medium of claim 14, comprising further computer-readable instructions which when executed cause the at least one processor to: map a scripting language schema to a schema in a data table in a machine learning execution environment, wherein the data table is extensible. 19. The computer-readable storage medium of claim 14, comprising further computer-readable instructions which when executed cause the at least one processor to: map a scripting language schema to a schema in a data table in a machine learning execution environment, wherein the data types of the scripting language are converted into .NET data types in accordance with an extensible data table. 20. The computer-readable storage medium of claim 14, comprising further computer-readable instructions which when executed cause the at least one processor to: wrap the scripting language code in a .NET wrapper.
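The conversions these claims describe (e.g. an R factor becoming a categorical column, an R missing value becoming a missing-value marker, all driven by an extensible table) can be pictured with a small sketch. The table entries and function names below are assumptions for illustration, not the actual conversion rules of any product:

```python
# Extensible mapping from a scripting-language type name to a data-table
# type; new entries can be registered without changing conversion code.
TYPE_TABLE = {
    "factor": "categorical",   # an R factor maps to a categorical column
    "numeric": "float64",
    "integer": "int32",
    "NA": "missing",           # an R missing value maps to a missing marker
}

def convert_column(type_name, values):
    """Convert a typed column, keeping source metadata so no information is lost."""
    target = TYPE_TABLE.get(type_name)
    if target is None:
        raise KeyError(f"no registered conversion for {type_name!r}")
    # Carrying the original type alongside the data keeps the round trip lossless,
    # which is the "without loss of information" property the claims emphasize.
    return {"target_type": target, "source_type": type_name, "values": values}
```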
ACCEPTED
Please predict whether this patent is acceptable. PATENT ABSTRACT: An interoperable platform that provides a way to automatically compose and execute even complex workflows without writing code is described. A set of pre-built functional building blocks can be provided. The building blocks perform data transformation and machine learning functions. The functional blocks have few well known plug types. The building blocks can be composed to build complex compositions. Interoperability between data formats, metadata schema and interfaces to machine learning (ML) functions and trained machine learning models can be provided with no loss of information. A cloud runtime environment can be provided in which the composed workflows can be hosted as REST API to run in production.
G06N99005
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: An interoperable platform that provides a way to automatically compose and execute even complex workflows without writing code is described. A set of pre-built functional building blocks can be provided. The building blocks perform data transformation and machine learning functions. The functional blocks have few well known plug types. The building blocks can be composed to build complex compositions. Interoperability between data formats, metadata schema and interfaces to machine learning (ML) functions and trained machine learning models can be provided with no loss of information. A cloud runtime environment can be provided in which the composed workflows can be hosted as REST API to run in production.
A method and apparatus for certification of facts introduces a certifier and a fact certificate into the fact-exchange cycle that enables parties to exchange trustworthy facts. Certification is provided to a fact presenter during the first part of the fact-exchange cycle, and verification is provided to the fact receiver during the last part of the cycle. To request a certification, a fact presenter presents the Certifier with a fact. In return, the certifier issues a fact certificate, after which the fact presenter presents the fact certificate to the fact receiver instead of presenting the fact itself. The receiver inspects the received certificate in order to evaluate the fact's validity and trustworthiness. For some facts and notions of verification, the certificate is sufficient and its inspection does not require any communication. For others, the receiver requests a verification service from the Certifier in order to complete the verification.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-implemented method for presenting and publishing trustworthy facts, comprising: sending a fact from a fact presenter to a fact certifier for certification of the fact; receiving a fact certificate at the fact presenter from the fact certifier; transmitting any of the fact and the fact certificate from the fact presenter to a fact receiver, wherein the fact receiver is enabled to establish that the fact is valid, using the fact or the fact certificate. 2. The method of claim 1, wherein the transmitting any of the fact and the fact certificate to the fact receiver comprises: via a computing device, transmitting the certified fact and the fact certificate from the fact presenter to the fact receiver; wherein the fact receiver receives both the fact and the fact certificate; and wherein the fact receiver enabled to establish that the fact is valid, using both the fact and the fact certificate. 3. The method of claim 1, wherein the fact presenter knows a plurality of facts, and wherein less than all of the facts are revealed to any of the fact certifier and the fact receiver. 4. The method of claim 3, wherein at least one of the facts that are revealed to the fact certifier or the fact receiver is determined using any of a random selection and selection by the fact receiver. 5. The method of claim 4, wherein the fact receiver is enabled to establish that each of the plurality of facts is valid, and wherein the fact receiver does not know each of the plurality of facts except for those revealed. 6. The method of claim 3, wherein each of the plurality of facts have a unique identifier associated therewith. 7. The method of claim 1, wherein a visual emblem is presented with the fact to the fact receiver. 8. The method of claim 7, wherein the visual emblem enables the fact receiver to access the fact certificate. 9. 
The method of claim 1, further comprising: broadcasting the fact certificate from the fact presenter to a plurality of witnesses; wherein a level of trust between the fact receiver and one or more of the witnesses enables the fact receiver to trust any of the fact and the fact certificate. 10. The method of claim 1, wherein any of the certified fact and the fact certificate are broadcast to a plurality of fact receivers by the fact presenter. 11. The method of claim 1, wherein identity of the fact receiver is not known to at least one of an observer and the fact presenter. 12. The method of claim 1, wherein the fact comprises any of an event, an observed fact, and a deduced fact. 13. The method of claim 12, wherein the event includes an exchange of a document. 14. The method of claim 13, wherein a person declares that the document is valid. 15. The method of claim 12, wherein the event includes any of a physical event, a measurement, an electronic transaction and a financial transaction. 16. The method of claim 15, wherein the physical event includes any of at least two people being in a common location concurrently, a person being close to an object at a given time, and at least two objects being in a common place concurrently. 17. The method of claim 15, wherein the measurement is performed by a measurement device. 18. The method of claim 17, wherein the measurement device is any of a speed measurement device, a sensor, a camera, and an audio recorder. 19. The method of claim 12, wherein the observed fact comprises any of a measurement or observation. 20. The method of claim 19, wherein the measurement or observation is not revealed to the fact receiver. 21. The method of claim 12, wherein the fact sent from the fact presenter to the fact certifier comprises one of a measurement and an observation performed by an observer. 22. The method of claim 12, wherein the deduced fact is based on any of a deductive reasoning process and one or more basis facts. 23. 
The method of claim 22, wherein identity of the fact is not known to an observer at time of execution of the deductive reasoning process. 24. The method of claim 22, further comprising: via a computing device, any of presenting, storing and transmitting the deduced fact in an interconnected data structure that allows efficient certification and communication of facts deduced from other facts by the deductive reasoning process, wherein the data structure comprises a tree of facts, wherein: the deduced fact comprises a root node of the tree; wherein the basis facts comprise nodes of the tree; and wherein the observed fact comprises a leaflet of the tree. 25. The method of claim 22, wherein a fact certification library containing executable code allows repetition of the deductive reasoning process. 26. The method of claim 22, wherein the fact presenter and the fact receiver are enabled to achieve lasting or permanent consensus that a statement regarding the deductive reasoning process used to deduce the fact from other facts represents a description of the deductive reasoning process used. 27. The method of claim 22, wherein the fact presenter and the fact receiver are enabled to achieve lasting or permanent consensus that the fact that was deduced from other facts by the deductive reasoning process represents the result of an application of the deductive reasoning process. 28. A computer-implemented method for certifying an event, comprising: individually certifying each fact of a plurality of related observed facts; and certifying the event based on the individual certification of each of the plurality of facts. 29. The method of claim 28, wherein the event comprises any of a transaction, a meeting, an agreement, an ownership, and a measurement. 30. The method of claim 28, wherein the individual certification of at least one of the plurality of facts corresponds to any of a time factor, a location, and an identity factor. 31. 
The method of claim 28, wherein the time factor includes a timestamp established by a witness. 32. The method of claim 28, wherein the location corresponds to GPS information. 33. The method of claim 28, wherein the identity information includes any of a digital signature and a device identity. 34. The method of claim 28, wherein one or more of the plurality of facts correspond to any of a physical event, a paper document, a measurement, and electronic event, and a financial transaction. 35. The method of claim 28, further comprising: establishing a fact certificate for any of at least one of the facts and the event. 36. The method of claim 28, further comprising: broadcasting the fact certificate to a plurality of witnesses; wherein a level of trust between a fact receiver and one or more of the witnesses enables the fact receiver to trust any of the fact certificate, the fact and the event.
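The certify/verify cycle in the claims above — certifier signs a fact and issues a certificate, receiver inspects the certificate rather than re-checking the fact — can be sketched as follows. HMAC-SHA256 with a shared key stands in for whatever signature scheme a real implementation would use (an assumption; a deployment would more likely use public-key signatures so the receiver never holds the certifier's secret):

```python
import hashlib
import hmac

CERTIFIER_KEY = b"demo-secret"  # hypothetical certifier signing key

def certify(fact: str) -> dict:
    """Certifier: issue a fact certificate binding a signature to the fact."""
    sig = hmac.new(CERTIFIER_KEY, fact.encode(), hashlib.sha256).hexdigest()
    return {"fact": fact, "signature": sig}

def verify(certificate: dict) -> bool:
    """Receiver: inspect the certificate to establish the fact's validity,
    with no further communication with the certifier required."""
    expected = hmac.new(CERTIFIER_KEY, certificate["fact"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, certificate["signature"])
```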
ACCEPTED
Please predict whether this patent is acceptable. PATENT ABSTRACT: A method and apparatus for certification of facts introduces a certifier and a fact certificate into the fact-exchange cycle that enables parties to exchange trustworthy facts. Certification is provided to a fact presenter during the first part of the fact-exchange cycle, and verification is provided to the fact receiver during the last part of the cycle. To request a certification, a fact presenter presents the Certifier with a fact. In return, the certifier issues a fact certificate, after which the fact presenter presents the fact certificate to the fact receiver instead of presenting the fact itself. The receiver inspects the received certificate in order to evaluate the fact's validity and trustworthiness. For some facts and notions of verification, the certificate is sufficient and its inspection does not require any communication. For others, the receiver requests a verification service from the Certifier in order to complete the verification.
G06N504
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: A method and apparatus for certification of facts introduces a certifier and a fact certificate into the fact-exchange cycle that enables parties to exchange trustworthy facts. Certification is provided to a fact presenter during the first part of the fact-exchange cycle, and verification is provided to the fact receiver during the last part of the cycle. To request a certification, a fact presenter presents the Certifier with a fact. In return, the certifier issues a fact certificate, after which the fact presenter presents the fact certificate to the fact receiver instead of presenting the fact itself. The receiver inspects the received certificate in order to evaluate the fact's validity and trustworthiness. For some facts and notions of verification, the certificate is sufficient and its inspection does not require any communication. For others, the receiver requests a verification service from the Certifier in order to complete the verification.
Machine learning may be personalized to individual users of computing devices, and can be used to increase machine learning prediction accuracy and speed, and/or reduce memory footprint. Personalizing machine learning can include hosting, by a computing device, a consensus machine learning model and collecting information, locally by the computing device, associated with an application executed by the client device. Personalizing machine learning can also include modifying the consensus machine learning model accessible by the application based, at least in part, on the information collected locally by the client device. Modifying the consensus machine learning model can generate a personalized machine learning model. Personalizing machine learning can also include transmitting the personalized machine learning model to a server that updates the consensus machine learning model.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method comprising: hosting, by a client device, a consensus machine learning model; collecting information, locally by the client device, associated with an application executed by the client device; and modifying the consensus machine learning model accessible by the application based, at least in part, on the information collected locally by the client device, wherein modifying the consensus machine learning model generates a personalized machine learning model; transmitting the personalized machine learning model to a server; and receiving a global machine learning model from the server, wherein the global machine learning model is based, at least in part, on i) the personalized machine learning model transmitted to the server and ii) an aggregation of a plurality of other personalized machine learning models transmitted from a plurality of other client devices to the server. 2. The method of claim 1, wherein modifying the consensus machine learning model is further based, at least in part, on a hinge loss function including vectors representing (i) the information collected locally by the client device, (ii) target labels of the personalized machine learning model, (iii) the personalized machine learning model, and the transpose of the vector representing the personalized machine learning model. 3. The method of claim 2, wherein modifying the consensus machine learning model is further based, at least in part, on a comparison between the personalized machine learning model and the consensus machine learning model. 4. The method of claim 1, wherein transmitting the personalized machine learning model to the server comprises: de-identifying at least a portion of the information collected locally by the client device. 5. The method of claim 1, wherein the information comprises private information of a user of the system. 6. 
The method of claim 1, wherein modifying the consensus machine learning model is further based, at least in part, on a pattern of behavior of a user of the client device over at least a predetermined time. 7. The method of claim 1, wherein collecting information comprises one or more of the following: capturing an image of a user of the client device, capturing a voice sample of the user of the client device, or receiving a search query from the user of the client device. 8. The method of claim 1, further comprising: modifying the global machine learning model received from the server based, at least in part, on additional information collected locally by the client device, wherein modifying the global machine learning model generates an updated personalized machine learning model. 9. The method of claim 8, further comprising: transmitting the updated personalized machine learning model to the server; and receiving an updated global machine learning model from the server, wherein the updated global machine learning model is based, at least in part, on i) the updated personalized machine learning model transmitted to the server and ii) an aggregation of a plurality of other updated personalized machine learning models transmitted from at least a portion of the plurality of other client devices to the server. 10. 
A method comprising: hosting, by a server, a global machine learning model; receiving, from a plurality of client devices, personalized machine learning models, wherein the personalized machine learning models are based, at least in part, on information collected locally by each of the plurality of client devices; modifying the global machine learning model based, at least in part, on the personalized machine learning models received from the plurality of client devices, wherein modifying the global machine learning model generates a modified global machine learning model; and transmitting the modified global machine learning model to at least a portion of the plurality of client devices. 11. The method of claim 10, wherein modifying the global machine learning model is further based, at least in part, on a hinge loss function including vectors representing (i) an aggregation of the information collected locally by the plurality of client devices, (ii) target labels of the modified global machine learning model, (iii) the modified global machine learning model, and the transpose of the vector representing the modified global machine learning model. 12. The method of claim 11, wherein modifying the global machine learning model is further based, at least in part, on a minimization operation of a product of the global machine learning model and an estimate of a Lagrange multiplier. 13. The method of claim 10, wherein the personalized machine learning models received by the server include de-identified data representative of the information collected locally by the client devices. 14. The method of claim 13, wherein the de-identified data comprises private information of users of the client devices. 15. The method of claim 10, wherein modifying the global machine learning model and/or transmitting the modified global machine learning model is performed asynchronously with the plurality of the client devices. 16. 
The method of claim 10, wherein information collected locally by each of the plurality of client devices comprises one or more of the following: a captured image of a user of the client device, a captured voice sample of the user of the client device, or a received search query from the user of the client device. 17. The method of claim 10, further comprising: further modifying the global machine learning model based, at least in part, on additional information collected locally by at least a portion of the client devices, wherein further modifying the global machine learning model generates an updated global machine learning model; and transmitting the updated global machine learning model to at least another portion of the plurality of the client devices. 18. Computer-readable storage media of a client device storing computer-executable instructions that, when executed by one or more processors of the client device, configure the one or more processors to perform operations comprising: hosting, by the client device, a consensus machine learning model; collecting information, locally by the client device, associated with an application executed by the client device; and modifying the consensus machine learning model accessible by the application based, at least in part, on the information collected locally by the client device, wherein modifying the consensus machine learning model generates a personalized machine learning model; transmitting the personalized machine learning model to a server; and receiving a global machine learning model from the server, wherein the global machine learning model is based, at least in part, on i) the personalized machine learning model transmitted to the server and ii) an aggregation of a plurality of other personalized machine learning models transmitted from a plurality of other client devices to the server. 19. 
The computer-readable storage media of claim 18, wherein transmitting the personalized machine learning model to the server comprises: de-identifying at least a portion of the information collected locally by the client device. 20. The computer-readable storage media of claim 18, wherein collecting information, locally by the client device, comprises monitoring one or more use patterns of a user of the client device.
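The personalize-then-aggregate loop in the claims above (client modifies a consensus model locally, server averages the personalized models into a global one) can be sketched as follows. This is a minimal illustration, not the patented method: the update rule, function names, and learning rate are all assumptions, and real federated aggregation would weight clients and handle the hinge-loss term of claim 2.

```python
def personalize(consensus, local_data, lr=0.1):
    """Client-side step: nudge a copy of the consensus weights toward
    locally collected data points (illustrative update rule only)."""
    model = list(consensus)
    for x in local_data:
        for i in range(len(model)):
            model[i] += lr * (x[i] - model[i])
    return model

def aggregate(personalized_models):
    """Server-side step: average the personalized models received from
    the clients into a single global model."""
    n = len(personalized_models)
    dim = len(personalized_models[0])
    return [sum(m[i] for m in personalized_models) / n for i in range(dim)]

consensus = [0.0, 0.0]
clients = [[[1.0, 0.0]], [[0.0, 1.0]]]   # one toy data point per client
personalized = [personalize(consensus, d) for d in clients]
global_model = aggregate(personalized)   # would be transmitted back to clients
```

The global model would then be redistributed to the clients, which may repeat the cycle with additional local data as in claims 8 and 9.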
REJECTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: Machine learning may be personalized to individual users of computing devices, and can be used to increase machine learning prediction accuracy and speed, and/or reduce memory footprint. Personalizing machine learning can include hosting, by a computing device, a consensus machine learning model and collecting information, locally by the computing device, associated with an application executed by the client device. Personalizing machine learning can also include modifying the consensus machine learning model accessible by the application based, at least in part, on the information collected locally by the client device. Modifying the consensus machine learning model can generate a personalized machine learning model. Personalizing machine learning can also include transmitting the personalized machine learning model to a server that updates the consensus machine learning model.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Machine learning may be personalized to individual users of computing devices, and can be used to increase machine learning prediction accuracy and speed, and/or reduce memory footprint. Personalizing machine learning can include hosting, by a computing device, a consensus machine learning model and collecting information, locally by the computing device, associated with an application executed by the client device. Personalizing machine learning can also include modifying the consensus machine learning model accessible by the application based, at least in part, on the information collected locally by the client device. Modifying the consensus machine learning model can generate a personalized machine learning model. Personalizing machine learning can also include transmitting the personalized machine learning model to a server that updates the consensus machine learning model.
Prediction systems and methods are provided. The system obtains a first social media data pertaining to a first set of users, filters the first social media data to obtain a filtered social media data, generates a word embedding matrix including co-occurrence words each represented as a vector having a context, aggregates vectors pertaining to each social data to obtain a first set of vectors, and trains machine learning technique(s) (MLTs) using the first set of vectors and context of the first set of vectors. The system further obtains a second social media data pertaining to a second set of users, and performs filtering, word embedding matrix generation, and aggregation operations to obtain a second set of vectors, and further applies the trained MLTs on the second set of vectors and context associated with the second set of vectors to predict age and gender of the second set of users.
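The filter → embed → aggregate pipeline described in this abstract can be sketched in a few lines. The stop-word list, the co-occurrence counting, and the per-user summation below are toy assumptions standing in for the claimed word embedding matrix and MLT training, which the claims do not specify in detail.

```python
STOP_WORDS = {"the", "a", "is"}   # assumed stop-word list

def filter_posts(posts):
    """Drop stop words from each social media post (claim step ii)."""
    return [[w for w in p.lower().split() if w not in STOP_WORDS] for p in posts]

def cooccurrence_vectors(filtered, vocab):
    """One vector per vocabulary word: counts of co-occurring words
    within the same post (a toy stand-in for the embedding matrix)."""
    index = {w: i for i, w in enumerate(vocab)}
    vecs = {w: [0] * len(vocab) for w in vocab}
    for post in filtered:
        for w in post:
            for other in post:
                if other != w and w in vecs and other in index:
                    vecs[w][index[other]] += 1
    return vecs

def aggregate_user(filtered, vecs, vocab):
    """Sum the word vectors of everything one user posted (claim step iv)."""
    total = [0] * len(vocab)
    for post in filtered:
        for w in post:
            if w in vecs:
                total = [a + b for a, b in zip(total, vecs[w])]
    return total

posts = ["the cat sat", "a cat ran"]          # one user's posts
filtered = filter_posts(posts)
vocab = ["cat", "sat", "ran"]
vecs = cooccurrence_vectors(filtered, vocab)
user_vector = aggregate_user(filtered, vecs, vocab)
```

The resulting per-user vectors, with their contexts, would feed the MLT training and the later age/gender prediction steps.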
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A processor implemented method, comprising: (i) obtaining, using one or more hardware processors, a first social media data from one or more sources, said first social media data pertains to a first set of users; (ii) filtering said first social media data by identifying one or more stop words, and one or more expressions to obtain a first filtered social media data; (iii) generating a word embedding matrix comprising a first set of co-occurrence words from said first filtered social media data, each co-occurrence word from said first set of co-occurrence words is represented as a vector comprising context; (iv) aggregating one or more vectors pertaining to each social data submitted by each user to obtain a first set of vectors for said first set of users; and (v) training one or more machine learning techniques using said first set of vectors and context associated with said first set of vectors to obtain one or more trained machine learning techniques. 2. The processor implemented method of claim 1, further comprising: obtaining a second social media data from one or more sources, wherein said second social media data pertains to a second set of users; repeating the steps of (ii) till (iv) to obtain a second set of vectors for said second set of users based on said second social media data; and applying said one or more trained machine learning techniques (MLTs) on said second set of vectors and context associated with each of said second set of vectors. 3. The processor implemented method of claim 2, further comprising: predicting an age and a gender of each user from said second set of users upon said one or more machine learning techniques applied on said second set of vectors and said context associated with each of said second set of vectors, wherein each of said predicted age and said predicted gender are associated with a probability score. 4. 
The processor implemented method of claim 3, wherein applying said one or more trained MLTs includes selecting at least a subset of said one or more trained machine learning techniques based on a training level of the MLTs. 5. The processor implemented method of claim 4, wherein said at least a subset of said one or more trained machine learning techniques is selected based on a weight assigned to said one or more machine learning techniques during training. 6. A system comprising: a memory storing instructions; one or more communication interfaces; and one or more hardware processors communicatively coupled to said memory using said one or more communication interfaces, wherein said one or more hardware processors are configured by said instructions to: (i) obtain a first social media data from one or more sources, said social media data pertains to a first set of users; (ii) filter said first social media data by identifying one or more stop words, and one or more expressions to obtain a first filtered social media data; (iii) generate a word embedding matrix comprising a first set of co-occurrence words from said first filtered social media data, each co-occurrence word from said first set of co-occurrence words is represented as a vector comprising context; (iv) aggregate one or more vectors pertaining to each social data submitted by each user to obtain a first set of vectors for said first set of users; and (v) train one or more machine learning techniques using said first set of vectors and context associated with said first set of vectors to obtain one or more trained machine learning techniques. 7. 
The system of claim 6, wherein said one or more hardware processors are further configured to obtain a second social media data from one or more sources, wherein said second social media data pertains to a second set of users, repeat the steps (ii) till (iv) to obtain a second set of vectors for said second set of users based on said second social media data, and apply said one or more machine learning techniques (MLTs) on said second set of vectors and context associated with each of said second set of vectors. 8. The system of claim 7, wherein said one or more hardware processors are further configured to predict an age and a gender of each user from said second set of users upon said one or more machine learning techniques applied on said second set of vectors and said context associated with each of said second set of vectors, wherein each of said predicted age and said predicted gender are associated with a probability score. 9. The system of claim 8, wherein said one or more hardware processors are configured to select at least a subset of said one or more machine learning techniques based on a training level of the MLTs. 10. The system of claim 9, wherein said at least a subset of said one or more machine learning techniques is selected based on a weight assigned to said one or more machine learning techniques during training. 11. 
One or more non-transitory machine readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors causes: (i) obtaining, using the one or more hardware processors, a first social media data from one or more sources, said first social media data pertains to a first set of users; (ii) filtering said first social media data by identifying one or more stop words, and one or more expressions to obtain a first filtered social media data; (iii) generating a word embedding matrix comprising a first set of co-occurrence words from said first filtered social media data, each co-occurrence word from said first set of co-occurrence words is represented as a vector comprising context; (iv) aggregating one or more vectors pertaining to each social data submitted by each user to obtain a first set of vectors for said first set of users; and (v) training one or more machine learning techniques using said first set of vectors and context associated with said first set of vectors to obtain one or more trained machine learning techniques. 12. The one or more non-transitory machine readable information storage mediums of claim 11, wherein the one or more instructions which when executed by the one or more hardware processors further cause: obtaining a second social media data from one or more sources, wherein said second social media data pertains to a second set of users; repeating the steps of (ii) till (iv) to obtain a second set of vectors for said second set of users based on said second social media data; and applying said one or more trained machine learning techniques (MLTs) on said second set of vectors and context associated with each of said second set of vectors. 13. 
The one or more non-transitory machine readable information storage mediums of claim 12, wherein the one or more instructions which when executed by the one or more hardware processors further cause: predicting an age and a gender of each user from said second set of users upon said one or more machine learning techniques applied on said second set of vectors and said context associated with each of said second set of vectors, wherein each of said predicted age and said predicted gender are associated with a probability score. 14. The one or more non-transitory machine readable information storage mediums of claim 13, wherein applying said one or more trained MLTs includes selecting at least a subset of said one or more trained machine learning techniques based on a training level of the MLTs. 15. The one or more non-transitory machine readable information storage mediums of claim 14, wherein said at least a subset of said one or more trained machine learning techniques is selected based on a weight assigned to said one or more machine learning techniques during training.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Prediction systems and methods are provided. The system obtains a first social media data pertaining to a first set of users, filters the first social media data to obtain a filtered social media data, generates a word embedding matrix including co-occurrence words each represented as a vector having a context, aggregates vectors pertaining to each social data to obtain a first set of vectors, and trains machine learning technique(s) (MLTs) using the first set of vectors and context of the first set of vectors. The system further obtains a second social media data pertaining to a second set of users, and performs filtering, word embedding matrix generation, and aggregation operations to obtain a second set of vectors, and further applies the trained MLTs on the second set of vectors and context associated with the second set of vectors to predict age and gender of the second set of users.
G06N504
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Prediction systems and methods are provided. The system obtains a first social media data pertaining to a first set of users, filters the first social media data to obtain a filtered social media data, generates a word embedding matrix including co-occurrence words each represented as a vector having a context, aggregates vectors pertaining to each social data to obtain a first set of vectors, and trains machine learning technique(s) (MLTs) using the first set of vectors and context of the first set of vectors. The system further obtains a second social media data pertaining to a second set of users, and performs filtering, word embedding matrix generation, and aggregation operations to obtain a second set of vectors, and further applies the trained MLTs on the second set of vectors and context associated with the second set of vectors to predict age and gender of the second set of users.
An apparatus is described herein. The apparatus includes a clustering mechanism that is to partition a dictionary into a plurality of clusters. The apparatus also includes a feature-matching mechanism that is to pre-compute feature matching results for each cluster of the plurality of clusters. Moreover, the apparatus includes a selector that is to locate a best representative feature from the dictionary in response to an input vector.
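The partition → pre-compute → select flow in this abstract can be sketched as below. The "Gram-matrix trick" itself is only named, not specified, in the claims; here the pre-computation is just per-cluster pairwise inner products, and the selector simply picks the atom with the largest inner product against the input, so the partitioning scheme and selection rule are illustrative assumptions.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def partition(dictionary, n_clusters):
    """Naive partition: round-robin the dictionary atoms into clusters."""
    clusters = [[] for _ in range(n_clusters)]
    for i, atom in enumerate(dictionary):
        clusters[i % n_clusters].append(atom)
    return clusters

def gram(cluster):
    """Pre-computed pairwise inner products (Gram matrix) for one cluster."""
    return [[dot(u, v) for v in cluster] for u in cluster]

def best_feature(clusters, x):
    """Select the dictionary atom best matching the input vector."""
    return max((atom for c in clusters for atom in c), key=lambda a: dot(a, x))

D = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]   # toy dictionary
clusters = partition(D, 2)
grams = [gram(c) for c in clusters]        # reusable across many queries
best = best_feature(clusters, [1.0, 0.2])
```

In the claimed apparatus the pre-computed Gram matrices would be reused across matching-pursuit iterations, with cluster residuals guiding a hierarchical traversal rather than the flat scan shown here.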
Please help me write a proper abstract based on the patent claims. CLAIM: 1. An apparatus, comprising: a clustering mechanism that is to partition a dictionary into a plurality of clusters; a feature-matching mechanism that is to pre-compute feature matching results for each cluster of the plurality of clusters; and a selector that is to locate a best representative feature from the dictionary in response to an input vector. 2. The apparatus of claim 1, wherein the feature-matching mechanism is to pre-compute feature matching results using a Gram-matrix. 3. The apparatus of claim 1, wherein the feature-matching mechanism is to pre-compute feature matching results using a Gram-matrix trick. 4. The apparatus of claim 1, wherein the selector is to locate a best representative feature from the dictionary in response to an input vector via a matching pursuit algorithm. 5. The apparatus of claim 1, wherein the clustering mechanism is to partition the dictionary into a plurality of clusters for a hierarchical traversal. 6. A method, comprising: partitioning a dictionary into a plurality of clusters; pre-computing feature matching results for each cluster of the plurality of clusters; and locating a best representative feature from the dictionary in response to an input vector. 7. The method of claim 6, wherein pre-computing feature matching results for each cluster of the plurality of clusters is performed using a Gram-matrix. 8. The method of claim 6, wherein pre-computing feature matching results for each cluster of the plurality of clusters is performed using a Gram-matrix trick. 9. The method of claim 6, wherein locating a best representative feature from the dictionary in response to an input vector is performed using a matching pursuit algorithm. 10. The method of claim 6, wherein the dictionary is partitioned into a plurality of clusters for a hierarchical traversal. 11. 
The method of claim 6, wherein each cluster of the plurality of clusters is substantially large such that a resulting feature matching vector remains synchronized with GMT-based matching pursuit on an original dictionary. 12. The method of claim 6, wherein the best representative feature from the dictionary is located in an iterative fashion by using cluster residuals to determine the next cluster to be selected until the best representative feature is found. 13. The method of claim 12, wherein the cluster residuals are computed using a Gram-matrix based pre-computation. 14. The method of claim 6, wherein the best representative feature is a feature vector. 15. The method of claim 6, wherein pre-computing feature matching results gives an increase of at least two times when compared to traditional feature matching. 16. A tangible, non-transitory, computer-readable medium comprising instructions that, when executed by a processor, direct the processor to: partition a dictionary into a plurality of clusters; pre-compute feature matching results for each cluster of the plurality of clusters; and locate a best representative feature from the dictionary in response to an input vector. 17. The computer readable medium of claim 16, wherein pre-computing feature matching results for each cluster of the plurality of clusters is performed using a Gram-matrix. 18. The computer readable medium of claim 16, wherein pre-computing feature matching results for each cluster of the plurality of clusters is performed using a Gram-matrix trick. 19. The computer readable medium of claim 16, wherein locating a best representative feature from the dictionary in response to an input vector is performed using a matching pursuit algorithm. 20. The computer readable medium of claim 16, wherein the dictionary is partitioned into a plurality of clusters for a hierarchical traversal. 21. 
A system, comprising: a display; an image capture mechanism; a memory that is to store instructions and that is communicatively coupled to the image capture mechanism and the display; and a processor communicatively coupled to the image capture mechanism, the display, and the memory, wherein when the processor is to execute the instructions, the processor is to: partition a dictionary into a plurality of clusters; pre-compute feature matching results for each cluster of the plurality of clusters; and locate a best representative feature from the dictionary in response to an input vector. 22. The system of claim 21, wherein the feature-matching mechanism is to pre-compute feature matching results using a Gram-matrix. 23. The system of claim 21, wherein the feature-matching mechanism is to pre-compute feature matching results using a Gram-matrix trick. 24. The system of claim 21, wherein the selector is to locate a best representative feature from the dictionary in response to an input vector via a matching pursuit algorithm. 25. The system of claim 21, wherein the clustering mechanism is to partition the dictionary into a plurality of clusters for a hierarchical traversal.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: An apparatus is described herein. The apparatus includes a clustering mechanism that is to partition a dictionary into a plurality of clusters. The apparatus also includes a feature-matching mechanism that is to pre-compute feature matching results for each cluster of the plurality of clusters. Moreover, the apparatus includes a selector that is to locate a best representative feature from the dictionary in response to an input vector.
G06N502
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: An apparatus is described herein. The apparatus includes a clustering mechanism that is to partition a dictionary into a plurality of clusters. The apparatus also includes a feature-matching mechanism that is to pre-compute feature matching results for each cluster of the plurality of clusters. Moreover, the apparatus includes a selector that is to locate a best representative feature from the dictionary in response to an input vector.
An encoder and decoder for translating sequential data into a fixed dimensional vector are created by applying an encoding-trainer input vector set as input to an encoding neural network to generate an encoding-trainer output vector set. One vector of the encoding-trainer output vector set is selected and a decoding-trainer input vector set is generated from it. A decoding neural network is trained by applying the generated decoding-trainer input vector set to the decoding neural network. The encoder and decoder can be used in implementations processing sequential data.
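The decoding-trainer construction this abstract summarizes (claim 10: concatenate the selected encoding vector with a beginning-of-sequence vector, then with each non-selected encoder output) can be sketched directly. The BOS marker, the choice of the last output as the selected vector, and the function names are all assumptions for illustration; the claims leave the selection rule open.

```python
BOS = [9.0, 9.0]   # assumed beginning-of-sequence marker vector

def select_encoding(encoder_outputs, index=-1):
    """Pick one encoder output as the fixed dimensional code
    (the last one, in this sketch)."""
    return encoder_outputs[index]

def decoding_trainer_set(encoder_outputs, index=-1):
    """Build the decoding-trainer input vector set per claim 10:
    code + BOS first, then code + each non-selected vector."""
    code = select_encoding(encoder_outputs, index)
    selected = index % len(encoder_outputs)
    rest = [v for i, v in enumerate(encoder_outputs) if i != selected]
    return [code + BOS] + [code + v for v in rest]

outputs = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # toy encoder outputs
trainer_set = decoding_trainer_set(outputs)
```

At inference time (claim 15), the same concatenation pattern repeats iteratively, with each decoder output appended in place of the pre-computed non-selected vectors.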
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for translating between a first context and a second context comprising: encoding source data of the first context using an encoding neural network to create an encoded input fixed dimensional vector representation of the source data; applying the encoded input fixed dimensional vector as input to a translator neural network trained to generate an output encoded fixed dimensional vector representation of target data of the second context, wherein the first context and second context differ; and decoding the output encoded fixed dimensional vector representation using a decoding neural network to recreate the target data. 2. The method of claim 1, wherein the encoding neural network is trained by applying a plurality of sequences of training input vectors from a first data corpus of the first context as input to the encoding neural network. 3. The method of claim 1, wherein encoding the source data using the encoding neural network includes: generating a sequence of input vectors corresponding to the source data, applying the sequence of input vectors to the encoding network; and, selecting one vector from the output of the encoding neural network. 4. The method of claim 3, wherein generating the sequence of input vectors corresponding to the source data includes: generating sequential indices for the source data, and mapping the sequential indices to vectors in an embedding layer. 5. The method of claim 1, wherein the decoding neural network is trained by applying a plurality of encoded fixed dimensional vectors representing sequential data encoded from a second data corpus of the second context as input to the decoding neural network. 6. The method of claim 5, wherein applying the plurality of encoded fixed dimensional vectors representing sequential data from the second data corpus includes: creating training vector sets for each of the plurality of encoded fixed dimensional vectors. 
7. The method of claim 6, wherein creating the training vector sets for each of the plurality of encoded fixed dimensional vectors includes concatenating each of the plurality of encoded fixed dimensional vectors with a beginning of sequence vector and a subset of vectors used to encode each of the plurality of encoded fixed dimensional vectors. 8. The method of claim 1, wherein decoding the output encoded fixed dimensional vector representation using the decoding neural network includes concatenating the output encoded fixed dimensional vector representation with a beginning of sequence vector used when training the decoding neural network. 9. The method of claim 8, wherein decoding the output encoded fixed dimensional vector representation using the decoding neural network further includes concatenating the output encoded fixed dimensional vector representation with one or more output vectors of the decoding neural network. 10. A method for creating an encoder and a decoder comprising: training an encoding neural network by applying an encoding-trainer input vector set as input to the encoding neural network to generate an encoding-trainer output vector set so that the encoding-trainer output vector set is an additive inverse of the encoding-trainer input vector set, wherein the encoding-trainer input vector set corresponds to sequential data; determining a selected encoding vector by selecting one vector of the encoding-trainer output vector set; generating a decoding-trainer input vector set by concatenating the selected encoding vector with a beginning of sequence vector and concatenating the selected encoding vector with each non-selected vector of the encoding-trainer output vector set; and training a decoding neural network by applying the generated decoding-trainer input vector set to the decoding neural network so that the decoding neural network outputs an additive inverse of the encoding-trainer input vector set. 11. 
The method of claim 10 wherein the encoding neural network includes a plurality of hidden layers. 12. The method of claim 11 wherein the encoding neural network is trained such that connections between nodes of consecutive hidden layers of the plurality of hidden layers have the same weight. 13. The method of claim 10 wherein the decoding neural network includes a plurality of hidden layers. 14. The method of claim 13 wherein the decoding neural network is trained such that connections between nodes of consecutive hidden layers of the plurality of hidden layers have the same weight. 15. A method for decoding an encoded fixed dimensional vector into a sequence of data, the method comprising: performing a number of iterations equal to a set size, wherein each iteration comprises: concatenating the encoded fixed dimensional vector with a beginning of sequence vector to configure a first vector of an ordered input vector set of set size vectors, concatenating the encoded fixed dimensional vector with an ordered output vector set determined during a previous iteration of the number of iterations to configure subsequent vectors of the ordered input vector set, the subsequent vectors occurring in the ordered input vector set after the first vector of the ordered input vector set, and applying the ordered input vector set to a decoding neural network trained concurrently with an encoding neural network used to encode the encoded fixed dimensional vector; and, determining, once the number of iterations have been performed, the sequence of data based on the ordered output vector set of the last iteration. 16. The method of claim 15, wherein concatenating the encoded fixed dimensional vector with the ordered output vector set determined during the previous iteration includes concatenating the encoded fixed dimensional vector with a vector of the ordered output vector set located at a position in the sequence equal to the number of iterations performed plus one. 17. 
The method of claim 15 wherein the decoding neural network and encoding neural network were trained using an embedding layer. 18. The method of claim 17 wherein determining the sequence of data is further based on correlating the ordered output vector set of the last iteration with the embedding layer. 19. The method of claim 15 wherein the decoding neural network is trained by creating decoding training vector sets based on a vector output from the encoding neural network during training of the encoding neural network. 20. The method of claim 19, wherein creating the training vector sets includes concatenating the vector output from the encoding neural network with a beginning of sequence vector and input vector sets provided as input to the encoding neural network during training.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: An encoder and decoder for translating sequential data into a fixed dimensional vector are created by applying an encoding-trainer input vector set as input to an encoding neural network to generate an encoding-trainer output vector set. One vector of the encoding-trainer output vector set is selected and a decoding-trainer input vector set is generated from it. A decoding neural network is trained by applying the generated decoding-trainer input vector set to the decoding neural network. The encoder and decoder can be used in implementations processing sequential data.
G06N308
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: An encoder and decoder for translating sequential data into a fixed dimensional vector are created by applying an encoding-trainer input vector set as input to an encoding neural network to generate an encoding-trainer output vector set. One vector of the encoding-trainer output vector set is selected and a decoding-trainer input vector set is generated from it. A decoding neural network is trained by applying the generated decoding-trainer input vector set to the decoding neural network. The encoder and decoder can be used in implementations processing sequential data.
A computer processor determines a first span of a communication, wherein a span includes content associated with one or more dialog statements. If the content of the first span contains one or more topic change indicators which are identified by at least one detector of a learning model, the computer processor, in response, generates scores for each of the one or more indicators. The computer processor aggregates scores of the one or more indicators of the first span, which may be weighted, to produce an aggregate score. The computer processor compares the aggregate score to a threshold value, wherein the threshold value is determined during training of the learning model, and the computer processor, in response to the aggregate score crossing the threshold value, determines a topic change has occurred within the first span.
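The score-weight-aggregate-threshold decision this abstract describes can be sketched as below. The detector names, their weights, and the threshold would all be learned during training of the model; the values here are illustrative assumptions only.

```python
# Assumed detector weights and threshold; in the described system both
# would be determined during training of the learning model.
WEIGHTS = {"key_phrase": 0.5, "long_pause": 0.3, "span_duration": 0.2}
THRESHOLD = 0.4

def aggregate_score(indicator_scores):
    """Weighted sum of per-detector indicator scores for one span."""
    return sum(WEIGHTS[name] * s for name, s in indicator_scores.items())

def topic_changed(indicator_scores):
    """A topic change is declared when the aggregate crosses the threshold."""
    return aggregate_score(indicator_scores) >= THRESHOLD
```

For example, a span with a strong key-phrase indicator and a moderate span-duration indicator crosses the threshold, while a long pause alone does not.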
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for determining a topic change of a communication, the method comprising: monitoring, by a computer processor, a communication including a first span; determining, by a computer processor, the communication containing a set of dialog statements, wherein the first span of the communication includes one or more dialog statements of the set of dialog statements; determining, by the computer processor, if the one or more dialog statements of the first span include one or more indicators of a topic change, wherein the one or more indicators are identified by at least one detector of a learning model, wherein each of the one or more indicators of the topic change within the first span includes at least one of: a particular key phrase, a pause of particular duration, a particular activity on a participant's communication device, and a particular duration of the first span; responsive to determining the first span includes the one or more indicators of the topic change, generating, by the computer processor, a score for the one or more indicators, based on the learning model; responsive to the score for the one or more indicators triggering a threshold condition, determining, by the computer processor, a topic change within the first span, wherein the threshold condition is based on a determination of the topic change within the first span of the communication during training of the learning model, and wherein the threshold condition determined during training of the learning model includes: determining, by the computer processor, a weighted value for the at least one detector, based on heuristics, receiving input of labelled communication dialog statements, wherein the labelled communication dialog statements include one or more topic change indicators that are known, the one or more topic change indicators corresponding to the at least one detector, adjusting, by the computer processor, 
the weighted value of the at least one detector in response to a delta between an output of scores of the at least one detector of the learning model and scores of the one or more topic change indicators that are known, and determining, by the computer processor, the threshold condition in response to achieving an acceptable minimum for the delta between the output of the scores which are determined by the at least one detector of the learning model and the scores of the one or more topic change indicators that are known; generating, by the computer processor, a second span based on adjusting boundaries of the first span by performing at least one of, adding to the first span one or more dialog statements of the set of dialog statements not included in the first span, and removing one or more dialog statements from the first span; determining, by the computer processor, a score for the first span and a score for the second span, wherein the score for the first span and the score for the second span is based on a topic of the first span and a topic of the second span, respectively; responsive to the score of the second span being more favorable than the score of the first span, extracting, by the computer processor, one or more features from the one or more dialog statements of the second span not included in the first span, wherein extracting the one or more features from the one or more dialog statements of the second span, includes classifying the one or more features to correspond with the at least one detector of the learning model; and training, by the computer processor, the learning model to determine a topic change, based, at least in part, on including the one or more features from the one or more dialog statements of the second span, in at least one detector of the learning model.
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: A computer processor determines a first span of a communication, wherein a span includes content associated with one or more dialog statements. If the content of the first span contains one or more topic change indicators which are identified by at least one detector of a learning model, the computer processor, in response, generates scores for each of the one or more indicators. The computer processor aggregates scores of the one or more indicators of the first span, which may be weighted, to produce an aggregate score. The computer processor compares the aggregate score to a threshold value, wherein the threshold value is determined during training of the learning model, and the computer processor, in response to the aggregate score crossing the threshold value, determines a topic change has occurred within the first span.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A computer processor determines a first span of a communication, wherein a span includes content associated with one or more dialog statements. If the content of the first span contains one or more topic change indicators which are identified by at least one detector of a learning model, the computer processor, in response, generates scores for each of the one or more indicators. The computer processor aggregates scores of the one or more indicators of the first span, which may be weighted, to produce an aggregate score. The computer processor compares the aggregate score to a threshold value, wherein the threshold value is determined during training of the learning model, and the computer processor, in response to the aggregate score crossing the threshold value, determines a topic change has occurred within the first span.
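The scoring pipeline in the record above (per-indicator scores, optional weighting, aggregation, comparison against a trained threshold) reduces to a weighted sum. A minimal sketch; the indicator names, weights, and threshold value are hypothetical, not taken from the patent:

```python
# Hypothetical per-indicator scores for one span and learned weights.
indicator_scores = {"key_phrase": 0.9, "long_pause": 0.4, "device_activity": 0.2}
weights = {"key_phrase": 0.5, "long_pause": 0.3, "device_activity": 0.2}

def aggregate(scores, weights):
    """Weighted sum of per-indicator scores for a span."""
    return sum(weights[name] * s for name, s in scores.items())

THRESHOLD = 0.5  # would be determined during training of the learning model

score = aggregate(indicator_scores, weights)
topic_changed = score > THRESHOLD
print(round(score, 2), topic_changed)
```

Training would then adjust the weights based on the delta between these aggregate scores and the known labels, as claim 1 describes.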
A first seed concept term may be identified. The first seed concept term may be used to train a cognitive computing system. The cognitive computing system may analyze the first seed concept term to generate a first set of one or more concept terms that are candidates for being conceptually related to the first seed concept term. A first plurality of individual characters and the first seed concept term may be provided. A first user of a client computing device may be prompted to generate a second set of one or more concept terms that are conceptually related to the first seed concept term using one or more of the first plurality of individual characters.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-implemented method, comprising: identifying, by a first client computing device, a first seed concept term, the first seed concept term to train a cognitive computing system, wherein the cognitive computing system analyzes the first seed concept term to generate a first set of one or more concept terms that are candidates for being conceptually related to the first seed concept term; providing, by the first client computing device, a first plurality of individual characters and the first seed concept term; and prompting, by the first client computing device, a first user of the first client computing device to generate a second set of one or more concept terms that are conceptually related to the first seed concept term using one or more of the first plurality of individual characters. 2. The method of claim 1, further comprising obtaining, by the first client computing device, a list of the second set of concept terms. 3. The method of claim 2, further comprising transmitting, by the first client computing device, the list of the second set of concept terms to the cognitive computing system, wherein the cognitive computing system determines, by comparing the first set of concept terms with the second set of concept terms, that one of the first set of concept terms is not included on the list, the cognitive computing system further providing a validity score for the one of the first set of concept terms based on the determining that one of the first set of concept terms is not included on the list. 4. 
The method of claim 1, further comprising: providing, by a second set of client computing devices, a second plurality of individual characters and the first seed concept term; prompting, by the second set of client computing devices, a second set of users of the second set of client computing devices to each generate a third set of one or more concept terms that are conceptually related to the first seed concept term using one or more of the second plurality of individual characters; obtaining, by the second set of client computing devices and from the second set of users, a second set of lists corresponding to the third set of concept terms; and transmitting, by the second set of client computing devices, the second set of lists to the cognitive computing system, wherein the cognitive computing system determines that one concept term was generated above a quantity threshold in the third sets of concept terms, and wherein, in response to the determining, the cognitive computing system updates the cognitive computing system by storing the one concept term for performing concept expansion, and wherein concept expansion is a process of inputting a set of seed concept terms that are expanded by the cognitive computing system to a more complete set of concept terms. 5. The method of claim 1, further comprising: identifying, by the first client computing device, the first set of concept terms; parsing, by the first client computing device, each of the first set of concept terms into the first plurality of individual characters; and shuffling, prior to the providing, the first plurality of individual characters. 6. The method of claim 1, further comprising displaying a pictorial representation of a domain, the domain for use in providing a context for the first seed concept term. 7. 
The method of claim 1, further comprising displaying the first seed concept term within a sentence to indicate a domain, the domain for use in providing a context for the first seed concept term. 8. The method of claim 1, wherein the first plurality of individual characters are displayed within a two-dimensional array of respective cells, the second set of concept terms each being generated by connecting one or more of the respective cells. 9. A system comprising: a server computing device having a processor; and a computer readable storage medium having program instructions embodied therewith, the program instructions executable by the processor to cause the system to: identify a first seed concept term, the first seed concept term to train a cognitive computing system, wherein the cognitive computing system analyzes the first seed concept term to generate a first set of one or more concept terms that are candidates for being conceptually related to the first seed concept term; provide a first plurality of individual characters and the first seed concept term to a client computing device; and cause the client computing device to prompt a first user of the client computing device to generate a second set of one or more concept terms that are conceptually related to the first seed concept term using one or more of the first plurality of individual characters. 10. The system of claim 9, wherein the program instructions executable by the processor further cause the system to obtain a list of the second set of concept terms. 11. 
The system of claim 10, wherein the program instructions executable by the processor further cause the system to: receive the list of the second set of concept terms from the client computing device; determine, by comparing the first set of concept terms with the second set of concept terms, that one of the first set of concept terms is not included on the list; and provide a validity score for the one of the first set of concept terms based on the determining that one of the first set of concept terms is not included on the list. 12. The system of claim 9, wherein the program instructions executable by the processor further cause the system to: identify a domain associated with the first seed concept term, the domain corresponding to a field of knowledge; identify a subject matter expertise of the first user; compare the subject matter expertise of the first user with the domain; and provide a validity score for the second set of concept terms based on the comparing the subject matter expertise of the first user with the domain. 13. 
The system of claim 9, wherein the program instructions executable by the processor further cause the system to: provide a second plurality of individual characters and the first seed concept term to a second set of client computing devices; prompt a second set of users of the second set of client computing devices to each generate a third set of one or more concept terms that are conceptually related to the first seed concept term using one or more of the second plurality of individual characters; receive, from the second set of client computing devices based on input from the second set of users, a second set of lists corresponding to the third set of concept terms; and determine that one term of the third set of concept terms was generated above a quantity threshold, wherein in response to the determining, the system updates the cognitive computing system by storing the one term for performing concept expansion, and wherein concept expansion is a process of inputting a set of seed concept terms that are expanded by the cognitive computing system to a more complete set of concept terms which belongs to a same category or semantic class as the set of seed concept terms. 14. The system of claim 9, wherein the server computing device is the cognitive computing system. 15. 
A computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a first client computing device to cause the first client computing device to: identify a first seed concept term, the first seed concept term to train a cognitive computing system, wherein the cognitive computing system analyzes the first seed concept term to generate a first set of one or more concept terms that are candidates for being conceptually related to the first seed concept term; provide the first seed concept term; and prompt a first user of the first client computing device to generate a second set of one or more concept terms that are conceptually related to the first seed concept term. 16. The computer program product of claim 15, wherein the program instructions executable by the first client computing device further cause the first client computing device to obtain a list of the second set of concept terms. 17. The computer program product of claim 16, wherein the program instructions executable by the first client computing device further cause the first client computing device to transmit the list of the second set of concept terms to the cognitive computing system, wherein the cognitive computing system determines, by comparing the first set of concept terms with the second set of concept terms, that one of the first set of concept terms is not included on the list, the cognitive computing system further providing a validity score for the one of the first set of concept terms based on the determining that one of the first set of concept terms is not included on the list. 18. The computer program product of claim 15, wherein the program instructions executable by the first client computing device further cause the first client computing device to display a pictorial representation of a domain to the first user, the domain for use in providing a context for the first seed concept term. 
19. The computer program product of claim 15, wherein the program instructions executable by the first client computing device further cause the first client computing device to display the first seed concept term within a sentence to indicate a domain to the first user, the domain for use in providing a context for the first seed concept term. 20. The computer program product of claim 15, wherein the program instructions executable by the first client computing device further cause the first client computing device to provide a game score to the first user, the game score corresponding to a point total earned by the first user for generating the second set of concept terms.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A first seed concept term may be identified. The first seed concept term may be used to train a cognitive computing system. The cognitive computing system may analyze the first seed concept term to generate a first set of one or more concept terms that are candidates for being conceptually related to the first seed concept term. A first plurality of individual characters and the first seed concept term may be provided. A first user of a client computing device may be prompted to generate a second set of one or more concept terms that are conceptually related to the first seed concept term using one or more of the first plurality of individual characters.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A first seed concept term may be identified. The first seed concept term may be used to train a cognitive computing system. The cognitive computing system may analyze the first seed concept term to generate a first set of one or more concept terms that are candidates for being conceptually related to the first seed concept term. A first plurality of individual characters and the first seed concept term may be provided. A first user of a client computing device may be prompted to generate a second set of one or more concept terms that are conceptually related to the first seed concept term using one or more of the first plurality of individual characters.
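Claims 3 and 17 in the record above validate the system's candidate terms against the lists users produce: a candidate absent from the user lists receives a validity score reflecting that. A minimal sketch of that comparison; the particular scoring rule (fraction of user lists containing the term) and the sample terms are assumptions:

```python
def validity_scores(model_terms, user_lists):
    """Score each candidate term by the fraction of user lists containing it.

    A term the model proposed but no user produced scores 0.0, mirroring
    the claims' 'not included on the list' penalty.
    """
    n = len(user_lists)
    return {t: sum(t in lst for lst in user_lists) / n for t in model_terms}

# Hypothetical data: model candidates vs. terms generated by three users.
candidates = ["feline", "whiskers", "canine"]
lists = [["feline", "whiskers"], ["feline"], ["whiskers", "feline"]]

scores = validity_scores(candidates, lists)
print(scores["feline"], scores["canine"])
```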
A data classification method which classifies a plurality of data into a plurality of classification items based on a feature quantity included in the data, the method includes calculating, by a processor, appearance probabilities with which training data including the feature quantity appears in the classification items in a distribution of the data; generating, by the processor, a rule having the feature quantity and a weighting of the feature quantity based on a plurality of the training data having the feature quantity based on the appearance probabilities; and classifying, by the processor, the plurality of data according to the rule.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A data classification method which classifies a plurality of data into a plurality of classification items based on a feature quantity included in the data, the method comprising: calculating, by a processor, appearance probabilities with which training data including the feature quantity appears in the classification items in a distribution of the data; generating, by the processor, a rule having the feature quantity and a weighting of the feature quantity based on a plurality of the training data having the feature quantity based on the appearance probabilities; and classifying, by the processor, the plurality of data according to the rule. 2. The data classification method according to claim 1, wherein the calculating comprises calculating the appearance probabilities, based on a first ratio of the plurality of the classification items in the data, a second ratio of the plurality of the classification items in the plurality of the training data, and a ratio of the training data including the feature quantity in each of the plurality of classification items in the plurality of the training data. 3. The data classification method according to claim 1, wherein the generating comprises: determining whether or not a value of the feature quantity in the plurality of the training data is used based on the appearance probabilities of the feature quantity; and generating the rule having the feature quantity and the weighting of the feature quantity based on the feature quantity which is determined to be used. 4. 
The data classification method according to claim 3, wherein the determining comprises: first determining whether or not the value of the feature quantity in the training data belonging to a first classification item in the plurality of the training data is used based on the appearance probability of the first classification item of the feature quantity; and second determining whether or not the value of the feature quantity in the training data belonging to a second classification item in the plurality of the training data is used based on the appearance probability of the second classification item of the feature quantity. 5. The data classification method according to claim 1, wherein the method further comprises repeating the generating until a precision of a classification result to the classification items of the plurality of the training data based on the rule which is generated reaches a predetermined standard. 6. The data classification method according to claim 5, wherein the generating comprises generating the rule based on the value of the feature quantity which is once obtained based on the appearance probabilities of the feature quantity, when repeatedly using the plurality of the training data. 7. The data classification method according to claim 5, wherein the generating comprises generating the rule based on the value of the feature quantity which is obtained every time based on the appearance probabilities of the feature quantity, when repeatedly using the plurality of the training data. 8. 
A non-transitory computer readable storage medium storing therein a program for causing a computer to execute a process, the process comprising: calculating appearance probabilities with which training data including a feature quantity appears in classification items in a distribution of the data; and generating a rule which classifies the plurality of data into the plurality of classification items and has the feature quantity and a weighting of the feature quantity based on a plurality of the training data having the feature quantity based on the appearance probabilities. 9. The non-transitory computer readable storage medium according to claim 8, wherein the calculating comprises calculating the appearance probabilities, based on a first ratio of the plurality of the classification items in the data, a second ratio of the plurality of the classification items in the plurality of the training data, and a ratio of the training data including the feature quantity in each of the plurality of classification items in the plurality of the training data. 10. The non-transitory computer readable storage medium according to claim 8, wherein the generating comprises: determining whether or not a value of the feature quantity in the plurality of the training data is used based on the appearance probabilities of the feature quantity; and generating the rule having the feature quantity and the weighting of the feature quantity based on the feature quantity which is determined to be used. 11. 
The non-transitory computer readable storage medium according to claim 10, wherein the determining comprises: first determining whether or not the value of the feature quantity in the training data belonging to a first classification item in the plurality of the training data is used based on the appearance probability of the first classification item of the feature quantity; and second determining whether or not the value of the feature quantity in the training data belonging to a second classification item in the plurality of the training data is used based on the appearance probability of the second classification item of the feature quantity. 12. The non-transitory computer readable storage medium according to claim 8, wherein the method further comprises repeating the generating until a precision of a classification result to the classification items of the plurality of the training data based on the rule which is generated reaches a predetermined standard. 13. A classification device comprising: a memory which stores a plurality of data for classification; and a processor configured to execute a process, the process including: calculating appearance probabilities with which training data including a feature quantity appears in classification items in a distribution of the data; and generating a rule which classifies the plurality of data into the plurality of classification items and has the feature quantity and a weighting of the feature quantity based on a plurality of the training data having the feature quantity based on the appearance probabilities. 14. 
The classification device according to claim 13, wherein the processor calculates the appearance probabilities, based on a first ratio of the plurality of the classification items in the data, a second ratio of the plurality of the classification items in the plurality of the training data, and a ratio of the training data including the feature quantity in each of the plurality of classification items in the plurality of the training data in the memory. 15. The classification device according to claim 13, wherein the processor determines whether or not a value of the feature quantity in the plurality of the training data is used based on the appearance probabilities of the feature quantity, and generates the rule having the feature quantity and the weighting of the feature quantity based on the feature quantity which is determined to be used. 16. The classification device according to claim 15, wherein the processor determines whether or not the value of the feature quantity in the training data belonging to a first classification item in the plurality of the training data is used based on the appearance probability of the first classification item of the feature quantity, and determines whether or not the value of the feature quantity in the training data belonging to a second classification item in the plurality of the training data is used based on the appearance probability of the second classification item of the feature quantity. 17. The classification device according to claim 13, wherein the processor repeats the generating until a precision of a classification result to the classification items of the plurality of the training data based on the rule which is generated reaches a predetermined standard.
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: A data classification method which classifies a plurality of data into a plurality of classification items based on a feature quantity included in the data, the method includes calculating, by a processor, appearance probabilities with which training data including the feature quantity appears in the classification items in a distribution of the data; generating, by the processor, a rule having the feature quantity and a weighting of the feature quantity based on a plurality of the training data having the feature quantity based on the appearance probabilities; and classifying, by the processor, the plurality of data according to the rule.
G06N502
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A data classification method which classifies a plurality of data into a plurality of classification items based on a feature quantity included in the data, the method includes calculating, by a processor, appearance probabilities with which training data including the feature quantity appears in the classification items in a distribution of the data; generating, by the processor, a rule having the feature quantity and a weighting of the feature quantity based on a plurality of the training data having the feature quantity based on the appearance probabilities; and classifying, by the processor, the plurality of data according to the rule.
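Claim 2 in the record above builds appearance probabilities from three ratios: the class ratio in the data distribution, the class ratio in the training data, and the per-class rate of training samples containing the feature. A hedged sketch of one way those ratios might combine (reweighting the training-set class balance toward the data distribution); the toy data and the exact combination rule are assumptions, not the claimed method:

```python
# Toy labelled training data: (label, set of feature quantities present).
train = [("spam", {"offer"}), ("spam", {"offer", "click"}),
         ("ham", {"meeting"}), ("ham", {"offer"})]
# Class ratios in the full data distribution, assumed known (claim 2's first ratio).
data_ratio = {"spam": 0.3, "ham": 0.7}

def appearance_probability(feature, label):
    """P(label | feature), reweighting the training-set class balance
    (second ratio) toward the data-distribution balance (first ratio),
    using the per-class feature rate (third ratio)."""
    n = len(train)
    train_ratio = {c: sum(l == c for l, _ in train) / n for c in data_ratio}
    def feat_rate(c):
        members = [feats for l, feats in train if l == c]
        return sum(feature in feats for feats in members) / len(members)
    joint = {c: feat_rate(c) * data_ratio[c] / train_ratio[c] for c in data_ratio}
    total = sum(joint.values())
    return joint[label] / total

p = appearance_probability("offer", "spam")
print(round(p, 3))
```

Terms whose appearance probabilities pass a usefulness test would then be kept, with weights, in the generated classification rule.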
Methods and apparatus are provided for identifying environmental stimuli in an artificial nervous system using both spiking onset and spike counting. One example method of operating an artificial nervous system generally includes receiving a stimulus; generating, at an artificial neuron, a spike train of two or more spikes based at least in part on the stimulus; identifying the stimulus based at least in part on an onset of the spike train; and checking the identified stimulus based at least in part on a rate of the spikes in the spike train. In this manner, certain aspects of the present disclosure may respond with short response latencies and may also maintain accuracy by allowing for error correction.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for operating an artificial nervous system, comprising: receiving a stimulus; generating, at a first device, a spike train of two or more spikes based at least in part on the stimulus; identifying the stimulus based at least in part on an onset of the spike train; and checking the identified stimulus based at least in part on a rate of the spikes in the spike train. 2. The method of claim 1, further comprising inhibiting a behavior associated with the identified stimulus if the checking fails. 3. The method of claim 1, wherein identifying the stimulus comprises generating, at a second device, a first spike based at least in part on the onset of the spike train. 4. The method of claim 3, further comprising: generating, at the second device, a second spike based at least in part on the rate of spikes in the spike train, wherein the second spike occurs subsequent to the first spike. 5. The method of claim 4, wherein checking the identified stimulus comprises determining whether an interval between the first spike and the second spike corresponds to the identified stimulus. 6. The method of claim 3, wherein the first device is a receptor artificial neuron and wherein the second device is a detector artificial neuron connected with the receptor artificial neuron via an artificial synapse. 7. The method of claim 1, wherein an input strength of the stimulus is proportional to the rate of spikes and inversely proportional to the onset of the spike train. 8. The method of claim 1, further comprising determining the onset of the spike train based on at least one of the foremost spike in the spike train or a set of initial spikes in the spike train. 9. The method of claim 1, further comprising determining the onset of the spike train based on a time difference between a reference signal and the foremost spike in the spike train. 10. 
The method of claim 1, further comprising determining the onset of the spike train based on a transient increase in spike probability. 11. The method of claim 1, wherein the rate of spikes is averaged over a longer period of time than that over which the onset is determined. 12. The method of claim 1, further comprising updating a representation of the identified stimulus with a corrected stimulus if the checking fails. 13. The method of claim 12, further comprising outputting an additive signal to generate a behavior associated with the corrected stimulus. 14. The method of claim 1, further comprising outputting a notification if the checking fails. 15. The method of claim 1, wherein the generating comprises generating the spike train based at least in part on the stimulus and an encoding scheme. 16. An apparatus for operating an artificial nervous system, comprising: a processing system configured to: receive a stimulus; generate a spike train of two or more spikes based at least in part on the stimulus; identify the stimulus based at least in part on an onset of the spike train; and check the identified stimulus based at least in part on a rate of the spikes in the spike train; and a memory coupled to the processing system. 17. An apparatus for operating an artificial nervous system, comprising: means for receiving a stimulus; means for generating a spike train of two or more spikes based at least in part on the stimulus; means for identifying the stimulus based at least in part on an onset of the spike train; and means for checking the identified stimulus based at least in part on a rate of the spikes in the spike train. 18. 
A computer program product for operating an artificial nervous system, comprising a non-transitory computer-readable medium having instructions executable to: receive a stimulus; generate a spike train of two or more spikes based at least in part on the stimulus; identify the stimulus based at least in part on an onset of the spike train; and check the identified stimulus based at least in part on a rate of the spikes in the spike train. 19. A method for identifying a stimulus in an artificial nervous system, comprising: receiving a spike train of two or more spikes at an artificial neuron; outputting a first spike from the artificial neuron based at least in part on the onset of the spike train; and outputting a second spike from the artificial neuron based at least in part on a rate of the spikes in the spike train. 20. The method of claim 19, wherein an interval between the first spike and the second spike is used to check whether the stimulus, as identified based at least in part on the onset of the spike train, is correct. 21. The method of claim 19, wherein the first spike corresponds to an estimate of the stimulus. 22. The method of claim 19, wherein the second spike is used to improve the estimate of the stimulus. 23. The method of claim 19, wherein a timing of the second spike is based on an integration of multiple spikes in the spike train. 24. The method of claim 19, wherein an input strength of the stimulus is proportional to the rate of spikes and inversely proportional to the onset of the spike train. 25. 
An apparatus for identifying a stimulus in an artificial nervous system, comprising: a processing system configured to: receive a spike train of two or more spikes at an artificial neuron; output a first spike from the artificial neuron based at least in part on the onset of the spike train; and output a second spike from the artificial neuron based at least in part on a rate of the spikes in the spike train; and a memory coupled to the processing system. 26. An apparatus for identifying a stimulus in an artificial nervous system, comprising: means for receiving a spike train of two or more spikes at an artificial neuron; means for outputting a first spike from the artificial neuron based at least in part on the onset of the spike train; and means for outputting a second spike from the artificial neuron based at least in part on a rate of the spikes in the spike train. 27. A computer program product for identifying a stimulus in an artificial nervous system, comprising a non-transitory computer-readable medium having instructions executable to: receive a spike train of two or more spikes at an artificial neuron; output a first spike from the artificial neuron based at least in part on the onset of the spike train; and output a second spike from the artificial neuron based at least in part on a rate of the spikes in the spike train.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Methods and apparatus are provided for identifying environmental stimuli in an artificial nervous system using both spiking onset and spike counting. One example method of operating an artificial nervous system generally includes receiving a stimulus; generating, at an artificial neuron, a spike train of two or more spikes based at least in part on the stimulus; identifying the stimulus based at least in part on an onset of the spike train; and checking the identified stimulus based at least in part on a rate of the spikes in the spike train. In this manner, certain aspects of the present disclosure may respond with short response latencies and may also maintain accuracy by allowing for error correction.
G06N308
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Methods and apparatus are provided for identifying environmental stimuli in an artificial nervous system using both spiking onset and spike counting. One example method of operating an artificial nervous system generally includes receiving a stimulus; generating, at an artificial neuron, a spike train of two or more spikes based at least in part on the stimulus; identifying the stimulus based at least in part on an onset of the spike train; and checking the identified stimulus based at least in part on a rate of the spikes in the spike train. In this manner, certain aspects of the present disclosure may respond with short response latencies and may also maintain accuracy by allowing for error correction.
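The identify-then-check scheme in the claims above (identify a stimulus from spike-train onset, then verify it from the spike rate averaged over a longer window) can be sketched as follows. This is an illustrative interpretation only, not the patent's implementation; the lookup tables, window length, and tolerance are hypothetical.

```python
def identify_and_check(spike_times, onset_table, rate_table, window=1.0, tol=0.2):
    """Identify a stimulus from spike-train onset, then check it via spike rate.

    spike_times: sorted spike times (seconds) relative to a reference signal.
    onset_table: hypothetical map {stimulus: expected onset latency}.
    rate_table: hypothetical map {stimulus: expected firing rate, spikes/s}.
    Returns (identified_stimulus, check_passed).
    """
    if len(spike_times) < 2:          # the claims require two or more spikes
        return None, False
    onset = spike_times[0]            # onset = time of the foremost spike
    # Identify: pick the stimulus whose expected onset latency is closest.
    stimulus = min(onset_table, key=lambda s: abs(onset_table[s] - onset))
    # Check: spike rate averaged over a longer window than the onset estimate.
    rate = len(spike_times) / window
    expected = rate_table[stimulus]
    passed = abs(rate - expected) <= tol * expected
    return stimulus, passed
```

A strong stimulus would yield both a short onset latency and a high rate (claim 7), so a fast onset-based identification that the rate later contradicts signals a check failure, allowing the inhibition or correction of claims 2 and 12.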
This disclosure provides a computer-program product, system, method and apparatus for accessing a representation of a category or item and accessing a set of multiple transactions. The transactions are processed to identify items found amongst the transactions, and the items are ordered based on an information-gain heuristic. A depth-first search for a group of best association rules is then conducted using a best-first heuristic and constraints that make the search efficient. The best rules found during the search can then be displayed to a user, along with accompanying statistics. The user can then select rules that appear to be most relevant, and further analytics can be applied to the selected rules to obtain further information about the information provided by these rules.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-program product tangibly embodied in a non-transitory computer-readable storage medium having instructions stored thereon, the instructions executable to cause a data processing apparatus to perform operations including: accessing a representation of a category or item; accessing a set of multiple transactions (hereinafter “set A”), each of the transactions including a label indicating whether or not the transaction is a member of that category or contains that item; compiling a list of distinct items found in the transactions of set A; computing an information gain metric, wherein each of the information gain metrics represents mutual information of the respective distinct item and the target category or item within set A; ordering the distinct items on the list based on the information gain metrics; and evaluating usefulness of multiple association rules in detection of the category or item within transactions, wherein each of the association rules includes a logical conjunction of the presence or absence of one or more items on the list, wherein evaluating the usefulness of the multiple association rules includes applying each of the association rules to each of the transactions of set A. 2. 
The computer-program product of claim 1, wherein applying each of the association rules to each of the transactions of set A includes: counting a number (b) of the transactions in set A that are associated with a true-positive application of the association rule, wherein a transaction is associated with a true-positive application of the association rule when the transaction: includes the presence of all items in that rule that are supposed to be present; excludes all items in that rule that are supposed to be absent and either contains the item of interest in the transaction, or is a member of the target category; counting a number (c) of the transactions in set A that are associated with a false-positive application of the association rule, wherein a transaction is associated with a false-positive application of the association rule when the transaction: includes the presence of all items in that rule that are supposed to be present; excludes all items in that rule that are supposed to be absent and either does not contain the item of interest in the transaction, or is not a member of the target category. 3. 
The computer-program product of claim 2, wherein applying each of the association rules to each of the transactions further includes: counting a number (d) of the transactions in set A that are associated with a true-negative application of the association rule, wherein a transaction is associated with a true-negative application of the association rule when the transaction: does not contain at least one item in that rule that is supposed to be present or contains at least one item in that rule that is supposed to be absent and does not contain the item of interest in the transaction, or is not a member of the target category; counting a number (e) of the transactions in set A that are associated with a false-negative application of the association rule, wherein a transaction is associated with a false-negative application of the association rule when the transaction: does not contain at least one item in that rule that is supposed to be present or contains at least one item in that rule that is supposed to be absent and either contains the item of interest in the transaction, or is a member of the target category. 4. The computer-program product of claim 3, wherein, with respect to each of the multiple association rules, evaluating the usefulness further includes computing a usefulness metric as a function of b, c, d and e. 5. 
The computer-program product of claim 1, wherein the operations further include: computing a statistical significance metric with respect to each of the distinct items, wherein each of the statistical significance metrics computed with respect to a distinct item indicates statistical significance of the information gain metric computed with respect to the distinct item; accessing a statistical significance threshold parameter; and removing at least one of the distinct items from the list prior to applying each of the association rules to each of the transactions of set A, wherein each of the at least one distinct item removed from the list is removed based on its respective statistical significance metric being less than the statistical significance threshold. 6. The computer-program product of claim 1, wherein the operations further include: accessing a parameter (k), wherein k is an integer; accessing a rule expansion parameter (f), wherein f is an integer; and defining a top-k set of association rules that is empty upon being defined and is operable for referencing k association rules found to be most useful in the classification while evaluation of the usefulness of the multiple association rules progresses. 7. 
The computer-program product of claim 6, wherein the multiple association rules include f initial association rules, and wherein evaluating usefulness of multiple association rules further includes: selecting a first one of the distinct items on the list, wherein the first one of the distinct items is selected based on the information gain metric computed with respect to the first one of the distinct items being higher than all other of the information gain metrics computed with respect to distinct items on the list; forming the f initial association rules such that the f initial association rules collectively include f+1 distinct items on the list, wherein each of the f initial association rules is a logical conjunction of items comprising: the first one of the distinct items on the list; and another one of the distinct items on the list; and removing the f+1 distinct items from the list. 8. The computer-program product of claim 7, wherein evaluating usefulness of multiple association rules further includes: identifying k association rules from amongst the f initial association rules, wherein the usefulness metrics computed with respect to the k identified association rules are higher than the usefulness metrics computed with respect to the initial association rules not selected. 9. The computer-program product of claim 8, wherein evaluating usefulness of multiple association rules further includes: adding the k identified association rules to the top-k set of association rules. 10. 
The computer-program product of claim 7, wherein evaluating usefulness of multiple association rules further includes: identifying improvable association rules from amongst the f initial association rules; subsequent to removing the f+1 items from the list, generating f second-stage association rules with respect to each of the improvable association rules, wherein each of the second-stage association rules generated with respect to an improvable association rule is a logical conjunction that includes: the two items included in the improvable association rule; and a distinct item remaining on the list after the f+1 distinct items are removed from the list.
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: This disclosure provides a computer-program product, system, method and apparatus for accessing a representation of a category or item and accessing a set of multiple transactions. The transactions are processed to identify items found amongst the transactions, and the items are ordered based on an information-gain heuristic. A depth-first search for a group of best association rules is then conducted using a best-first heuristic and constraints that make the search efficient. The best rules found during the search can then be displayed to a user, along with accompanying statistics. The user can then select rules that appear to be most relevant, and further analytics can be applied to the selected rules to obtain further information about the information provided by these rules.
G06N502
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: This disclosure provides a computer-program product, system, method and apparatus for accessing a representation of a category or item and accessing a set of multiple transactions. The transactions are processed to identify items found amongst the transactions, and the items are ordered based on an information-gain heuristic. A depth-first search for a group of best association rules is then conducted using a best-first heuristic and constraints that make the search efficient. The best rules found during the search can then be displayed to a user, along with accompanying statistics. The user can then select rules that appear to be most relevant, and further analytics can be applied to the selected rules to obtain further information about the information provided by these rules.
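Two core operations in the claims above, ranking items by an information-gain (mutual-information) metric and scoring a conjunction rule by its true/false positive/negative counts (b, c, d, e of claims 2-4), can be sketched as below. This is a simplified illustration, not the patented system; transactions are modeled as plain Python sets and all function names are hypothetical.

```python
import math

def info_gain(transactions, labels, item):
    """Mutual information (in bits) between presence of `item` and the label."""
    n = len(transactions)
    def H(p):  # binary entropy
        return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    p_label = sum(labels) / n
    with_item = [l for t, l in zip(transactions, labels) if item in t]
    without = [l for t, l in zip(transactions, labels) if item not in t]
    cond = 0.0  # conditional entropy of the label given item presence
    for subset in (with_item, without):
        if subset:
            cond += (len(subset) / n) * H(sum(subset) / len(subset))
    return H(p_label) - cond

def evaluate_rule(transactions, labels, present, absent=()):
    """Count true/false positives/negatives (b, c, d, e) of a conjunction rule."""
    b = c = d = e = 0
    for t, label in zip(transactions, labels):
        fires = all(i in t for i in present) and all(i not in t for i in absent)
        if fires and label:
            b += 1
        elif fires and not label:
            c += 1
        elif not fires and not label:
            d += 1
        else:
            e += 1
    return b, c, d, e
```

A usefulness metric (claim 4) would then be any function of (b, c, d, e), e.g. precision b / (b + c), and the best-first search of claims 6-11 would expand only the top-k rules by that metric.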
A digital human generation method and system, where the method includes: defining a digital human model, where the digital human model includes multiple dimensions of user profile models; acquiring multiple dimensions of data of a specific user that is from multiple data sources; and processing, based on the multiple dimensions of user profile models included in the digital human model, the multiple dimensions of data of the specific user that is from the multiple data sources, to generate multiple dimensions of user profiles corresponding to the specific user, where the multiple dimensions of user profiles of the specific user form a digital human corresponding to the specific user.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A digital human generation method, comprising: defining a digital human model, wherein the digital human model comprises multiple dimensions of user profile models; acquiring multiple dimensions of data of a specific user that is from multiple data sources; and processing, based on the multiple dimensions of user profile models in the digital human model, the multiple dimensions of data of the specific user that is from the multiple data sources, to generate multiple dimensions of user profiles corresponding to the specific user, wherein the multiple dimensions of user profiles of the specific user form a digital human corresponding to the specific user. 2. The method according to claim 1, wherein acquiring multiple dimensions of data of the specific user that is from multiple data sources comprises: acquiring multiple dimensions of data of multiple users that is from multiple data sources; and determining, among the multiple dimensions of data of the multiple users that is from the multiple data sources and according to a belonging relationship between data and a user, the multiple dimensions of data belonging to the specific user that is from the multiple data sources. 3. The method according to claim 2, wherein acquiring multiple dimensions of data of multiple users that is from multiple data sources comprises acquiring the multiple dimensions of data of the multiple users that is from the multiple data sources by using at least one device of a terminal, a communications network element, and a data collection agent. 4. 
The method according to claim 1, wherein the multiple dimensions of user profiles comprise at least two of the following: a user profile in an image dimension, a user profile in a health dimension, a user profile in a behavioral habit dimension, a user profile in a social pattern dimension, a user profile in a consumption habit dimension, and a user profile in an interest and hobby dimension. 5. The method according to claim 1, wherein the method further comprises: performing data cleaning on the multiple dimensions of data of the specific user that is from the multiple data sources; extracting time and a keyword that are corresponding to content of cleaned data; and annotating the cleaned data by using the time and the keyword as annotation information, wherein processing, based on the multiple dimensions of user profile models in the digital human model, the multiple dimensions of data of the specific user that is from the multiple data sources, to generate multiple dimensions of user profiles corresponding to the specific user comprises processing annotated data based on the multiple dimensions of user profile models in the digital human model, to generate the multiple dimensions of user profiles corresponding to the specific user. 6. 
The method according to claim 1, wherein the method further comprises: performing data cleaning on the multiple dimensions of data of the specific user that is from the multiple data sources; extracting time, a location, and a keyword that are corresponding to content of cleaned data; and annotating the cleaned data by using the time, the location, and the keyword as annotation information, wherein the processing, based on the multiple dimensions of user profile models in the digital human model, the multiple dimensions of data of the specific user that is from the multiple data sources, to generate multiple dimensions of user profiles corresponding to the specific user comprises processing annotated data based on the multiple dimensions of user profile models in the digital human model, to generate the multiple dimensions of user profiles corresponding to the specific user. 7. The method according to claim 5, wherein the method further comprises storing the annotated data. 8. The method according to claim 1, wherein after processing, based on the multiple dimensions of user profile models in the digital human model, the multiple dimensions of data of the specific user that is from the multiple data sources, to generate multiple dimensions of user profiles corresponding to the specific user, the method further comprises providing, according to a query condition input by a client, the client with a user profile of a digital human corresponding to the query condition. 9. The method according to claim 1, wherein before defining the digital human model, the method further comprises creating, according to a requirement of the client, user profile models that are used to generate user profiles and corresponding to the requirement. 10. 
The method according to claim 1, wherein the processing, based on the multiple dimensions of user profile models in the digital human model, the multiple dimensions of data of the specific user that is from the multiple data sources, to generate multiple dimensions of user profiles corresponding to the specific user comprises processing, based on the multiple dimensions of user profile models in the digital human model, the multiple dimensions of data of the specific user that is from the multiple data sources by using at least one of the following algorithms, to generate the multiple dimensions of user profiles corresponding to the specific user: a classification algorithm, a clustering algorithm, a regression algorithm, a reinforcement learning algorithm, a transfer learning algorithm, a deep learning algorithm, and an active learning algorithm. 11. A digital human generation system, comprising: a defining module configured to define a digital human model, wherein the digital human model comprises multiple dimensions of user profile models; an acquiring module configured to acquire multiple dimensions of data of a specific user that is from multiple data sources; and a generating module configured to process, based on the multiple dimensions of user profile models in the digital human model defined by the defining module, the multiple dimensions of data of the specific user that is from the multiple data sources and acquired by the acquiring module, to generate multiple dimensions of user profiles corresponding to the specific user, wherein the multiple dimensions of user profiles of the specific user form a digital human corresponding to the specific user. 12. 
The system according to claim 11, wherein the acquiring module comprises: an acquiring unit configured to acquire multiple dimensions of data of multiple users that is from multiple data sources; and a determining unit configured to determine, among the multiple dimensions of data of the multiple users that is from the multiple data sources and acquired by the acquiring unit and according to a belonging relationship between data and a user, the multiple dimensions of data belonging to the specific user that is from the multiple data sources. 13. The system according to claim 12, wherein the acquiring unit is further configured to acquire the multiple dimensions of data of the multiple users that is from the multiple data sources by using at least one device of a terminal, a communications network element, and a data collection agent. 14. The system according to claim 11, wherein the multiple dimensions of user profiles comprise at least two of the following: a user profile in an image dimension, a user profile in a health dimension, a user profile in a behavioral habit dimension, a user profile in a social pattern dimension, a user profile in a consumption habit dimension, and a user profile in an interest and hobby dimension. 15. 
The system according to claim 11, wherein the system further comprises: a cleaning module configured to perform data cleaning on the multiple dimensions of data of the specific user that is from the multiple data sources and acquired by the acquiring module; an extracting module configured to extract time and a keyword that are corresponding to content of data obtained by cleaning by the cleaning module; and an annotating module configured to annotate, by using the time and the keyword as annotation information, the data obtained by cleaning by the cleaning module, wherein the generating module is further configured to process annotated data based on the multiple dimensions of user profile models in the digital human model, to generate the multiple dimensions of user profiles corresponding to the specific user. 16. The system according to claim 11, wherein the system further comprises: a cleaning module configured to perform data cleaning on the multiple dimensions of data of the specific user that is from the multiple data sources and acquired by the acquiring module; an extracting module configured to extract time, a location, and a keyword that are corresponding to content of data obtained by cleaning by the cleaning module; and an annotating module configured to annotate, by using the time, the location and the keyword as annotation information, the data obtained by cleaning by the cleaning module, wherein the generating module is specifically configured to process annotated data based on the multiple dimensions of user profile models in the digital human model, to generate the multiple dimensions of user profiles corresponding to the specific user. 17. The system according to claim 15, wherein the system further comprises: a storing module configured to store the data annotated by the annotating module. 18. 
The system according to claim 11, wherein the system further comprises: a query module configured to provide, according to a query condition input by a client, the client with a user profile of a digital human corresponding to the query condition. 19. The system according to claim 11, wherein the system further comprises: a creating module configured to create, according to a requirement of the client, user profile models that are used to generate user profiles and corresponding to the requirement. 20. The system according to claim 11, wherein the generating module is further configured to process, based on the multiple dimensions of user profile models in the digital human model, the multiple dimensions of data of the specific user that is from the multiple data sources by using at least one of the following algorithms, to generate the multiple dimensions of user profiles corresponding to the specific user: a classification algorithm, a clustering algorithm, a regression algorithm, a reinforcement learning algorithm, a transfer learning algorithm, a deep learning algorithm, and an active learning algorithm. 21. 
A digital human generation system, comprising: a receiving module configured to receive multiple dimensions of data of multiple users that is from multiple data sources; a data preprocessing module configured to determine a user to which the data received by the receiving module belongs, and perform data cleaning and annotation on the data; a data storing module configured to store data preprocessed by the data preprocessing module; a user identity management module configured to manage accounts of the user in the multiple data sources, to determine a belonging relationship between data of multiple users that is stored in the storing module and a user; a user profile model configuration library configured to define user profile models for generating user profiles; an algorithm library configured to store and update multiple algorithms used to generate user profiles; a digital human generating and maintaining module configured to process, based on the user profile models in the user profile model configuration library and according to an algorithm in the algorithm library, the data stored in the storing module, to generate corresponding user profiles, wherein the user profiles form a digital human corresponding to the user; and a digital human application programming interface (API) configured to interact with a client, so that the client queries a user profile of a digital human that is generated by the digital human generating and maintaining module or accepts a requirement raised by the client to create a user profile model.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A digital human generation method and system, where the method includes: defining a digital human model, where the digital human model includes multiple dimensions of user profile models; acquiring multiple dimensions of data of a specific user that is from multiple data sources; and processing, based on the multiple dimensions of user profile models included in the digital human model, the multiple dimensions of data of the specific user that is from the multiple data sources, to generate multiple dimensions of user profiles corresponding to the specific user, where the multiple dimensions of user profiles of the specific user form a digital human corresponding to the specific user.
G06N5025
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A digital human generation method and system, where the method includes: defining a digital human model, where the digital human model includes multiple dimensions of user profile models; acquiring multiple dimensions of data of a specific user that is from multiple data sources; and processing, based on the multiple dimensions of user profile models included in the digital human model, the multiple dimensions of data of the specific user that is from the multiple data sources, to generate multiple dimensions of user profiles corresponding to the specific user, where the multiple dimensions of user profiles of the specific user form a digital human corresponding to the specific user.
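The pipeline described in the claims above (acquire multi-source records, clean them, annotate with time and a keyword, attribute them to users, then apply per-dimension profile models) can be sketched as a minimal pipeline. This is an assumed data layout and an illustrative keyword extractor (first word of the content), not the patented system; all record fields and function names are hypothetical.

```python
from collections import defaultdict

def generate_digital_human(records, profile_models):
    """Sketch of the claimed pipeline.

    records: list of dicts like {"user": ..., "source": ..., "time": ..., "content": ...}.
    profile_models: {dimension: fn(annotated_records) -> profile}, one per dimension.
    Returns {user: {dimension: profile}} -- each user's "digital human".
    """
    # Data cleaning: drop records missing a user or content.
    cleaned = [r for r in records if r.get("user") and r.get("content")]
    # Annotation: extract time and a keyword (here simply the first word).
    by_user = defaultdict(list)
    for r in cleaned:
        annotated = dict(r, keyword=r["content"].split()[0], time=r.get("time"))
        by_user[r["user"]].append(annotated)  # belonging relationship (claim 2)
    # Profile generation: one profile per dimension per user (claim 1).
    return {user: {dim: model(data) for dim, model in profile_models.items()}
            for user, data in by_user.items()}
```

Each profile model here stands in for the classification, clustering, or learning algorithms the claims enumerate (claim 10); any such algorithm could be plugged in per dimension.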
The dynamic risk analyzer (DRA) provided by the present invention periodically assesses real-time or historic process data, or both, associated with an operations site, such as a manufacturing, production, or processing facility, including a plant's operations, and identifies hidden near-misses in such operations even when the real-time process data appears otherwise normal. DRA assesses the process data in a manner that enables operating personnel including management at a facility to have a comprehensive understanding of the risk status and changes in both alarm and non-alarm based process variables. The hidden process near-miss data may be analyzed alone or in combination with other process data and/or data resulting from prior near-miss situations to permit strategic action to be taken to reduce or avert the occurrence of adverse incidents or catastrophic failure of a facility operation.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A system for predicting risk levels for manufacturing operations with risk indicators comprising: a server that receives process data from a real-time data source and/or a historical archive data source comprising a relational database with a key-value storage solution; a processor that analyzes values of parameters P and/or groups G of said parameters P of said process data at time interval T to identify operational risk and/or near-miss risk that would otherwise be unknown or concealed in parameters P, whereby said operational risk and/or near-miss risk may be used for strategic corrective action; and a display that presents said operational risk and/or near-miss risk in a graphic that visually depicts a plotted value V of parameter(s) P of said operational risk and/or near-miss risk in time interval T relationally within time period TP; wherein said system continuously and autonomously operates contemporaneously with said manufacturing operation. 2. A method for dynamic prediction of risk levels in a manufacturing operation comprising: identifying risk and/or near-miss risk of said manufacturing operation that would otherwise be unknown or concealed in parameters P and/or groups G of said parameters P of process data, said process data comprising: data collected from said manufacturing operation and processed in either (a) real-time or (b) from an archive server having a relational database with a key-value storage solution, or both; and displaying said risk or near-miss risk in a graphic that visually reports a plotted value V of parameter(s) P of said risk or near-miss risk relationally within time T period, whereby said plotted value V is displayed with a variable visual indicator corresponding with magnitude of said plotted value V; wherein said method is performed continuously and autonomously. 3. 
A display system for risk indicators for a manufacturing operation comprising: identifying risk and/or near-miss risk of said manufacturing operation that would otherwise be unknown or concealed in parameters P and/or groups G of said parameters P of process data in either real-time and/or historically from an archive server having a relational database with a key-value storage solution; plotting parameter P of said risk and/or near-miss risk on a circular or semi-circular chart of graphic visual indicators comprising: a petal for P parameter at each T time interval; said petal comprising an area plotted with a radius R having a maximum and minimum reportable length and an angle spread greater than 1 degree, wherein said length of said radius R corresponds with a magnitude of said parameter P at said T time interval and partially determines said area of said petal displayed on said chart; and displaying said parameter P at a time interval on said chart over a predetermined time period TP.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: The dynamic risk analyzer (DRA) provided by the present invention periodically assesses real-time or historic process data, or both, associated with an operations site, such as a manufacturing, production, or processing facility, including a plant's operations, and identifies hidden near-misses of such operation, when in real time the process data appears otherwise normal. DRA assesses the process data in a manner that enables operating personnel including management at a facility to have a comprehensive understanding of the risk status and changes in both alarm and non-alarm based process variables. The hidden process near-miss data may be analyzed alone or in combination with other process data and/or data resulting from prior near-miss situations to permit strategic action to be taken to reduce or avert the occurrence of adverse incidents or catastrophic failure of a facility operation.
G06N504
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: The dynamic risk analyzer (DRA) provided by the present invention periodically assesses real-time or historic process data, or both, associated with an operations site, such as a manufacturing, production, or processing facility, including a plant's operations, and identifies hidden near-misses of such operation, when in real time the process data appears otherwise normal. DRA assesses the process data in a manner that enables operating personnel including management at a facility to have a comprehensive understanding of the risk status and changes in both alarm and non-alarm based process variables. The hidden process near-miss data may be analyzed alone or in combination with other process data and/or data resulting from prior near-miss situations to permit strategic action to be taken to reduce or avert the occurrence of adverse incidents or catastrophic failure of a facility operation.
Systems and methods for system identification, encoding and decoding signals in a non-linear system are disclosed. An exemplary method can include receiving the one or more input signals and performing dendritic processing on the input signals. The method can also encode the output of the dendritic processing of the input signals, at a neuron, to provide encoded signals.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of encoding one or more input signals in a non-linear system, comprising: receiving the one or more input signals; performing non-linear dendritic processing on the one or more signals to provide a first output; providing the first output to one or more neurons; and encoding the first output, at the one or more neurons, to provide one or more encoded signals. 2. The method of claim 1, wherein the receiving further comprises modeling the one or more input signals. 3. The method of claim 2, wherein the modeling further comprises modeling the one or more input signals using Volterra series. 4. The method of claim 1, further comprising: modeling the one or more input signals into one or more spaces; performing dendritic processing on each of the one or more spaces to provide an output; and adding the output from dendritic processing of each of the one or more orders to provide a first output. 5. A method of decoding one or more encoded signals in a non-linear system, comprising: receiving the one or more encoded signals; performing convex optimization on the one or more encoded signals to produce a coefficient; and constructing one or more output signals using the coefficient. 6. The method of claim 5, wherein the performing comprises: determining a sampling matrix using the one or more encoded signals; determining a measurement using a time of the one or more encoded signals; and determining a coefficient using the sampling matrix and the measurement. 7. The method of claim 5, wherein the constructing the one or more output signals further comprises: determining a bias based on the one or more encoded signals; and determining the one or more output signals based on the bias and the coefficient. 8. The method of claim 5, wherein the receiving further comprises modeling the one or more encoded signals. 9. 
The method of claim 8, wherein the modeling further comprises modeling using Volterra series. 10. The method of claim 5, further comprising: modeling the one or more encoded signals into one or more orders; and performing convex optimization on each of the one or more orders to provide the coefficient for each of the one or more orders. 11. A method of identifying a projection of an unknown dendritic processor in a non-linear system, comprising: receiving a known input signal; processing the known input signal using a projection of the unknown dendritic processor to produce a first output; encoding the first output, using a neuron, to produce an output signal; and comparing the known input signal and the output signal to identify the projection of the unknown dendritic processor. 12. The method of claim 11, wherein the receiving further comprises modeling the known input signal. 13. The method of claim 12, wherein the modeling further comprises modeling the known input signal using Volterra series. 14. The method of claim 11, further comprising: modeling the known input signal into first one or more orders; and modeling the projection of the dendritic processor of the channel into second one or more orders. 15. The method of claim 14, for each of the first one or more orders: processing the projection of each of the second one or more orders using the known input signal to produce a first output; and adding the output from dendritic processing of each of the one or more orders to provide a first output. 16. 
A system for encoding one or more input signals, comprising: a first computing device having a processor and a memory thereon for the storage of executable instructions and data, wherein the instructions are executed to: receiving the one or more input signals; performing dendritic processing on the one or more signals to provide a first output; providing the first output to one or more neurons; and encoding the first output, at the one or more neurons, to provide one or more encoded signals. 17. The system of claim 16, wherein the receiving further comprises modeling the one or more input signals. 18. The system of claim 17, wherein the modeling further comprises modeling the one or more input signals using Volterra series. 19. The system of claim 16, further comprising: modeling the one or more input signals into one or more orders; performing dendritic processing on each of the one or more orders to provide an output; and adding the output from dendritic processing of each of the one or more orders to provide a first output. 20. The system of claim 16, further comprising: providing the one or more encoded signals to a decoder for decoding the one or more output signals.
REJECTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: Systems and methods for system identification, encoding and decoding signals in a non-linear system are disclosed. An exemplary method can include receiving the one or more input signals and performing dendritic processing on the input signals. The method can also encode the output of the dendritic processing of the input signals, at a neuron, to provide encoded signals.
G06N302
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Systems and methods for system identification, encoding and decoding signals in a non-linear system are disclosed. An exemplary method can include receiving the one or more input signals and performing dendritic processing on the input signals. The method can also encode the output of the dendritic processing of the input signals, at a neuron, to provide encoded signals.
A request classifier service implemented on a server computer receives an input request from a client device. The request classifier service accesses classification data from a knowledge repository. The knowledge repository includes one or more defined input requests mapped to one or more classification types. The request classifier service determines confidence values for the one or more defined input requests. The confidence values represent a relative match score between the input request from the client device and each of the one or more defined input requests. The request classifier service sends classification types to a processing service implemented on the server computer. The processing service determines a process response type for the input request based upon the one or more classification types. The processing logic routes the process response type and the input request to a destination mapped to the process response type.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. An apparatus comprising: one or more processors; and one or more memories storing instructions which, when processed by one or more processors, cause: a request classifier service, executing on a server computer, receiving an input request from a client device; in response to the request classifier service receiving the input request from the client device, the request classifier service: accessing classification data from a knowledge repository, where the classification data comprises one or more defined input requests mapped to one or more classification types; determining confidence values for each of the one or more defined input requests mapped to the one or more classification types, where the confidence value represents a relative match score between the input request from the client device and each of the one or more defined input requests mapped to the one or more classification types; in response to the request classifier service determining the confidence values for each of the one or more defined input requests mapped to the one or more classification types, sending one or more classification types from the one or more defined input requests mapped to the one or more classification types to a processing service on the server computer; the processing service, determining a process response type, based upon the one or more classification types from the one or more defined input requests mapped to the one or more classification types received from the request classifier service; the processing service, routing the process response type, determined by the processing service, and the input request from the client device to a destination mapped to the process response type. 2. 
The apparatus of claim 1, wherein determining a process response type, based upon the one or more classification types from the one or more defined input requests mapped to the one or more classification types received from the request classifier service further comprises evaluating the one or more classification types from the one or more defined input requests mapped to the one or more classification types received from the request classifier service based upon the confidence values for each of the one or more defined input requests; wherein the confidence values are evaluated against a configured confidence value threshold. 3. The apparatus of claim 1, wherein routing the process response type, determined by the processing service, and the input request from the client device to a destination mapped to the process response type further comprises transforming the input request from the client device into a specific transformed request based upon the destination mapped to the process response type. 4. The apparatus of claim 3, wherein the destination mapped to the process response type is a location on a support ticket server, wherein the specific transformed request is formatted to at least one of: SMS message, voice message, and IVR message. 5. The apparatus of claim 3, wherein the destination mapped to the process response type is a location on a database server used to allocate data tasks, wherein the specific transformed request is formatted to a database entry that represents allocating data tasks. 6. The apparatus of claim 1, wherein the one or more memories storing instructions which, when processed by the one or more processors, further cause: the processing service, sending an acknowledgement message to the client device, wherein the acknowledge message includes information related to the process response type and the destination mapped to the process response type. 7. 
The apparatus of claim 6, wherein the one or more memories storing instructions which, when processed by the one or more processors, further cause: the processing service, sending a second acknowledgement message to a second client device associated with the client device, wherein the second acknowledge message includes information related to the process response type and the destination mapped to the process response type. 8. One or more non-transitory computer-readable media storing instructions, which, when processed by one or more processors, cause: a request classifier service, executing on a server computer, receiving an input request from a client device; in response to the request classifier service receiving the input request from the client device, the request classifier service: accessing classification data from a knowledge repository, where the classification data comprises one or more defined input requests mapped to one or more classification types; determining confidence values for each of the one or more defined input requests mapped to the one or more classification types, where the confidence value represents a relative match score between the input request from the client device and each of the one or more defined input requests mapped to the one or more classification types; in response to the request classifier service determining the confidence values for each of the one or more defined input requests mapped to the one or more classification types, sending one or more classification types from the one or more defined input requests mapped to the one or more classification types to a processing service on the server computer; the processing service, determining a process response type, based upon the one or more classification types from the one or more defined input requests mapped to the one or more classification types received from the request classifier service; the processing service, routing the process response type, determined by the 
processing service, and the input request from the client device to a destination mapped to the process response type. 9. The one or more non-transitory computer-readable media of claim 8, wherein determining a process response type, based upon the one or more classification types from the one or more defined input requests mapped to the one or more classification types received from the request classifier service further comprises evaluating the one or more classification types from the one or more defined input requests mapped to the one or more classification types received from the request classifier service based upon the confidence values for each of the one or more defined input requests; wherein the confidence values are evaluated against a configured confidence value threshold. 10. The one or more non-transitory computer-readable media of claim 8, wherein routing the process response type, determined by the processing service, and the input request from the client device to a destination mapped to the process response type further comprises transforming the input request from the client device into a specific transformed request based upon the destination mapped to the process response type. 11. The one or more non-transitory computer-readable media of claim 10, wherein the destination mapped to the process response type is a location on a support ticket server, wherein the specific transformed request is formatted to at least one of: SMS message, voice message, and IVR message. 12. The one or more non-transitory computer-readable media of claim 10, wherein the destination mapped to the process response type is a location on a database server used to allocate data tasks, wherein the specific transformed request is formatted to a database entry that represents allocating data tasks. 13. 
The one or more non-transitory computer-readable media of claim 8, further comprising storing instructions, which, when processed by the one or more processors, cause: the processing service, sending an acknowledgement message to the client device, wherein the acknowledge message includes information related to the process response type and the destination mapped to the process response type. 14. The one or more non-transitory computer-readable media of claim 13, further comprising storing instructions, which, when processed by the one or more processors, cause: the processing service, sending a second acknowledgement message to a second client device associated with the client device, wherein the second acknowledge message includes information related to the process response type and the destination mapped to the process response type. 15. A computer-implemented method comprising: a request classifier service, executing on a server computer, receiving an input request from a client device; in response to the request classifier service receiving the input request from the client device, the request classifier service: accessing classification data from a knowledge repository, where the classification data comprises one or more defined input requests mapped to one or more classification types; determining confidence values for each of the one or more defined input requests mapped to the one or more classification types, where the confidence value represents a relative match score between the input request from the client device and each of the one or more defined input requests mapped to the one or more classification types; in response to the request classifier service determining the confidence values for each of the one or more defined input requests mapped to the one or more classification types, sending one or more classification types from the one or more defined input requests mapped to the one or more classification types to a processing service on the 
server computer; the processing service, determining a process response type, based upon the one or more classification types from the one or more defined input requests mapped to the one or more classification types received from the request classifier service; the processing service, routing the process response type, determined by the processing service, and the input request from the client device to a destination mapped to the process response type. 16. The method of claim 15, wherein determining a process response type, based upon the one or more classification types from the one or more defined input requests mapped to the one or more classification types received from the request classifier service further comprises evaluating the one or more classification types from the one or more defined input requests mapped to the one or more classification types received from the request classifier service based upon the confidence values for each of the one or more defined input requests; wherein the confidence values are evaluated against a configured confidence value threshold. 17. The method of claim 15, wherein routing the process response type, determined by the processing service, and the input request from the client device to a destination mapped to the process response type further comprises transforming the input request from the client device into a specific transformed request based upon the destination mapped to the process response type. 18. The method of claim 17, wherein the destination mapped to the process response type is a location on a support ticket server, wherein the specific transformed request is formatted to at least one of: SMS message, voice message, and IVR message. 19. The method of claim 17, wherein the destination mapped to the process response type is a location on a database server used to allocate data tasks, wherein the specific transformed request is formatted to a database entry that represents allocating data tasks. 20. 
The method of claim 15, further comprising the processing service, sending an acknowledgement message to the client device, wherein the acknowledge message includes information related to the process response type and the destination mapped to the process response type.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A request classifier service implemented on a server computer receives an input request from a client device. The request classifier service accesses classification data from a knowledge repository. The knowledge repository includes one or more defined input requests mapped to one or more classification types. The request classifier service determines confidence values for the one or more defined input requests. The confidence values represent a relative match score between the input request from the client device and each of the one or more defined input requests. The request classifier service sends classification types to a processing service implemented on the server computer. The processing service determines a process response type for the input request based upon the one or more classification types. The processing logic routes the process response type and the input request to a destination mapped to the process response type.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A request classifier service implemented on a server computer receives an input request from a client device. The request classifier service accesses classification data from a knowledge repository. The knowledge repository includes one or more defined input requests mapped to one or more classification types. The request classifier service determines confidence values for the one or more defined input requests. The confidence values represent a relative match score between the input request from the client device and each of the one or more defined input requests. The request classifier service sends classification types to a processing service implemented on the server computer. The processing service determines a process response type for the input request based upon the one or more classification types. The processing logic routes the process response type and the input request to a destination mapped to the process response type.
According to embodiments, methods, systems, and computer program products are provided for receiving one or more input compositions comprising one or more materials, assigning a material vector to each material, learning, for each of the input compositions, a composition vector based on the material vectors of the materials that form each composition, assigning predicted rating values having a confidence level to each of the composition vectors, selecting a composition to be rated based on the confidence levels, presenting the selected composition to be rated to a user, receiving a user rating for the composition to be rated; adjusting the predicted rating values and confidence levels of the composition vectors that have not been rated by the user, and generating a predictive model to predict a user's ratings for compositions when confidence levels of each composition vector is above a predetermined threshold value.
Please help me write a proper abstract based on the patent claims. CLAIM: 1.-8. (canceled) 9. A system to generate a modified material composition, the system comprising: a memory having computer readable instructions; and a processor configured to execute the computer readable instructions, the computer readable instructions comprising: receiving, by the processor, one or more input compositions, each input composition comprising one or more materials; assigning a material vector to each of the materials; learning for each of the one or more input compositions a composition vector based on the material vectors of the materials that form each composition; assigning a predicted rating value to each of the composition vectors, each predicted rating value having a confidence level; selecting a composition to be rated based on the confidence levels of the composition vectors; presenting the selected composition to be rated to a user; receiving a user rating for the composition to be rated; adjusting the predicted rating values and confidence levels of the composition vectors that have not been rated by the user; and generating a predictive model to predict a user's ratings for compositions when confidence levels of each composition vector is above a predetermined threshold value. 10. The system of claim 9, wherein the material vectors are learned by training an artificial neural network. 11. The system of claim 10, wherein inputs and outputs of the artificial neural network include one or more materials from a prior example composition. 12. The system of claim 9, wherein the composition vector is learned based on the material vectors of respective materials of respective compositions and proportions of the materials within the respective compositions. 13. The system of claim 12, wherein ensemble-normalized proportion weights are computed for each material. 14. 
The system of claim 13, wherein the ensemble-normalized weights are responsive to at least one of a distribution of a material percentage of the material within a group of compositions or a normalization of a material percentage of the one or more materials within a single composition by at least one distribution parameter. 15. The system of claim 9, wherein the materials are ingredients and the compositions are recipes. 16. The system of claim 9, further comprising recommending a new composition to the user based on the predictive model. 17. A computer program product to generate a material composition, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to: receive one or more input compositions, each input composition comprising one or more materials; assign a material vector to each of the materials; learn for each of the one or more input compositions a composition vector based on the material vectors of the materials that form each composition; assign a predicted rating value to each of the composition vectors, each predicted rating value having a confidence level; select a composition to be rated based on the confidence levels of the composition vectors; present the selected composition to be rated to a user; receive a user rating for the composition to be rated; adjust the predicted rating values and confidence levels of the composition vectors that have not been rated by the user; and generate a predictive model to predict a user's ratings for compositions when confidence levels of each composition vector is above a predetermined threshold value. 18. The computer program product of claim 17, wherein the material vectors are learned by training an artificial neural network. 19. The computer program product of claim 18, wherein the materials are ingredients and the compositions are recipes. 20. 
The computer program product of claim 18, further causing the processor to recommend a new composition to the user based on the predictive model.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: According to embodiments, methods, systems, and computer program products are provided for receiving one or more input compositions comprising one or more materials, assigning a material vector to each material, learning, for each of the input compositions, a composition vector based on the material vectors of the materials that form each composition, assigning predicted rating values having a confidence level to each of the composition vectors, selecting a composition to be rated based on the confidence levels, presenting the selected composition to be rated to a user, receiving a user rating for the composition to be rated; adjusting the predicted rating values and confidence levels of the composition vectors that have not been rated by the user, and generating a predictive model to predict a user's ratings for compositions when confidence levels of each composition vector is above a predetermined threshold value.
G06N308
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: According to embodiments, methods, systems, and computer program products are provided for receiving one or more input compositions comprising one or more materials, assigning a material vector to each material, learning, for each of the input compositions, a composition vector based on the material vectors of the materials that form each composition, assigning predicted rating values having a confidence level to each of the composition vectors, selecting a composition to be rated based on the confidence levels, presenting the selected composition to be rated to a user, receiving a user rating for the composition to be rated; adjusting the predicted rating values and confidence levels of the composition vectors that have not been rated by the user, and generating a predictive model to predict a user's ratings for compositions when confidence levels of each composition vector is above a predetermined threshold value.
A geofence filtering method, system, and non-transitory computer readable medium, include a user location monitoring circuit configured to monitor a pinpoint location of a user and a boundary location of the user, a geofence determining circuit configured to determine a plurality of geofences that overlap with the boundary location of the user, the plurality of geofences being stored in a database, and a cognitive filtering and ranking circuit configured to filter the plurality of geofences that overlap with the boundary location of the user based on a cognitive factor and to rank the filtered geofences based on the cognitive factor to deliver to a user device when a pinpoint location of the user overlaps with the plurality of geofences.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A geofence filtering system comprising: a user location monitoring circuit configured to monitor a pinpoint location of a user and a boundary location of the user; a geofence determining circuit configured to determine a plurality of geofences that overlap with the boundary location of the user, the plurality of geofences being stored in a database; and a cognitive filtering and ranking circuit configured to filter the plurality of geofences that overlap with the boundary location of the user according to a behavioral measure of the user. 2. The system of claim 1, wherein the cognitive filtering and ranking circuit dynamically filters and ranks the plurality of geofences that overlap with the boundary location of the user such that the filtered and ranked geofences correspond to a real-time cognitive state when the pinpoint location of the user overlaps with the filtered and ranked geofences. 3. The system of claim 1, wherein the cognitive filtering and ranking circuit includes a learned mapping between a set of predetermined cognitive states and the plurality of geofences stored in the database, and wherein the cognitive filtering and ranking circuit continuously updates the ranking of the plurality of geofences based on the user having a predetermined cognitive state of the set of predetermined cognitive states. 4. The system of claim 1, wherein the cognitive filtering and ranking circuit filters all of the plurality of geofences based on the behavioral measure regardless of a location of the user. 5. The system of claim 1, wherein the cognitive filtering and ranking circuit stores at least one of the ranked geofences and the mappings in the database. 6. The system of claim 1, wherein the user selectively sets a size of the boundary location of the user. 7. The system of claim 1, wherein the cognitive filtering and ranking circuit selectively sets a size of the geofence. 8. 
The system of claim 1, wherein a number of the ranked filtered geofences delivered to the user device is based on an operating system of the user device. 9. The system of claim 1, wherein the behavioral measure includes at least one of: a dwell time of the user, a location history of the user; personal information of the user; weather data of the plurality of geofences; a time of day; a current state of traffic and traffic data between the plurality of geofences and the pinpoint location of the user; a social alert; a news alert; a number of people inside the plurality of geofences; an activity of the user device; biometric data of the user; a prior purchase behavior of the user, a speech analysis of the user; an analysis of user device usage; and venue information. 10. A geofence filtering method comprising: monitoring a pinpoint location of a user and a boundary location of the user; determining a plurality of geofences that overlap with the boundary location of the user, the plurality of geofences being stored in a database; and filtering the plurality of geofences that overlap with the boundary location of the user according to a behavioral measure of the user. 11. The method of claim 10, wherein the filtering and ranking further dynamically filters and ranks the plurality of geofences that overlap with the boundary location of the user such that the filtered and ranked geofences correspond to a real-time cognitive state when the pinpoint location of the user overlaps with the filtered and ranked geofences. 12. The method of claim 10, wherein the filtering and ranking further filters and ranks based on a learned mapping between a set of predetermined cognitive states and the plurality of geofences stored in the database, and wherein the filtering and ranking continuously updates the ranking of the plurality of geofences based on the user having a predetermined cognitive state of the set of predetermined cognitive states. 13. 
The method of claim 10, wherein the filtering and ranking further filters all of the plurality of geofences based on the behavioral measure regardless of a location of the user. 14. The method of claim 10, wherein the filtering and ranking further stores at least one of the ranked geofences and the mapping in the database. 15. The method of claim 10, wherein the user selectively sets a size of the boundary location of the user. 16. The method of claim 10, wherein the filtering and ranking selectively sets a size of the geofence. 17. A non-transitory computer-readable recording medium recording a geofence filtering program, the program causing a computer to perform: monitoring a pinpoint location of a user and a boundary location of the user; determining a plurality of geofences that overlap with the boundary location of the user, the plurality of geofences being stored in a database; and filtering the plurality of geofences that overlap with the boundary location of the user according to a behavioral measure of the user. 18. The non-transitory computer-readable recording medium of claim 17, wherein the filtering and ranking further dynamically filters and ranks the plurality of geofences that overlap with the boundary location of the user such that the filtered and ranked geofences correspond to a real-time cognitive state when the pinpoint location of the user overlaps with the filtered and ranked geofences. 19. The non-transitory computer-readable recording medium of claim 17, wherein the filtering and ranking further filters and ranks based on a learned mapping between a set of predetermined cognitive states and the plurality of geofences stored in the database, and wherein the filtering and ranking continuously updates the ranking of the plurality of geofences based on the user having one of the set of predetermined cognitive states. 20. 
The non-transitory computer-readable recording medium of claim 17, wherein the filtering and ranking further filters all of the plurality of geofences based on the behavioral measure regardless of a location of the user.
PENDING
Please predict whether this patent is acceptable. PATENT ABSTRACT: A geofence filtering method, system, and non-transitory computer readable medium, include a user location monitoring circuit configured to monitor a pinpoint location of a user and a boundary location of the user, a geofence determining circuit configured to determine a plurality of geofences that overlap with the boundary location of the user, the plurality of geofences being stored in a database, and a cognitive filtering and ranking circuit configured to filter the plurality of geofences that overlap with the boundary location of the user based on a cognitive factor and to rank the filtered geofences based on the cognitive factor to deliver to a user device when a pinpoint location of the user overlaps with the plurality of geofences.
G06N502
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: A geofence filtering method, system, and non-transitory computer readable medium, include a user location monitoring circuit configured to monitor a pinpoint location of a user and a boundary location of the user, a geofence determining circuit configured to determine a plurality of geofences that overlap with the boundary location of the user, the plurality of geofences being stored in a database, and a cognitive filtering and ranking circuit configured to filter the plurality of geofences that overlap with the boundary location of the user based on a cognitive factor and to rank the filtered geofences based on the cognitive factor to deliver to a user device when a pinpoint location of the user overlaps with the plurality of geofences.
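The geofence record above describes a filter-then-rank pipeline: determine which geofences overlap the user's boundary region, filter them by a behavioral measure, and rank the survivors for delivery. Below is a minimal sketch of that idea, modeling both the boundary and each geofence as a circle; all class names, coordinates, and the scalar `behavioral_score` are illustrative assumptions, not details from the patent.

```python
import math
from dataclasses import dataclass

@dataclass
class Geofence:
    name: str
    x: float
    y: float
    radius: float
    behavioral_score: float  # assumed stand-in for the learned behavioral measure

def overlaps(fence: Geofence, user_x: float, user_y: float,
             boundary_radius: float) -> bool:
    """Two circles overlap when the distance between their centres is
    less than the sum of their radii."""
    d = math.hypot(fence.x - user_x, fence.y - user_y)
    return d < fence.radius + boundary_radius

def filter_and_rank(fences, user_x, user_y, boundary_radius, top_n=3):
    # Keep only fences overlapping the user's boundary region, then rank
    # the survivors by the behavioral score, highest first.
    hits = [f for f in fences if overlaps(f, user_x, user_y, boundary_radius)]
    hits.sort(key=lambda f: f.behavioral_score, reverse=True)
    return hits[:top_n]

fences = [
    Geofence("cafe", 1.0, 1.0, 0.5, 0.9),
    Geofence("gym", 10.0, 10.0, 0.5, 0.8),
    Geofence("shop", 0.5, -0.5, 0.5, 0.4),
]
ranked = filter_and_rank(fences, 0.0, 0.0, 2.0)
print([f.name for f in ranked])  # ['cafe', 'shop'] -- gym is too far away
```

In the claims, the ranked list is re-evaluated continuously and delivered only once the pinpoint location (rather than the wider boundary) actually enters a fence; this sketch covers just the filter-and-rank step.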
An avatar having artificial intelligence for identifying and providing relationship or wellbeing recommendations is provided. The avatar acts as an electronic representation of a user. The avatar searches available information and makes recommendations to the user based on information received from the user or other sources regarding the user's relationship with another person or the user's wellbeing. In this way, the avatar continually learns more about the user to improve future recommendations to enhance the user's wellbeing and relationship with the other person.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method, performed by one or more computing devices, for providing a relationship recommendation to a user, the method comprising: receiving user input that identifies personal characteristics of a user and a user's mate; searching relationship information to identify one or more behaviors that have successfully enhanced the relationship between people having characteristics in common with the user and the user's mate; providing one or more recommendations to the user, the one or more recommendations identifying one or more behaviors that the user can perform to enhance the user's relationship with the user's mate; receiving feedback from the user regarding whether the one or more recommended behaviors were effective at enhancing the relationship between the user and the user's mate; based on the received feedback, identifying one or more additional behaviors to recommend to the user; and providing one or more additional recommendations to the user, the one or more additional recommendations identifying the one or more additional behaviors. 2. The method of claim 1, wherein the relationship information comprises characteristic-to-behavior mappings which each associate one or more behaviors with one or more characteristics of a person whose relationship was enhanced when the one or more associated behaviors was performed in the relationship. 3. The method of claim 2, wherein at least some of the characteristic-to-behavior mappings are generated from feedback received from a plurality of users to whom one or more mapped behaviors were recommended. 4. 
The method of claim 2, wherein at least some of the characteristic-to-behavior mappings are created by: identifying a plurality of users which each have provided feedback that a first behavior was successful in enhancing a relationship between the user and a mate; identifying one or more characteristics that each of the plurality of users share in common; and creating a characteristic-to-behavior mapping between the one or more characteristics shared in common and the first behavior. 5. The method of claim 2, wherein the one or more behaviors are identified by: identifying a first characteristic-to-behavior mapping that includes one or more characteristics of the user or the user's mate; and selecting the one or more behaviors from the first characteristic-to-behavior mapping. 6. The method of claim 5, wherein the one or more additional behaviors are identified by: identifying a second characteristic-to-behavior mapping that includes one or more of the same characteristics as the first characteristic-to-behavior mapping; and selecting the one or more additional behaviors from the second characteristic-to-behavior mapping. 7. The method of claim 5, wherein the one or more additional behaviors are identified by: identifying a plurality of users that each has provided feedback that the one or more behaviors of the first characteristic-to-behavior mapping were effective in enhancing a relationship of the user; identifying a second characteristic-to-behavior mapping that includes the one or more additional behaviors for which each of the plurality of users has provided feedback that the one or more additional behaviors were effective in enhancing a relationship of the user; and selecting the one or more additional behaviors from the second characteristic-to-behavior mapping. 8. The method of claim 1, wherein the personal characteristics comprise one or more of personality, capabilities, preferences, beliefs, goals, habits, or interests of the user or the user's mate. 9. 
One or more computer storage media storing computer executable instructions which when executed by one or more processors implement a method for providing a relationship recommendation to a user, the method comprising: receiving user input that identifies personal characteristics of a user and a user's mate; searching relationship information to identify one or more behaviors that have successfully enhanced the relationship between people having characteristics in common with the user and the user's mate; providing one or more recommendations to the user, the one or more recommendations identifying one or more behaviors that the user can perform to enhance the user's relationship with the user's mate; receiving feedback from the user regarding whether the one or more recommended behaviors were effective at enhancing the relationship between the user and the user's mate; based on the received feedback, identifying one or more additional behaviors to recommend to the user; and providing one or more additional recommendations to the user, the one or more additional recommendations identifying the one or more additional behaviors. 10. The computer storage media of claim 9, wherein the relationship information comprises characteristic-to-behavior mappings which each associate one or more behaviors with one or more characteristics of a person whose relationship was enhanced when the one or more associated behaviors was performed in the relationship. 11. The computer storage media of claim 10, wherein at least some of the characteristic-to-behavior mappings are generated from feedback received from a plurality of users to whom one or more mapped behaviors were recommended. 12. 
The computer storage media of claim 10, wherein at least some of the characteristic-to-behavior mappings are created by: identifying a plurality of users which each have provided feedback that a first behavior was successful in enhancing a relationship between the user and a mate; identifying one or more characteristics that each of the plurality of users share in common; and creating a characteristic-to-behavior mapping between the one or more characteristics shared in common and the first behavior. 13. The computer storage media of claim 10, wherein the one or more behaviors are identified by: identifying a first characteristic-to-behavior mapping that includes one or more characteristics of the user or the user's mate; and selecting the one or more behaviors from the first characteristic-to-behavior mapping. 14. The computer storage media of claim 13, wherein the one or more additional behaviors are identified by: identifying a second characteristic-to-behavior mapping that includes one or more of the same characteristics as the first characteristic-to-behavior mapping; and selecting the one or more additional behaviors from the second characteristic-to-behavior mapping. 15. The computer storage media of claim 13, wherein the one or more additional behaviors are identified by: identifying a plurality of users that each has provided feedback that the one or more behaviors of the first characteristic-to-behavior mapping were effective in enhancing a relationship of the user; identifying a second characteristic-to-behavior mapping that includes the one or more additional behaviors for which each of the plurality of users has provided feedback that the one or more additional behaviors were effective in enhancing a relationship of the user; and selecting the one or more additional behaviors from the second characteristic-to-behavior mapping. 16. 
The computer storage media of claim 9, wherein the personal characteristics comprise one or more of personality, capabilities, preferences, beliefs, goals, habits, or interests of the user or the user's mate. 17. A method, performed by one or more computing devices, for providing a relationship recommendation to a user, the method comprising: receiving user input that identifies personal characteristics of a user and a user's mate; searching relationship information to identify one or more behaviors that have successfully enhanced the relationship between people having characteristics in common with the user and the user's mate, the relationship information comprising characteristic-to-behavior mappings which each associate one or more behaviors with one or more characteristics of a person whose relationship was enhanced when the one or more associated behaviors was performed in the relationship such that the one or more behaviors that are identified are mapped to one or more of the personal characteristics of the user and the user's mate; and providing one or more recommendations to the user, the one or more recommendations identifying the one or more behaviors that the user can perform to enhance the user's relationship with the user's mate. 18. The method of claim 17, further comprising: receiving feedback from the user regarding whether the one or more recommended behaviors were effective at enhancing the relationship between the user and the user's mate; based on the received feedback, identifying one or more additional behaviors to recommend to the user; and providing one or more additional recommendations to the user, the one or more additional recommendations identifying the one or more additional behaviors. 19. 
The method of claim 17, wherein at least some of the characteristic-to-behavior mappings are created by: identifying a plurality of users which each have provided feedback that a first behavior was successful in enhancing a relationship between the user and a mate; identifying one or more characteristics that each of the plurality of users share in common; and creating a characteristic-to-behavior mapping between the one or more characteristics shared in common and the first behavior. 20. The method of claim 18, wherein the one or more behaviors are identified by: identifying a first characteristic-to-behavior mapping that includes one or more characteristics of the user or the user's mate; and selecting the one or more behaviors from the first characteristic-to-behavior mapping; and the one or more additional behaviors are identified by: identifying a second characteristic-to-behavior mapping that includes one or more of the same characteristics as the first characteristic-to-behavior mapping; and selecting the one or more additional behaviors from the second characteristic-to-behavior mapping.
REJECTED
Please predict whether this patent is acceptable. PATENT ABSTRACT: An avatar having artificial intelligence for identifying and providing relationship or wellbeing recommendations is provided. The avatar acts as an electronic representation of a user. The avatar searches available information and makes recommendations to the user based on information received from the user or other sources regarding the user's relationship with another person or the user's wellbeing. In this way, the avatar continually learns more about the user to improve future recommendations to enhance the user's wellbeing and relationship with the other person.
G06N504
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: An avatar having artificial intelligence for identifying and providing relationship or wellbeing recommendations is provided. The avatar acts as an electronic representation of a user. The avatar searches available information and makes recommendations to the user based on information received from the user or other sources regarding the user's relationship with another person or the user's wellbeing. In this way, the avatar continually learns more about the user to improve future recommendations to enhance the user's wellbeing and relationship with the other person.
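The core mechanism in the avatar record's claims is the characteristic-to-behavior mapping: when several users report that the same behavior enhanced their relationship, the characteristics those users share are mapped to that behavior, and behaviors are later recommended to users matching the mapped characteristics. A minimal sketch of that idea follows; the data shapes and function names are assumptions made for demonstration, not the patent's implementation.

```python
def build_mapping(feedback):
    """feedback: list of (user_characteristics: set, behavior: str, success: bool).
    Returns {behavior: characteristics shared by all users who reported success}."""
    by_behavior = {}
    for chars, behavior, success in feedback:
        if not success:
            continue
        if behavior in by_behavior:
            by_behavior[behavior] &= chars   # keep only traits shared so far
        else:
            by_behavior[behavior] = set(chars)
    return by_behavior

def recommend(mapping, user_chars):
    """Select behaviors whose mapped characteristics the user possesses."""
    return [b for b, chars in mapping.items() if chars <= set(user_chars)]

feedback = [
    ({"introvert", "reader"}, "write a note", True),
    ({"introvert", "runner"}, "write a note", True),
    ({"extrovert"}, "plan a party", True),
]
mapping = build_mapping(feedback)
print(mapping["write a note"])            # {'introvert'}
print(recommend(mapping, {"introvert"}))  # ['write a note']
```

The claims' feedback loop would then feed each new success or failure report back into `build_mapping`, refining which characteristics a behavior is associated with over time.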
A method for forecasting time delays added to a scheduled start time and a scheduled end time of a task includes generating a stochastic model of the task and resources affecting the task, the stochastic model includes a reactionary delay component that is a function of previous task end times and a root cause delay component that is an independent random process at a specific time. The method further includes: calculating a probability distribution of time delays added to the scheduled start time as a combination of the reactionary delay component and the root cause delay component using the stochastic model to provide a probability distribution of start times; and calculating a probability distribution of time delays added to the scheduled end time as a combination of the reactionary delay component and the root cause delay component using the stochastic model to provide a probability distribution of end times.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. (canceled) 2. (canceled) 3. (canceled) 4. (canceled) 5. (canceled) 6. (canceled) 7. (canceled) 8. (canceled) 9. (canceled) 10. (canceled) 11. (canceled) 12. (canceled) 13. (canceled) 14. (canceled) 15. A system for forecasting time delays added to a scheduled start time and a scheduled end time of a task, the system comprising: a processor configured to: generate a stochastic model of the task and resources affecting the task, the stochastic model comprising a reactionary delay component and a root cause delay component, the reactionary component being a function of previous task end times and the root cause delay component being an independent random process at a specific time; calculate a probability distribution of time delays added to the scheduled start time as a combination of the reactionary delay component and the root cause delay component using the stochastic model to provide a probability distribution of start times; calculate a probability distribution of time delays added to the scheduled end time as a combination of the reactionary delay component and the root cause delay component using the stochastic model to provide a probability distribution of end times; transmit a signal comprising the probability distribution of start times and the probability distribution of end times to a signal receiving device; a signal receiving device configured to receive the signal comprising the probability distribution of start times and the probability distribution of end times. 16. The system according to claim 15, wherein the signal receiving device comprises at least one of a display and a printer. 17. The system according to claim 15, wherein the signal receiving device comprises at least one of a non-transitory storage medium and memory. 18. 
The system according to claim 15, wherein the stochastic model is a hidden Markov model (HMM) and the processor is further configured to train the HMM using historical schedule and delay data. 19. The system according to claim 15, wherein the processor is further configured to: generate a stochastic model of each of the sub-tasks and resources affecting the sub-tasks, the stochastic model comprising a reactionary delay component and a root-cause delay component, the reactionary delay component being a function of previous sub-task end times and the root-cause delay component being an independent random process at a specific time; calculate a probability distribution of time delays added to a scheduled start time of each sub-task as a combination of the reactionary delay component and the root cause delay component of each sub-task using the stochastic model of each of the sub-tasks and resources affecting the sub-tasks to provide a probability distribution of start times of the sub-tasks; and calculate a probability distribution of time delays added to a scheduled end time of each sub-task as a combination of the reactionary delay component and the root cause delay component of each sub-task using the stochastic model of each of the sub-tasks and resources affecting the sub-tasks to provide a probability distribution of end times of the sub-tasks. 20. 
A non-transitory computer-readable medium comprising computer-executable instructions for forecasting time delays added to a scheduled start time and a scheduled end time of a task that when executed by a computer implement a method comprising: generating a stochastic model of the task and resources affecting the task, the stochastic model comprising a reactionary delay component and a root cause delay component, the reactionary component being a function of previous task end times and the root cause delay component being an independent random process at a specific time; calculating a probability distribution of time delays added to the scheduled start time as a combination of the reactionary delay component and the root cause delay component using the stochastic model to provide a probability distribution of start times; calculating a probability distribution of time delays added to the scheduled end time as a combination of the reactionary delay component and the root cause delay component using the stochastic model to provide a probability distribution of end times; and transmitting a signal comprising the probability distribution of start times and the probability distribution of end times to a signal receiving device.
PENDING
Please predict whether this patent is acceptable. PATENT ABSTRACT: A method for forecasting time delays added to a scheduled start time and a scheduled end time of a task includes generating a stochastic model of the task and resources affecting the task, the stochastic model includes a reactionary delay component that is a function of previous task end times and a root cause delay component that is an independent random process at a specific time. The method further includes: calculating a probability distribution of time delays added to the scheduled start time as a combination of the reactionary delay component and the root cause delay component using the stochastic model to provide a probability distribution of start times; and calculating a probability distribution of time delays added to the scheduled end time as a combination of the reactionary delay component and the root cause delay component using the stochastic model to provide a probability distribution of end times.
G06N7005
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: A method for forecasting time delays added to a scheduled start time and a scheduled end time of a task includes generating a stochastic model of the task and resources affecting the task, the stochastic model includes a reactionary delay component that is a function of previous task end times and a root cause delay component that is an independent random process at a specific time. The method further includes: calculating a probability distribution of time delays added to the scheduled start time as a combination of the reactionary delay component and the root cause delay component using the stochastic model to provide a probability distribution of start times; and calculating a probability distribution of time delays added to the scheduled end time as a combination of the reactionary delay component and the root cause delay component using the stochastic model to provide a probability distribution of end times.
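The delay-forecasting record combines two delay sources: a reactionary component that depends on when the previous task ended, and a root-cause component drawn as an independent random process. A tiny Monte Carlo sketch of that combination is below; the claims actually suggest a hidden Markov model trained on historical schedule data, so the specific distributions here (uniform choice over prior end times, exponential disruptions with an assumed 5-minute mean) are purely illustrative.

```python
import random

random.seed(0)  # make the sampled distribution reproducible

def simulate_start_delays(prev_end_times, scheduled_start, n=10_000):
    """Return n sampled delays (minutes) added to the scheduled start time."""
    delays = []
    for _ in range(n):
        prev_end = random.choice(prev_end_times)
        # Reactionary delay: the task cannot start before the previous one ends.
        reactionary = max(0.0, prev_end - scheduled_start)
        # Root-cause delay: independent random disruption at this specific time.
        root_cause = random.expovariate(1 / 5.0)  # assumed mean of 5 minutes
        delays.append(reactionary + root_cause)
    return delays

# Previous task ends at 55, 60, or 70 minutes; this task is scheduled at 60.
delays = simulate_start_delays(prev_end_times=[55, 60, 70], scheduled_start=60)
mean_delay = sum(delays) / len(delays)
print(round(mean_delay, 1))
```

The sampled `delays` list is exactly the "probability distribution of start times" the claims describe (shifted by the scheduled start); repeating the simulation with the task's own duration model would yield the corresponding end-time distribution.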
An approach is provided for personalizing an error message for a user. The usage of help content by the user to resolve error condition(s) is monitored. Attributes of the usage are determined. A learning style of the user is determined based on the attributes. An error condition is detected. The error message is augmented (i.e., personalized) with a communication and/or hyperlink that is compatible with the learning style of the user, and that is configured to assist the user with resolving the detected error condition. The augmented error message is presented to the user.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of personalizing an error message for a user, the method comprising the steps of: a computer monitoring a usage by the user of content of a help system integrated with a software application, the help system modified so that interactions of the user with the help system are monitored, the usage of the content causing the user to resolve one or more error conditions, and the usage including the content with which the user interacts and one or more types of interaction the user has with the content; the computer determining attributes of the usage of the content of the help system; based on the attributes of the usage, the computer generating a model of a learning style of the user; based on the model, the computer determining the learning style of the user; subsequent to the step of determining the learning style, the computer detecting an error condition; the computer augmenting the error message with a communication and/or a hyperlink that activates an action, the communication and hyperlink compatible with the learning style of the user and configured to assist the user with resolving the error condition; and the computer presenting the augmented error message to the user during an interaction between the user and the application. 2. The method of claim 1, further comprising the step of the computer selecting the communication from a plurality of communications based on the learning style, wherein the step of augmenting the error message includes augmenting the error message with the selected communication. 3. 
The method of claim 1, further comprising the steps of: the computer monitoring a usage by a second user of content of the help system, the usage of the content by the second user causing the second user to resolve one or more error conditions; the computer determining second attributes of the usage of the content by the second user; the computer determining a second learning style of the second user based on the second attributes; subsequent to the step of determining the second learning style, the computer detecting another error condition; the computer augmenting a second error message with a communication and/or a hyperlink compatible with the second learning style of the second user and configured to assist the second user with resolving the second error condition; and the computer presenting the augmented second error message to the second user, wherein the augmented second error message presented to the second user is different from the augmented error message presented to the user, which results from the second learning style of the second user being different from the learning style of the user. 4. The method of claim 1, wherein the step of monitoring the usage includes tracking the usage from within the computer or via a monitoring agent external to the computer. 5. The method of claim 1, wherein the step of determining the attributes of the usages includes determining a learning type, a learning format, an interactivity level, an interactivity type, and a semantic density specified by the usage by the user of the content of the help system. 6. The method of claim 1, further comprising the steps of: the computer retrieving information about the user from a user profile of the user, the information including a name of the user and/or characteristics of the user other than the attributes on which the learning style is based; and the computer augmenting the error message with the retrieved information and/or with a message based on the retrieved information. 
7. The method of claim 1, further comprising the step of the computer receiving an instruction from the user to look up the non-augmented error message in the help system, wherein the step of presenting the augmented error message is performed in response to the step of receiving the instruction to look up the non-augmented error message in the help system. 8. The method of claim 1, further comprising the step of providing at least one support service for at least one of creating, integrating, hosting, maintaining, and deploying computer-readable program code in the computer, the program code being executed by a processor of the computer to implement the steps of monitoring, determining the attributes, generating, determining the learning style, detecting, augmenting, and presenting. 9. A computer system comprising: a central processing unit (CPU); a memory coupled to the CPU; a computer-readable, tangible storage device coupled to the CPU, the storage device containing instructions that are executed by the CPU via the memory to implement a method of personalizing an error message for a user, the method comprising: the computer system monitoring a usage by the user of content of a help system integrated with a software application, the help system modified so that interactions of the user with the help system are monitored, the usage of the content causing the user to resolve one or more error conditions, and the usage including the content with which the user interacts and one or more types of interaction the user has with the content; the computer system determining attributes of the usage of the content of the help system; based on the attributes of the usage, the computer system generating a model of a learning style of the user; based on the model, the computer system determining the learning style of the user; subsequent to the step of determining the learning style, the computer system detecting an error condition; the computer system augmenting the error 
message with a communication and/or a hyperlink that activates an action, the communication and hyperlink compatible with the learning style of the user and configured to assist the user with resolving the error condition; and the computer system presenting the augmented error message to the user during an interaction between the user and the application. 10. The computer system of claim 9, wherein the method further comprises the step of the computer system selecting the communication from a plurality of communications based on the learning style, wherein the step of augmenting the error message includes augmenting the error message with the selected communication. 11. The computer system of claim 9, wherein the method further comprises the steps of: the computer system monitoring a usage by a second user of content of the help system, the usage of the content by the second user causing the second user to resolve one or more error conditions; the computer system determining second attributes of the usage of the content by the second user; the computer system determining a second learning style of the second user based on the second attributes; subsequent to the step of determining the second learning style, the computer system detecting another error condition; the computer system augmenting a second error message with a communication and/or a hyperlink compatible with the second learning style of the second user and configured to assist the second user with resolving the second error condition; and the computer system presenting the augmented second error message to the second user, wherein the augmented second error message presented to the second user is different from the augmented error message presented to the user, which results from the second learning style of the second user being different from the learning style of the user. 12. 
The computer system of claim 9, wherein the step of monitoring the usage includes tracking the usage from within the computer system or via a monitoring agent external to the computer system. 13. The computer system of claim 9, wherein the step of determining the attributes of the usages includes determining a learning type, a learning format, an interactivity level, an interactivity type, and a semantic density specified by the usage by the user of the content of the help system. 14. The computer system of claim 9, wherein the method further comprises the steps of: the computer system retrieving information about the user from a user profile of the user, the information including a name of the user and/or characteristics of the user other than the attributes on which the learning style is based; and the computer system augmenting the error message with the retrieved information and/or with a message based on the retrieved information. 15. A computer program product, comprising: a computer-readable, tangible storage device; and a computer-readable program code stored in the computer-readable, tangible storage device, the computer-readable program code containing instructions that are executed by a central processing unit (CPU) of a computer system to implement a method of personalizing an error message for a user, the method comprising: the computer system monitoring a usage by the user of content of a help system integrated with a software application, the help system modified so that interactions of the user with the help system are monitored, the usage of the content causing the user to resolve one or more error conditions, and the usage including the content with which the user interacts and one or more types of interaction the user has with the content; the computer system determining attributes of the usage of the content of the help system; based on the attributes of the usage, the computer system generating a model of a learning style of the user; based on 
the model, the computer system determining the learning style of the user; subsequent to the step of determining the learning style, the computer system detecting an error condition; the computer system augmenting the error message with a communication and/or a hyperlink that activates an action, the communication and hyperlink compatible with the learning style of the user and configured to assist the user with resolving the error condition; and the computer system presenting the augmented error message to the user during an interaction between the user and the application. 16. The program product of claim 15, wherein the method further comprises the step of the computer system selecting the communication from a plurality of communications based on the learning style, wherein the step of augmenting the error message includes augmenting the error message with the selected communication. 17. The program product of claim 15, wherein the method further comprises the steps of: the computer system monitoring a usage by a second user of content of the help system, the usage of the content by the second user causing the second user to resolve one or more error conditions; the computer system determining second attributes of the usage of the content by the second user; the computer system determining a second learning style of the second user based on the second attributes; subsequent to the step of determining the second learning style, the computer system detecting another error condition; the computer system augmenting a second error message with a communication and/or a hyperlink compatible with the second learning style of the second user and configured to assist the second user with resolving the second error condition; and the computer system presenting the augmented second error message to the second user, wherein the augmented second error message presented to the second user is different from the augmented error message presented to the user, which results from 
the second learning style of the second user being different from the learning style of the user. 18. The program product of claim 15, wherein the step of monitoring the usage includes tracking the usage from within the computer system or via a monitoring agent external to the computer system. 19. The program product of claim 15, wherein the step of determining the attributes of the usages includes determining a learning type, a learning format, an interactivity level, an interactivity type, and a semantic density specified by the usage by the user of the content of the help system. 20. The program product of claim 15, wherein the method further comprises the steps of: the computer system retrieving information about the user from a user profile of the user, the information including a name of the user and/or characteristics of the user other than the attributes on which the learning style is based; and the computer system augmenting the error message with the retrieved information and/or with a message based on the retrieved information.
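The augmentation step the claims describe (attaching a communication and/or hyperlink matched to the user's learning style) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the `resources` mapping and its `communication`/`link` keys are hypothetical names invented for the example.

```python
def augment_error(message, learning_style, resources):
    """Append a communication and hyperlink matched to the user's learning style.

    `resources` maps a learning style (e.g. "visual") to a dict with the
    hypothetical keys 'communication' and 'link'; both are illustrative.
    """
    hint = resources.get(learning_style)
    if hint is None:
        # No style-specific material available: present the plain message.
        return message
    return f"{message} {hint['communication']} See: {hint['link']}"
```

A visual learner would thus see the original error text followed by, say, a link to a walkthrough video, while a user with no matching material sees the unmodified message.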
ACCEPTED
Please predict whether this patent is acceptable. PATENT ABSTRACT: An approach is provided for personalizing an error message for a user. The usage of help content by the user to resolve error condition(s) is monitored. Attributes of the usage are determined. A learning style of the user is determined based on the attributes. An error condition is detected. The error message is augmented (i.e., personalized) with a communication and/or hyperlink that is compatible with the learning style of the user, and that is configured to assist the user with resolving the detected error condition. The augmented error message is presented to the user.
G06N99005
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: An approach is provided for personalizing an error message for a user. The usage of help content by the user to resolve error condition(s) is monitored. Attributes of the usage are determined. A learning style of the user is determined based on the attributes. An error condition is detected. The error message is augmented (i.e., personalized) with a communication and/or hyperlink that is compatible with the learning style of the user, and that is configured to assist the user with resolving the detected error condition. The augmented error message is presented to the user.
A method of predicting growth of a crack in a member includes memorizing, for each portion on the member, stress distribution Δσ(a) in the depth direction obtained in the case that no crack is present, a relationship between depth of growing cracks and creep contribution, and a relationship between creep contribution and parameters C and m of the Paris's law, receiving from a user an indication of a certain portion on the member, acquiring the stress distribution Δσ(a) in the depth direction for the certain portion, acquiring a creep contribution at the depth of a growing crack for the certain portion, from the relationship between depth of cracks and creep contribution memorized for the certain portion, and acquiring parameters C and m corresponding to the acquired creep contribution, from the relationship between creep contribution and parameters C and m of the Paris's law memorized for the certain portion.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of predicting growth of a crack in a member comprising the steps, executed by an information processing device, of: memorizing, for each portion on the member, stress distribution Δσ(a) in the depth direction obtained in the case that no crack is present, a relationship between depth of growing cracks and creep contribution, and a relationship between creep contribution and parameters C and m of the Paris's law; receiving from a user an indication of a certain portion on the member; acquiring the stress distribution Δσ(a) in the depth direction for the certain portion; acquiring a creep contribution at the depth of a growing crack for the certain portion, from the relationship between depth of cracks and creep contribution memorized for the certain portion; acquiring parameters C and m corresponding to the acquired creep contribution, from the relationship between creep contribution and parameters C and m of the Paris's law memorized for the certain portion; and predicting the growth of the crack in the certain portion, based on the following equations: da/dN=C×(ΔK)m, and ΔK=Δσ(a)×(π×a)1/2 wherein “a” is a crack depth, N is a number of occurrences of a cyclic stress, C and m are constants determined for a member, and ΔK is a stress intensity factor range. 2. The method according to claim 1, wherein the relationship between depth of cracks and creep contribution memorized by the information processing device is calculated based on temporal change of stress at a depth of a growing crack in the member, a creep rupture property, a measured value of a number of occurrences of a cyclic stress before a crack occurs in a portion on the member. 3. 
The method according to claim 2, comprising the steps, executed by an information processing device, of: acquiring, from the user, a length of the crack appeared on a surface of the certain portion; predicting a depth of the crack occurred in the certain portion based on the length of the crack; and defining the predicted depth of the crack as an initial value to be used in predicting a growth of a crack in the certain portion. 4. The method according to claim 3, wherein by the information processing device a value obtained by multiplying the acquired length of the crack appeared on the surface of the certain portion with ⅓ is used as a predicted value for the depth of the crack occurred in the certain portion. 5. The method according to claim 1, wherein by the information processing device, when a curve representing the relationship between number N of the cyclic stresses and length “a” of a crack, the curve being obtained by predicting growth of cracks in the certain portion, has a steeply changing segment where the length “a” of the crack steeply changes relative to the change in the number N of the cyclic stresses, the curve is corrected by drawing a tangent line on an upwardly convex portion of the steeply changing segment from the vicinity of the origin. 6. 
The method according to claim 1, wherein the stress distribution Δσ(a) in the depth direction obtained in the case that no crack is present for each portion on the member is obtained by: choosing, among known stress-strain properties, a property that satisfies the following equations: 1/Nf=1/Npp+1/Ncp, Δεcp=A2×Ncp−α2, and Δεpp=A1×Npp−α1 wherein Nf is a known number of occurrences of cracks in the portion, Ncp is a number of occurrences of cracks of a cp type (tensile creep strain+compressive plasticity strain) in strain range partitioning, Npp is a number of occurrences of cracks of a pp type (tensile plasticity strain+compressive plasticity strain) in the strain range partitioning, Δεcp is a cp strain range in the strain range partitioning, Δεpp is a pp strain range in the strain range partitioning, and A1, A2, α1, and α2 are all experimentally calculated constants; and determining stress distribution Δσ(0) of the portion in the case that no crack is present in the portion based on the chosen stress-strain property to perform a numerical analysis based on the Δσ(0), and wherein: by the information processing device the growth of the crack in the portion is predicted according to the acquired stress distribution Δσ(a) and the following equations: da/dN=C×(ΔK)m, and ΔK=Δσ(a)×(π×a)1/2 wherein “a” is a crack depth, N is a number of occurrences of a cyclic stress, C and m are constants determined for a member, and ΔK is a stress intensity factor range. 7. 
An information processing device comprising: a central processing unit (CPU); a memory; means for memorizing, for each portion on a member, stress distribution Δσ(a) in the depth direction obtained in the case no crack is present, a relationship between depth of growing cracks and creep contribution, and a relationship between creep contribution and parameters C and m of the Paris's law; means for receiving from a user an indication of a certain portion on the member; means for acquiring the stress distribution Δσ(a) in the depth direction for the certain portion; means for acquiring a creep contribution at the depth of a growing crack for the certain portion, from the relationship between depth of cracks and creep contribution memorized for the certain portion; means for acquiring parameters C and m corresponding to the acquired creep contribution, from the relationship between creep contribution and parameters C and m of the Paris's law memorized for the certain portion; and means for predicting the growth of the crack in the certain portion, based on the following equations: da/dN=C×(ΔK)m, and ΔK=Δσ(a)×(π×a)1/2 wherein “a” is a crack depth, N is a number of occurrences of a cyclic stress, C and m are constants determined for a member, and ΔK is a stress intensity factor range. 8. The information processing device according to claim 7, wherein the relationship between depth of cracks and creep contribution is calculated based on temporal change of stress at a depth of a growing crack in the member, a creep rupture property, a measured value of a number of occurrences of a cyclic stress before a crack occurs in a portion. 9. 
The information processing device according to claim 8, further comprising: means for acquiring, from the user, a length of the crack appeared on a surface of the certain portion; means for predicting a depth of the crack occurred in the certain portion based on the length of the crack; and means for defining the predicted depth of the crack as an initial value to be used in predicting a growth of a crack in the certain portion. 10. The information processing device according to claim 9, wherein a value obtained by multiplying the acquired length of the crack appeared on the surface of the certain portion with ⅓ is used as a predicted value for the depth of the crack occurred in the certain portion. 11. The information processing device according to claim 7, further comprising: means for correcting a curve representing the relationship between number N of the cyclic stresses and a length “a” of a crack, the curve being obtained by predicting growth of cracks in the certain portion, when the curve has a steeply changing segment where the length “a” of the crack steeply changes relative to the change in the number N of the cyclic stresses, the curve is corrected by drawing a tangent line on an upwardly convex portion of the steeply changing segment from the vicinity of the origin. 12. 
The information processing device according to claim 7, further comprising: means for obtaining the stress distribution Δσ(a) in the depth direction obtained in the case no crack is present for each portion on the member by: choosing, among known stress-strain properties, a property that satisfies the following equations: 1/Nf=1/Npp+1/Ncp, Δεcp=A2×Ncp−α2, and Δεpp=A1×Npp−α1 wherein Nf is a known number of occurrences of cracks in the portion, Ncp is a number of occurrences of cracks of a cp type (tensile creep strain+compressive plasticity strain) in strain range partitioning, Npp is a number of occurrences of cracks of a pp type (tensile plasticity strain+compressive plasticity strain) in the strain range partitioning, Δεcp is a cp strain range in the strain range partitioning, Δεpp is a pp strain range in the strain range partitioning, and A1, A2, α1, and α2 are all experimentally calculated constants; and determining stress distribution Δσ(0) of the portion in the case no crack is present in the portion based on the chosen stress-strain property to perform a numerical analysis based on the Δσ(0); and means for predicting the growth of the crack in the portion according to the acquired stress distribution Δσ(a) and the following equations (Paris's law): da/dN=C×(ΔK)m, and ΔK=Δσ(a)×(π×a)1/2 wherein “a” is a crack depth, N is a number of occurrences of a cyclic stress, C and m are constants determined for a member, and ΔK is a stress intensity factor range. 13. The method according to claim 1, comprising the steps, executed by an information processing device, of: acquiring, from the user, a length of the crack appeared on a surface of the certain portion; predicting a depth of the crack occurred in the certain portion based on the length of the crack; and defining the predicted depth of the crack as an initial value to be used in predicting a growth of a crack in the certain portion. 14. 
The method according to claim 13, wherein by the information processing device a value obtained by multiplying the acquired length of the crack appeared on the surface of the certain portion with ⅓ is used as a predicted value for the depth of the crack occurred in the certain portion. 15. The method according to claim 13, wherein by the information processing device, when a curve representing the relationship between number N of the cyclic stresses and length “a” of a crack, the curve being obtained by predicting growth of cracks in the certain portion, has a steeply changing segment where the length “a” of the crack steeply changes relative to the change in the number N of the cyclic stresses, the curve is corrected by drawing a tangent line on an upwardly convex portion of the steeply changing segment from the vicinity of the origin. 16. The method according to claim 5, wherein the stress distribution Δσ(a) in the depth direction obtained in the case that no crack is present for each portion on the member is obtained by: choosing, among known stress-strain properties, a property that satisfies the following equations: 1/Nf=1/Npp+1/Ncp, Δεcp=A2×Ncp−α2, and Δεpp=A1×Npp−α1 wherein Nf is a known number of occurrences of cracks in the portion, Ncp is a number of occurrences of cracks of a cp type (tensile creep strain+compressive plasticity strain) in strain range partitioning, Npp is a number of occurrences of cracks of a pp type (tensile plasticity strain+compressive plasticity strain) in the strain range partitioning, Δεcp is a cp strain range in the strain range partitioning, Δεpp is a pp strain range in the strain range partitioning, and A1, A2, α1, and α2 are all experimentally calculated constants; and determining stress distribution Δσ(0) of the portion in the case that no crack is present in the portion based on the chosen stress-strain property to perform a numerical analysis based on the Δσ(0), and wherein: by the information processing device the growth of
the crack in the portion is predicted according to the acquired stress distribution Δσ(a) and the following equations: da/dN=C×(ΔK)m, and ΔK=Δσ(a)×(π×a)1/2 wherein “a” is a crack depth, N is a number of occurrences of a cyclic stress, C and m are constants determined for a member, and ΔK is a stress intensity factor range. 17. The method according to claim 13, wherein the stress distribution Δσ(a) in the depth direction obtained in the case that no crack is present for each portion on the member is obtained by: choosing, among known stress-strain properties, a property that satisfies the following equations: 1/Nf=1/Npp+1/Ncp, Δεcp=A2×Ncp−α2, and Δεpp=A1×Npp−α1 wherein Nf is a known number of occurrences of cracks in the portion, Ncp is a number of occurrences of cracks of a cp type (tensile creep strain+compressive plasticity strain) in strain range partitioning, Npp is a number of occurrences of cracks of a pp type (tensile plasticity strain+compressive plasticity strain) in the strain range partitioning, Δεcp is a cp strain range in the strain range partitioning, Δεpp is a pp strain range in the strain range partitioning, and A1, A2, α1, and α2 are all experimentally calculated constants; and determining stress distribution Δσ(0) of the portion in the case that no crack is present in the portion based on the chosen stress-strain property to perform a numerical analysis based on the Δσ(0), and wherein: by the information processing device the growth of the crack in the portion is predicted according to the acquired stress distribution Δσ(a) and the following equations: da/dN=C×(ΔK)m, and ΔK=Δσ(a)×(π×a)1/2 wherein “a” is a crack depth, N is a number of occurrences of a cyclic stress, C and m are constants determined for a member, and ΔK is a stress intensity factor range. 18. 
The method according to claim 14, wherein the stress distribution Δσ(a) in the depth direction obtained in the case that no crack is present for each portion on the member is obtained by: choosing, among known stress-strain properties, a property that satisfies the following equations: 1/Nf=1/Npp+1/Ncp, Δεcp=A2×Ncp−α2, and Δεpp=A1×Npp−α1 wherein Nf is a known number of occurrences of cracks in the portion, Ncp is a number of occurrences of cracks of a cp type (tensile creep strain+compressive plasticity strain) in strain range partitioning, Npp is a number of occurrences of cracks of a pp type (tensile plasticity strain+compressive plasticity strain) in the strain range partitioning, Δεcp is a cp strain range in the strain range partitioning, Δεpp is a pp strain range in the strain range partitioning, and A1, A2, α1, and α2 are all experimentally calculated constants; and determining stress distribution Δσ(0) of the portion in the case that no crack is present in the portion based on the chosen stress-strain property to perform a numerical analysis based on the Δσ(0), and wherein: by the information processing device the growth of the crack in the portion is predicted according to the acquired stress distribution Δσ(a) and the following equations: da/dN=C×(ΔK)m, and ΔK=Δσ(a)×(π×a)1/2 wherein “a” is a crack depth, N is a number of occurrences of a cyclic stress, C and m are constants determined for a member, and ΔK is a stress intensity factor range. 19.
The method according to claim 15, wherein the stress distribution Δσ(a) in the depth direction obtained in the case that no crack is present for each portion on the member is obtained by: choosing, among known stress-strain properties, a property that satisfies the following equations: 1/Nf=1/Npp+1/Ncp, Δεcp=A2×Ncp−α2, and Δεpp=A1×Npp−α1 wherein Nf is a known number of occurrences of cracks in the portion, Ncp is a number of occurrences of cracks of a cp type (tensile creep strain+compressive plasticity strain) in strain range partitioning, Npp is a number of occurrences of cracks of a pp type (tensile plasticity strain+compressive plasticity strain) in the strain range partitioning, Δεcp is a cp strain range in the strain range partitioning, Δεpp is a pp strain range in the strain range partitioning, and A1, A2, α1, and α2 are all experimentally calculated constants; and determining stress distribution Δσ(0) of the portion in the case that no crack is present in the portion based on the chosen stress-strain property to perform a numerical analysis based on the Δσ(0), and wherein: by the information processing device the growth of the crack in the portion is predicted according to the acquired stress distribution Δσ(a) and the following equations: da/dN=C×(ΔK)m, and ΔK=Δσ(a)×(π×a)1/2 wherein “a” is a crack depth, N is a number of occurrences of a cyclic stress, C and m are constants determined for a member, and ΔK is a stress intensity factor range. 20. The information processing device according to claim 7, comprising: means for acquiring, from the user, a length of the crack appeared on a surface of the certain portion; means for predicting a depth of the crack occurred in the certain portion based on the length of the crack; and means for defining the predicted depth of the crack as an initial value to be used in predicting a growth of a crack in the certain portion. 21. 
The information processing device according to claim 20, wherein a value obtained by multiplying the acquired length of the crack appeared on the surface of the certain portion with ⅓ is used as a predicted value for the depth of the crack occurred in the certain portion. 22. The information processing device according to claim 20, comprising: means for correcting a curve representing the relationship between number N of the cyclic stresses and a length “a” of a crack, the curve being obtained by predicting growth of cracks in the certain portion, when the curve has a steeply changing segment where the length “a” of the crack steeply changes relative to the change in the number N of the cyclic stresses, the curve is corrected by drawing a tangent line on an upwardly convex portion of the steeply changing segment from the vicinity of the origin. 23. The information processing device according to claim 11, comprising: means for obtaining the stress distribution Δσ(a) in the depth direction obtained in the case no crack is present for each portion on the member by: choosing, among known stress-strain properties, a property that satisfies the following equations: 1/Nf=1/Npp+1/Ncp, Δεcp=A2×Ncp−α2, and Δεpp=A1×Npp−α1 wherein Nf is a known number of occurrences of cracks in the portion, Ncp is a number of occurrences of cracks of a cp type (tensile creep strain+compressive plasticity strain) in strain range partitioning, Npp is a number of occurrences of cracks of a pp type (tensile plasticity strain+compressive plasticity strain) in the strain range partitioning, Δεcp is a cp strain range in the strain range partitioning, Δεpp is a pp strain range in the strain range partitioning, and A1, A2, α1, and α2 are all experimentally calculated constants; and determining stress distribution Δσ(0) of the portion in the case no crack is present in the portion based on the chosen stress-strain property to perform a numerical analysis based on the Δσ(0); and means for predicting the
growth of the crack in the portion according to the acquired stress distribution Δσ(a) and the following equations (Paris's law): da/dN=C×(ΔK)m, and ΔK=Δσ(a)×(π×a)1/2 wherein “a” is a crack depth, N is a number of occurrences of a cyclic stress, C and m are constants determined for a member, and ΔK is a stress intensity factor range. 24. The information processing device according to claim 20, comprising: means for obtaining the stress distribution Δσ(a) in the depth direction obtained in the case no crack is present for each portion on the member by: choosing, among known stress-strain properties, a property that satisfies the following equations: 1/Nf=1/Npp+1/Ncp, Δεcp=A2×Ncp−α2, and Δεpp=A1×Npp−α1 wherein Nf is a known number of occurrences of cracks in the portion, Ncp is a number of occurrences of cracks of a cp type (tensile creep strain+compressive plasticity strain) in strain range partitioning, Npp is a number of occurrences of cracks of a pp type (tensile plasticity strain+compressive plasticity strain) in the strain range partitioning, Δεcp is a cp strain range in the strain range partitioning, Δεpp is a pp strain range in the strain range partitioning, and A1, A2, α1, and α2 are all experimentally calculated constants; and determining stress distribution Δσ(0) of the portion in the case no crack is present in the portion based on the chosen stress-strain property to perform a numerical analysis based on the Δσ(0); and means for predicting the growth of the crack in the portion according to the acquired stress distribution Δσ(a) and the following equations (Paris's law): da/dN=C×(ΔK)m, and ΔK=Δσ(a)×(π×a)1/2 wherein “a” is a crack depth, N is a number of occurrences of a cyclic stress, C and m are constants determined for a member, and ΔK is a stress intensity factor range. 25. 
The information processing device according to claim 20, comprising: means for obtaining the stress distribution Δσ(a) in the depth direction obtained in the case no crack is present for each portion on the member by: choosing, among known stress-strain properties, a property that satisfies the following equations: 1/Nf=1/Npp+1/Ncp, Δεcp=A2×Ncp−α2, and Δεpp=A1×Npp−α1 wherein Nf is a known number of occurrences of cracks in the portion, Ncp is a number of occurrences of cracks of a cp type (tensile creep strain+compressive plasticity strain) in strain range partitioning, Npp is a number of occurrences of cracks of a pp type (tensile plasticity strain+compressive plasticity strain) in the strain range partitioning, Δεcp is a cp strain range in the strain range partitioning, Δεpp is a pp strain range in the strain range partitioning, and A1, A2, α1, and α2 are all experimentally calculated constants; and determining stress distribution Δσ(0) of the portion in the case no crack is present in the portion based on the chosen stress-strain property to perform a numerical analysis based on the Δσ(0); and means for predicting the growth of the crack in the portion according to the acquired stress distribution Δσ(a) and the following equations (Paris's law): da/dN=C×(ΔK)m, and ΔK=Δσ(a)×(π×a)1/2 wherein “a” is a crack depth, N is a number of occurrences of a cyclic stress, C and m are constants determined for a member, and ΔK is a stress intensity factor range. 26.
The information processing device according to claim 20, comprising: means for obtaining the stress distribution Δσ(a) in the depth direction obtained in the case no crack is present for each portion on the member by: choosing, among known stress-strain properties, a property that satisfies the following equations: 1/Nf=1/Npp+1/Ncp, Δεcp=A2×Ncp−α2, and Δεpp=A1×Npp−α1 wherein Nf is a known number of occurrences of cracks in the portion, Ncp is a number of occurrences of cracks of a cp type (tensile creep strain+compressive plasticity strain) in strain range partitioning, Npp is a number of occurrences of cracks of a pp type (tensile plasticity strain+compressive plasticity strain) in the strain range partitioning, Δεcp is a cp strain range in the strain range partitioning, Δεpp is a pp strain range in the strain range partitioning, and A1, A2, α1, and α2 are all experimentally calculated constants; and determining stress distribution Δσ(0) of the portion in the case no crack is present in the portion based on the chosen stress-strain property to perform a numerical analysis based on the Δσ(0); and means for predicting the growth of the crack in the portion according to the acquired stress distribution Δσ(a) and the following equations (Paris's law): da/dN=C×(ΔK)m, and ΔK=Δσ(a)×(π×a)1/2 wherein “a” is a crack depth, N is a number of occurrences of a cyclic stress, C and m are constants determined for a member, and ΔK is a stress intensity factor range.
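The crack-growth recurrence in these claims, da/dN=C×(ΔK)m with ΔK=Δσ(a)×(π×a)1/2, can be integrated numerically cycle by cycle. The sketch below is a plain forward-Euler integration under assumed units (depth a in metres, Δσ in MPa, so C is in m/cycle per (MPa·√m)^m); the function name and the illustrative values of C and m are not from the patent.

```python
import math

def grow_crack(a0, delta_sigma, C, m, cycles, dN=1.0):
    """Integrate Paris's law da/dN = C*(dK)**m with dK = delta_sigma(a)*sqrt(pi*a).

    a0          initial crack depth in metres
    delta_sigma callable giving the stress range (MPa) at depth a
    C, m        Paris-law constants (material-specific; values here illustrative)
    cycles      number of stress cycles to simulate
    """
    a = a0
    n = 0.0
    history = []
    while n < cycles:
        dK = delta_sigma(a) * math.sqrt(math.pi * a)  # stress intensity factor range
        a += C * dK ** m * dN                         # forward-Euler step in N
        n += dN
        history.append(a)
    return a, history
```

For a constant 100 MPa stress range with C=1e-12 and m=3, a 1 mm crack grows only fractions of a micrometre over 1000 cycles, which is the expected slow Paris-regime behaviour.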
PENDING
Please predict whether this patent is acceptable. PATENT ABSTRACT: A method of predicting growth of a crack in a member includes memorizing, for each portion on the member, stress distribution Δσ(a) in the depth direction obtained in the case that no crack is present, a relationship between depth of growing cracks and creep contribution, and a relationship between creep contribution and parameters C and m of the Paris's law, receiving from a user an indication of a certain portion on the member, acquiring the stress distribution Δσ(a) in the depth direction for the certain portion, acquiring a creep contribution at the depth of a growing crack for the certain portion, from the relationship between depth of cracks and creep contribution memorized for the certain portion, and acquiring parameters C and m corresponding to the acquired creep contribution, from the relationship between creep contribution and parameters C and m of the Paris's law memorized for the certain portion.
G06N504
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: A method of predicting growth of a crack in a member includes memorizing, for each portion on the member, stress distribution Δσ(a) in the depth direction obtained in the case that no crack is present, a relationship between depth of growing cracks and creep contribution, and a relationship between creep contribution and parameters C and m of the Paris's law, receiving from a user an indication of a certain portion on the member, acquiring the stress distribution Δσ(a) in the depth direction for the certain portion, acquiring a creep contribution at the depth of a growing crack for the certain portion, from the relationship between depth of cracks and creep contribution memorized for the certain portion, and acquiring parameters C and m corresponding to the acquired creep contribution, from the relationship between creep contribution and parameters C and m of the Paris's law memorized for the certain portion.
A method, computer program product and system for determining a minimal explanation. A model comprising a plurality of constraints is received. An output for the model is determined and a subset of the constraints that provide the same output for the model is constructed. The construction includes determining a first constraint that forms part of the subset of the constraints, testing the constraint(s) that form the subset of the constraints to determine if the subset of constraints provides the same output for the model, selecting a further constraint that has a variable in common with at least one constraint in the subset of constraints, and repeating the testing of the constraint(s) and the selecting of the further constraint until the testing of the constraint(s) that form the subset of the constraints determines that the subset of the constraints does provide the same output for the model.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer implemented method for determining a minimal explanation, the method comprising: receiving a model comprising a plurality of constraints; determining an output for the model; and constructing, by a processor, a subset of the constraints that provide the same output for the model, the construction comprising: determining a first constraint that forms part of the subset of the constraints; testing the constraint(s) that form the subset of the constraints to determine if the subset of constraints provides the same output for the model; selecting a further constraint that has a variable in common with at least one constraint in the subset of constraints, the further constraint forming part of the subset of constraints; and repeating the testing of the constraint(s) and the selecting of the further constraint until the testing of the constraint(s) that form the subset of the constraints determines that the subset of constraints does provide the same output for the model. 2. The method as recited in claim 1, wherein the selecting of the further constraint that has the variable in common with at least one constraint in the subset of constraints, the further constraint forming part of the subset of constraints, comprises generating a list of candidate constraints from 1 to K each of which has a variable in common with at least one constraint in the subset of constraints and performing repeated binary searches on the list of candidate constraints until only one constraint remains. 3. 
The method as recited in claim 2, wherein the selecting of the further constraint that has the variable in common with at least one constraint in the subset of constraints, the further constraint forming part of the subset of constraints, comprises selecting a candidate i from the list of candidate constraints, building a current constraint system from the constraints of the model excluding the constraints i+1 to K from the candidate list, testing the current constraint system to determine if the current constraint system provides the same output for the model and removing all candidate constraints from i+1 to K from the candidates list and the current constraint system if the current constraint system does provide the same output for the model, or removing all candidate constraints from 1 to i from the candidates list if the current constraint system does not provide the same output for the model. 4. The method as recited in claim 1 further comprising: building a graph of connected constraints wherein two constraints are connected if they share at least one variable, and wherein the selecting of the further constraint that has the variable in common with at least one constraint in the subset of constraints, the further constraint forming part of the subset of constraints comprises navigating the built graph from a node defining a constraint in the subset of constraints to a connected node defining a constraint not in the subset of constraints. 5. The method as recited in claim 4, wherein the connected node selected has a lowest number of connections of all of the nodes connected to the node defining the constraint in the subset of constraints. 6. 
A computer program product for determining a minimal explanation, the computer program product comprising a computer readable storage medium having program code embodied therewith, the program code comprising the programming instructions for: receiving a model comprising a plurality of constraints; determining an output for the model; and constructing a subset of the constraints that provide the same output for the model, the construction comprising the programming instructions for: determining a first constraint that forms part of the subset of the constraints; testing the constraint(s) that form the subset of the constraints to determine if the subset of constraints provides the same output for the model; selecting a further constraint that has a variable in common with at least one constraint in the subset of constraints, the further constraint forming part of the subset of constraints; and repeating the testing of the constraint(s) and the selecting of the further constraint until the testing of the constraint(s) that form the subset of the constraints determines that the subset of constraints does provide the same output for the model. 7. The computer program product as recited in claim 6, wherein the programming instructions for selecting of the further constraint that has the variable in common with at least one constraint in the subset of constraints, the further constraint forming part of the subset of constraints, comprises the programming instructions for generating a list of candidate constraints from 1 to K each of which has a variable in common with at least one constraint in the subset of constraints and performing repeated binary searches on the list of candidate constraints until only one constraint remains. 8. 
The computer program product as recited in claim 7, wherein the programming instructions for selecting of the further constraint that has the variable in common with at least one constraint in the subset of constraints, the further constraint forming part of the subset of constraints, comprises the programming instructions for selecting a candidate i from the list of candidate constraints, building a current constraint system from the constraints of the model excluding the constraints i+1 to K from the candidate list, testing the current constraint system to determine if the current constraint system provides the same output for the model and removing all candidate constraints from i+1 to K from the candidates list and the current constraint system if the current constraint system does provide the same output for the model, or removing all candidate constraints from 1 to i from the candidates list if the current constraint system does not provide the same output for the model. 9. The computer program product as recited in claim 6, wherein the program code further comprises the programming instructions for: building a graph of connected constraints wherein two constraints are connected if they share at least one variable, and wherein the programming instructions for selecting of the further constraint that has the variable in common with at least one constraint in the subset of constraints, the further constraint forming part of the subset of constraints comprises the programming instructions for navigating the built graph from a node defining a constraint in the subset of constraints to a connected node defining a constraint not in the subset of constraints. 10. The computer program product as recited in claim 9, wherein the connected node selected has a lowest number of connections of all of the nodes connected to the node defining the constraint in the subset of constraints. 11. 
A system, comprising: a memory unit for storing a computer program for determining a minimal explanation; and a processor coupled to the memory unit, wherein the processor is configured to execute the program instructions of the computer program comprising: receiving a model comprising a plurality of constraints; determining an output for the model; and constructing a subset of the constraints that provide the same output for the model, the construction comprising the program instructions for: determining a first constraint that forms part of the subset of the constraints; testing the constraint(s) that form the subset of the constraints to determine if the subset of constraints provides the same output for the model; selecting a further constraint that has a variable in common with at least one constraint in the subset of constraints, the further constraint forming part of the subset of constraints; and repeating the testing of the constraint(s) and the selecting of the further constraint until the testing of the constraint(s) that form the subset of the constraints determines that the subset of constraints does provide the same output for the model. 12. The system as recited in claim 11, wherein the program instructions for selecting of the further constraint that has the variable in common with at least one constraint in the subset of constraints, the further constraint forming part of the subset of constraints, comprises the program instructions for generating a list of candidate constraints from 1 to K each of which has a variable in common with at least one constraint in the subset of constraints and performing repeated binary searches on the list of candidate constraints until only one constraint remains. 13. 
The system as recited in claim 12, wherein the program instructions for selecting of the further constraint that has the variable in common with at least one constraint in the subset of constraints, the further constraint forming part of the subset of constraints, comprises the program instructions for selecting a candidate i from the list of candidate constraints, building a current constraint system from the constraints of the model excluding the constraints i+1 to K from the candidate list, testing the current constraint system to determine if the current constraint system provides the same output for the model and removing all candidate constraints from i+1 to K from the candidates list and the current constraint system if the current constraint system does provide the same output for the model, or removing all candidate constraints from 1 to i from the candidates list if the current constraint system does not provide the same output for the model. 14. The system as recited in claim 11, wherein the program instructions of the computer program further comprise: building a graph of connected constraints wherein two constraints are connected if they share at least one variable, and wherein the program instructions for selecting of the further constraint that has the variable in common with at least one constraint in the subset of constraints, the further constraint forming part of the subset of constraints comprises the program instructions for navigating the built graph from a node defining a constraint in the subset of constraints to a connected node defining a constraint not in the subset of constraints. 15. The system as recited in claim 14, wherein the connected node selected has a lowest number of connections of all of the nodes connected to the node defining the constraint in the subset of constraints.
REJECTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: A method, computer program product and system for determining a minimal explanation. A model comprising a plurality of constraints is received. An output for the model is determined and a subset of the constraints that provide the same output for the model is constructed. The construction includes determining a first constraint that forms part of the subset of the constraints, testing the constraint(s) that form the subset of the constraints to determine if the subset of constraints provides the same output for the model, selecting a further constraint that has a variable in common with at least one constraint in the subset of constraints, and repeating the testing of the constraint(s) and the selecting of the further constraint until the testing of the constraint(s) that form the subset of the constraints determines that the subset of the constraints does provide the same output for the model.
G06N504
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A method, computer program product and system for determining a minimal explanation. A model comprising a plurality of constraints is received. An output for the model is determined and a subset of the constraints that provide the same output for the model is constructed. The construction includes determining a first constraint that forms part of the subset of the constraints, testing the constraint(s) that form the subset of the constraints to determine if the subset of constraints provides the same output for the model, selecting a further constraint that has a variable in common with at least one constraint in the subset of constraints, and repeating the testing of the constraint(s) and the selecting of the further constraint until the testing of the constraint(s) that form the subset of the constraints determines that the subset of the constraints does provide the same output for the model.
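The claims above grow a subset of constraints, each sharing a variable with one already selected, until the subset reproduces the full model's output. A toy sketch of that loop, assuming interval constraints on integers and feasibility as the model output (the constraint representation and brute-force test are illustrative assumptions, not the patent's implementation):

```python
from itertools import product

def feasible(constraints, domain=range(-10, 11)):
    """Brute force: does some assignment satisfy every constraint?"""
    vars_ = sorted({v for v, _, _ in constraints})
    ops = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b}
    for values in product(domain, repeat=len(vars_)):
        env = dict(zip(vars_, values))
        if all(ops[op](env[v], k) for v, op, k in constraints):
            return True
    return False

def minimal_explanation(model):
    target = feasible(model)          # output of the full model
    subset = [model[0]]               # a first constraint
    remaining = list(model[1:])
    while feasible(subset) != target:  # test the current subset
        # prefer a further constraint sharing a variable with the subset
        shared = {v for v, _, _ in subset}
        nxt = (next(c for c in remaining if c[0] in shared)
               if any(c[0] in shared for c in remaining)
               else remaining[0])
        remaining.remove(nxt)
        subset.append(nxt)
    return subset
```

For a model like {x >= 5, y >= 0, x <= 3}, the unrelated constraint on y is never pulled in: the explanation stops as soon as the contradictory pair on x reproduces the model's infeasibility.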
A method and a device for predicting insulator pollution grade includes acquiring prediction data affecting the insulator pollution grade; acquiring current pollution status of the insulator; predicting the insulator pollution grade based on the prediction data, the current pollution status and an insulator pollution grade calculating model, wherein the insulator pollution grade calculating model at least comprises an initial pollution status variable of the insulator, and a pollutant accumulation prediction and a pollutant reduction prediction based on the initial pollution status variable, at least one of the accumulation prediction and the reduction prediction being associated with the prediction data, and the initial pollution status variable being associated with the current pollution status.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for predicting insulator pollution grade, comprising: acquiring prediction data affecting the insulator pollution grade; acquiring current pollution status of the insulator; predicting the insulator pollution grade based on the prediction data, the current pollution status and an insulator pollution grade calculating model, wherein the insulator pollution grade calculating model at least comprises an initial pollution status variable of the insulator, and a pollutant accumulation prediction and a pollutant reduction prediction based on the initial pollution status variable; at least one of the accumulation prediction and the reduction prediction being associated with the prediction data, and the initial pollution status variable being associated with the current pollution status. 2. The method according to claim 1, further comprising: determining the probability that pollution flashover occurs on the insulator based on the predicted insulator pollution grade and an insulator pollution flashover model, wherein the pollution flashover model represents a relationship between the insulator pollution grade and pollution flashover occurrence. 3. The method according to claim 1, wherein the pollutant accumulation prediction is at least associated with air density, air speed and humidity contained in the prediction data. 4. The method according to claim 3, wherein the pollutant accumulation prediction is represented by equation ∫t1t2ρDV·e−r2+RH·dt, in which D represents average sea salt concentration in the area which the insulator is located in, ρ represents air density, V represents air speed, r represents the vertical distance from the insulator to coastline, RH represents relative humidity of the air, t1 represents the time corresponding to the current pollution status of the insulator; and t2 represents a moment in future. 5. 
The method according to claim 1, wherein the pollutant reduction prediction is at least associated with rainfall rate contained in the prediction data. 6. The method according to claim 5, wherein the pollutant reduction prediction is represented by equation ∫t1t2I0L·(1−e−R)·dt, wherein I0 represents the initial pollution status variable of the insulator; L represents an elimination coefficient; R represents the rainfall rate; t1 represents the time corresponding to the current pollution status of the insulator; and t2 represents a moment in future. 7. The method according to claim 1, wherein acquiring the current pollution status of the insulator comprises: determining a basic pollution status based on the position of the insulator in a pollution area distribution; and determining the current pollution status after updating the basic pollution status. 8. The method according to claim 7, wherein the pollution area distribution is a second pollution area distribution determined based on a first pollution area distribution and the insulator pollution grade calculating model, and the first pollution area distribution is determined based on the historical data. 9. The method according to claim 8, wherein determining the second pollution area distribution based on the first pollution area distribution and the insulator pollution grade calculating model comprises: determining the insulator pollution grade of each lattice point based on the first pollution area distribution and the insulator pollution grade calculating model; and determining the second pollution area distribution based on the insulator pollution grade of each lattice point. 10. The method according to claim 1, wherein the prediction data comprises at least one of: weather data, environment data, and geographic data. 11. 
A device for predicting insulator pollution grade, the device comprising: a first acquiring module, configured to acquire prediction data affecting the insulator pollution grade; a second acquiring module, configured to acquire the current pollution status of the insulator; a predicting module, configured to predict the insulator pollution grade based on the prediction data, the current pollution status and an insulator pollution grade calculating model, wherein the insulator pollution grade calculating model at least includes the initial pollution status variable of the insulator, and a pollutant accumulation prediction and a pollutant reduction prediction on the basis of the initial pollution status variable; at least one of the accumulation prediction and the reduction prediction being associated with the prediction data, and the initial pollution status variable being associated with the current pollution status. 12. The device according to claim 11, further comprising: a pollution flashover probability determining module, configured to determine the probability that pollution flashover occurs on the insulator based on the predicted insulator pollution grade and an insulator pollution flashover model, and the pollution flashover model represents a relationship between the insulator pollution grade and pollution flashover occurrence. 13. The device according to claim 10, wherein the pollutant accumulation prediction is at least associated with air density, air speed and humidity contained in the prediction data. 14. 
The device according to claim 13, wherein the pollutant accumulation prediction is represented by equation ∫t1t2ρDV·e−r2+RH·dt, in which D represents average sea salt concentration in the area which the insulator is located in, ρ represents the air density, V represents the air speed, r represents the vertical distance from the insulator to coastline, RH represents the relative humidity of the air, t1 represents the time corresponding to the current pollution status of the insulator; and t2 represents a moment in future. 15. The device according to claim 10, wherein the pollutant reduction prediction is at least associated with rainfall rate contained in the prediction data. 16. The device according to claim 15, wherein the pollutant reduction prediction is represented by equation ∫t1t2I0L·(1−e−R)·dt, wherein I0 represents the initial pollution status variable of the insulator; L represents an elimination coefficient; R represents the rainfall rate; t1 represents the time corresponding to the current pollution status of the insulator; and t2 represents a moment in future. 17. The device according to claim 11, wherein the predicting module further comprises: a module configured to determine a basic pollution status based on the position of the insulator in the pollution area distribution; and a module configured to determine the current pollution status after updating the basic pollution status. 18. The device according to claim 17, wherein the pollution area distribution is a second pollution area distribution determined based on a first pollution area distribution and the insulator pollution grade calculating model, and the first pollution area distribution is determined based on the historical data. 19. 
The device according to claim 18, wherein the module for determining the second pollution area distribution based on the first pollution area distribution and the insulator pollution grade calculating model comprises: a module for determining the insulator pollution grade of each lattice point based on the first pollution area distribution and the insulator pollution grade calculating model; and a module for determining the second pollution area distribution based on the insulator pollution grade of each lattice point. 20. The device according to claim 11, wherein the prediction data comprises at least one of: weather data, environment data, and geographic data.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A method and a device for predicting insulator pollution grade includes acquiring prediction data affecting the insulator pollution grade; acquiring current pollution status of the insulator; predicting the insulator pollution grade based on the prediction data, the current pollution status and an insulator pollution grade calculating model, wherein the insulator pollution grade calculating model at least comprises an initial pollution status variable of the insulator, and a pollutant accumulation prediction and a pollutant reduction prediction based on the initial pollution status variable, at least one of the accumulation prediction and the reduction prediction being associated with the prediction data, and the initial pollution status variable being associated with the current pollution status.
G06N504
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A method and a device for predicting insulator pollution grade includes acquiring prediction data affecting the insulator pollution grade; acquiring current pollution status of the insulator; predicting the insulator pollution grade based on the prediction data, the current pollution status and an insulator pollution grade calculating model, wherein the insulator pollution grade calculating model at least comprises an initial pollution status variable of the insulator, and a pollutant accumulation prediction and a pollutant reduction prediction based on the initial pollution status variable, at least one of the accumulation prediction and the reduction prediction being associated with the prediction data, and the initial pollution status variable being associated with the current pollution status.
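The claims above express pollutant accumulation and reduction as time integrals over a forecast window. A numerical sketch of evaluating such terms with the trapezoidal rule; note the claim text's exponent grouping ("e−r2+RH") and the "I0L" factor are ambiguous, so the readings e^(−r²+RH) and I0/L used here are assumptions for illustration only:

```python
import numpy as np

def _trapz(y, t):
    """Trapezoidal rule over time grid t (avoids version-specific helpers)."""
    y = np.broadcast_to(y, t.shape)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def accumulation(t, rho, D, V, r, RH):
    # accumulation term: integrand read as rho*D*V*exp(-r**2 + RH);
    # the exponent grouping is an assumption (claim text is garbled)
    return _trapz(rho * D * V * np.exp(-(r ** 2) + RH), t)

def reduction(t, I0, L, R):
    # reduction term: (I0/L)*(1 - exp(-R)); reading "I0L" as I0/L is
    # an assumption, R is the rainfall-rate time series
    return _trapz((I0 / L) * (1.0 - np.exp(-R)), t)
```

With zero rainfall the reduction term vanishes, and with unit density, speed, concentration, and zero distance and humidity the accumulation term reduces to the window length, which makes the two terms easy to sanity-check.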
Machine-learning methods and apparatus are provided to solve blind source separation problems with an unknown number of sources and having a signal propagation model with features such as wave-like propagation, medium-dependent velocity, attenuation, diffusion, and/or advection, between sources and sensors. In exemplary embodiments, multiple trials of non-negative matrix factorization are performed for a fixed number of sources, with selection criteria applied to determine successful trials. A semi-supervised clustering procedure is applied to trial results, and the clustering results are evaluated for robustness using measures for reconstruction quality and cluster separation. The number of sources is determined by comparing these measures for different trial numbers of sources. Source locations and parameters of the signal propagation model can also be determined. Disclosed methods are applicable to a wide range of spatial problems including chemical dispersal, pressure transients, and electromagnetic signals, and also to non-spatial problems such as cancer mutation.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method, comprising: with a computer: performing a plurality of non-negative matrix factorization (NMF) trials on mixed signals generated by one or more unidentified sources and detected by a plurality of sensors, each of the NMF trials producing a predetermined number of candidate sources referred to each of the sensors according to a signal propagation model; identifying clusters of the candidate sources and determining basis sources, by performing clustering on the candidate sources; evaluating results of the clustering by: calculating a first parameter representing reconstruction error of the basis sources, and calculating a second parameter representing separation of the identified clusters; and determining a first number of the unidentified sources for which a combination of the calculated first parameter and the calculated second parameter is optimized. 2. The method of claim 1, wherein the clustering is constrained to retain equal numbers of the candidate sources within each cluster. 3. The method of claim 1, further comprising: from first clusters of candidate sources and corresponding first basis sources obtained from clustering performed on the candidate sources produced by NMF trials that each produce the first number of candidate sources: identifying the unidentified sources from the first basis sources. 4. 
The method of claim 3, further comprising: responsive to identifying the unidentified sources, adjusting one or more conditions of a physical environment in which the sensors are located, wherein the adjusting comprises at least one or more of the following acts, for at least a first one of the identified sources and/or a first one of the sensors: probing local conditions at or near a location of the first identified source, counteracting the first identified source to reduce a signal strength of the first identified source, or making changes in the physical environment that will reduce a contribution from the first identified source to the mixed signal detected by the first sensor, without changing the signal strength of the first identified source. 5. The method of claim 3, wherein the sources are sources of one or more of: atmospheric pollution, water pollution, pressure transients, acoustic signals, seismic disturbances, or electromagnetic interference. 6. The method of claim 1, further comprising determining spatial locations for one or more of the unidentified sources. 7. One or more computer-readable storage media storing computer-readable instructions that, when executed by a computer, cause the computer to perform the method of claim 1. 8. The method of claim 1, wherein the propagation model is a wave-like propagation model. 9. The method of claim 8, wherein each NMF trial further produces a shift matrix comprising coefficients, each of the coefficients representing a signal shift from one of the candidate sources to one of the sensors. 10. The method of claim 9, wherein each shift matrix comprises a shift vector for each produced candidate source, and further comprising: for at least a first determined basis source corresponding to a first identified cluster of first candidate sources, producing a final shift vector dependent on the shift vectors of the first candidate sources. 11. 
The method of claim 8, wherein each of the NMF trials satisfies at least one of the following constraints: (i) a reconstruction quality measure is greater than or equal to a first predetermined threshold; (ii) reconstructed signals at each of the sensors comprise a contribution that is greater than or equal to a second predetermined threshold from each of the candidate sources determined from the NMF trial; or (iii) every coefficient of the produced shift matrix corresponds to a shift that is less than or equal to a third predetermined threshold. 12. The method of claim 1, wherein the propagation model incorporates diffusion and advection. 13. The method of claim 12, wherein at least one NMF trial further produces estimates of one or more transport parameters selected from the group consisting of advection velocity and dispersion coefficient. 14. The method of claim 12, wherein at least one NMF trial performs non-linear least squares minimization of a cost function which incorporates Green's functions for each unidentified source. 15. The method of claim 12, wherein the clustering and evaluating are performed in a plurality of iterations, and each iteration comprises: removing candidate sources based on reconstruction error to leave remaining candidate sources; identifying provisional clusters of the remaining candidate sources; and evaluating the second parameter representing separation of the identified provisional clusters; wherein the identified provisional clusters of a final iteration of the plurality of iterations are the identified clusters. 16. The method of claim 1, wherein the detected mixed signals are related to the unidentified sources by a mixing matrix, the NMF trials are subject to constraints for each sensor on sums of respective mixing matrix coefficients, and the NMF trials use a non-convex optimization procedure. 17. 
A computer-implemented system comprising: one or more computing nodes each comprising one or more processors, memory coupled thereto, and one or more network adapters, the one or more computing nodes being interconnected by one or more network connections and configured to perform a method comprising: performing a plurality of non-negative matrix factorization (NMF) trials on mixed signals generated by one or more unidentified sources and detected by a plurality of sensors, each of the NMF trials producing a predetermined number of candidate sources referred to each of the sensors according to a signal propagation model; identifying clusters of the candidate sources and determining basis sources, by performing clustering on the candidate sources; evaluating results of the clustering by: calculating a first parameter representing reconstruction error of the basis sources, and calculating a second parameter representing separation of the identified clusters; and determining a first number of the unidentified sources for which a combination of the calculated first parameter and the calculated second parameter is optimized. 18. The computer-implemented system of claim 17, wherein the method further comprises receiving data for the mixed signals, the data being produced using the plurality of sensors responsive to detection of the mixed signals. 19. The computer-implemented system of claim 17, wherein the first parameter is a Frobenius norm and the second parameter is a silhouette value. 20. 
A method comprising: with a computer: receiving data for mixed signals generated by one or more unidentified sources and detected by a plurality of sensors, the data being produced using the plurality of sensors responsive to detection of the mixed signals; performing a plurality of non-negative matrix factorization (NMF) trials on the mixed signals, each of the NMF trials producing a predetermined number of candidate sources referred to each of the sensors according to a signal propagation model; identifying clusters of the candidate sources and determining basis sources, by performing clustering on the candidate sources, wherein the clustering is constrained to retain equal numbers of candidate sources within each cluster; evaluating results of the clustering by: calculating a first parameter representing reconstruction error of the basis sources, and calculating a second parameter representing separation of the identified clusters; and determining a first number of the unidentified sources for which a combination of the calculated first parameter and the calculated second parameter is optimized; wherein the NMF trials that each produce the first number of candidate sources are first NMF trials, the clusters identified from first NMF trials are first clusters, and the basis sources determined from the first NMF trials are first basis sources; identifying the unidentified sources as the first basis sources; determining spatial locations and signal amplitudes for the identified sources; and responsive to identifying the unidentified sources, adjusting one or more conditions of a physical environment in which the sensors are located, wherein the adjusting comprises making changes in the physical environment that will reduce the mixed signal(s) detected by at least one of the sensors from a first identified source without changing the signal amplitude of the first identified source.
PENDING
Please predict whether this patent is acceptable. PATENT ABSTRACT: Machine-learning methods and apparatus are provided to solve blind source separation problems with an unknown number of sources and having a signal propagation model with features such as wave-like propagation, medium-dependent velocity, attenuation, diffusion, and/or advection, between sources and sensors. In exemplary embodiments, multiple trials of non-negative matrix factorization are performed for a fixed number of sources, with selection criteria applied to determine successful trials. A semi-supervised clustering procedure is applied to trial results, and the clustering results are evaluated for robustness using measures for reconstruction quality and cluster separation. The number of sources is determined by comparing these measures for different trial numbers of sources. Source locations and parameters of the signal propagation model can also be determined. Disclosed methods are applicable to a wide range of spatial problems including chemical dispersal, pressure transients, and electromagnetic signals, and also to non-spatial problems such as cancer mutation.
G06N99005
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: Machine-learning methods and apparatus are provided to solve blind source separation problems with an unknown number of sources and having a signal propagation model with features such as wave-like propagation, medium-dependent velocity, attenuation, diffusion, and/or advection, between sources and sensors. In exemplary embodiments, multiple trials of non-negative matrix factorization are performed for a fixed number of sources, with selection criteria applied to determine successful trials. A semi-supervised clustering procedure is applied to trial results, and the clustering results are evaluated for robustness using measures for reconstruction quality and cluster separation. The number of sources is determined by comparing these measures for different trial numbers of sources. Source locations and parameters of the signal propagation model can also be determined. Disclosed methods are applicable to a wide range of spatial problems including chemical dispersal, pressure transients, and electromagnetic signals, and also to non-spatial problems such as cancer mutation.
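The pipeline this abstract describes (repeated NMF trials, clustering of the pooled candidate sources, and evaluation via reconstruction error plus a cluster-separation measure such as a silhouette value, per claim 19) can be sketched with scikit-learn on synthetic data. The mixing matrix, trial count, and the final selection rule below are illustrative assumptions, not the patented procedure:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Synthetic mixed signals: 3 hidden non-negative sources observed at 8 sensors.
W_true = rng.random((8, 3))        # assumed mixing matrix (sensor x source)
H_true = rng.random((3, 200))      # non-negative source signals over time
X = W_true @ H_true                # observed mixed data

def evaluate_k(X, k, n_trials=8):
    """Run several NMF trials for candidate source count k, cluster the pooled
    candidate sources, and return (mean reconstruction error, silhouette)."""
    rows, errors = [], []
    for seed in range(n_trials):
        model = NMF(n_components=k, init="random", random_state=seed,
                    max_iter=600)
        model.fit_transform(X)
        H = model.components_
        # Normalize so clustering compares source shapes, not scales.
        H = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)
        rows.append(H)
        errors.append(model.reconstruction_err_)   # Frobenius-norm error
    pooled = np.vstack(rows)                       # n_trials * k candidates
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pooled)
    return float(np.mean(errors)), float(silhouette_score(pooled, labels))

# The claims optimize a combination of the two measures; as a simplification
# this sketch selects the k with the best cluster separation.
results = {k: evaluate_k(X, k) for k in range(2, 6)}
best_k = max(results, key=lambda k: results[k][1])
```

Cluster centroids of the winning `best_k` would then serve as the "basis sources" the claims identify with the unmixed sources.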
Techniques are disclosed to determine an expected or predicted opinion of a target individual. To do so, a deep question answer system may build a corpus which includes a first collection of documents attributable to a first person and a second collection of documents identified from content in the first collection of documents and evaluate the corpus to build a model representing opinions of the first person relative to topics, concepts, or subjects discussed in the first and second collections of documents. The deep question answer system may also receive a request to predict an opinion of the first person regarding a topic and generate a predicted opinion of the first person regarding the topic from the model.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method for predicting an opinion, the method comprising: building a corpus which includes a first collection of documents attributable to a first person and a second collection of documents identified from content in the first collection of documents; evaluating the corpus to build a model representing opinions of the first person relative to topics, concepts, or subjects discussed in the first and second collections of documents; receiving a request to predict an opinion of the first person regarding a topic; and generating a predicted opinion of the first person regarding the topic from the model. 2. The method of claim 1, further comprising: parsing the first collection of documents to identify secondary sources, wherein each document in the second collection of documents is available from one of the secondary sources; and accessing each secondary source to identify documents to add to the second collection of documents. 3. The method of claim 2, wherein evaluating the corpus comprises: identifying topics, concepts, or subjects referenced by documents in the first collection of documents; and determining indications of opinions expressed about the topics, concepts, or subjects referenced by documents in the first collection. 4. The method of claim 3, further comprising: parsing each document accessed from the secondary source to identify topics, concepts, or subjects referenced by the respective documents from the secondary source; determining indications of opinions on the topics, concepts, or subjects referenced by documents in the second collection; and determining a source weight factor characterizing a presumed opinion of each of the secondary sources held by the first person. 5. 
The method of claim 4, wherein determining the source weight factor further comprises correlating the topics, concepts, or subjects identified in one of the secondary sources with the associated topics, concepts, or subjects identified in the first collection of documents. 6. The method of claim 5, wherein generating the predicted opinion of the first person regarding the topic from the model is based on indications of opinions expressed about the topics, concepts, or subjects referenced by documents in the first collection, indications of opinions on one or more of the topics, concepts, or subjects referenced by documents in the second collection, and the source weight factor. 7. The method of claim 1, wherein the first person represents a group of individuals. 8. The method of claim 1, further comprising returning the predicted opinion along with the documents which reference the topic of the question.
ACCEPTED
Please predict whether this patent is acceptable. PATENT ABSTRACT: Techniques are disclosed to determine an expected or predicted opinion of a target individual. To do so, a deep question answer system may build a corpus which includes a first collection of documents attributable to a first person and a second collection of documents identified from content in the first collection of documents and evaluate the corpus to build a model representing opinions of the first person relative to topics, concepts, or subjects discussed in the first and second collections of documents. The deep question answer system may also receive a request to predict an opinion of the first person regarding a topic and generate a predicted opinion of the first person regarding the topic from the model.
G06N502
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: Techniques are disclosed to determine an expected or predicted opinion of a target individual. To do so, a deep question answer system may build a corpus which includes a first collection of documents attributable to a first person and a second collection of documents identified from content in the first collection of documents and evaluate the corpus to build a model representing opinions of the first person relative to topics, concepts, or subjects discussed in the first and second collections of documents. The deep question answer system may also receive a request to predict an opinion of the first person regarding a topic and generate a predicted opinion of the first person regarding the topic from the model.
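The source-weight mechanism of claims 4 through 6 can be sketched with toy numbers: weight each secondary source by how well its opinions agree with the first person's own opinions on shared topics, then blend weighted secondary evidence into the prediction. The score scale, topic names, and agreement rule below are hypothetical stand-ins for the model the claims describe:

```python
# Hypothetical toy data: opinion scores in [-1, 1] per topic, per source.
primary = {"tax_policy": 0.6, "trade": -0.2}          # first collection
secondary = {                                          # second collection
    "news_site_a": {"tax_policy": 0.8, "energy": 0.4},
    "news_site_b": {"tax_policy": -0.5, "energy": -0.6},
}

def source_weight(primary, source_opinions):
    """Correlate a secondary source's opinions with the first person's own
    opinions on shared topics (claims 4-5): closer agreement, higher weight."""
    shared = set(primary) & set(source_opinions)
    if not shared:
        return 0.0
    agreement = [1.0 - abs(primary[t] - source_opinions[t]) / 2.0
                 for t in shared]
    return sum(agreement) / len(agreement)

def predict_opinion(topic):
    """Blend primary evidence with weight-scaled secondary evidence (claim 6)."""
    num = den = 0.0
    if topic in primary:
        num += primary[topic]
        den += 1.0
    for opinions in secondary.values():
        if topic in opinions:
            w = source_weight(primary, opinions)
            num += w * opinions[topic]
            den += w
    return num / den if den else None

predicted = predict_opinion("energy")  # topic absent from the primary collection
```

For "energy" the prediction comes entirely from the secondary sources, with the source that agrees more on "tax_policy" counting for more.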
Systems and methods for probabilistic semantic sensing in a sensory network are disclosed. The system receives raw sensor data from a plurality of sensors and generates semantic data including sensed events. The system correlates the semantic data based on classifiers to generate aggregations of semantic data. Further, the system analyzes the aggregations of semantic data with a probabilistic engine to produce a corresponding plurality of derived events each of which includes a derived probability. The system generates a first derived event, including a first derived probability, that is generated based on a plurality of probabilities that respectively represent a confidence of an associated semantic datum to enable at least one application to perform a service based on the plurality of derived events.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method comprising: receiving raw sensor data from a plurality of sensors; generating semantic data based on the raw sensor data, the semantic data including a plurality of sensed events and further including a first plurality of classifiers including a first classifier including a semantic datum representing an event detected based on the receiving of the raw sensor data, a second classifier including a probability representing a confidence of an associated semantic datum, a third classifier identifying a location of a sensor that is utilized to sense an associated semantic datum, and a fourth classifier identifying a location of an associated semantic datum; correlating the semantic data based on a second plurality of classifiers to generate a plurality of aggregations of semantic data, the second plurality of classifiers being selected from the first plurality of classifiers; analyzing the plurality of aggregations of semantic data with a probabilistic engine to produce a corresponding plurality of derived events, each of which includes a derived probability, the analyzing including generating a first derived event including a first derived probability that is generated based on a plurality of probabilities that respectively represent a confidence of an associated semantic datum; and enabling at least one application to perform a service based on the plurality of derived events. 2. The method of claim 1, wherein the plurality of sensors includes a first sensor located on a first node in a light sensory network that includes a plurality of nodes. 3. The method of claim 1, wherein the raw sensor data includes visual data, audio data, and environmental data and wherein the event represented by the semantic datum includes detecting a person, detecting a vehicle, detecting an object, and detecting an empty parking space. 4. 
The method of claim 1, wherein the third classifier includes a spatial coordinate of a first sensor and wherein the plurality of classifiers includes a fifth classifier including a temporal coordinate that is associated with the third classifier. 5. The method of claim 4, wherein the fourth classifier includes a spatial coordinate of a first event represented by the semantic datum and wherein the plurality of classifiers includes a sixth classifier including a temporal coordinate that describes a time the first sensor was utilized to detect a first event represented by the semantic datum and wherein the first event is an occupancy state of empty for a parking spot. 6. The method of claim 1, wherein the first plurality of classifiers includes a seventh classifier including an application identifier that is utilized to identify the at least one application from a plurality of applications, wherein each application from the plurality of applications is utilized to perform a different service. 7. The method of claim 1, the correlating the semantic data includes at least one of correlating to generate abstract graphs based on classifiers that match and correlating to generate abstract graphs based on classifiers that fuzzy match. 8. The method of claim 1, wherein the correlating the semantic data includes correlating based on mathematical relationships between spatial and temporal coordinates and wherein the spatial and temporal coordinates involve at least one of a sensor itself, a detection of an event, and a combination of both. 9. 
The method of claim 1, wherein the plurality of sensed events include a first sensed event, a second sensed event and a third sensed event, and wherein the first sensed event includes a first classifier that describes an occupancy state of empty for a first parking spot, the second sensed event includes a first classifier that describes an occupancy state of empty for the first parking spot and a third sensed event describes an occupancy state of empty for the first parking spot and wherein the plurality of aggregated semantic data includes a first aggregated semantic data that aggregates the first, second and third sensed events. 10. A system comprising: a plurality of sensing engines, implemented by one or more processors, that are configured to receive raw sensor data from a plurality of sensors, the plurality of sensing engines are further configured to generate semantic data based on the raw sensor data, the semantic data including a plurality of sensed events and further including a first plurality of classifiers including a first classifier including a semantic datum representing an event detected based on the receiving of the raw sensor data, a second classifier including a probability representing a confidence of an associated semantic datum, a third classifier identifying a location of a sensor that is utilized to sense an associated semantic datum, and a fourth classifier identifying a location of an associated semantic datum; and a correlation engine, implemented by one or more processors, that is configured to correlate the semantic data based on a second plurality of classifiers to generate a plurality of aggregations of semantic data, the correlation engine is configured to select the second plurality of classifiers from the first plurality of classifiers; and a probabilistic engine, implemented by one or more processors, that is configured to analyze the plurality of aggregations of semantic data to produce a corresponding plurality of derived
events, each of which includes a derived probability, the plurality of derived events including a first derived event including a first derived probability, the probabilistic engine generating the first derived event based on a plurality of probabilities that respectively represent a confidence of an associated semantic datum, the probabilistic engine is further configured to communicate the plurality of derived events to an interface to enable at least one application to perform a service based on the plurality of derived events. 11. The system of claim 10, wherein the plurality of sensors includes a first sensor located on a first node in a light sensory network that includes a plurality of nodes. 12. The system of claim 10, wherein the raw sensor data includes visual data, audio data, and environmental data and wherein the event represented by the semantic datum includes a detection of a person, a detection of a vehicle, a detection of an object, and a detection of an empty parking space. 13. The system of claim 10, wherein the third classifier includes a spatial coordinate of a first sensor and wherein the plurality of classifiers includes a fifth classifier including a temporal coordinate that is associated with the third classifier. 14. The system of claim 13, wherein the fourth classifier includes a spatial coordinate of a first event represented by the semantic datum and wherein the plurality of classifiers includes a sixth classifier including a temporal coordinate that describes a time the first sensor was utilized to detect a first event represented by the semantic datum and wherein the first event is an occupancy state of empty for a parking spot. 15. The system of claim 10, wherein the probabilistic engine is configured to analyze, based on user inputs, wherein the user inputs include a desired accuracy and a user preference. 16. 
The system of claim 10, wherein the probabilistic engine is configured to analyze based on a weight that is assigned to a second classifier and wherein the probabilistic engine alters the weight over time. 17. The system of claim 10, wherein the plurality of derived events includes a first derived event and wherein the probabilistic engine analyzes based on a first threshold and wherein the first threshold defines a minimum level of raw sensor data that is utilized to produce the first derived event. 18. The system of claim 17, wherein the probabilistic engine alters the first threshold over time. 19. The system of claim 10, wherein the at least one application includes a parking location application, a surveillance application, a traffic application, a retail customer application, a business intelligence application, an asset monitoring application, an environmental application, and an earthquake sensing application. 20. A machine-readable medium having no transitory signals storing a set of instructions that, when executed by a processor, causes a machine to perform operations comprising: receiving raw sensor data from a plurality of sensors; generating semantic data based on the raw sensor data, the semantic data including a plurality of sensed events and further including a first plurality of classifiers including a first classifier including a semantic datum representing an event detected based on the receiving of the raw sensor data, a second classifier including a probability representing a confidence of an associated semantic datum, a third classifier identifying a location of a sensor that is utilized to sense an associated semantic datum, and a fourth classifier identifying a location of an associated semantic datum; correlating the semantic data based on a second plurality of classifiers to generate a plurality of aggregations of semantic data, the second plurality of classifiers being selected from the first plurality of classifiers; analyzing the plurality 
of aggregations of semantic data with a probabilistic engine to produce a corresponding plurality of derived events, each of which includes a derived probability, the analyzing including generating a first derived event including a first derived probability that is generated based on a plurality of probabilities that respectively represent a confidence of an associated semantic datum; and enabling at least one application to perform a service based on the plurality of derived events.
PENDING
Please predict whether this patent is acceptable. PATENT ABSTRACT: Systems and methods for probabilistic semantic sensing in a sensory network are disclosed. The system receives raw sensor data from a plurality of sensors and generates semantic data including sensed events. The system correlates the semantic data based on classifiers to generate aggregations of semantic data. Further, the system analyzes the aggregations of semantic data with a probabilistic engine to produce a corresponding plurality of derived events each of which includes a derived probability. The system generates a first derived event, including a first derived probability, that is generated based on a plurality of probabilities that respectively represent a confidence of an associated semantic datum to enable at least one application to perform a service based on the plurality of derived events.
G06N7005
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: Systems and methods for probabilistic semantic sensing in a sensory network are disclosed. The system receives raw sensor data from a plurality of sensors and generates semantic data including sensed events. The system correlates the semantic data based on classifiers to generate aggregations of semantic data. Further, the system analyzes the aggregations of semantic data with a probabilistic engine to produce a corresponding plurality of derived events each of which includes a derived probability. The system generates a first derived event, including a first derived probability, that is generated based on a plurality of probabilities that respectively represent a confidence of an associated semantic datum to enable at least one application to perform a service based on the plurality of derived events.
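One way to realize the "derived probability generated based on a plurality of probabilities" limitation is a noisy-OR combination over an aggregation of sensed events whose semantic-datum and event-location classifiers match. The event tuples, the choice of matching classifiers, and the noisy-OR rule below are illustrative assumptions, not the claimed engine:

```python
from collections import defaultdict

# Hypothetical sensed events:
# (semantic datum, confidence, sensor location, event location)
sensed_events = [
    ("empty_parking_spot", 0.7, "pole_1", "spot_42"),
    ("empty_parking_spot", 0.6, "pole_2", "spot_42"),
    ("empty_parking_spot", 0.8, "pole_3", "spot_42"),
    ("vehicle_detected",   0.9, "pole_1", "spot_43"),
]

# Correlate: aggregate events whose datum and event-location classifiers match.
aggregations = defaultdict(list)
for datum, prob, _sensor_loc, event_loc in sensed_events:
    aggregations[(datum, event_loc)].append(prob)

def noisy_or(probs):
    """Probability that at least one independent detection is correct --
    one plausible way to derive a single probability from many confidences."""
    miss = 1.0
    for p in probs:
        miss *= 1.0 - p
    return 1.0 - miss

# Analyze: each aggregation yields a derived event with a derived probability.
derived_events = {key: noisy_or(ps) for key, ps in aggregations.items()}
```

Three independent 0.6-0.8 detections of the same empty spot thus yield a derived probability well above any single sensor's confidence, which is what lets a downstream parking application act on the derived event rather than raw readings.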
System and method for training inductive logic programming enhanced deep belief network models for discrete optimization are disclosed. The system initializes (i) a dataset comprising values and (ii) a pre-defined threshold, and partitions the values into a first set and a second set based on the pre-defined threshold. Using an Inductive Logic Programming (ILP) engine and domain knowledge associated with the dataset, a machine learning model is constructed on the first set and the second set to obtain Boolean features, and using the Boolean features that are appended to the dataset, a deep belief network (DBN) model is trained to identify an optimal set of values between the first set and the second set. Using the trained DBN model, the optimal set of values is sampled to generate samples. The pre-defined threshold is adjusted based on the generated samples, and the steps are repeated to obtain optimal samples.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A processor implemented method comprising: (a) initializing (i) a dataset comprising a plurality of values and (ii) a pre-defined threshold; (b) partitioning the plurality of values into a first set of values and a second set of values based on the pre-defined threshold; (c) constructing, using an Inductive Logic Programming (ILP) and a domain knowledge associated with the dataset, a machine learning model on each of the first set of values and the second set of values to obtain one or more Boolean features; (d) training, using the one or more Boolean features that are being appended to the dataset, a deep belief network (DBN) model to identify an optimal set of values between the first set of values and the second set of values; and (e) sampling, using the trained DBN model, the optimal set of values to generate one or more samples. 2. The processor implemented method of claim 1, further comprising adjusting, using the one or more generated samples, value of the pre-defined threshold and repeating steps (b) till (e) until an optimal sample is generated. 3. The processor implemented method of claim 1, wherein the step of partitioning the plurality of values into a first set of values and a second set of values based on the pre-defined threshold comprises performing a comparison of each value from the plurality of values with the pre-defined threshold. 4. The processor implemented method of claim 1, wherein the first set of values are values lesser than or equal to the pre-defined threshold. 5. The processor implemented method of claim 1, wherein the second set of values are values greater than the pre-defined threshold. 6. 
A system comprising: a memory storing instructions; one or more communication interfaces; and one or more hardware processors communicatively coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to (a) initialize (i) a dataset comprising a plurality of values and (ii) a pre-defined threshold, (b) partition the plurality of values into a first set of values and a second set of values based on the pre-defined threshold, (c) construct, using an Inductive Logic Programming (ILP) and a domain knowledge associated with the dataset, a machine learning model on each of the first set of values and the second set of values to obtain one or more Boolean features, (d) train, using the one or more Boolean features that are being appended to the dataset, a deep belief network (DBN) model to identify an optimal set of values between the first set of values and the second set of values, and (e) sample, using the trained DBN model, the optimal set of values to generate one or more samples. 7. The system of claim 6, wherein the one or more hardware processors are further configured to adjust, using the one or more generated samples, value of the pre-defined threshold and repeat steps (b) till (e) until an optimal sample is generated. 8. The system of claim 6, wherein the plurality of values are partitioned into the first set of values and the second set of values by performing a comparison of each value from the plurality of values with the pre-defined threshold. 9. The system of claim 6, wherein the first set of values are values lesser than or equal to the pre-defined threshold. 10. The system of claim 6, wherein the second set of values are values greater than the pre-defined threshold. 11. 
One or more non-transitory machine readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors causes (a) initializing (i) a dataset comprising a plurality of values and (ii) a pre-defined threshold; (b) partitioning the plurality of values into a first set of values and a second set of values based on the pre-defined threshold; (c) constructing, using an Inductive Logic Programming (ILP) and a domain knowledge associated with the dataset, a machine learning model on each of the first set of values and the second set of values to obtain one or more Boolean features; (d) training, using the one or more Boolean features that are being appended to the dataset, a deep belief network (DBN) model to identify an optimal set of values between the first set of values and the second set of values; and (e) sampling, using the trained DBN model, the optimal set of values to generate one or more samples. 12. The one or more non-transitory machine readable information storage mediums of claim 11, wherein the instructions further cause adjusting, using the one or more generated samples, value of the pre-defined threshold and repeating steps (b) till (e) until an optimal sample is generated. 13. The one or more non-transitory machine readable information storage mediums of claim 11, wherein the step of partitioning the plurality of values into a first set of values and a second set of values based on the pre-defined threshold comprises performing a comparison of each value from the plurality of values with the pre-defined threshold. 14. The one or more non-transitory machine readable information storage mediums of claim 11, wherein the first set of values are values lesser than or equal to the pre-defined threshold. 15. The one or more non-transitory machine readable information storage mediums of claim 11, wherein the second set of values are values greater than the pre-defined threshold.
PENDING
Please predict whether this patent is acceptable. PATENT ABSTRACT: System and method for training inductive logic programming enhanced deep belief network models for discrete optimization are disclosed. The system initializes (i) a dataset comprising values and (ii) a pre-defined threshold, and partitions the values into a first set and a second set based on the pre-defined threshold. Using an Inductive Logic Programming (ILP) engine and domain knowledge associated with the dataset, a machine learning model is constructed on the first set and the second set to obtain Boolean features, and using the Boolean features that are appended to the dataset, a deep belief network (DBN) model is trained to identify an optimal set of values between the first set and the second set. Using the trained DBN model, the optimal set of values is sampled to generate samples. The pre-defined threshold is adjusted based on the generated samples, and the steps are repeated to obtain optimal samples.
G06N5025
Please help me predict the CPC LABEL for this patent. PATENT ABSTRACT: System and method for training inductive logic programming enhanced deep belief network models for discrete optimization are disclosed. The system initializes (i) a dataset comprising values and (ii) a pre-defined threshold, and partitions the values into a first set and a second set based on the pre-defined threshold. Using an Inductive Logic Programming (ILP) engine and domain knowledge associated with the dataset, a machine learning model is constructed on the first set and the second set to obtain Boolean features, and using the Boolean features that are appended to the dataset, a deep belief network (DBN) model is trained to identify an optimal set of values between the first set and the second set. Using the trained DBN model, the optimal set of values is sampled to generate samples. The pre-defined threshold is adjusted based on the generated samples, and the steps are repeated to obtain optimal samples.
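The claimed loop (partition by a threshold, learn a model of the better partition, sample from it, adjust the threshold, repeat) is structurally similar to cross-entropy-style optimization. The sketch below keeps that loop but substitutes a simple Gaussian fit for the ILP-derived Boolean features and the DBN sampler; the toy objective, sample sizes, and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # Toy minimization target; stands in for the domain's cost function.
    return (x - 2.0) ** 2

# (a) initialize dataset and pre-defined threshold (median score)
values = rng.uniform(-5.0, 5.0, size=200)
threshold = np.quantile(objective(values), 0.5)

for _ in range(5):
    scores = objective(values)
    # (b) partition: the "good" set has score <= threshold, the rest above it
    good = values[scores <= threshold]
    # (c)-(e) stand-in for ILP feature construction plus DBN training/sampling:
    # fit a Gaussian to the good set and sample new candidate values from it.
    mu, sigma = good.mean(), good.std() + 1e-3
    values = rng.normal(mu, sigma, size=200)
    # adjust the threshold from the newly generated samples, then repeat
    threshold = np.quantile(objective(values), 0.5)

best = float(values[np.argmin(objective(values))])
```

Each pass concentrates the sampler on the low-cost region, so `best` converges toward the optimum at 2.0 as the threshold tightens.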
Discriminative pretraining technique embodiments are presented that pretrain the hidden layers of a Deep Neural Network (DNN). In general, a one-hidden-layer neural network is trained first using labels discriminatively with error back-propagation (BP). Then, after discarding an output layer in the previous one-hidden-layer neural network, another randomly initialized hidden layer is added on top of the previously trained hidden layer along with a new output layer that represents the targets for classification or recognition. The resulting multiple-hidden-layer DNN is then discriminatively trained using the same strategy, and so on until the desired number of hidden layers is reached. This produces a pretrained DNN. The discriminative pretraining technique embodiments have the advantage of bringing the DNN layer weights close to a good local optimum, while still leaving them in a range with a high gradient so that they can be fine-tuned effectively.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer-implemented process for pretraining deep neural network (DNN), comprising: using a computer to perform the following process actions: (a) training a single hidden layer neural network (NN) comprising an input layer into which training data is input, a multi-neuron output layer from which an output is generated, and a first fully-formed, multi-neuron hidden layer which is interconnected with the input and output layers with randomly initialized weights, wherein said training comprises, accessing a set of training data entries, each data entry of which has a corresponding label assigned thereto, inputting each data entry of said set one by one into the input layer until all the data entries have been input at least once to produce an initial NN, such that after the inputting of each data entry, said weights associated with the first hidden layer are set via an error back propagation (BP) procedure so that the output generated from the multi-neuron output layer matches the label associated with the training data entry; (b) discarding the current multi-neuron output layer and adding a new fully-formed, multi-neuron hidden layer which is interconnected with the last previously trained hidden layer and a new multi-neuron output layer with randomly initialized weights to produce a new multiple hidden layer deep neural network; (c) inputting each data entry of said set one by one into the input layer until all the data entries have been input at least once to produce a revised multiple hidden layer deep neural network, such that after the inputting of each data entry, said weights associated with the new hidden layer and each previously trained hidden layer are set via the error BP procedure to produce an output from the new multi-neuron output layer that matches the label associated with the training data entry; (d) repeating actions (b) and (c) until a prescribed number of hidden 
layers have been added; and (e) designating the last produced revised multiple layer DNN to be said pretrained DNN. 2. The process of claim 1, wherein each output layer employed uses a softmax function to match its output to the label associated with a currently entered training data entry. 3. The process of claim 1, wherein the process action of accessing a set of training data entries, each data entry of which has a corresponding label assigned thereto, comprises accessing a set of speech frames each of which corresponds to a senone label. 4. The process of claim 1, wherein the process action of inputting each data entry of said set one by one into the input layer until all the data entries have been input at least once to produce an initial deep neural network, comprises inputting each data entry of the set just once. 5. The process of claim 1, wherein the process action of inputting each data entry of said set one by one into the input layer until all the data entries have been input at least once to produce a revised multiple hidden layer deep neural network, comprises inputting each data entry of the set just once. 6. The process of claim 1, wherein the error BP procedure used to set the weights associated with the first hidden layer employs a prescribed learning rate that ranges between 0.01 and 0.20. 7. The process of claim 1, wherein the error BP procedure used to set the weights associated with each new hidden layer and each previously trained hidden layer employs a prescribed learning rate that ranges between 0.01 and 0.20. 8. 
A computer storage device having computer-executable instructions stored thereon for training a deep neural network (DNN), said computer-executable instructions comprising: (a) training a single hidden layer neural network (NN) comprising an input layer into which training data is input, a multi-neuron output layer from which an output is generated, and a first fully-formed, multi-neuron hidden layer which is interconnected with the input and output layers with randomly initialized weights, wherein said training comprises, accessing a set of training data entries, each data entry of which has a corresponding label assigned thereto, inputting each data entry of said set one by one into the input layer until all the data entries have been input once to produce an initial NN, such that after the inputting of each data entry, said weights associated with the first hidden layer are set via an error backpropagation procedure to produce an output from the multi-neuron output layer that matches the label associated with the training data entry; (b) discarding the current multi-neuron output layer and adding a new fully-formed, multi-neuron hidden layer which is interconnected with the last previously trained hidden layer and a new multi-neuron output layer with randomly initialized weights to produce a new multiple hidden layer deep neural network; (c) training the last produced new multiple hidden layer deep neural network, wherein said training comprises, inputting each data entry of said set one by one into the input layer until all the data entries have been input once to produce a revised multiple hidden layer deep neural network, such that after the inputting of each data entry, said weights associated with the new hidden layer and each previously trained hidden layer are set via the error backpropagation procedure which employs said prescribed learning rate so that the output generated from the multi-neuron output layer matches the label associated with the training 
data entry; (d) repeating instructions (b) and (c) until a prescribed number of hidden layers have been added; and (e) designating the last produced revised multiple layer DNN to be a pretrained DNN. 9. The computer storage device of claim 8, wherein the instruction for training the single hidden layer NN comprises each output layer employing a softmax function to match its output to the label associated with a currently entered training data entry. 10. The computer storage device of claim 8, wherein the instruction for training the last produced new multiple hidden layer deep neural network comprises each output layer employing a softmax function to match its output to the label associated with a currently entered training data entry. 11. The computer storage device of claim 8, wherein the instruction for accessing a set of training data entries, each data entry of which has a corresponding label assigned thereto, comprises accessing a set of speech frames each of which corresponds to a senone label. 12. The computer storage device of claim 8, further comprising an instruction for iteratively training the pretrained DNN a prescribed number of times to produce said trained DNN, wherein each training iteration comprises inputting each data entry of a set of training data entries one by one into the input layer until all the data entries have been input once to produce a new fine-tuned version of the pretrained DNN, such that after the inputting of each data entry, said weights associated with the hidden layers are set via the error backpropagation procedure to produce an output from the output layer that matches the label associated with the training data entry. 13. The computer storage device of claim 12, wherein the instruction for iteratively training the pretrained DNN a prescribed number of times to produce said trained DNN, comprises training the pretrained DNN four times to produce said trained DNN. 14. 
A system for pretraining a deep neural network (DNN), comprising: one or more computing devices, said computing devices being in communication with each other whenever there is a plurality of computing devices, and a computer program having a plurality of sub-programs executable by the one or more computing devices, the one or more computing devices being directed by the sub-programs of the computer program to, (a) train a single hidden layer neural network (NN) comprising an input layer into which training data is input, a multi-neuron output layer from which an output is generated, and a first fully-formed, multi-neuron hidden layer which is interconnected with the input and output layers with randomly initialized weights, wherein said training comprises, accessing a set of training data entries, each data entry of which has a corresponding label assigned thereto, and inputting each data entry of said set one by one into the input layer until all the data entries have been input at least once to produce an initial NN, such that after the inputting of each data entry, said weights associated with the first hidden layer are set via an error back propagation (BP) procedure so that the output generated from the multi-neuron output layer matches the label associated with the training data entry, (b) discard the current multi-neuron output layer and add a new fully-formed, multi-neuron hidden layer, which is interconnected with the last previously trained hidden layer and a new multi-neuron output layer with randomly initialized weights to produce a new multiple hidden layer deep neural network, (c) input each data entry of said set one by one into the input layer until all the data entries have been input at least once to produce a revised multiple hidden layer deep neural network, such that after the inputting of each data entry, said weights associated with the new hidden layer and each previously trained hidden layer are set via the error BP procedure to produce an 
output from the new multi-neuron output layer that matches the label associated with the training data entry, repeat (b) and (c) until a prescribed number of hidden layers have been added, and designate the last produced revised multiple layer DNN to be said pretrained DNN. 15. The system of claim 14, wherein each output layer employed uses a softmax function to match its output to the label associated with a currently entered training data entry. 16. The system of claim 14, wherein the sub-program for accessing a set of training data entries, each data entry of which has a corresponding label assigned thereto, comprises accessing a set of speech frames each of which corresponds to a senone label. 17. The system of claim 14, wherein the sub-program for inputting each data entry of said set one by one into the input layer until all the data entries have been input at least once to produce an initial deep neural network, comprises inputting each data entry of the set just once. 18. The system of claim 14, wherein the sub-program for inputting each data entry of said set one by one into the input layer until all the data entries have been input at least once to produce a revised multiple hidden layer deep neural network, comprises inputting each data entry of the set just once. 19. The system of claim 14, wherein the error BP procedure used to set the weights associated with the first hidden layer employs a prescribed learning rate that ranges between 0.01 and 0.20. 20. The system of claim 14, wherein the error BP procedure used to set the weights associated with each new hidden layer and each previously trained hidden layer employs a prescribed learning rate that ranges between 0.01 and 0.20.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: Discriminative pretraining technique embodiments are presented that pretrain the hidden layers of a Deep Neural Network (DNN). In general, a one-hidden-layer neural network is trained first using labels discriminatively with error back-propagation (BP). Then, after discarding an output layer in the previous one-hidden-layer neural network, another randomly initialized hidden layer is added on top of the previously trained hidden layer along with a new output layer that represents the targets for classification or recognition. The resulting multiple-hidden-layer DNN is then discriminatively trained using the same strategy, and so on until the desired number of hidden layers is reached. This produces a pretrained DNN. The discriminative pretraining technique embodiments have the advantage of bringing the DNN layer weights close to a good local optimum, while still leaving them in a range with a high gradient so that they can be fine-tuned effectively.
G06N308
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Discriminative pretraining technique embodiments are presented that pretrain the hidden layers of a Deep Neural Network (DNN). In general, a one-hidden-layer neural network is trained first using labels discriminatively with error back-propagation (BP). Then, after discarding an output layer in the previous one-hidden-layer neural network, another randomly initialized hidden layer is added on top of the previously trained hidden layer along with a new output layer that represents the targets for classification or recognition. The resulting multiple-hidden-layer DNN is then discriminatively trained using the same strategy, and so on until the desired number of hidden layers is reached. This produces a pretrained DNN. The discriminative pretraining technique embodiments have the advantage of bringing the DNN layer weights close to a good local optimum, while still leaving them in a range with a high gradient so that they can be fine-tuned effectively.
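The greedy layer-wise procedure described in the claims and abstract above (train a one-hidden-layer network with backpropagation, discard the output layer, stack a new randomly initialized hidden layer plus a fresh output layer, retrain, repeat) can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation: function names, the sigmoid hidden activation, and hyperparameters such as the 0.1 learning rate are assumptions chosen to stay within the claims' stated 0.01–0.20 range.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # softmax output layer, as in dependent claim 2
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_epoch(Ws, X, Y, lr=0.1):
    """One pass over the labeled data, updating all weights by error BP.
    Ws: list of weight matrices; the last one feeds the softmax output layer."""
    for x, y in zip(X, Y):
        # forward pass, caching each hidden activation
        acts = [x[None, :]]
        for W in Ws[:-1]:
            acts.append(sigmoid(acts[-1] @ W))
        out = softmax(acts[-1] @ Ws[-1])
        # backward pass (cross-entropy + softmax gives a simple output delta)
        delta = out - y[None, :]
        for i in range(len(Ws) - 1, -1, -1):
            grad = acts[i].T @ delta
            if i > 0:
                delta = (delta @ Ws[i].T) * acts[i] * (1 - acts[i])
            Ws[i] -= lr * grad
    return Ws

def discriminative_pretrain(X, Y, hidden=16, n_hidden_layers=3, lr=0.1):
    d_in, d_out = X.shape[1], Y.shape[1]
    # (a) single-hidden-layer net with randomly initialized weights
    Ws = [rng.normal(0, 0.1, (d_in, hidden)),
          rng.normal(0, 0.1, (hidden, d_out))]
    Ws = train_epoch(Ws, X, Y, lr)
    # (b)-(d): discard the output layer, stack a new hidden layer plus a
    # new randomly initialized output layer, and retrain the whole stack
    for _ in range(n_hidden_layers - 1):
        Ws = Ws[:-1] + [rng.normal(0, 0.1, (hidden, hidden)),
                        rng.normal(0, 0.1, (hidden, d_out))]
        Ws = train_epoch(Ws, X, Y, lr)
    return Ws  # (e) the last revised multi-layer network is the pretrained DNN
```

Each `train_epoch` call corresponds to inputting every labeled entry once, matching claims 4 and 5's single-pass reading of "at least once".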
Methods, systems and computer program products are disclosed for detecting patterns in a data stream that match multi-pattern rules. One embodiment of the invention provides a method of recognizing a specified group of patterns in a data stream. The method comprises identifying a rule for said specified group of patterns in the data stream, and using a first array of finite state machines to scan the data stream for at least some of the patterns in the specified group. For patterns in the specified group that are found in the data stream by the first array of finite state machines, pattern identifiers are sent to a second array of finite state machines. The second array of finite state machines determines if the specified group of patterns is in the data stream in accordance with the identified rule by, at least in part, using said pattern identifiers.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of recognizing specified groups of patterns in a data stream, the method comprising: identifying a multi-pattern rule for each group of said specified groups of patterns; using a first array of finite state machines, in a first, pattern scanner stage, to scan the data stream for at least some of the patterns in the specified groups, including splitting at least a first pattern of the specified groups of patterns into a plurality of subpatterns, using the first array of finite state machines to scan the data stream for the subpatterns of said first pattern, and using a second array of finite state machines to check if the subpatterns are found in the data stream in a specified order; for each of the patterns in the specified groups of patterns that are found in the data stream by the first array of finite state machines, sending a pattern identifier identifying said each pattern to the second array of finite state machines, in a second, rule processor stage; and using the second array of finite state machines for determining if any of the specified groups of patterns is in the data stream in accordance with the identified multi-pattern rule. 2. The method according to claim 1, wherein the using a second array of finite state machines to check if the subpatterns are found in the data stream in a specified order includes using the second array of finite state machines to check if the subpatterns are found in the data stream in an order corresponding to a given original pattern. 3. The method according to claim 1, wherein the using a second array of finite state machines to check if the subpatterns are found in the data stream in a specified order includes using the second array of finite state machines to check if the subpatterns are found at adjacent locations in the data stream. 4. 
The method according to claim 1, wherein the using the second array of finite state machines for determining if any of the specified groups of patterns is in the data stream in accordance with the identified multi-pattern rule includes: running a multitude of threads on the second array of finite state machines, each of the threads having a thread ID identifying said each thread, and using said multitude of threads to determine whether any of the specified groups of patterns is in the data stream. 5. The method according to claim 4, wherein: the sending a pattern identifier identifying said each pattern includes including in said pattern identifier one of the thread IDs identifying one of said threads; and the using said multitude of threads includes using said identified one of the threads to determine if said each pattern is part of one of the specified groups of patterns. 6. The method according to claim 5, wherein the using said multitude of threads includes using each of the threads to determine if a respective one of the specified groups of patterns is in the data stream. 7. The method according to claim 6, wherein the including in said pattern identifier one of the thread IDs includes including in said pattern identifier a plurality of the thread IDs. 8. The method according to claim 7, wherein the sending the pattern identifier identifying said each pattern to a second array of finite state machines includes sending said each pattern to all the threads identified by the thread identifiers included in said pattern identifier. 9. The method according to claim 8, wherein the sending the pattern identifier identifying said each pattern to a second array of finite state machines further includes sending said each pattern to only the threads identified by the thread identifiers included in said pattern identifier. 10. 
The method according to claim 9, wherein the sending a pattern identifier identifying said each pattern to a second array of finite state machines includes sending said each pattern to the thread identified by the thread identifier included in said pattern identifier. 11. A finite state machine engine for recognizing specified groups of patterns in a data stream, wherein at least a first pattern of the specified groups of patterns is comprised of a plurality of subpatterns, the finite state machine engine comprising: a rule memory holding a multitude of multi-pattern rules for identifying each group of said specified groups of patterns; a first array of finite state machines to scan the data stream, in a first, pattern scanner stage, for at least some of the patterns in the specified groups, including scanning the data stream for the subpatterns of said first pattern; and a second array of finite state machines for checking if the subpatterns of the first pattern are found in the data stream in a specified order, and for determining, in a second, rule processor stage, if any of the specified groups of patterns is in the data stream in accordance with the identified multi-pattern rule; and wherein: for each of the patterns in the specified groups of patterns that are found in the data stream, the first array of finite state machines sends a pattern identifier identifying said each pattern to the second array of finite state machines. 12. The finite state machine engine according to claim 11, wherein the second array of finite state machines checks if the subpatterns are found in the data stream in an order corresponding to a given original pattern. 13. The finite state machine engine according to claim 11, wherein the second array of finite state machines checks if the subpatterns are found at adjacent locations in the data stream. 14. 
The finite state machine engine according to claim 11, wherein the determining if any of the specified groups of patterns is in the data stream in accordance with the identified multi-pattern rule includes running a multitude of threads on the second array of finite state machines, each of the threads having a thread ID identifying said each thread, and using said multitude of threads to determine whether any of the specified groups of patterns is in the data stream. 15. The finite state machine engine according to claim 11, wherein: the pattern identifier identifying said each pattern includes one of the thread IDs identifying one of said threads; and the second array of finite state machines uses said identified one of the threads to determine if said each pattern is part of one of the specified groups of patterns. 16. An article of manufacture comprising: at least one computer usable hardware medium having computer readable program code logic to execute a machine instruction in a processing unit for using a computer for recognizing specified groups of patterns in a data stream in accordance with defined multi-pattern rules, said computer readable program code logic, when executing, performing the following: using a first array of finite state machines, in a first, pattern scanner stage, to scan the data stream for at least some of the patterns in the specified groups, including splitting at least a first pattern of the specified groups of patterns into a plurality of subpatterns, using the first array of finite state machines to scan the data stream for the subpatterns of said first pattern, and using a second array of finite state machines to check if the subpatterns are found in the data stream in a specified order; for each of the patterns in the specified groups of patterns that are found in the data stream by the first array of finite state machines, sending a pattern identifier identifying said each pattern to the second array of finite state machines, in 
a second, rule processor stage; and using the second array of finite state machines for determining if any of the specified groups of patterns is in the data stream in accordance with the identified multi-pattern rule. 17. The article of manufacture according to claim 16, wherein the using a second array of finite state machines to check if the subpatterns are found in the data stream in a specified order includes using the second array of finite state machines to check if the subpatterns are found in the data stream in an order corresponding to a given original pattern. 18. The article of manufacture according to claim 16, wherein the using a second array of finite state machines to check if the subpatterns are found in the data stream in a specified order includes using the second array of finite state machines to check if the subpatterns are found at adjacent locations in the data stream. 19. The article of manufacture according to claim 16, wherein the using the second array of finite state machines for determining if any of the specified groups of patterns is in the data stream in accordance with the identified multi-pattern rule includes: running a multitude of threads on the second array of finite state machines, each of the threads having a thread ID identifying said each thread, and using said multitude of threads to determine whether any of the specified groups of patterns is in the data stream. 20. The article of manufacture according to claim 19, wherein: the sending a pattern identifier identifying said each pattern includes including in said pattern identifier one of the thread IDs identifying one of said threads; and the using said multitude of threads includes using said identified one of the threads to determine if said each pattern is part of one of the specified groups of patterns.
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: Methods, systems and computer program products are disclosed for detecting patterns in a data stream that match multi-pattern rules. One embodiment of the invention provides a method of recognizing a specified group of patterns in a data stream. The method comprises identifying a rule for said specified group of patterns in the data stream, and using a first array of finite state machines to scan the data stream for at least some of the patterns in the specified group. For patterns in the specified group that are found in the data stream by the first array of finite state machines, pattern identifiers are sent to a second array of finite state machines. The second array of finite state machines determines if the specified group of patterns is in the data stream in accordance with the identified rule by, at least in part, using said pattern identifiers.
G06N5025
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: Methods, systems and computer program products are disclosed for detecting patterns in a data stream that match multi-pattern rules. One embodiment of the invention provides a method of recognizing a specified group of patterns in a data stream. The method comprises identifying a rule for said specified group of patterns in the data stream, and using a first array of finite state machines to scan the data stream for at least some of the patterns in the specified group. For patterns in the specified group that are found in the data stream by the first array of finite state machines, pattern identifiers are sent to a second array of finite state machines. The second array of finite state machines determines if the specified group of patterns is in the data stream in accordance with the identified rule by, at least in part, using said pattern identifiers.
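The two-stage design described above (a first array of finite state machines that scans the stream for patterns and emits pattern identifiers, and a second array that consumes those identifiers to decide whether a rule's whole group of patterns has appeared) can be sketched as follows. This is a minimal illustration, not the patented engine: the class names are hypothetical, each first-stage FSM is a KMP-style matcher for one pattern, and each rule is simplified to "all member patterns seen" rather than the full multi-pattern rule language.

```python
# Stage 1: one small FSM per pattern (KMP-style matcher over the stream).
class PatternFSM:
    def __init__(self, pid, pattern):
        self.pid, self.pattern, self.state = pid, pattern, 0
        # KMP failure function so the FSM never rescans the stream
        fail = [0] * len(pattern)
        k = 0
        for i in range(1, len(pattern)):
            while k and pattern[i] != pattern[k]:
                k = fail[k - 1]
            if pattern[i] == pattern[k]:
                k += 1
            fail[i] = k
        self.fail = fail

    def step(self, ch):
        """Advance on one character; return True when the pattern completes."""
        while self.state and ch != self.pattern[self.state]:
            self.state = self.fail[self.state - 1]
        if ch == self.pattern[self.state]:
            self.state += 1
        if self.state == len(self.pattern):
            self.state = self.fail[self.state - 1]  # allow overlapping matches
            return True
        return False

# Stage 2: one rule FSM per group; fires once every member pattern was seen.
class RuleFSM:
    def __init__(self, required_pids):
        self.required, self.seen = set(required_pids), set()

    def on_pattern(self, pid):
        if pid in self.required:
            self.seen.add(pid)
        return self.seen == self.required

def scan(stream, patterns, rules):
    """patterns: {pid: string}; rules: {rule_name: [pids]}."""
    scanners = [PatternFSM(pid, p) for pid, p in patterns.items()]
    rule_fsms = {name: RuleFSM(pids) for name, pids in rules.items()}
    matched = set()
    for ch in stream:
        for s in scanners:
            if s.step(ch):                      # stage 1: pattern found
                for name, r in rule_fsms.items():
                    if r.on_pattern(s.pid):     # stage 2: rule satisfied
                        matched.add(name)
    return matched
```

Sending only a pattern identifier (not the matched text) between the stages mirrors the claims' interface between the pattern scanner stage and the rule processor stage.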
A machine learning based procurement system comprises a machine learning classifier to classify bids. The procurement system determines a price risk score and a supplier risk score for each of the bids based on the classifications, and determines if any of the bids are associated with a high-risk procurement based on comparing the price risk score and the supplier risk score to the respective threshold. The procurement system includes a graphical user interface that can display bid evaluation links, which are accessible to provide information explaining high-risk procurements.
Please help me write a proper abstract based on the patent claims. CLAIM: 1-20. (canceled) 21. A machine learning based procurement system comprising: at least one machine learning classifier; at least one memory to store machine readable instructions; and at least one processor, connected to the memory, to execute the machine readable instructions to: generate a procurement request, the procurement request to solicit bids for providing at least one item; determine an item risk score for the procurement request based on a classification performed by the at least one machine learning classifier; display the item risk score in a graphical user interface; determine whether the at least one item is a high-risk item based on the item risk score; in response to determining the at least one item is high risk, generate a link in the graphical user interface, the link to provide access to at least one metric explaining the high-risk of the at least one item; generate a solicitation from the procurement request; receive bids to provide the at least one item in response to the solicitation; evaluate the bids based on classifications performed by the at least one machine learning classifier, where to evaluate the bids: the at least one machine learning classifier is to classify the bids as being associated with at least one of a high-risk supplier and a high-risk price; and the at least one processor is to: determine a price risk score and a supplier risk score for each of the bids based on the classifications; compare, for each bid, the price risk score and the supplier risk score to a respective threshold; determine if any of the received bids are associated with a high-risk procurement based on the comparing of the price risk score and the supplier risk score to the respective threshold; and in response to determining a bid is associated with a high-risk procurement, generate a bid evaluation link in the graphical user interface, the bid evaluation link providing access to 
information explaining the high-risk procurement. 22. The machine learning based procurement system of claim 21, wherein prior to generating the procurement request, the at least one processor is to: determine the item risk score and a pre-procurement request, supplier risk score; and display the item risk score and the pre-procurement request, supplier risk score in the graphical user interface. 23. The machine learning based procurement system of claim 21, wherein the at least one processor is to: select one of the bids as a winning bid based on the classifications of the bids determined by the at least one machine learning classifier. 24. The machine learning based procurement system of claim 23, wherein to select one of the bids as the winning bid, the at least one processor is to: determine from the evaluation of the one of the bids, whether the one of the bids is associated with a high-risk procurement; if the one of the bids is determined to be associated with the high-risk procurement, invoke a secondary system to perform an audit of the one of the bids. 25. The machine learning based procurement system of claim 24, wherein the secondary system is to: perform an audit process of the one of the bids; and determine from the audit process whether the procurement system is to accept or reject the one of the bids, the determining generating an identifier to accept the one of the bids or an identifier to reject the one of the bids; and the at least one processor is to: receive a message, from the secondary system, the message including the identifier to accept the one of the bids or the identifier to reject the one of the bids; and accept the one of the bids as the winning bid if the identifier to accept the one of the bids is received in the message. 26. 
The machine learning based procurement system of claim 21, wherein the at least one machine learning classifier comprises: an ensemble classifier comprising a combination, the combination comprising: a machine learning logistic regression function, and at least one of: a decision tree function, a multicollinearity function, and a predictive strength analysis function, where: at least one of the decision tree function, the multicollinearity function, and the predictive strength analysis function are used to determine predictive variables, and the predictive variables are used in a training set and a validation set to generate the ensemble classifier according to the machine learning logistic regression function. 27. The machine learning based procurement system of claim 26, comprising: a data set processing subsystem to generate the training set and the validation set from historic procurement data and the predictive variables, wherein the historic procurement data is comprised of historic bids to supply goods or services and associated procurement data received from a plurality of data sources. 28. The machine learning based procurement system of claim 27, wherein to generate the training set and the validation set, the data set processing subsystem is to: store the historic bids received from a first data source; receive the associated procurement data from at least one other data source; store the associated procurement data with the historic bids; and partition the historic bids and the associated procurement data into first data for the training set and second data for the validation set, where the training set comprises a supervised training set of data objects and labels indicating whether each data object belongs to a particular category. 29. 
The machine learning based procurement system of claim 28, wherein to receive the associated procurement data from at least one other data source, the data set processing subsystem is to: generate a query based on data in the historic bids received from the first data source; and execute the query on the at least one other data source to retrieve the associated procurement data from the at least one other data source. 30. The machine learning based procurement system of claim 28, wherein the data set processing subsystem is to filter the stored historic bids and associated procurement data according to data scarcity and variation prior to partitioning the stored historic bids and the associated procurement data into the training set and the validation set. 31. The machine learning based procurement system of claim 28, wherein the data set processing subsystem is to execute transformation operations on fields in the stored historic bids and associated procurement data prior to partitioning the historic bids and the associated procurement data into the training set and the validation set. 32. 
A machine learning based procurement system comprising: at least one machine learning classifier; a contract writing system; at least one memory to store machine readable instructions; and at least one processor, connected to the memory, to execute the machine readable instructions to: generate a solicitation comprised of a procurement request for an item; receive bids to provide the item; evaluate the bids based on classifications performed by the at least one machine learning classifier, where to evaluate the bids: the at least one machine learning classifier classifies the bids as being associated with: a high-risk supplier or not, a high-risk price or not, or a high-risk item or service or not; and determine a price risk score, a supplier risk score, and an item risk score for each of the bids based on the classifications; identify a winning bid from the evaluation of the bids; determine whether the winning bid is for a high-risk procurement based on the price risk score, the supplier risk score, and the item risk score for the winning bid; in response to determining the winning bid is for a high-risk procurement, generate a contract with clauses associated with the high-risk procurement, where the generating of the contract with the clauses associated with the high-risk procurement is performed by the contract writing system. 33. The machine learning based procurement system of claim 32, where the contract writing system is to: receive user input for the procurement request; and generate the procurement request based on the user input. 34. 
The machine learning based procurement system of claim 32, wherein prior to generating the solicitation, the at least one processor is to: determine the item risk score based on a classification performed by the at least one machine learning classifier; determine whether the item is a high-risk item based on the item risk score; and in response to determining the item is high risk, the contract writing system is to generate a link in a graphical user interface, the link to provide access to at least one metric explaining the high-risk of the item. 35. The machine learning based procurement system of claim 32, wherein the contract writing system comprises a graphical user interface, and in response to the at least one machine learning classifier classifying a bid as being associated with the high-risk supplier, the high-risk price or the high-risk item, the contract writing system is to provide a notification of the high-risk supplier, the high-risk price or the high-risk item in the graphical user interface. 36. The machine learning based procurement system of claim 32, wherein the contract writing system generates the procurement request for the solicitation, and the contract writing system is to automatically include a risk mitigation clause in the procurement request in response to the at least one processor determining the procurement request includes a high-risk item. 37. The machine learning based procurement system of claim 32, wherein the contract writing system generates the procurement request for the solicitation, and the contract writing system is to automatically include a clause in the procurement request to dissuade a fraudulent supplier from bidding in response to the at least one processor determining the procurement request includes an item potentially associated with fraudulent suppliers. 38. 
The machine learning based procurement system of claim 32, wherein the at least one machine learning classifier comprises: an ensemble classifier comprising a combination, the combination comprising: a machine learning logistic regression function, and at least one of: a decision tree function, a multicollinearity function, and a predictive strength analysis function, where: at least one of the decision tree function, the multicollinearity function, and the predictive strength analysis function are used to determine predictive variables, and the predictive variables are used in a training set and a validation set to generate the ensemble classifier according to the machine learning logistic regression function. 39. A computer-implemented method executable by at least one processor executing machine readable instructions stored on a non-transitory computer readable medium, the method comprising: generating a procurement request, the procurement request to solicit bids for providing at least one item; determining an item risk score for the procurement request based on a classification performed by at least one machine learning classifier; displaying the item risk score in a graphical user interface; determining whether the at least one item is a high-risk item based on the item risk score; in response to determining the at least one item is high risk, generating a link in the graphical user interface, the link to provide access to at least one metric explaining the high-risk of the at least one item; generating a solicitation from the procurement request; receiving bids to provide the at least one item in response to the solicitation; evaluating the bids based on classifications performed by the at least one machine learning classifier, where to evaluate the bids: classifying, by the at least one machine learning classifier, the bids as being associated with at least one of a high-risk supplier and a high-risk price; and determining a price risk score and a supplier 
risk score for each of the bids based on the classifications; comparing, for each bid, the price risk score and the supplier risk score to a respective threshold; determining if any of the received bids are associated with a high-risk procurement based on the comparing of the price risk score and the supplier risk score to the respective threshold; in response to determining a bid is associated with a high-risk procurement, generating a bid evaluation link in the graphical user interface, the bid evaluation link providing access to information explaining the high-risk procurement; and selecting one of the bids as a winning bid based on the classifications of the bids determined by the at least one machine learning classifier. 40. The method of claim 39, comprising: prior to generating the procurement request, determining the item risk score and a pre-procurement request, supplier risk score; and displaying the item risk score and the pre-procurement request, supplier risk score in the graphical user interface.
ACCEPTED
Please predict whether this patent is acceptable.PATENT ABSTRACT: A machine learning based procurement system comprises a machine learning classifier to classify bids. The procurement system determines a price risk score and a supplier risk score for each of the bids based on the classifications, and determines if any of the bids are associated with a high-risk procurement based on comparing the price risk score and the supplier risk score to the respective threshold. The procurement system includes a graphical user interface that can display bid evaluation links, which are accessible to provide information explaining high-risk procurements.
G06N99005
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A machine learning based procurement system comprises a machine learning classifier to classify bids. The procurement system determines a price risk score and a supplier risk score for each of the bids based on the classifications, and determines if any of the bids are associated with a high-risk procurement based on comparing the price risk score and the supplier risk score to the respective threshold. The procurement system includes a graphical user interface that can display bid evaluation links, which are accessible to provide information explaining high-risk procurements.
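The threshold-based risk check described in the claims above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `Bid` structure, field names, and threshold values are assumptions, and the risk scores here stand in for outputs of the machine learning classifiers.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    supplier: str
    price_risk_score: float     # assumed to come from an ML classifier
    supplier_risk_score: float  # assumed to come from an ML classifier

def is_high_risk(bid: Bid, price_threshold: float = 0.7,
                 supplier_threshold: float = 0.7) -> bool:
    """Flag a bid as a high-risk procurement when either score
    exceeds its respective threshold, as in claim 39."""
    return (bid.price_risk_score > price_threshold
            or bid.supplier_risk_score > supplier_threshold)

bids = [Bid("A", 0.2, 0.1), Bid("B", 0.9, 0.3)]
high_risk = [b.supplier for b in bids if is_high_risk(b)]
print(high_risk)  # ['B']
```

In the claimed system, a bid flagged this way would trigger generation of a bid evaluation link in the graphical user interface and, for a winning bid, contract clauses associated with the high-risk procurement.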
According to one embodiment, a method for generating a plurality of candidate visualizations is provided. The method may include receiving a scenario description. The method may also include collecting a plurality of expert data using a training system based on the received scenario description. The method may further include generating at least one predictive model based on the collected plurality of expert data in order to execute the at least one generated predictive model during an application of a plurality of genetic algorithms.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A processor-implemented method for generating a plurality of candidate visualizations, the method comprising: receiving, by a processor, a scenario description; collecting a plurality of expert data using a training system based on the received scenario description; and generating at least one predictive model based on the collected plurality of expert data in order to execute the at least one generated predictive model during an application of a plurality of genetic algorithms. 2. The method of claim 1, wherein the plurality of expert data includes at least one of a plurality of action data, a plurality of metric data, and a plurality of opinion data. 3. The method of claim 1, wherein the training system collects the plurality of expert data through at least one of an expert selecting a preferred visualization within a plurality of candidate visualizations and an expert submitting an opinion related to the preferred visualization. 4. The method of claim 1, wherein the training system includes at least one of a central data storage facility, a plurality of web-based deployment capabilities, and scalability to a corresponding number of experts using the training system. 5. The method of claim 1, further comprising: updating the training system based on the at least one generated predictive model. 6. The method of claim 1, wherein generating the at least one predictive model may include using at least one of a plurality of boosted decision trees, text categorization, and neural networking. 7. The method of claim 1, wherein each at least one generated predictive model may be used to derive an overall fitness evaluation score to rate a candidate visualization when executing the plurality of genetic algorithms. 8. 
A computer system for generating a plurality of candidate visualizations, the computer system comprising: one or more processors, one or more computer-readable memories, one or more computer-readable tangible storage medium, and program instructions stored on at least one of the one or more tangible storage medium for execution by at least one of the one or more processors via at least one of the one or more memories, wherein the computer system is capable of performing a method comprising: receiving a scenario description; collecting a plurality of expert data using a training system based on the received scenario description; and generating at least one predictive model based on the collected plurality of expert data in order to execute the at least one generated predictive model during an application of a plurality of genetic algorithms. 9. The computer system of claim 8, wherein the plurality of expert data includes at least one of a plurality of action data, a plurality of metric data, and a plurality of opinion data. 10. The computer system of claim 8, wherein the training system collects the plurality of expert data through at least one of an expert selecting a preferred visualization within a plurality of candidate visualizations and an expert submitting an opinion related to the preferred visualization. 11. The computer system of claim 8, wherein the training system includes at least one of a central data storage facility, a plurality of web-based deployment capabilities, and scalability to a corresponding number of experts using the training system. 12. The computer system of claim 8, further comprising: updating the training system based on the at least one generated predictive model. 13. The computer system of claim 8, wherein generating the at least one predictive model may include using at least one of a plurality of boosted decision trees, text categorization, and neural networking. 14. 
The computer system of claim 8, wherein each at least one generated predictive model may be used to derive an overall fitness evaluation score to rate a candidate visualization when executing the plurality of genetic algorithms. 15. A computer program product for generating a plurality of candidate visualizations, the computer program product comprising: one or more computer-readable tangible storage medium and program instructions stored on at least one of the one or more tangible storage medium, the program instructions executable by a processor, the program instructions comprising: program instructions to receive a scenario description; program instructions to collect a plurality of expert data using a training system based on the received scenario description; and program instructions to generate at least one predictive model based on the collected plurality of expert data in order to execute the at least one generated predictive model during an application of a plurality of genetic algorithms. 16. The computer program product of claim 15, wherein the plurality of expert data includes at least one of a plurality of action data, a plurality of metric data, and a plurality of opinion data. 17. The computer program product of claim 15, wherein the training system collects the plurality of expert data through at least one of an expert selecting a preferred visualization within a plurality of candidate visualizations and an expert submitting an opinion related to the preferred visualization. 18. The computer program product of claim 15, wherein the training system includes at least one of a central data storage facility, a plurality of web-based deployment capabilities, and scalability to a corresponding number of experts using the training system. 19. The computer program product of claim 15, further comprising: program instructions to update the training system based on the at least one generated predictive model. 20. 
The computer program product of claim 15, wherein generating the at least one predictive model may include using at least one of a plurality of boosted decision trees, text categorization, and neural networking.
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: According to one embodiment, a method for generating a plurality of candidate visualizations is provided. The method may include receiving a scenario description. The method may also include collecting a plurality of expert data using a training system based on the received scenario description. The method may further include generating at least one predictive model based on the collected plurality of expert data in order to execute the at least one generated predictive model during an application of a plurality of genetic algorithms.
G06N3126
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: According to one embodiment, a method for generating a plurality of candidate visualizations is provided. The method may include receiving a scenario description. The method may also include collecting a plurality of expert data using a training system based on the received scenario description. The method may further include generating at least one predictive model based on the collected plurality of expert data in order to execute the at least one generated predictive model during an application of a plurality of genetic algorithms.
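The claims above describe a predictive model, trained on expert data, whose score rates candidate visualizations as a fitness function inside a genetic algorithm. A hedged sketch of that interaction is below; the toy "model" (distance to an assumed expert-preferred target), the candidate encoding, and all hyperparameters are illustrative assumptions.

```python
import random

random.seed(0)

def predictive_model_fitness(candidate):
    # Stand-in for a model trained on expert data: prefer candidates
    # whose genes are close to an (assumed) expert-preferred target.
    target = [0.5, 0.5, 0.5]
    return -sum((g - t) ** 2 for g, t in zip(candidate, target))

def evolve(pop_size=20, genes=3, generations=30):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the predictive model supplies the fitness evaluation score.
        pop.sort(key=predictive_model_fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)
            child = a[:cut] + b[cut:]           # crossover
            i = random.randrange(genes)
            child[i] += random.gauss(0, 0.05)   # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=predictive_model_fitness)

best = evolve()
```

The key point matching claim 7 / claim 14 is only the coupling: each generated predictive model is queried to derive an overall fitness score for each candidate during the genetic-algorithm run.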
A system, method and computer program product for automatic document classification, including an extraction module configured to extract structural, syntactical and/or semantic information from a document and normalize the extracted information; a machine learning module configured to generate a model representation for automatic document classification based on feature vectors built from the normalized and extracted semantic information for supervised and/or unsupervised clustering or machine learning; and a classification module configured to select a non-classified document from a document collection, and via the extraction module extract normalized structural, syntactical and/or semantic information from the selected document, and generate via the machine learning module a model representation of the selected document based on feature vectors, and match the model representation of the selected document against the machine learning model representation to generate a document category, and/or classification for display to a user.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A computer implemented system for automatic document classification, the system comprising: an extraction module configured to extract structural, syntactical and/or semantic information from a document and normalize the extracted information; a machine learning module configured to generate a model representation for automatic document classification based on feature vectors built from the normalized and extracted semantic information for supervised and/or unsupervised clustering or machine learning; and a classification module configured to select a non-classified document from a document collection, and via the extraction module extract normalized structural, syntactical and/or semantic information from the selected document, and generate via the machine learning module a model representation of the selected document based on feature vectors, and match the model representation of the selected document against the machine learning model representation to generate a document category, and/or classification for display to a user. 2. The system of claim 1, wherein the extracted information includes named entities, properties of entities, noun-phrases, facts, events, and/or concepts. 3. The system of claim 1, wherein the extraction module employs text-mining, language identification, gazetteers, regular expressions, noun-phrase identification with part-of-speech taggers, and/or statistical models and rules, and is configured to identify patterns, and the patterns include libraries, and/or algorithms shared among cases, and which can be tuned for a specific case, to generate case-specific semantic information. 4. The system of claim 1, wherein the extracted information is normalized by using normalization rules, groupers, thesauri, taxonomies, and/or string-matching algorithms. 5. 
The system of claim 1, wherein the model representation of the document is a TF-IDF document representation of the extracted information, and the clustering or machine learning includes a classifier based on decision trees, support vector machines (SVM), naïve-bayes classifiers, k-nearest neighbors, rules-based classification, Linear discriminant analysis (LDA), Maximum Entropy Markov Model (MEMM), scatter-gather clustering, and/or hierarchical agglomerate clustering (HAC). 6. A computer implemented method for automatic document classification, the method comprising: extracting with an extraction module structural, syntactical and/or semantic information from a document and normalizing with the extraction module the extracted information; generating with a machine learning module a model representation for automatic document classification based on feature vectors built from the normalized and extracted semantic information for supervised and/or unsupervised clustering or machine learning; and selecting with a classification module a non-classified document from a document collection, and extracting via the extraction module normalized structural, syntactical and/or semantic information from the selected document, and generating via the machine learning module a model representation of the selected document based on feature vectors, and matching with the classification module the model representation of the selected document against the machine learning model representation and generating with the classification module a document category, and/or classification for display to a user. 7. The method of claim 6, wherein the extracted information includes named entities, properties of entities, noun-phrases, facts, events, and/or concepts. 8. 
The method of claim 6, wherein the extraction module employs text-mining, language identification, gazetteers, regular expressions, noun-phrase identification with part-of-speech taggers, and/or statistical models and rules, and is configured to identify patterns, and the patterns include libraries, and/or algorithms shared among cases, and which can be tuned for a specific case, to generate case-specific semantic information. 9. The method of claim 6, wherein the extracted information is normalized by using normalization rules, groupers, thesauri, taxonomies, and/or string-matching algorithms. 10. The method of claim 6, wherein the model representation of the document is a TF-IDF document representation of the extracted information, and the clustering or machine learning includes a classifier based on decision trees, support vector machines (SVM), naïve-bayes classifiers, k-nearest neighbors, rules-based classification, Linear discriminant analysis (LDA), Maximum Entropy Markov Model (MEMM), scatter-gather clustering, and/or hierarchical agglomerate clustering (HAC). 11. 
A computer program product for automatic document classification and including one or more computer readable instructions embedded on a tangible, non-transitory computer readable medium and configured to cause one or more computer processors to perform the steps of: extracting with an extraction module structural, syntactical and/or semantic information from a document and normalizing with the extraction module the extracted information; generating with a machine learning module a model representation for automatic document classification based on feature vectors built from the normalized and extracted semantic information for supervised and/or unsupervised clustering or machine learning; and selecting with a classification module a non-classified document from a document collection, and extracting via the extraction module normalized structural, syntactical and/or semantic information from the selected document, and generating via the machine learning module a model representation of the selected document based on feature vectors, and matching with the classification module the model representation of the selected document against the machine learning model representation and generating with the classification module a document category, and/or classification for display to a user. 12. The computer program product of claim 11, wherein the extracted information includes named entities, properties of entities, noun-phrases, facts, events, and/or concepts. 13. The computer program product of claim 11, wherein the extraction module employs text-mining, language identification, gazetteers, regular expressions, noun-phrase identification with part-of-speech taggers, and/or statistical models and rules, and is configured to identify patterns, and the patterns include libraries, and/or algorithms shared among cases, and which can be tuned for a specific case, to generate case-specific semantic information. 14. 
The computer program product of claim 11, wherein the extracted information is normalized by using normalization rules, groupers, thesauri, taxonomies, and/or string-matching algorithms. 15. The computer program product of claim 11, wherein the model representation of the document is a TF-IDF document representation of the extracted information, and the clustering or machine learning includes a classifier based on decision trees, support vector machines (SVM), naïve-bayes classifiers, k-nearest neighbors, rules-based classification, Linear discriminant analysis (LDA), Maximum Entropy Markov Model (MEMM), scatter-gather clustering, and/or hierarchical agglomerate clustering (HAC).
PENDING
Please predict whether this patent is acceptable.PATENT ABSTRACT: A system, method and computer program product for automatic document classification, including an extraction module configured to extract structural, syntactical and/or semantic information from a document and normalize the extracted information; a machine learning module configured to generate a model representation for automatic document classification based on feature vectors built from the normalized and extracted semantic information for supervised and/or unsupervised clustering or machine learning; and a classification module configured to select a non-classified document from a document collection, and via the extraction module extract normalized structural, syntactical and/or semantic information from the selected document, and generate via the machine learning module a model representation of the selected document based on feature vectors, and match the model representation of the selected document against the machine learning model representation to generate a document category, and/or classification for display to a user.
G06N502
Please help me predict the CPC LABEL for this patentPATENT ABSTRACT: A system, method and computer program product for automatic document classification, including an extraction module configured to extract structural, syntactical and/or semantic information from a document and normalize the extracted information; a machine learning module configured to generate a model representation for automatic document classification based on feature vectors built from the normalized and extracted semantic information for supervised and/or unsupervised clustering or machine learning; and a classification module configured to select a non-classified document from a document collection, and via the extraction module extract normalized structural, syntactical and/or semantic information from the selected document, and generate via the machine learning module a model representation of the selected document based on feature vectors, and match the model representation of the selected document against the machine learning model representation to generate a document category, and/or classification for display to a user.
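The extract-normalize, build-TF-IDF-vectors, then match-against-model-representations pipeline in the claims above can be sketched in a few lines. The tiny corpus, the token normalization, and the cosine nearest-neighbour matcher are illustrative assumptions; the patent lists many alternative classifiers (SVM, naïve Bayes, decision trees, etc.).

```python
import math
from collections import Counter

def normalize(text):
    # Minimal stand-in for the extraction module's normalization step.
    return [t.lower().strip(".,") for t in text.split()]

def tfidf_vectors(docs):
    tokenized = [normalize(d) for d in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    n = len(docs)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({t: (c / len(toks)) * math.log((1 + n) / (1 + df[t]))
                     for t, c in tf.items()})
    return vecs

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Assumed labeled collection; the query is the "non-classified document".
labeled = {"invoice for payment due": "finance",
           "court filing and legal motion": "legal"}
docs = list(labeled)
vecs = tfidf_vectors(docs + ["payment invoice overdue"])
query = vecs[-1]
best = max(range(len(docs)), key=lambda i: cosine(query, vecs[i]))
print(labeled[docs[best]])  # finance
```

Matching the query's model representation against the labeled representations yields the document category for display to a user, which is the classification module's role in claim 1.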
We describe a method of reinforcement learning for a subject system having multiple states and actions to move from one state to the next. Training data is generated by operating on the system with a succession of actions and used to train a second neural network. Target values for training the second neural network are derived from a first neural network which is generated by copying weights of the second neural network at intervals.
Please help me write a proper abstract based on the patent claims. CLAIM: 1. A method of reinforcement learning, the method comprising: inputting training data relating to a subject system, the subject system having a plurality of states and, for each state, a set of actions to move from one of said states to a next said state; wherein said training data is generated by operating on said system with a succession of said actions and comprises starting state data, action data and next state data defining, respectively for a plurality of said actions, a starting state, an action, and a next said state resulting from the action; and training a second neural network using said training data and target values for said second neural network derived from a first neural network; the method further comprising: generating or updating said first neural network from said second neural network. 2. A method as claimed in claim 1 further comprising selecting said actions using learnt action-value parameters from said second neural network, wherein said actions are selected responsive to an action-value parameter determined for each action of a set of actions available at a state of said system. 3. A method as claimed in claim 2 wherein said training data comprises experience data derived from said selected actions, the method further comprising generating said experience data by storing data defining said actions selected by said second neural network in association with data defining respective said starting states and next states for the actions. 4. A method as claimed in claim 3 further comprising generating said target values by providing said data defining said actions and said next states to said first neural network, and training said second neural network using said target values and said data defining said starting states. 5. 
A method as claimed in claim 2 further comprising: inputting state data defining a state of said system; providing said second neural network with a representation of said state of said system; retrieving from said second neural network a learnt said action-value parameter for each action of said set of actions available at said state; and selecting an action to perform having a maximum or minimum said learnt action-value parameter from said second neural network. 6. A method as claimed in claim 5 further comprising storing experience data from said system, wherein said experience data is generated by operating on said system with said actions selected using said second neural network, and wherein said training data comprises said stored experience data. 7. A method as claimed in claim 6 further comprising: selecting, from said experience data, starting state data, action data and next state data for one of said plurality of actions; providing said first neural network with a representation of said next state from said next state data; determining, from said first neural network, a maximum or minimum learnt action-value parameter for said next state; determining a target value for training said second neural network from said maximum or minimum learnt action-value parameter for said next state. 8. A method as claimed in claim 7 wherein said training of said second neural network comprises providing said second neural network with a representation of said starting state from said starting state data and adjusting weights of said neural network to bring a learnt action-value parameter for an action defined by said action data closer to said target value. 9. 
A method as claimed in claim 7 wherein said experience data further comprises reward data defining a reward value or cost value of said system resulting from said action taken, and wherein said determining of said target value comprises adjusting said maximum or minimum learnt action-value parameter for said next state by said reward value or said cost value respectively. 10. A method as claimed in claim 1 wherein a state of said system comprises a sequence of observations of said system over time representing a history of said system. 11. A method as claimed in claim 2 wherein said training of said second neural network alternates with said selecting of said actions and comprises incrementally updating a set of weights of said second neural network used for selecting said actions. 12. A method as claimed in claim 1 wherein said generating or updating of said first neural network from said second neural network is performed at intervals after repeated said selecting of said actions using said second neural network and said training of said second neural network. 13. A method as claimed in claim 12 wherein said generating or updating of said first neural network from said second neural network comprises copying a set of weights of said second neural network to said first neural network. 14. A method as claimed in claim 1 wherein a said state is defined by image data. 15. A method as claimed in claim 1 wherein said first and second neural networks comprise deep neural networks with a convolutional neural network input stage. 16. A non-transitory data carrier carrying processor control code to implement the method of claim 1. 17. A method of Q-learning wherein Q values are determined by a neural network and used to select actions to be performed on a system to move the system between states, wherein a first neural network is used to generate a Q-value for a target for training a second neural network used to select said actions. 18. 
A method as claimed in claim 17 wherein at intervals said first neural network is refreshed from said second neural network. 19. A method as claimed in claim 18 wherein weights of said first neural network are quasi-stationary, remaining substantially unchanged during intervals between said refreshing. 20. A method as claimed in claim 19 further comprising storing a record of said selected actions and states, and using said record to generate said Q-value for said target.
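The two-network scheme of claims 17-19 can be sketched with tabular Q-values standing in for the neural networks: a "second" table selects actions and is trained, while a quasi-stationary "first" (target) table supplies the bootstrap value and is refreshed by copying at intervals. The toy chain environment and all hyperparameters are illustrative assumptions.

```python
import random

random.seed(1)
N_STATES, ACTIONS = 5, (0, 1)   # actions: move left / right; reward at right end

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + (1 if a else -1)))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}   # "second network"
target_q = dict(q)                                            # "first network"
alpha, gamma, eps, copy_interval = 0.5, 0.9, 0.1, 50

for t in range(2000):
    s = random.randrange(N_STATES)
    # Epsilon-greedy action selection using the second (trained) table.
    a = random.choice(ACTIONS) if random.random() < eps else \
        max(ACTIONS, key=lambda x: q[(s, x)])
    s2, r = step(s, a)
    # Target value is derived from the quasi-stationary first table.
    target = r + gamma * max(target_q[(s2, x)] for x in ACTIONS)
    q[(s, a)] += alpha * (target - q[(s, a)])
    if (t + 1) % copy_interval == 0:
        target_q = dict(q)    # refresh the first network by copying weights

policy = [max(ACTIONS, key=lambda x: q[(s, x)]) for s in range(N_STATES)]
print(policy)  # greedy policy should move right, toward the reward
```

In the claimed method the tables are deep neural networks (with a convolutional input stage for image-defined states) and the transitions are replayed from stored experience data, but the target-from-a-periodically-copied-network structure is the same.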