Part 1
On July 17, 2024, the U.S. Patent and Trademark Office (the Office) released new guidance on subject matter eligibility, entitled “The 2024 Patent Subject Matter Eligibility Guidance Update Including on Artificial Intelligence” (2024 AI SME Update). The 2024 AI SME Update is a lifeline for practitioners prosecuting AI patent applications because it provides three sets of claim examples that illustrate the nuances between subject matter eligible claims and ineligible ones for patent applications directed to artificial intelligence (AI) inventions. This three-part blog series looks at the examples provided with the 2024 AI SME Update in depth and provides key takeaways for each of the provided examples.
Background of the Office Action Rejection
The subject matter eligibility question arises when the Office issues a Section 101 rejection (35 U.S.C. § 101) of claims recited in a pending utility patent application. To satisfy a prima facie case, the rejection must assert that the claims are deficient under Section 101, which involves a two-step analysis. Step 1 of the Office’s subject matter eligibility analysis addresses whether the claimed invention falls into at least one of the four categories set forth in Section 101. Step 2 applies the Supreme Court’s two-part framework (Alice/Mayo) to identify claims that are directed to a judicial exception (Step 2A Prong One) and to evaluate whether additional elements of the claim integrate the exception into a practical application (Step 2A Prong Two). Suppose the claim is found to be directed to a judicial exception in Step 2A. In that case, the analysis continues to Step 2B to evaluate whether the claimed additional elements amount to “significantly more” than the recited judicial exception itself.
In practice, the rejection normally hinges on the two prongs of Step 2A. For example, under Step 2A Prong One, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim. If the claim does not recite a judicial exception, it is considered eligible, and the eligibility analysis ends. MPEP § 2106.04, subsection II.A.1. If the claim does recite a judicial exception, the eligibility analysis continues to the second prong of Step 2A. This prong (Step 2A, Prong Two) is used to determine whether the claim integrates the recited judicial exception into a “practical application” of the exception. The rejection will typically analyze the claim in view of the considerations identified in MPEP § 2106.04(d), subsection I; 2106.04(d)(1); 2106.04(d)(2); and 2106.05(a)-(c) and (e)-(h), such as whether the additional element(s) is(are) insignificant extra-solution activity; whether the additional element(s) is(are) mere instructions to apply an exception; or whether the claim reflects an improvement in the functioning of a computer or an improvement to another technology or technical field. If the additional element(s) in the claim integrates the judicial exception into a practical application of the exception, the claim is not “directed to” the judicial exception, and the claim is eligible.
2024 AI SME Update: Examples 47-49
With the 2024 AI SME Update, the Office drafts hypothetical and illustrative examples showing the claim analysis performed under MPEP § 2106. The 2024 AI SME Update also provides a helpful Issue Spotting Chart for quick reference.
The Office notes that Examples 47-49 should be interpreted based on the fact patterns that are included with each example and that different fact patterns may have different eligibility outcomes. However, the Office asserts that it is “not necessary” for eligible claims to mirror a claim provided with the examples.
Example 47. Anomaly Detection
Key Takeaway for Claim 1: Example 47, Claim 1 is an effective example for Applicants to cite when arguing patent eligibility of AI inventions whose claims recite specific hardware components, because such components do not amount to the recitation of any abstract ideas, ending the analysis at Step 2A, Prong One.
Claim 1. An application specific integrated circuit (ASIC) for an artificial neural network (ANN), the ASIC comprising:
a plurality of neurons organized in an array, wherein each neuron comprises a register, a microprocessor, and at least one input; and
a plurality of synaptic circuits, each synaptic circuit including a memory for storing a synaptic weight, wherein each neuron is connected to at least one other neuron via one of the plurality of synaptic circuits.
Background for Claim 1:
The application describes structural features of an artificial neural network (ANN). For example, the structure of an ANN has a series of layers, each comprising neurons arranged in neuron arrays. The neurons in this example comprise a register, a microprocessor, and at least one input. Each neuron produces an output, or activation, based on an activation function that uses the outputs of the previous layer and a set of weights as inputs. Each neuron in a neuron array may be connected to another neuron via a synaptic circuit. A synaptic circuit may include a memory for storing a synaptic weight. In some embodiments, an ANN may be implemented by an application-specific integrated circuit (ASIC). ASICs may be specially customized for a specific artificial intelligence application, provide superior computing capabilities, and reduce electricity consumption compared to traditional CPUs.
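The claimed hardware arrangement described above (neurons with registers connected through weight-storing synaptic circuits) can be sketched in software terms for readers less familiar with ANN structure. This is a hypothetical illustration only; the claim covers a physical ASIC, and all names below are invented for this sketch rather than taken from the Office’s example:

```python
from dataclasses import dataclass, field

@dataclass
class SynapticCircuit:
    """Connects two neurons; its memory stores a synaptic weight."""
    weight: float

@dataclass
class Neuron:
    """Each claimed neuron has a register, a microprocessor, and at least one input."""
    register: float = 0.0                       # stands in for the hardware register
    inputs: list = field(default_factory=list)  # (source Neuron, SynapticCircuit) pairs

    def activate(self) -> float:
        # Weighted sum of upstream outputs passed through a simple
        # activation function (a ReLU here), mirroring the description above.
        total = sum(src.register * syn.weight for src, syn in self.inputs)
        self.register = max(0.0, total)
        return self.register

# Two-neuron example: one upstream neuron feeding one downstream neuron.
upstream = Neuron(register=2.0)
downstream = Neuron(inputs=[(upstream, SynapticCircuit(weight=0.5))])
print(downstream.activate())  # 2.0 * 0.5 = 1.0
```

In the claimed ASIC these would be physical circuits, which is precisely why Claim 1 avoids reciting an abstract idea.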
The use of specially trained ANNs to detect anomalies realizes a number of improvements over traditional methods of detecting anomalies, including more accurate detection of anomalies. The application further provides methods for training an ANN that lead to faster training times and a more accurate model for detecting anomalies.
SME Holding for Claim 1:
Claim 1 is eligible. The claim recites an ASIC used for an ANN. While the background explains that “[a]n ANN can be realized through software, hardware, or a combination of software and hardware,” the broadest reasonable interpretation of the claimed ANN requires hardware because the claimed ASIC is a physical circuit.
Applicant’s Sample Response to a Section 101 Rejection of Claim 1:
Step 1: Under MPEP § 2106.03, the analysis determines whether the claim falls within any statutory category, including processes, machines, manufactures, and compositions of matter. Here, the claim is directed to a physical circuit, which is a machine and/or manufacture, and falls within one of the statutory categories of invention. (Step 1: YES).
Step 2A, Prong One: Under MPEP § 2106.04(II), the analysis determines whether the claim recites a judicial exception. The claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim. Here, no judicial exception is recited in the claim. The claim recites a plurality of neurons, which are hardware components comprising a register and a microprocessor, and a plurality of synaptic circuits, which together form an ANN. The claim does not recite any abstract ideas under MPEP § 2106.04(a)(2), such as a mathematical concept, mental process, or a method of organizing human activity, such as a fundamental economic concept or managing interactions between people. While ANNs may be trained using mathematics, no mathematical concept is recited in the claim. Because the claim does not recite a judicial exception (Step 2A, Prong One: NO), it cannot be directed to one (Step 2A: NO). The claim is eligible, and the analysis does not proceed to Step 2A Prong Two or Step 2B.
Key Takeaway for Claim 2: The reliance on only software processing features and a general “computer,” without enumerating how the software performs or achieves its processing features, dooms the claim to ineligibility.
Claim 2. A method of using an artificial neural network (ANN) comprising:
(a) receiving, at a computer, continuous training data;
(b) discretizing, by the computer, the continuous training data to generate input data;
(c) training, by the computer, the ANN based on the input data and a selected training algorithm to generate a trained ANN, wherein the selected training algorithm includes a backpropagation algorithm and a gradient descent algorithm;
(d) detecting one or more anomalies in a data set using the trained ANN;
(e) analyzing one or more detected anomalies using the trained ANN to generate anomaly data; and
(f) outputting the anomaly data from the trained ANN.
Background for Claim 2:
The ANN introduced with Claim 1 is trained in Claim 2. The application further describes conventional operations and functionalities of typical AI training. For example, the application describes the training data as being received as continuous data at a computer, and using the computer to discretize the continuous data. Machine learning models may benefit from being trained with discrete data, with a limited number of values, rather than continuous data. Any type of discretization method may be used to convert continuous data to discrete data, including binning, clustering, and numerical discretization. The ANN is then trained using any known training techniques, including a conventional backpropagation algorithm and a conventional gradient descent algorithm. The trained ANN is then used to monitor incoming data sets to detect anomalies. If the trained ANN detects one or more anomalies, it additionally analyzes the detected anomalies to generate anomaly data, which can be output to a user and/or used to re-train the ANN. For example, the anomaly data may explain the type of anomaly or the cause of the anomaly.
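The discretization step recited in Claim Element (b), converting continuous training data into a limited set of discrete values, can be illustrated with an equal-width binning sketch. This is hypothetical: the application permits any discretization method, including binning, clustering, and numerical discretization, and the function name below is invented for this illustration:

```python
def discretize(values, num_bins=4):
    """Equal-width binning: map each continuous value to a discrete bin index."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / num_bins or 1.0  # guard against constant data (zero width)
    # Values at the upper edge are clamped into the last bin.
    return [min(int((v - lo) / width), num_bins - 1) for v in values]

continuous = [0.1, 0.4, 0.35, 0.8, 0.95, 0.2]
print(discretize(continuous))  # [0, 1, 1, 3, 3, 0]
```

The discrete output (a limited number of values) is what the application says benefits model training, compared to the raw continuous input.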
Anomaly detection is an important task that impacts any industry that benefits from identifying abnormal data that deviates from expected data or from a general pattern. For example, an intrusion detection system may use the disclosed anomaly detection method to improve the detection of malicious network packets. A difficulty in anomaly detection is that a system must define the boundary between ordinary and anomalous data and accurately classify data falling on either side of it. The line between ordinary and anomalous data may be difficult to draw for cases near the boundary, and it varies with the application-specific domain. For example, minor variations may trigger an identification of an anomaly in network security or medicine, while relatively more significant deviations may be considered normal in less sensitive applications. Furthermore, malicious actors may attempt to make anomalies appear like ordinary activity. This application provides solutions for using a trained ANN to quickly and accurately identify anomalies as compared to anomaly detection performed using traditional methods.
In some embodiments, the ANN may detect anomalies in a network where the anomalies indicate potential network intrusions or malicious attacks. If the ANN detects one or more anomalies in network traffic, the ANN can additionally determine whether a detected anomaly is associated with a malicious packet. If so, the ANN may cause a network device to drop the malicious packet and block future traffic from the sender of the malicious packet. By automatically detecting network intrusions or other malicious attacks, the present invention enhances network security by allowing for automatic, proactive remediation of network attacks. Conventional systems may use various detection techniques for detecting potentially malicious network packets and their sources (through a tracing operation or the use of a software tool) and can alert a network administrator to potential problems. Unlike those conventional solutions, the disclosed method and system identify malicious network packets and take remediation actions in real time, including automatically dropping suspicious packets and blocking traffic from suspicious source addresses, without the need to alert a network administrator. The disclosed system thus realizes an improvement in network security by avoiding the delay involved in waiting on a network administrator to react to a network intrusion.
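The remediation flow described above (detect an anomaly, determine whether it is malicious, then drop the packet and block its source without administrator involvement) can be sketched as a simple decision loop. This is a hypothetical illustration: the trained ANN is replaced by stub classifiers, and all names are invented for this sketch:

```python
def remediate(packets, is_anomalous, is_malicious):
    """Drop malicious packets and block their source addresses automatically,
    without waiting on a network administrator."""
    blocked_sources = set()
    delivered = []
    for pkt in packets:
        if pkt["src"] in blocked_sources:
            continue  # traffic from a blocked source is dropped outright
        if is_anomalous(pkt) and is_malicious(pkt):
            blocked_sources.add(pkt["src"])  # block future traffic from this sender
            continue                         # and drop the malicious packet itself
        delivered.append(pkt)
    return delivered, blocked_sources

# Stub classifiers standing in for the trained ANN.
anomalous = lambda p: p["size"] > 1000
malicious = lambda p: p.get("payload") == "exploit"

traffic = [
    {"src": "10.0.0.1", "size": 200},
    {"src": "10.0.0.9", "size": 5000, "payload": "exploit"},
    {"src": "10.0.0.9", "size": 300},   # dropped: source already blocked
]
ok, blocked = remediate(traffic, anomalous, malicious)
print(len(ok), blocked)  # 1 {'10.0.0.9'}
```

The eligibility significance, as the Step 2A Prong Two analysis below explains, lies in the last two steps: acting on the detection in real time is what the Office treats as the claimed technical improvement.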
SME Holding for Claim 2:
Claim 2 is ineligible.
Step 1: Under MPEP § 2106.03, the analysis determines whether the claim falls within any statutory category, including processes, machines, manufactures, and compositions of matter. Here, the claim recites at least one step or act, including receiving continuous training data. Thus, the claim is directed to a process, which is one of the statutory categories of invention. (Step 1: YES).
Step 2A, Prong One: Under MPEP § 2106.04(II), the claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim. Here, Claim Elements (b), (d), and (e) fall within the mental process grouping of abstract ideas because they cover concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP § 2106.04(a)(2)(III). Claim Element (c) is directed to mathematical concepts (a backpropagation algorithm and a gradient descent algorithm). (Step 2A Prong One: YES).
Step 2A, Prong Two: Under MPEP § 2106.04(d), the analysis determines whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is “directed to” the judicial exception. Here, the claim recites the additional elements of “(a) receiving, at a computer, continuous training data,” “using the trained ANN” in Claim Elements (d) and (e), and “(f) outputting the anomaly data from the trained ANN.” Claim Elements (b) and (c) are performed by a general-purpose computer. Claim Elements (a) and (f) are mere data gathering and output recited at a high level of generality and thus are insignificant extra-solution activity. Claim Elements (a), (b), and (c) are recited as being performed by a computer at a high level of generality. The recitation of “using the trained ANN” in Claim Elements (d) and (e) provides nothing more than mere instructions to implement an abstract idea on a generic computer. Claim Elements (d) and (e) merely indicate a field of use or technological environment in which the judicial exception is performed, confining the use of the abstract idea to a particular technological environment (neural networks), and thus fail to add an inventive concept to the claims. (Step 2A Prong Two: NO).
Step 2B: Under MPEP § 2106.05, the analysis evaluates whether the claim as a whole amounts to “significantly more” than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. As a practice tip, if the claim cannot provide a “practical application” of the abstract idea, it generally cannot provide “significantly more” than the abstract idea. Indeed, most of the claim features are examples of “insignificant extra-solution activity” under Step 2A, Prong Two, so the eligibility analysis also fails at Step 2B. (Step 2B: NO).
Applicant’s Sample Response to a Section 101 Rejection of Claim 2:
Respond to the rejection with amendments to your claims that more closely align them with Example 47, Claim 1 or Example 47, Claim 3. Also, when you are initially drafting the application, ensure that the how/why of the invention is described to avoid this type of rejection during prosecution.
Key Takeaway for Claim 3: Example 47, Claim 3 is another helpful example for Applicants to cite when arguing patent eligibility of software-based AI inventions, especially under Step 2A Prong Two. Although the claim recites limitations that can be interpreted as mental processes/mathematical concepts (backpropagation and gradient descent algorithms), as a whole, the claim integrates the judicial exception into a practical application vis-à-vis improving computer functionality or improving a technological field (dropping… malicious network packets/blocking future traffic from the source address).
Ineligible Claim 2 and eligible Claim 3 are provided below for reference:
Claim 2. A method of using an artificial neural network (ANN) comprising:
(a) receiving, at a computer, continuous training data;
(b) discretizing, by the computer, the continuous training data to generate input data;
(c) training, by the computer, the ANN based on the input data and a selected training algorithm to generate a trained ANN, wherein the selected training algorithm includes a backpropagation algorithm and a gradient descent algorithm;
(d) detecting one or more anomalies in a data set using the trained ANN;
(e) analyzing the one or more detected anomalies using the trained ANN to generate anomaly data; and
(f) outputting the anomaly data from the trained ANN.
Claim 3. A method of using an artificial neural network (ANN) to detect malicious network packets comprising:
(a) training, by a computer, the ANN based on input data and a selected training algorithm to generate a trained ANN, wherein the selected training algorithm includes a backpropagation algorithm and a gradient descent algorithm;
(b) detecting one or more anomalies in network traffic using the trained ANN;
(c) determining at least one detected anomaly is associated with one or more malicious network packets;
(d) detecting a source address associated with the one or more malicious network packets in real time;
(e) dropping the one or more malicious network packets in real time; and
(f) blocking future traffic from the source address.
Background for Claim 3:
Same as Claim 2.
SME Holding for Claim 3:
Claim 3 is eligible.
Step 1: Under MPEP § 2106.03, the analysis determines whether the claim falls within any statutory category, including processes, machines, manufactures, and compositions of matter. Here, the claim recites a series of steps and, therefore, is a process. (Step 1: YES).
Step 2A, Prong One: Under MPEP § 2106.04(II), the analysis determines whether the claim recites a judicial exception. The claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim. Here, Claim Element (a) recites mathematical calculations (a backpropagation algorithm and a gradient descent algorithm) to perform the training of the ANN and therefore encompasses mathematical concepts. Claim Elements (b) and (c) recite concepts that could be performed in the human mind, particularly “detecting one or more anomalies in network traffic” and “determining at least one detected anomaly is associated with one or more malicious network packets.” Claim Elements (d)-(f) do not recite mental processes because they cannot be practically performed in the human mind. That is, the human mind is not equipped to detect a source address associated with malicious network packets, drop the malicious network packets in real time, and block future traffic as recited in the claim. However, because Claim Elements (a), (b), and (c) fall within groupings of abstract ideas (mathematical concepts and mental processes, respectively), the analysis proceeds to Prong Two. (Step 2A, Prong One: YES).
Step 2A, Prong Two: Under MPEP § 2106.04(d), the analysis determines whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is “directed to” the judicial exception. One way to show integration into a practical application is when the claimed invention improves the functioning of a computer or improves another technology or technical field. To evaluate an improvement to a computer or technical field, the specification must set forth an improvement in technology and the claim itself must reflect the disclosed improvement. See MPEP § 2106.04(d)(1) and 2106.05(a). Here, Claim Elements (d), (e), and (f), in view of the claim as a whole, include an improvement to a computer or to a technological field; this requires an evaluation of the specification and the claim to ensure that a technical explanation of the asserted improvement is present in the specification and that the claim reflects the asserted improvement. For example, as recited in the background, existing systems use various detection techniques for detecting potentially malicious network packets and can alert a network administrator to potential problems. The disclosed system, by contrast, detects network intrusions and takes real-time remedial actions, including dropping suspicious packets and blocking traffic from suspicious source addresses. The background section further explains that the disclosed system enhances security by acting in real time to proactively prevent network intrusions. Additionally, Claim Elements (d), (e), and (f) provide for improved network security by using the information from the detection to take proactive remedial measures, including detecting the source address associated with the potentially malicious packets. These steps reflect the improvement described in the background.
Thus, the claim as a whole integrates the judicial exception into a practical application (Step 2A, Prong Two: YES), such that the claim is not directed to the judicial exception (Step 2A: NO). The claim is eligible.
Applicant’s Sample Response to a Section 101 Rejection of Claim 3:
Step 1: Under MPEP § 2106.03, the analysis determines whether the claim falls within any statutory category, including processes, machines, manufactures, and compositions of matter. Here, the claim recites a series of steps and, therefore, is a process. (Step 1: YES).
Step 2A, Prong One: Under MPEP § 2106.04(II), the analysis determines whether the claim recites a judicial exception. The claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim. Here, the Office Action alleges that two abstract ideas are recited, including mathematical concepts and mental processes. The features of claim 3 are directed to non-abstract concepts performed on a specific computing device. Should the Office hold that the claims fall within an abstract idea, the analysis proceeds to Step 2A, Prong Two. (Step 2A, Prong One: YES).
Step 2A, Prong Two: Under MPEP § 2106.04(d), the analysis determines whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is “directed to” the judicial exception. Here, Claim Elements (d), (e), and (f), in view of the claim as a whole, include an improvement to a computer or to a technological field. The analysis of these Claim Elements requires an evaluation of the specification and the claim to ensure that a technical explanation of the asserted improvement is present in the specification and that the claim, as a whole, reflects the asserted improvement. For example, as recited in the background, existing systems use various detection techniques for detecting potentially malicious network packets and can alert a network administrator to potential problems. The disclosed system detects network intrusions and takes real-time remedial actions, including dropping suspicious packets and blocking traffic from suspicious source addresses. The background section further explains that the disclosed system enhances security by acting in real time to proactively prevent network intrusions. Additionally, Claim Elements (d), (e), and (f) provide for improved network security by using the information from the detection to take proactive remedial measures, including detecting the source address associated with the potentially malicious packets. These steps reflect the improvement described in the background. Thus, the claim as a whole integrates the judicial exception into a practical application such that the claim is not directed to the judicial exception (Step 2A, Prong Two: YES). The claim is eligible under Step 2A Prong Two, and the analysis does not proceed to Step 2B.
About the Authors:
Melissa Patterson focuses on the preparation and prosecution of patent applications, licensing, and litigation, particularly in innovative technologies such as software, computer and mechanical devices, AR/VR headsets, mobile communications, artificial intelligence, robotics, blockchain, insurance, healthcare records management, geophysical management, automotive technologies, and financial networks. She has successfully prosecuted over 1,000 U.S. and international patent matters, collaborating extensively with foreign counsel.
Hector Agdeppa focuses his practice on intellectual property law, particularly preparing and prosecuting patent applications. With extensive experience in electrical, mechanical, and computer software arts, as well as medical device innovation, he leverages his industry knowledge to guide clients through patent portfolio management and, when desired, monetization strategies.
Mark Catanese is a patent prosecution attorney in Dickinson Wright’s San Diego office. His practice focuses on the preparation and prosecution of patent applications for a diverse clientele, ranging from startups to Fortune 500 companies. His experience spans consumer electronics, wireless communications, satellite technologies, computer hardware and software, machine learning and artificial intelligence, robotic systems, LED and LCD display technologies, augmented and virtual reality, optical systems, semiconductor devices, medical devices, and automotive systems.