Whether it’s the first alert of the shift, or the last that populates the queue right as an analyst begins to decompress and prepare to call it a day, an Endpoint Detection and Response (EDR) alert demands immediate action. Investigating EDR alerts is complex, spanning files, processes, and network activity. It requires a highly disciplined, multi-stage investigation to uncover the true nature of the event.
This article outlines a robust methodology, centered on Triage and Response, designed to navigate this complexity. Initial efforts focus on classification, vetting the alert's legitimacy, and prioritizing impact. This crucial scoping phase dictates how quickly and extensively the investigation moves from local EDR data to the necessary correlative context gathered from cloud security platforms.
Many high-priority incidents follow a familiar sequence. Here’s an example: a phishing email is delivered to Office 365, the recipient clicks a malicious link, and a payload is ultimately dropped and flagged by CrowdStrike’s ML engines. However, the initial alert an analyst sees is often the endpoint detection (the dropped file or suspicious execution), not the email itself. That forces analysts to work backwards from endpoint evidence to reconstruct the delivery chain.
What happens when an analyst gets that first alert? The foundational step of triage is classification. Is the alert focused on a Process (e.g., suspicious command line execution), Network activity (e.g., failed C2 connection), a specific Asset status (e.g., system configuration change), or, as in this case, a File? Pinpointing the alert type is critical because it immediately directs the analyst to the relevant data source and initial investigative playbook. In this example, the EDR system has flagged a binary for malicious properties, and the alert is therefore immediately classified as a File-based Alert.
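As a concrete illustration of how classification drives the rest of triage, here is a minimal Python sketch that routes a normalized alert to a starting playbook. The alert fields and playbook names are illustrative assumptions, not any vendor's schema.

```python
# Minimal sketch: route a normalized alert to a starting playbook by type.
# The field names and playbook names are illustrative assumptions.
PLAYBOOKS = {
    "process": "suspicious-execution-playbook",
    "network": "c2-connection-playbook",
    "asset": "config-change-playbook",
    "file": "malicious-file-playbook",
}

def route_alert(alert: dict) -> str:
    """Return the triage playbook matching the alert's classification."""
    return PLAYBOOKS.get(alert.get("type", "").lower(), "manual-triage")

# A file-based detection lands in the file playbook.
print(route_alert({"type": "File", "detect_name": "ML.Malware.HighConfidence"}))
```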
Once the type of alert is determined and its priority is set, the analyst can home in on a more focused line of questioning and investigative methodology. This is the scoping stage of the investigation, which lets analysts direct their efforts along a streamlined path.
In this scenario, CrowdStrike’s machine learning engine flagged a binary on a single user’s device as malicious. The investigator’s first step is to review the telemetry gathered and bundled with the alert.
At minimum, the investigator should confirm the presence of key file metadata:
- File hashes (SHA256, SHA1, or MD5)
- Folder path and original filename
- Access and creation timestamps
- The precipitating process that created or attempted to execute the file
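Where the analyst has access to a local copy of the suspect file, collecting this metadata can be scripted. The following is a minimal Python sketch under that assumption; the path shown is a placeholder, and timestamp semantics vary by platform.

```python
# Minimal sketch: gather the metadata above from a local copy of the suspect
# file. The path is a placeholder; note st_ctime means "metadata change" on
# Linux but "creation" on Windows.
import hashlib
import os
from datetime import datetime, timezone

def file_metadata(path: str) -> dict:
    digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            for digest in digests.values():
                digest.update(chunk)
    stat = os.stat(path)
    as_utc = lambda ts: datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
    return {
        "path": path,
        "filename": os.path.basename(path),
        **{name: digest.hexdigest() for name, digest in digests.items()},
        "accessed": as_utc(stat.st_atime),
        "created_or_changed": as_utc(stat.st_ctime),
        "modified": as_utc(stat.st_mtime),
    }

print(file_metadata(r"C:\Users\jdoe\Downloads\invoice_viewer.exe"))
```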
Before committing to a full forensic dive, the security analyst must quickly answer whether the alert is legitimate. The analyst checks the alert details for its confidence score or detection source. A flag from a generic ML algorithm carries a higher chance of being a False Positive (FP) than a match against a high-fidelity Indicator of Compromise (IoC). The immediate action is to compare the file hash against internal threat intelligence platforms (TIPs). If the file is unknown, the analyst quickly checks the process context to see whether the activity (a file being dropped by a web browser) is expected. Browser file drops are often legitimate, so the ML flag alone is not conclusive, but it raises enough concern to justify the next level of investigation.
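As a sketch of the hash reputation step, the example below uses VirusTotal’s public v3 files endpoint as a stand-in for an internal TIP lookup; the VT_API_KEY environment variable and the returned fields used here are assumptions about your tooling, not a prescribed workflow.

```python
# Minimal sketch: hash reputation lookup. VirusTotal's public v3 API stands in
# for an internal TIP; VT_API_KEY is an assumed environment variable.
import os
import requests

def hash_reputation(sha256: str) -> dict:
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{sha256}",
        headers={"x-apikey": os.environ["VT_API_KEY"]},
        timeout=10,
    )
    if resp.status_code == 404:
        return {"known": False}  # never seen: fall back to process context
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    return {"known": True, "malicious_verdicts": stats.get("malicious", 0)}

verdict = hash_reputation("e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")
print(verdict)
```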
The investigator must determine whether the EDR system took any remediation action on the detection: was the file contained or deleted, and was the affected system quarantined?
Checking the remediation status is the fastest way to assess the current risk level. Assuming no remediation actions occurred (perhaps due to policy or initial alert confidence), the investigator must immediately determine why, and whether action still needs to be taken to fully triage the event.
The security analyst determines the priority level by assessing the potential impact. In our example, since the alert involves a user's desktop, it is automatically elevated based on Asset Criticality and User Context. If the user holds elevated permissions or handles sensitive data (e.g., an executive or finance employee), the alert’s priority is immediately raised to Critical. This prioritization dictates the speed and urgency of the investigation, which can require immediate network isolation of the affected asset for containment purposes, even before execution is confirmed.
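One lightweight way to make that prioritization repeatable is a simple scoring rule over asset tier and user role. The sketch below is illustrative only; the tiers, roles, and resulting priorities are assumptions, not a standard.

```python
# Illustrative-only priority scoring from asset tier and user context; the
# tiers, roles, and resulting priorities are assumptions, not a standard.
def alert_priority(asset_tier: str, user_role: str) -> str:
    high_value_roles = {"executive", "finance", "it-admin"}
    if asset_tier == "crown-jewel" or user_role in high_value_roles:
        return "critical"   # justifies immediate isolation, even pre-execution
    if asset_tier == "server":
        return "high"
    return "medium"

print(alert_priority(asset_tier="workstation", user_role="finance"))  # critical
```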
The most critical question to answer is whether the suspect binary was executed. In CrowdStrike, the Falcon sensor logs execution events under the process rollup log, including key details such as the parent process that initiated the file’s execution. This step is consistent across EDR products, and execution can also be corroborated with Windows system event logs (Event ID 4688, process creation).
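For corroboration outside the EDR console, a quick check of the Security event log for Event ID 4688 can confirm whether the binary ever spawned a process. The sketch below shells out to wevtutil (the built-in Windows event log CLI) and string-matches the suspect file name; the file name is a placeholder, admin rights are required, and 4688 auditing must be enabled on the host for any events to exist.

```python
# Minimal sketch: corroborate execution with Security log Event ID 4688
# (process creation) via wevtutil, the built-in Windows event log CLI.
# Requires admin rights and 4688 auditing; the file name is a placeholder.
import subprocess

SUSPECT_NAME = "invoice_viewer.exe"  # assumed name from the EDR alert

def was_executed(suspect_name: str, max_events: int = 500) -> bool:
    result = subprocess.run(
        ["wevtutil", "qe", "Security",
         "/q:*[System[(EventID=4688)]]",   # only process-creation events
         "/f:text", f"/c:{max_events}", "/rd:true"],
        capture_output=True, text=True, check=True,
    )
    # NewProcessName in the event text carries the full path of the new process.
    return suspect_name.lower() in result.stdout.lower()

print("executed" if was_executed(SUSPECT_NAME) else "no 4688 evidence of execution")
```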
If the investigation determines that the file was indeed executed, follow-on activity must then be examined. Malware often arrives on a system through a coordinated sequence of stages, meaning the first event detected may be only the precursor to additional malicious files or activity. Analysis has to confirm whether any artifacts or lingering impact remain, which could appear anywhere from inconspicuous temporary folder paths to registry value entries and beyond. The key is to isolate the EDR telemetry down to significant events attributed to the suspected malware and trace their behaviors through process trees, network connection attempts, or noticeably suspicious process injection attempts.
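Reconstructing the process tree from flattened telemetry is often the quickest way to see that chain. The sketch below assumes each EDR record carries a process ID, parent process ID, and image name; the field names and sample events are illustrative, not a specific vendor schema.

```python
# Minimal sketch: rebuild a process tree from flattened EDR telemetry.
# Field names and the sample events are illustrative, not a vendor schema.
from collections import defaultdict

events = [
    {"pid": 100, "ppid": 1,   "image": "explorer.exe"},
    {"pid": 204, "ppid": 100, "image": "chrome.exe"},
    {"pid": 310, "ppid": 204, "image": "invoice_viewer.exe"},
    {"pid": 415, "ppid": 310, "image": "powershell.exe"},
]

children = defaultdict(list)
names = {}
for event in events:
    children[event["ppid"]].append(event["pid"])
    names[event["pid"]] = event["image"]

def print_tree(pid: int, depth: int = 0) -> None:
    """Depth-first walk so the execution chain reads top to bottom."""
    print("  " * depth + f"{names.get(pid, '?')} (pid {pid})")
    for child_pid in children.get(pid, []):
        print_tree(child_pid, depth + 1)

print_tree(100)  # explorer -> chrome -> invoice_viewer -> powershell
```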
In this scenario, the investigation confirmed there was no follow-on activity because the file itself was never executed. The threat can be triaged at the endpoint, but the incident is not closed.
The EDR data reveals what was dropped and where it landed, but not how it arrived. The investigator must determine the file's origin so that mitigations can be put in place to prevent recurrence.
Since the investigator is already engaged with the affected device’s telemetry and the incident timeline from previous analysis, they can pivot out from the file's creation timestamp. First, the local EDR telemetry can be checked for any suspicious network or web browsing activity that correlates with the file’s creation time. Did the user visit a strange URL immediately prior?
If EDR cannot definitively identify the originating process (e.g., if the file was downloaded via a browser process that doesn't explicitly log the originating URL), detail must be gathered from the Cloud. The investigation must move from Endpoint data to Cloud data, specifically the Office 365 Unified Audit Log. Using the affected User ID and the File Creation Timestamp as bounding boxes, a targeted search is initiated within the O365 logs.
The goal is to conduct a tight timeline analysis to see whether the user received a suspicious email, opened a malicious attachment, or clicked a link near the creation time of the suspect file. The investigation looks for activities such as:
- Delivery of an email from an unknown or external sender shortly before the file appeared
- The user opening an attachment from that message
- The user clicking a link that leads to an external download
By analyzing the subject line, sender address, and email metadata from the audit logs, the investigator can identify the specific email that led to the file drop. This information allows the investigator to understand the initial attack vector and complete the picture.
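A minimal sketch of that bounded search is shown below, assuming the Unified Audit Log has been exported to CSV with CreationDate, UserIds, Operations, and AuditData columns (typical of Purview exports); the user, timestamp, and file name are placeholders, and the date parsing should be adjusted to match your export.

```python
# Minimal sketch: bound a Unified Audit Log CSV export by user and a time
# window around the file's creation. Column names and date format are assumed
# from typical Purview exports; adjust to match your environment.
import csv
import json
from datetime import datetime, timedelta

USER_ID = "jdoe@example.com"                      # affected user (placeholder)
FILE_CREATED = datetime(2024, 5, 14, 13, 42, 0)   # creation timestamp from EDR
WINDOW = timedelta(minutes=15)

with open("ual_export.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        if row["UserIds"].lower() != USER_ID:
            continue
        # CreationDate format varies by export; adjust parsing as needed.
        when = datetime.fromisoformat(row["CreationDate"][:19])
        if abs(when - FILE_CREATED) > WINDOW:
            continue
        detail = json.loads(row["AuditData"])
        print(when, row["Operations"], detail.get("ClientIP", ""), detail.get("ObjectId", ""))
```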
Once the malicious file is contained on the endpoint and the source email is identified, the investigation moves to organizational-level mitigation:
Mitigate the threat by immediately blocking the sender’s address, domain, or IP (depending on the environment’s configuration) at the perimeter (e.g., Exchange Online Protection or a Secure Email Gateway). The investigator must then confirm that no other employees in the organization received email from the malicious sender, and perform a message trace to quarantine or delete the malicious email from all inboxes.
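One way to scope whether other mailboxes received mail from the same sender is a per-user query against Microsoft Graph, sketched below. It assumes an app registration with application-level Mail.Read permission, an already-acquired access token, and placeholder sender and user addresses; it supplements, rather than replaces, a full message trace and purge.

```python
# Minimal sketch: check other mailboxes for mail from the malicious sender via
# Microsoft Graph. Assumes an app registration with application Mail.Read
# permission and a token already in hand; sender and users are placeholders.
# This scopes exposure; it does not replace a message trace and purge.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"                        # acquired via your OAuth flow
BAD_SENDER = "billing@invoice-alerts.example"   # assumed malicious sender

def mailboxes_with_sender(users: list[str]) -> list[str]:
    hits = []
    for upn in users:
        resp = requests.get(
            f"{GRAPH}/users/{upn}/messages",
            headers={"Authorization": f"Bearer {TOKEN}"},
            params={
                "$filter": f"from/emailAddress/address eq '{BAD_SENDER}'",
                "$select": "subject,receivedDateTime",
            },
            timeout=10,
        )
        resp.raise_for_status()
        if resp.json().get("value"):
            hits.append(upn)
    return hits

print(mailboxes_with_sender(["jdoe@example.com", "asmith@example.com"]))
```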
By correlating the forensic data from the Endpoint and the contextual data from the Cloud, the investigator fully triages the threat, prevents immediate execution damage, and implements controls to protect the entire organization from the identified phishing campaign.
To triage and classify an EDR alert, you first identify whether it is a Process, Network, Asset, or File event. This classification points you to the right data source and investigation procedure, such as process trees for execution or file details for malware. Starting with clear classification speeds the rest of triage and response.
For a file-based EDR alert, analysts should collect SHA256, SHA1, or MD5 hashes, the folder path, and the original filename. They should also capture access and creation timestamps and the precipitating process that created or executed the file. These metadata elements anchor hash reputation checks and timeline analysis.
To decide whether an EDR alert is a true positive or a false positive, compare the file hash with internal threat intelligence and review the detection source. A generic machine learning flag carries more false positive risk than a high fidelity indicator of compromise, so you validate with process context such as a browser download. Rapid legitimacy checks prevent unnecessary deep dives and focus effort where risk is highest.
To prioritize an EDR alert based on asset criticality and user context, weigh the role and data sensitivity of the affected endpoint. Alerts on devices used by administrators or finance users are elevated to critical, which justifies faster investigation and immediate containment such as network isolation. Priority setting governs response speed and scope.
To determine whether a suspicious binary actually executed, review EDR execution logs and parent-child process relationships. In CrowdStrike Falcon, check the process rollup log, and in Windows confirm with Event ID 4688 for process creation. Execution confirmation changes containment and scoping decisions.
To trace follow-on activity in EDR telemetry after execution, pivot through the process tree and examine file writes, registry changes, network connections, and code injection attempts. The goal is to isolate significant events attributed to the suspicious process and map their sequence. This confirms impact and guides eradication.