Need Heartbeat Query


Hi Team,


I am trying to write a KQL query that catches when even a single heartbeat is missed.

As you can see in the screenshot below, this server sends a heartbeat at one-minute intervals.

There is now a gap in the heartbeats because I stopped the scx service. I want an alert notification whenever even a single heartbeat is missed.



10 Replies
best response confirmed by GouravIN (Contributor)


Personally, I prefer the example query:

// Availability rate
// Calculate the availability rate of each connected computer
Heartbeat
// bin_at is used to set the time grain to 1 hour, starting exactly 24 hours ago
| summarize heartbeatPerHour = count() by bin_at(TimeGenerated, 1h, ago(24h)), Computer
| extend availablePerHour = iff(heartbeatPerHour > 0, true, false)
| summarize totalAvailableHours = countif(availablePerHour == true) by Computer
| extend availabilityRate = totalAvailableHours * 100.0 / 24


Heartbeats are expected to be missed occasionally (pauses, glitches, load, etc.) and the data will catch up, so you may get false positives.
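One way to reduce those false positives (a hedged sketch, not a tested solution: "hardening-demo" is just the example computer name from this thread, and the 180-second threshold is an assumption for a 1-minute heartbeat) is to alert only when the gap between consecutive heartbeats spans several expected intervals:

```kusto
Heartbeat
| where TimeGenerated >= ago(1h)
| where Computer == "hardening-demo"   // example computer name from this thread
| order by TimeGenerated asc
| extend prevBeat = prev(TimeGenerated)        // previous heartbeat (rows are serialized by order by)
| where isnotempty(prevBeat)
| extend gapSeconds = datetime_diff('second', TimeGenerated, prevBeat)
| where gapSeconds >= 180                      // assumed threshold: 3 missed 1-minute intervals
| project Computer, prevBeat, TimeGenerated, gapSeconds
```

Tune the threshold to whatever gap you consider a real outage rather than a brief pause.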

You can use datetime_diff to compare consecutive heartbeats:

Heartbeat
| where TimeGenerated >= ago(1h)
| where Computer == "hardening-demo"
| project Computer, TimeGenerated
| order by TimeGenerated desc
| project n = TimeGenerated, nminus = prev(TimeGenerated), TimeGenerated, Computer
| where isnotempty(nminus)
// compare each row's time (n) with the previous row's time (nminus)
| extend second = datetime_diff('second', nminus, n)
| where second >= 60

The results include gaps below 60 seconds (mainly 9 and 51 for the demo data); just remove the last line of the above query to see these:

n                         nminus                    TimeGenerated             Computer        second
2019-11-22T17:42:37.88Z   2019-11-22T17:42:46.523Z  2019-11-22T17:42:37.88Z   hardening-demo  9
2019-11-22T17:41:46.52Z   2019-11-22T17:42:37.88Z   2019-11-22T17:41:46.52Z   hardening-demo  51
2019-11-22T17:41:37.877Z  2019-11-22T17:41:46.52Z   2019-11-22T17:41:37.877Z  hardening-demo  9
2019-11-22T17:40:46.52Z   2019-11-22T17:41:37.877Z  2019-11-22T17:40:46.52Z   hardening-demo  51
2019-11-22T17:40:37.873Z  2019-11-22T17:40:46.52Z   2019-11-22T17:40:37.873Z  hardening-demo  9



@CliveWatson Just to add to this conversation, I've come up with a slightly different way of doing this; I'd love feedback:

let current = now();
let ostype = 'Windows';
let computername = '';
let environment = 'Non-Azure';
let threshold = 600;
Heartbeat
| where TimeGenerated >= ago(1h)
// --for a specific computer:
| where Computer contains computername
// --for a specific computer group:
//| where Computer in (group)
// --for a specific OS type:
| where OSType contains ostype
// --for on-prem or Azure VMs:
| where ComputerEnvironment contains environment
| project Computer, TimeGenerated, current
| order by TimeGenerated desc
| project nminus = prev(TimeGenerated), current, Computer
| where isnotempty(nminus)
| extend ['LastHeartbeat (in seconds)'] = datetime_diff('second', current, nminus)
| summarize arg_max(nminus, *) by Computer
| where ['LastHeartbeat (in seconds)'] >= threshold
| project Computer, QueryTime = current, LastTimeStamp = nminus, ['LastHeartbeat (in seconds)']


Looks good @Scott Allison, I would just swap contains to has, as per best practice.

@CliveWatson Thanks! I've seen weirdness with has versus contains. I haven't noted what that weirdness is, but if I run across it again, I'll be sure to share.

@CliveWatson - here's a perfect example of why the HAS operator isn't useful for many operations:

This query returns the expected results every time:

Heartbeat
| where Computer contains 'abc'
| distinct Computer

For example, this returns the expected list of computers.

When I replace CONTAINS with HAS, I get 0 results. So in 99% of my use cases, HAS doesn't work at all. 

@Scott Allison 


That is the behavior I'd expect 


From the docs: 
Prefer the has operator over contains when looking for full tokens. has is more performant, as it doesn't have to look up substrings.


What does that mean in practice?


1. This query example will fail for computers named aks-nodepool1.nnnnnnnnn, because 'pool' is only part of a token, not a full token:



Heartbeat | where Computer has 'pool' | distinct Computer
Note: if you used "nodepool1" it would work, because "nodepool1" is a full token.
Whereas this works ("aks" is a full-token match):

Heartbeat | where Computer has 'aks' | distinct Computer
So on a small dataset it won't matter whether you use contains or has; on a large one, has could improve performance.
When I create a query I will often start with a contains, but will then (if I remember! Sorry if you find a query from me that isn't optimized) check and swap to a has if that works. You need to evaluate on a case-by-case basis.
Essentially, with has, KQL only scans the relevant data using indexes, rather than having to read ALL the data (imagine if the string were really, really long, rather than a simple computer name).
It's rather like a book and its index: would you rather find out which chapters mention "Scott" by checking the index, or by reading every line, every word, and every word within a word?
Make sense?
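As a self-contained illustration (using an inline datatable, so it runs without any Heartbeat data; the column names here are made up for this sketch), the tokenization difference looks like this:

```kusto
let names = datatable(Computer: string) ["aks-nodepool1.nnnnnnnnn"];
names
| extend hasAks       = Computer has 'aks',        // true:  'aks' is a full token
         hasPool      = Computer has 'pool',       // false: 'pool' is only part of the token 'nodepool1'
         containsPool = Computer contains 'pool'   // true:  contains matches any substring
```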

@CliveWatson Definitely makes sense. Today I have only a few use cases for has (querying Event Logs or Syslog comes to mind). Your explanation clears things up for me.

Appreciate it!



Data source: Azure

The query below is based on events which are registered and then cleared after a while.


// list records for the last 30 days:
Heartbeat
| where TimeGenerated > ago(30d)
| summarize LastCall = arg_max(TimeGenerated, *) by Computer
// retrieve machines that have not sent a heartbeat in the last 4 hours:
| where LastCall < ago(4h)
| project Computer, LastCall, timestamp = format_datetime(LastCall, 'MM-dd-yyyy hh:mm'),
    startofday = format_datetime(startofday(now()), 'MM-dd-yyyy'),
    VMUUID, SourceComputerId, ComputerIP, ResourceId
| sort by LastCall


The above KQL query runs every day, so today it may return 10 computers (10 events), tomorrow 7 computers (7 events), etc.
I want to keep records of all events (today's, tomorrow's, etc.) for 31 days.


How can I use Power BI or Power BI Dataflow to achieve the above?

Otherwise, how do I get data from Log Analytics workspaces into an Azure SQL Database and apply an incremental refresh (so today's records don't overwrite yesterday's)?



You can store the data for longer (at a cost) by increasing retention. You can also see the other days from within the query:

Heartbeat
| where TimeGenerated > startofday(ago(30d))
| summarize count(), LastCall = max(TimeGenerated) by Computer, bin(TimeGenerated, 1d)
| where LastCall < ago(1m)
| render columnchart




If I run your query, I get 4 rows returned for today, so it's like a snapshot for today, taken when the query was run. How can I get a snapshot for every day over the last 30 days where VM downtime is 4 hours or longer, with the snapshot taken at 9am daily?
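One possible starting point (a sketch only, under stated assumptions: the 4-hour threshold matches the earlier query, and it only measures the gap from each day's last heartbeat to the end of that day, so it misses gaps that close within the same day):

```kusto
Heartbeat
| where TimeGenerated > startofday(ago(30d))
| summarize LastCall = max(TimeGenerated) by Computer, Day = startofday(TimeGenerated)
// gap from the day's last heartbeat to midnight of that day
| extend DownHours = datetime_diff('hour', Day + 1d, LastCall)
| where DownHours >= 4
| project Day, Computer, LastCall, DownHours
| sort by Day desc
```

For a true 9am snapshot per day, you would still need to run the query on a schedule (e.g. a daily logic app or scheduled alert) and persist each day's results, since the Heartbeat table alone doesn't record "as-of" snapshots.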