Engage Timelines

Overview

Timelines are Engage's method for tracking events as they occur so they can be used for reporting, analysis, evidentiary purposes, and so on. Timelines are also often used for quick-and-easy "lookbacks" to hear or see what happened on a group - for example, to play back audio that was recently received.

To see how timelines are used in dedicated recorders, check out the Engage Activity Recorder Service.

Timelines, Events, and Database

Each group has its own timeline, which shares the group's identifier. Within that timeline, events are tracked, each with a unique identifier. All of this information is kept in an embedded SQLite database maintained by Engage.

This database typically resides in the primary storage area specified by the engine policy, but it can also be configured to reside only in memory. In that case, there is no record of events outside of the executing Engage process instance.

Events Table

All Timeline events are stored in a single table.

| Name | SQL Type | Description | Comments |
|------|----------|-------------|----------|
| event_id | CHAR(38) | Unique event identifier | A valid GUID |
| group_id | CHAR(38) | Group identifier | A valid GUID |
| type | INT | Type of event | 0 = undefined, 1 = audio, 2 = location |
| direction | INT | Direction of event | 0 = undefined, 1 = inbound, 2 = outbound, 3 = inbound and outbound, 4 = undefined |
| this_node_id | CHAR(38) | The identifier of the Engage node that recorded the event | A valid GUID |
| ts_started | DATETIME | Wallclock timestamp when the event began | Unix timestamp in milliseconds |
| ts_ended | DATETIME | Wallclock timestamp when the event ended | Unix timestamp in milliseconds |
| in_progress | BOOL | Indicates whether the event is/was in progress during the last database write | |
| file_uri | TEXT | The name of the disk file where the event is stored; may be empty if no disk file was created | e.g. "file://./timelines/{f4ef4bc6-d692-4b59-9631-ca5f8726e6e5}/{a8c336d3-d9f4-4a55-845c-a516d8de657c}.wav" for audio data stored in a RIFF container such as a Microsoft Wave audio file |
| node_id | CHAR(38) | Identifier of the Engage node that generated the event | A valid GUID if not empty |
| alias | CHAR(16) | Alias/unit ID of the Engage entity that generated the event | An Engage alias string if not empty |
| rxtx_flags | INT | Bit flags associated with RX/TX of the event | For example: 0x0001 = emergency, 0x0004 = generating entity is an automated system |
| metadata | TEXT | Textual metadata associated with the event | Metadata is free-format and event-specific. For example, an audio event embeds JSON into the RIFF container to describe the audio more clearly; that metadata is retained in the database record as well. |
| archived | BOOL | Indicates whether the event has been archived for long-term storage | |
| tx_id | INT | An unsigned 32-bit number used to identify the transmission; may be NULL or 0 | Use and interpretation of this field is implementation-specific, as determined by the application developer |

Indexes

Indexes on the event table are as follows:

| Column(s) | Comments |
|-----------|----------|
| event_id | UNIQUE |
| group_id | |
| type | |
| direction | |
| ts_started | |
| ts_ended | |
| in_progress | |
| node_id | |
| alias | |
| group_id, type, ts_started | |
| archived | |
| tx_id | |
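
Because the schema and indexes above live in a plain SQLite database, an external tool can inspect a disk-resident timeline directly. Below is a minimal sketch in Python; the database path is a hypothetical placeholder (the real location depends on the storage area set by your engine policy), and the GUID formatting should match whatever is actually stored. It lists recent committed audio events for one group, shaped to use the composite (group_id, type, ts_started) index.

```python
import sqlite3

# Hypothetical path - the real location is determined by the engine policy's storage settings.
DB_PATH = "./timeline.db"
GROUP_ID = "{6187885c-80aa-45f3-986d-0661146ae168}"

# Open read-only so we cannot disturb Engage's own writes.
conn = sqlite3.connect(f"file:{DB_PATH}?mode=ro", uri=True)

# type = 1 (audio), in_progress = 0 (committed), newest first.
rows = conn.execute(
    """
    SELECT event_id, ts_started, ts_ended, file_uri
    FROM timeline_events
    WHERE group_id = ? AND type = 1 AND in_progress = 0
    ORDER BY ts_started DESC
    LIMIT 10
    """,
    (GROUP_ID,),
).fetchall()

for event_id, started, ended, uri in rows:
    print(event_id, started, ended, uri)

conn.close()
```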

Querying

Querying a timeline for events is accomplished by calling the engageQueryTimeline() API, passing the ID of the group whose timeline you're interested in along with parameters that form the criteria for the query against the database.

For example, a JSON parameter structure like this:

{
    "maxCount": 3,                   // We only want 3 events
    "mostRecentFirst": true,         // We want the most recent events first
    "startedOnOrAfter": 0,           // Only events that started on or after this UNIX timestamp (in milliseconds)
    "endedOnOrBefore": 0,            // Only events that ended on or before this UNIX timestamp (in milliseconds)
    "onlyDirection": 0,              // Limit to events of a particular direction (see above)
    "onlyType": 0,                   // Limit to events of a particular type (see above)
    "onlyCommitted": true,           // We only want completed events, not events that are still in progress
    "onlyAlias": "",                 // Limit to events from a particular alias/unit ID
    "onlyNodeId": "",                // Limit to events from a particular Engage node
    "onlyTxId": 0                    // Limit to events for a specific transmission ID
}

Results in output that looks like this:

{
   "success": true,                 // The query was successful
   "count": 3,                      // The number of items returned
   "execMs": 0.556381,              // The operation completed in 0.556381 milliseconds
   "started": 1573338467,           // The first event started at this timestamp
   "ended": 1573345307,             // The last event ended at this timestamp

   // Here's the array of events - there are 3 of them as per the JSON parameters
   "events": [
      {
         "audio": {
            "ms": 6840.0,
            "samples": 109440
         },
         "direction": 4,
         "ended": 1573345307,
         "groupId": "6187885c-80aa-45f3-986d-0661146ae168",
         "id": "{d88e7262-f04d-a69d-cd85-16795469ffba}",
         "inProgress": false,
         "started": 1573338467,
         "type": 1,
         "uri": "file://.//timelines/6187885c-80aa-45f3-986d-0661146ae168/{d88e7262-f04d-a69d-cd85-16795469ffba}.wav",
         "txId": 12345
      },
      {
         "audio": {
            "ms": 6420.0,
            "samples": 102720
         },
         "direction": 4,
         "ended": 1573344887,
         "groupId": "6187885c-80aa-45f3-986d-0661146ae168",
         "id": "{fed0c559-eae2-4cee-57d0-3be0abcdb5c6}",
         "inProgress": false,
         "started": 1573338467,
         "type": 1,
         "uri": "file://.//timelines/6187885c-80aa-45f3-986d-0661146ae168/{fed0c559-eae2-4cee-57d0-3be0abcdb5c6}.wav",
         "txId": 23456
      },
      {
         "audio": {
            "ms": 3660.0,
            "samples": 58560
         },
         "direction": 4,
         "ended": 1573342127,
         "groupId": "6187885c-80aa-45f3-986d-0661146ae168",
         "id": "{7947d42a-302a-ed96-4125-8fbed91ac6a4}",
         "inProgress": false,
         "started": 1573338467,
         "type": 1,
         "uri": "file://.//timelines/6187885c-80aa-45f3-986d-0661146ae168/{7947d42a-302a-ed96-4125-8fbed91ac6a4}.wav",
         "txId": 34567
      }
   ]
}
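
A caller will typically parse the returned JSON and walk the "events" array. The sketch below assumes the result string has already been obtained from engageQueryTimeline() (exactly how it is delivered depends on the language binding you use) and simply prints a summary of each event using the attributes shown above.

```python
import json

def summarize_timeline_result(result_json: str) -> None:
    """Print a one-line summary per event from a timeline query result."""
    result = json.loads(result_json)

    if not result.get("success"):
        print("Query failed:", result.get("errorMessage", "unknown error"))
        return

    print(f"{result['count']} event(s), query took {result['execMs']} ms")

    for event in result.get("events", []):
        duration_ms = event.get("audio", {}).get("ms")  # present for audio events (type 1)
        print(
            f"event {event['id']} on group {event['groupId']}: "
            f"started {event['started']}, ended {event['ended']}, "
            f"duration {duration_ms} ms, file {event.get('uri', '<none>')}"
        )
```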

Because the timeline database is SQL-based, queries are submitted as SQL "SELECT" statements, which Engage generates automatically from the JSON parameters.

If you're feeling adventurous, though, you can provide your own SQL which Engage will use instead of building a query from the JSON parameters above. To do this, simply provide a JSON attribute named "sql". Engage will then ignore any other parameters in the JSON object and use your SQL as provided.

For example, if you just wanted a count of the events for group {44078f0a-407f-45b3-bd8f-93c76a5c9d1a}, your JSON would look as follows:

{
    "sql":"SELECT COUNT(*) AS event_count FROM timeline_events WHERE group_id = '{44078f0a-407f-45b3-bd8f-93c76a5c9d1a}';"
}

Assuming there is a group timeline with this identifier and it has 278 events, Engage will execute this query and return the following JSON:

{
    "success": true,
    "count": 1,
    "execMs": 0.774007,
    "records": [
        {"event_count": "278"}
    ]
}

Notice how the JSON attribute name "event_count" matches the name specified in the "AS" portion of the SQL statement. For each record returned by the query, Engage will create JSON attribute names from the column names in the resultant record.

Also notice that the value of the attribute (in this case "278") is returned as a JSON string. For these kinds of queries, Engage always returns values as strings regardless of their actual data type.
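
Because values come back as strings for these custom queries, any type conversion is up to the caller. A minimal sketch using the result shown above:

```python
import json

# The result JSON returned for the custom COUNT(*) query above
result_json = '{"success": true, "count": 1, "execMs": 0.774007, "records": [{"event_count": "278"}]}'

result = json.loads(result_json)
# Engage returns custom-query values as strings, so convert explicitly.
event_count = int(result["records"][0]["event_count"])
print(event_count)  # 278
```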

Finally, be aware that for your own SQL queries, your resulting JSON report will not have the "started" and "ended" attributes. Also, instead of an array of "events" being returned, you will have an array of "records".

Now, let's say that your SQL is invalid. The result will indicate that the operation was not successful. There is also generally an error message included - but that may not always be the case.

For example:

{
    "sql":"SELECT foo FROM bar;"
}

Will result in:

{
   "success": false,
   "execMs": 0.069166,
   "errorMessage": "no such table: bar"
}

Caution! Proceed With Care.

Running your own SQL can be dangerous if you're not skilled at SQL. You run the risk of adversely affecting Engage's core operation or even breaking your database altogether. Engage will certainly do as much as it can to protect you from making changes to the database, but we can't think of every possible mistake someone could make.

While Engage's database is tuned and indexed for its own internal queries, your queries may not be able to take advantage of that tuning. So, to provide some guidance on how your SQL performs (as well as Engage's internal queries), the "execMs" JSON attribute is provided in results. This is a measurement of the number of milliseconds it took to conduct the entire operation. It should ideally be less than 1 millisecond for pretty much any query.
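
If you want a simple guard in your own tooling, you can watch "execMs" and flag anything that exceeds that rough 1-millisecond budget. A small sketch (the threshold here is just the guideline mentioned above, not a hard limit enforced by Engage):

```python
import json

def check_query_timing(result_json: str, threshold_ms: float = 1.0) -> None:
    """Warn when a timeline query exceeds the suggested ~1 ms budget."""
    result = json.loads(result_json)
    exec_ms = float(result.get("execMs", 0.0))
    if exec_ms > threshold_ms:
        print(f"warning: query took {exec_ms:.3f} ms (threshold {threshold_ms} ms)")
```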

Bottom-line, use your own SQL with caution!

Archival

To prevent unfettered growth of the database and the external files associated with events, Engage periodically removes events that meet various criteria. These include factors such as storage quota maximums being reached, the number of events being tracked, disk space utilization, and so on.

If you need to place events into long-term storage, such as a centralized archival cloud using a third-party archiver, there are a number of strategies that a local agent acting on behalf of the archiver could use.

  • Directory scanning: This is pretty straightforward. Simply perform recursive scans from the root of the timeline storage directory and add new files to the central system. You will, of course, need to track which files have already been processed, which are still in progress, and so on.

  • Via the database: Open a SQLite connection to the database (assuming it has been configured to be stored on disk rather than in memory). Run your queries using the "archived" and "in_progress" booleans to track which events need your attention. Once you've archived an event, update the database to set its "archived" field (see the sketch below).
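
Here is a minimal sketch of the database-driven approach. The database path is a hypothetical placeholder, upload_to_archive() stands in for whatever your archiver actually does, and the BOOL columns are assumed to be stored as 0/1 (typical for SQLite); the table and column names come from the events table above.

```python
import sqlite3

DB_PATH = "./timeline.db"  # Hypothetical - use the actual on-disk database location

def upload_to_archive(file_uri: str, metadata: str) -> None:
    """Placeholder for your archiver's upload logic."""
    print("archiving", file_uri)

def archive_completed_events() -> None:
    conn = sqlite3.connect(DB_PATH)
    try:
        # Completed events that have not yet been archived
        rows = conn.execute(
            """
            SELECT event_id, file_uri, metadata
            FROM timeline_events
            WHERE archived = 0 AND in_progress = 0
            """
        ).fetchall()

        for event_id, file_uri, metadata in rows:
            upload_to_archive(file_uri, metadata)
            # Mark the event so it is not picked up again
            conn.execute(
                "UPDATE timeline_events SET archived = 1 WHERE event_id = ?",
                (event_id,),
            )
            conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    archive_completed_events()
```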

earwax and earbud

Just for kicks, we built a (mostly) functional web-based REST service to archive recordings on a cloud or enterprise web server, along with a local agent to upload recordings. We called these two earwax (JavaScript on top of Node.js) and earbud (Python) respectively. You're welcome to use that source code for your own purposes. Check out earwax in the samples directory of the public repository.

Audio Event File Structure

If you're inclined toward the deeper, technical side of things, you might find the structure of audio event files interesting. (You'll see these as the uri field in the JSON examples above.) In the case of audio, Engage stores that audio in standard Microsoft RIFF format, generally referred to as a "Wave file" or, more simply, a "wav file". We chose the RIFF specification because it allows a variety of "chunks" of information to be stored in the file (or "container"). Some chunks are well-known, while other, custom-defined chunks can be embedded without affecting applications that don't understand them. So you should generally be able to just load the recording into your favorite audio player and listen away.

But if your application understands the special chunks that Engage embeds, you'll gain access to a bunch of extra information. This includes metadata associated with the event, such as who made the transmission, where they made it from (if location data was included by the transmitting entity), priority level, transmission ID, and so on. Also present is an ECDSA signature of the file content that guards against tampering, thereby ensuring chain-of-evidence of recordings where required. (Oh, we also include the public portion of the X.509 certificate used to generate that signature.) This all makes for a nicely packaged, self-describing, self-protecting audio event.
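
If you want to see what is inside one of these files, you can walk the RIFF chunk list yourself. The sketch below relies only on the standard RIFF layout (a 12-byte file header followed by 8-byte chunk headers) and prints each chunk ID and size; the names of Engage's custom chunks aren't listed here, so anything beyond the well-known chunks (such as "fmt " and "data") is simply reported as-is.

```python
import struct
import sys

def list_riff_chunks(path: str) -> None:
    """Print the chunk IDs and sizes found in a RIFF (wav) file."""
    with open(path, "rb") as f:
        riff, _size, form = struct.unpack("<4sI4s", f.read(12))
        if riff != b"RIFF" or form != b"WAVE":
            raise ValueError("not a RIFF/WAVE file")

        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            print(chunk_id.decode("ascii", errors="replace"), chunk_size)
            # Skip the chunk body (chunks are word-aligned, so pad odd sizes)
            f.seek(chunk_size + (chunk_size & 1), 1)

if __name__ == "__main__":
    list_riff_chunks(sys.argv[1])
```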

Here's an example of the guts of a RIFF file. In the breakdown below, standard RIFF chunk names are in bold black while chunk names created by Engage are in bold red. Engage chunk content is in plain red.