Tips & Best Practices for Event Monitoring

This guide provides best practices and optimization tips for effective OPC UA event monitoring.

Filtering Strategy

Use Appropriate Where Clauses

Filter events at the server to reduce network traffic and processing overhead:

Good: Filter at source

whereClause: "ofType('AlarmConditionType') AND Severity >= 600"

Avoid: Receive all and filter in Node-RED

whereClause: ""
// Then filter in function node - inefficient

Benefits of server-side filtering:

  • Reduced network bandwidth
  • Lower Node-RED processing load
  • Better server-side optimization
  • Fewer unnecessary events
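To make the contrast concrete, here is what the "avoid" pattern looks like once events reach Node-RED. This is a sketch of the client-side check only, written as a plain function for clarity; in a function node the event would arrive as msg.payload, as in the later examples in this guide:

```javascript
// Client-side filtering: the pattern to avoid. Every event below has
// already crossed the network before this check runs.
function clientSideFilter(payload) {
    // The severity half of the server-side Where Clause shown above
    if (payload.Severity >= 600) {
        return payload; // Keep the event
    }
    return null; // Drop it - after the server already did the work of sending it
}
```

With an empty Where Clause the server sends every event and this function throws most of them away; the equivalent server-side filter avoids that traffic entirely.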

Combine Multiple Criteria

Build compound Where Clauses for precise filtering:

// High-severity alarms from specific equipment
ofType('AlarmConditionType') AND Severity >= 700 AND SourceName = 'Reactor1'

// Critical events or audit events
Severity >= 900 OR ofType('AuditEventType')

// Active alarms only
ofType('AlarmConditionType') AND ActiveState = true

Test Without Filters First

When setting up a new event monitor:

  1. Start with no Where Clause to see all events
  2. Examine the events you receive
  3. Identify patterns (types, sources, severity)
  4. Add filters gradually to narrow down to what you need
// Step 1: No filter - see everything
whereClause: ""

// Step 2: Filter by type after seeing what exists
whereClause: "ofType('AlarmConditionType')"

// Step 3: Add severity filter
whereClause: "ofType('AlarmConditionType') AND Severity >= 600"

Field Selection

Select Only Needed Fields

Request only the fields you'll actually use:

Good: Minimal fields

EventId,Time,Message,Severity

Avoid: All fields

EventId,EventType,SourceNode,SourceName,Time,ReceiveTime,LocalTime,Message,Severity,Comment,...

Benefits:

  • Smaller messages
  • Faster processing
  • Reduced memory usage
  • Better performance

Common Field Combinations

Basic event monitoring:

EventId,Time,Message,Severity

Alarm monitoring:

EventId,Time,Message,Severity,SourceName,ActiveState,AckedState

Audit logging:

EventId,Time,Message,SourceName,ActionTimeStamp,ClientUserId

Detailed diagnostics:

EventId,EventType,Time,Message,Severity,SourceName,SourceNode,LocalTime
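As a sketch of why these minimal field sets are usually enough: the alarm-monitoring combination above already carries everything a simple alarm dashboard row needs. The helper below is illustrative only (its name and output shape are not part of any node), and it assumes event fields arrive as plain properties of the payload, as in the other examples in this guide:

```javascript
// Illustrative: shape an event delivered with the "Alarm monitoring"
// field set into a compact dashboard record.
function toDashboardRecord(event) {
    return {
        id: event.EventId,
        time: event.Time,
        text: `${event.SourceName}: ${event.Message}`,
        severity: event.Severity,
        // An alarm needs attention when it is active but not yet acknowledged
        needsAck: Boolean(event.ActiveState) && !event.AckedState
    };
}
```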

Use the Graphical Selector

The easiest and most reliable way to select fields:

  1. Open Monitor Event node configuration
  2. Click the ... button next to Select Clause
  3. Browse the event type hierarchy
  4. Check boxes for desired fields
  5. The Select Clause is generated automatically with correct syntax

Benefits:

  • See all available fields
  • Avoid typos
  • Field names are validated
  • Understands event type hierarchy

Object Selection

Monitor the Right Object

Choose the appropriate object based on your needs:

Server object (i=2253):

  • System-wide events
  • Server status changes
  • Global alarms
  • Audit events
nodeId: "i=2253"
whereClause: "ofType('SystemEventType')"

Equipment objects:

  • Equipment-specific events
  • Process alarms
  • Equipment state changes
nodeId: "ns=2;s=Equipment.Reactor1"
whereClause: "ofType('AlarmConditionType')"

Folder objects:

  • Aggregate events from multiple children
  • Production line events
  • Area monitoring
nodeId: "ns=2;s=ProductionFloor"
whereClause: "Severity >= 600"

Start Specific, Expand if Needed

Begin with specific equipment and expand scope only if necessary:

// Start specific
nodeId: "ns=2;s=Reactor1"

// Then expand if needed
nodeId: "ns=2;s=ReactorArea"

// Finally, go system-wide if required
nodeId: "i=2253"

Event Rate Management

Handle Event Floods

Implement rate limiting for high-frequency events:

// Function node: Rate limit events per source
const rateLimit = 1000; // ms between events
const lastEvents = context.get('lastEvents') || {};
const source = msg.payload.SourceName;
const now = Date.now();

if (lastEvents[source] && (now - lastEvents[source] < rateLimit)) {
    return null; // Drop event
}

lastEvents[source] = now;
context.set('lastEvents', lastEvents);
return msg;

Aggregate High-Frequency Events

Batch events instead of processing individually:

// Function node: Aggregate events every 5 seconds
const events = context.get('events') || [];
events.push(msg.payload);
context.set('events', events);

const lastEmit = context.get('lastEmit') || 0;
const now = Date.now();

if (now - lastEmit > 5000) {
    msg.payload = {
        count: events.length,
        events: events,
        summary: {
            highSeverity: events.filter(e => e.Severity >= 800).length,
            mediumSeverity: events.filter(e => e.Severity >= 600 && e.Severity < 800).length
        }
    };
    context.set('events', []);
    context.set('lastEmit', now);
    return msg;
}

return null;

Prioritize Critical Events

Process high-severity events immediately, buffer lower-priority ones:

// Function node: Priority-based processing
const event = msg.payload;

// Critical events: process immediately
if (event.Severity >= 800) {
    msg.priority = "critical";
    return [msg, null];
}

// Lower priority: aggregate
const buffer = context.get('buffer') || [];
buffer.push(event);
context.set('buffer', buffer);

// Flush buffer every 10 seconds
const lastFlush = context.get('lastFlush') || 0;
const now = Date.now();

if (now - lastFlush > 10000 && buffer.length > 0) {
    const batchMsg = {
        priority: "normal",
        payload: buffer
    };
    context.set('buffer', []);
    context.set('lastFlush', now);
    return [null, batchMsg];
}

return [null, null];

Performance Optimization

Use Appropriate Subscription Settings

Configure subscription parameters based on event frequency:

Low-frequency events (alarms, audits):

publishingInterval: 1000  // 1 second
maxNotificationsPerPublish: 100

High-frequency events (process events):

publishingInterval: 100   // 100 ms
maxNotificationsPerPublish: 1000

Diagnostic events:

publishingInterval: 5000  // 5 seconds
maxNotificationsPerPublish: 50
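The three profiles above can be captured in one small helper that picks parameters from an expected event rate. The rate thresholds here are illustrative assumptions, not values mandated by OPC UA; tune them to your server:

```javascript
// Illustrative: choose subscription parameters from an expected event
// rate (events per second). Thresholds are assumptions, not OPC UA rules.
function subscriptionProfile(eventsPerSecond) {
    if (eventsPerSecond >= 10) {
        // High-frequency process events
        return { publishingInterval: 100, maxNotificationsPerPublish: 1000 };
    }
    if (eventsPerSecond >= 1) {
        // Low-frequency alarms and audit events
        return { publishingInterval: 1000, maxNotificationsPerPublish: 100 };
    }
    // Rare diagnostic events
    return { publishingInterval: 5000, maxNotificationsPerPublish: 50 };
}
```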

Multiple Monitors for Different Event Types

Use separate Monitor Event nodes for different event types:

// Node 1: Critical alarms - fast processing
{
    name: "Critical Alarms",
    whereClause: "ofType('AlarmConditionType') AND Severity >= 800",
    subscription: "fastSubscription"
}

// Node 2: Standard alarms - normal processing
{
    name: "Standard Alarms",
    whereClause: "ofType('AlarmConditionType') AND Severity >= 500 AND Severity < 800",
    subscription: "normalSubscription"
}

// Node 3: Audit events - slow processing OK
{
    name: "Audit Events",
    whereClause: "ofType('AuditEventType')",
    subscription: "slowSubscription"
}

Event Deduplication

Prevent processing duplicate events:

// Function node: Deduplicate events
const eventId = msg.payload.EventId;
const recentEvents = context.get('recentEvents') || new Set();

// Check if we've seen this event recently
if (recentEvents.has(eventId)) {
    return null; // Duplicate
}

// Add to recent events
recentEvents.add(eventId);

// Keep only last 1000 events
if (recentEvents.size > 1000) {
    const arr = Array.from(recentEvents);
    recentEvents.clear();
    arr.slice(-500).forEach(id => recentEvents.add(id));
}

context.set('recentEvents', recentEvents);
return msg;

Event Processing Patterns

Enrich Events with Context

Add additional information to events:

// Function node: Enrich event with equipment info
const equipmentData = global.get('equipmentDatabase') || {};
const source = msg.payload.SourceName;

if (equipmentData[source]) {
    msg.payload.equipment = {
        location: equipmentData[source].location,
        responsiblePerson: equipmentData[source].contact,
        priority: equipmentData[source].priority
    };
}

return msg;

State Machine for Alarm Tracking

Track alarm state transitions:

// Function node: Track alarm states
const alarmStates = flow.get('alarmStates') || {};
const eventId = msg.payload.ConditionId || msg.payload.SourceName;
const event = msg.payload;

const previousState = alarmStates[eventId];
const currentState = {
    active: event.ActiveState,
    acknowledged: event.AckedState,
    timestamp: event.Time
};

// Detect state changes
if (previousState) {
    if (!previousState.active && currentState.active) {
        msg.payload.transition = "ACTIVATED";
    } else if (previousState.active && !currentState.active) {
        msg.payload.transition = "CLEARED";
    } else if (!previousState.acknowledged && currentState.acknowledged) {
        msg.payload.transition = "ACKNOWLEDGED";
    }
}

alarmStates[eventId] = currentState;
flow.set('alarmStates', alarmStates);

return msg;

Event Correlation

Group related events:

// Function node: Correlate events
const correlationWindow = 5000; // 5 seconds
const correlations = context.get('correlations') || [];
const now = Date.now();

// Add current event
correlations.push({
    event: msg.payload,
    timestamp: now
});

// Remove old events
const recent = correlations.filter(c => (now - c.timestamp) < correlationWindow);
context.set('correlations', recent);

// Find related events
const relatedEvents = recent.filter(c =>
    c.event.SourceName === msg.payload.SourceName ||
    (Math.abs(c.event.Severity - msg.payload.Severity) < 100)
);

if (relatedEvents.length > 3) {
    msg.payload = {
        type: "EVENT_STORM",
        source: msg.payload.SourceName,
        count: relatedEvents.length,
        events: relatedEvents.map(c => c.event)
    };
    return msg;
}

return null;

Alarm Management Best Practices

Alarm Acknowledgment Tracking

Monitor unacknowledged alarms:

// Function node: Track unacknowledged alarms
const unackedAlarms = flow.get('unackedAlarms') || {};
const event = msg.payload;

if (event.ActiveState && !event.AckedState) {
    // New or still-unacknowledged alarm; keep the original activation time
    // so the reported duration measures how long it has gone unacknowledged
    unackedAlarms[event.EventId] = {
        source: event.SourceName,
        message: event.Message,
        severity: event.Severity,
        activationTime: unackedAlarms[event.EventId]?.activationTime || event.Time,
        reminderCount: (unackedAlarms[event.EventId]?.reminderCount || 0) + 1
    };

    // Send reminders for long-unacknowledged alarms
    const alarm = unackedAlarms[event.EventId];
    if (alarm.reminderCount > 5) {
        msg.payload = {
            type: "ALARM_REMINDER",
            message: `Alarm still unacknowledged: ${alarm.message}`,
            source: alarm.source,
            duration: Date.now() - new Date(alarm.activationTime).getTime()
        };
        flow.set('unackedAlarms', unackedAlarms);
        return [msg, null];
    }
} else {
    // Alarm acknowledged or cleared
    delete unackedAlarms[event.EventId];
}

flow.set('unackedAlarms', unackedAlarms);
return [null, msg];

Alarm Escalation

Escalate alarms based on duration and severity:

// Function node: Alarm escalation
const escalations = flow.get('escalations') || {};
const event = msg.payload;

if (event.ActiveState && !event.AckedState) {
    const key = event.EventId;
    const now = Date.now();

    if (!escalations[key]) {
        escalations[key] = {
            startTime: now,
            level: 0,
            lastNotification: now
        };
    }

    const escalation = escalations[key];
    const duration = (now - escalation.startTime) / 1000; // seconds

    // Escalate based on duration
    if (duration > 300 && escalation.level === 0) {
        // 5 minutes: escalate to supervisor
        escalation.level = 1;
        msg.payload = {
            ...event,
            escalation: "SUPERVISOR",
            duration: duration
        };
        escalation.lastNotification = now;
        flow.set('escalations', escalations);
        return msg;
    } else if (duration > 900 && escalation.level === 1) {
        // 15 minutes: escalate to manager
        escalation.level = 2;
        msg.payload = {
            ...event,
            escalation: "MANAGER",
            duration: duration
        };
        escalation.lastNotification = now;
        flow.set('escalations', escalations);
        return msg;
    }
} else {
    // Clear escalation
    delete escalations[event.EventId];
    flow.set('escalations', escalations);
}

return null;

Documentation and Maintenance

Document Your Event Filters

Always document why you chose specific filters:

/**
* Event Monitor Configuration
*
* Where Clause: ofType('AlarmConditionType') AND Severity >= 700
* Rationale: Only high-severity alarms require immediate operator attention
*
* Select Clause: EventId,Time,Message,Severity,SourceName,ActiveState,AckedState
* Rationale: Minimum fields needed for alarm dashboard and acknowledgment
*
* Reviewed: 2024-11-24
* Owner: Operations Team
*/

Log Event Statistics

Track event monitoring effectiveness:

// Function node: Event statistics
const stats = flow.get('eventStats') || {
    total: 0,
    byType: {},
    bySeverity: {},
    bySource: {}
};

const event = msg.payload;

stats.total++;
stats.byType[event.EventType] = (stats.byType[event.EventType] || 0) + 1;
stats.bySeverity[event.Severity] = (stats.bySeverity[event.Severity] || 0) + 1;
stats.bySource[event.SourceName] = (stats.bySource[event.SourceName] || 0) + 1;

flow.set('eventStats', stats);

// Periodically log stats
if (stats.total % 100 === 0) {
    node.log(`Event Stats: ${JSON.stringify(stats, null, 2)}`);
}

return msg;
