Reading Files from OPC UA Server
This tutorial explains how to use the OPC UA File Operation node to read files stored on an OPC UA server.
Prerequisites
- An OPC UA server that exposes File objects (TypeDefinition = FileType, ns=0;i=11575)
- A configured OPC UA connection endpoint
- Knowledge of the file's NodeId or browse path
Basic File Reading
Step 1: Configure the Node
- Drag an OPC UA File Operation node onto your flow
- Configure the following properties:
- Endpoint: Select your OPC UA server connection
- NodeId: Enter the NodeId of the file object (e.g., ns=1;s=MyFile or /ns1:Files/ns1:data.txt)
- Mode: Select Read
- Encoding: Select utf8 (for text files)
- Format: Select utf8 (to get string output)
Step 2: Create a Simple Flow
[ Inject ] → [ OPC UA File Operation ] → [ Debug ]
Step 3: Trigger the Read
Click the inject node button. The file contents will appear in the debug panel as a string.
Example Output:
msg.payload: "Hello World\nThis is a text file"
Reading Different File Types
Reading as Raw Binary (Buffer)
For images, executables, or other binary files:
Configuration:
- Mode: Read
- Encoding: none
- Format: buffer
Output:
msg.payload: <Buffer 89 50 4e 47 0d 0a 1a 0a ...>
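A quick sanity check in a function node can confirm the buffer is what you expect. This is a minimal sketch, assuming a PNG file (the 89 50 4E 47 signature matches the example output above):

// In a function node after reading with Format: buffer
const buf = msg.payload;
// PNG files start with the signature bytes 89 50 4E 47
const isPng = Buffer.isBuffer(buf) && buf.length >= 4 &&
    buf[0] === 0x89 && buf[1] === 0x50 &&
    buf[2] === 0x4E && buf[3] === 0x47;
node.log(`Received ${buf.length} bytes, PNG signature: ${isPng}`);
return msg;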
Reading as Line Array
For CSV files or logs where you need each line separately:
Configuration:
- Mode: Read
- Encoding: utf8
- Format: lines
Output:
msg.payload: [
    "Name,Age,City",
    "John,25,Paris",
    "Jane,30,London"
]
This format is ideal for CSV processing:
// In a function node after reading
const lines = msg.payload;
const headers = lines[0].split(',');
const data = lines.slice(1).map(line => {
    const values = line.split(',');
    return headers.reduce((obj, header, index) => {
        obj[header] = values[index];
        return obj;
    }, {});
});
msg.payload = data;
return msg;
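With the sample lines shown above, this produces an array of row objects (note that every value stays a string after split):
msg.payload: [
    { Name: "John", Age: "25", City: "Paris" },
    { Name: "Jane", Age: "30", City: "London" }
]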
Checking File Size Without Reading
To get only the file size without reading the entire content:
Configuration:
- Mode: ReadSize
Flow:
[ Inject ] → [ OPC UA File Operation (ReadSize) ] → [ Debug ]
Output:
msg.payload: 4096 // File size in bytes
This is useful for:
- Checking if a file has changed
- Validating file size before processing
- Monitoring file growth in logging scenarios
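For the change-detection case, a function node after the ReadSize operation can compare the reported size against the last value kept in flow context. A minimal sketch (the context key lastFileSize is an arbitrary name):

// In a function node after a ReadSize operation
const size = msg.payload;
const lastSize = flow.get("lastFileSize") || 0;
if (size !== lastSize) {
    flow.set("lastFileSize", size);
    msg.changed = true;   // file grew or was replaced
} else {
    msg.changed = false;  // no change since the last check
}
return msg;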
Dynamic File Selection
Instead of hardcoding the NodeId in the node configuration, you can specify it at runtime:
Node Configuration:
- NodeId: (leave empty)
Flow:
[ Inject ] → [ Function ] → [ OPC UA File Operation ] → [ Debug ]
Function Node:
// Select file dynamically
msg.nodeId = "/ns1:Logs/ns1:system-" + msg.payload.date + ".log";
return msg;
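For example, if the Inject node supplies msg.payload = { date: "2024-01-15" } (any date string, used here purely as an illustration), the node reads /ns1:Logs/ns1:system-2024-01-15.log.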
Working with Different Encodings
Reading UTF-8 Files (Default)
Configuration:
- Encoding: utf8
- Format: utf8
Suitable for: English, European languages, most modern text files
Reading Shift_JIS Files (Japanese)
Configuration:
- Encoding: Shift_JIS
- Format: utf8
Suitable for: Japanese text files, legacy systems
Reading GB2312 Files (Chinese)
Configuration:
- Encoding: GB2312
- Format: utf8
Suitable for: Simplified Chinese text files
Dynamic Encoding
Configuration:
- Encoding: setbymsg
Function Node Before File Operation:
// Set encoding based on file type
if (msg.filename.includes('jp')) {
    msg.encoding = 'Shift_JIS';
} else if (msg.filename.includes('cn')) {
    msg.encoding = 'GB2312';
} else {
    msg.encoding = 'utf8';
}
return msg;
Advanced Reading Scenarios
Reading Large Files
The node automatically handles large files by:
- Reading in optimized chunks based on server capabilities
- Streaming data without loading everything into memory
- Respecting the server's MaxByteStringLength setting
No special configuration is needed; it works automatically.
Reading JSON Configuration Files
Flow:
[ Inject ] → [ OPC UA File (Read) ] → [ JSON Parse ] → [ Debug ]
OPC UA File Configuration:
- Mode: Read
- Encoding: utf8
- Format: utf8
JSON Parse Node: set its action to always convert the payload to a JavaScript object
Output:
msg.payload: {
    server: "192.168.1.10",
    port: 4840,
    settings: { timeout: 5000 }
}
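If you prefer to do the parsing in code instead of a JSON Parse node, a function node with the standard JSON.parse works too. A sketch with basic error handling:

// In a function node after reading with Format: utf8
try {
    msg.payload = JSON.parse(msg.payload);
} catch (err) {
    // Passing msg as the second argument lets a Catch node handle this
    node.error("Invalid JSON in file: " + err.message, msg);
    return null; // stop the flow on a parse failure
}
return msg;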
Conditional Reading Based on File Size
Flow:
[ Inject ] → [ File Size Check ] → [ Switch ] → [ Read File ]
                                       ↓
                            [ Too Large Warning ]
File Size Check Node (ReadSize mode):
- Returns file size in msg.payload
Switch Node:
- If msg.payload < 1000000 (1MB) → route to Read File
- Otherwise → route to Too Large Warning
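The same routing can be done in a single function node with two outputs (output 1 wired to Read File, output 2 to the warning). A sketch using the 1 MB limit from the Switch example:

// Function node with two outputs, placed after the ReadSize operation
const MAX_BYTES = 1000000; // 1 MB, as in the Switch example above
if (msg.payload < MAX_BYTES) {
    return [msg, null]; // output 1: proceed to Read File
}
node.warn(`File too large: ${msg.payload} bytes`);
return [null, msg]; // output 2: Too Large Warning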
Error Handling
Handling Read Errors
// In a function node wired after a Catch node watching the File Operation node
if (msg.error) {
    // The Catch node puts the error details in msg.error.message
    const errText = msg.error.message || String(msg.error);
    // Log the error
    node.warn("File read failed: " + errText);
    // Take corrective action
    if (errText.includes("BadNodeIdUnknown")) {
        msg.payload = "File not found on server";
    } else if (errText.includes("BadNotReadable")) {
        msg.payload = "File is not readable (check permissions)";
    } else {
        msg.payload = "Unknown error reading file";
    }
    return msg;
}
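When wiring the Catch node, set its scope to just the File Operation node so unrelated errors from elsewhere in the flow are not routed into this handler.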
Monitoring Node Status
The node status indicator shows:
- "Operating": Reading in progress
- "size = X": Read completed successfully (X bytes)
- "failed": Read operation failed (check debug panel for details)
Complete Example Flow
Here's a complete example that reads a log file and processes it:
[ Daily Timer ] → [ Set NodeId ] → [ Read File ] → [ Parse Lines ] → [ Filter Errors ] → [ Email Alert ]
                                        ↓
                                 [ Debug (Raw) ]
Daily Timer: Inject node with cron schedule
Set NodeId (Function):
const today = new Date().toISOString().split('T')[0];
msg.nodeId = `/ns1:Logs/ns1:app-${today}.log`;
return msg;
Read File (OPC UA File Operation):
- Mode: Read
- Encoding: utf8
- Format: lines
Parse Lines (Function):
msg.payload = msg.payload
    .filter(line => line.length > 0)
    .map(line => {
        const parts = line.split(' - ');
        return {
            timestamp: parts[0],
            level: parts[1],
            message: parts[2]
        };
    });
return msg;
Filter Errors (Switch):
- If level === "ERROR" → route to Email Alert (see the function-node sketch below)
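Because a Switch node tests one message property at a time, the parsed array usually needs a Split node in front of it. Alternatively, a function node can filter the entries directly; a minimal sketch:

// Function node alternative to Split + Switch:
// forward only the ERROR entries, drop the message if there are none
const errors = msg.payload.filter(entry => entry.level === "ERROR");
if (errors.length === 0) {
    return null; // nothing to alert on
}
msg.payload = errors;
return msg;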
Tips and Best Practices
Choose the Right Format:
- Use buffer for binary files (images, PDFs, executables)
- Use utf8 for plain text files
- Use lines for CSV, logs, or line-based processing
Use ReadSize First:
- Check file size before reading large files
- Avoid timeouts by validating size limits
Dynamic NodeIds:
- Use browse paths for better readability: /ns1:Folder/ns1:file.txt
- Pass the NodeId in msg for runtime file selection
Error Handling:
- Always add a Catch node for production flows
- Check file existence and permissions before reading
Performance:
- The node automatically optimizes chunk sizes
- Large files are handled efficiently without memory issues
- No need to manually implement chunking or pagination
Related Operations
- Writing Files - Learn how to write data to files
- Appending to Files - Add data to existing files
- Browse Files - Discover available files on the server
Troubleshooting
Problem: "expecting a nodeIdString" error
Solution: Ensure msg.nodeId is a string in valid format (e.g., "ns=1;s=MyFile")
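A guard in the function node that sets msg.nodeId catches this early. A sketch that accepts the two forms used in this tutorial (NodeId strings and browse paths; the regex is illustrative, not exhaustive):

// Validate msg.nodeId before it reaches the File Operation node
const id = msg.nodeId;
if (typeof id !== "string" ||
    !(/^ns=\d+;[isgb]=.+/.test(id) || id.startsWith("/"))) {
    node.error("Invalid nodeId: " + JSON.stringify(id), msg);
    return null;
}
return msg;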
Problem: File content appears garbled
Solution: Check the file encoding - try different encodings (utf8, Shift_JIS, etc.)
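One way to find the right encoding is to read the same file several times with Encoding set to setbymsg and compare the results in the debug panel. A sketch placed before the File Operation node (RED.util.cloneMessage is the standard function-node helper for copying messages):

// Function node before the File Operation node (Encoding: setbymsg)
// Sends one read request per candidate encoding for side-by-side comparison
const encodings = ["utf8", "Shift_JIS", "GB2312"];
const msgs = encodings.map(enc => {
    const m = RED.util.cloneMessage(msg);
    m.encoding = enc;
    return m;
});
return [msgs]; // emit the messages in sequence on output 1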
Problem: "Failed" status with no details
Solution: Add a Debug node after the File Operation node and check msg.error for specific error codes
Problem: Reading seems to hang on large files
Solution: This is normal; the node streams large files in chunks. Check the node status for progress. If it truly hangs, check the server logs for issues.