Example Flow With Node-RED

Overview

This flow provides real-time data integration from multiple sources (PostgreSQL, Cygnus, and Orion) to a WebSocket-based front-end for monitoring purposes. It queries PostgreSQL for table and row counts, checks the Orion Context Broker version, and gathers Cygnus-related data, all of which are relayed to a WebSocket for display.

PostgreSQL Data Queries

Purpose:

To query the PostgreSQL database for table counts and the total number of rows across all tables within the default_service schema.

Nodes and Logic:

  • Function Node (request cygnus table count): Executes a PostgreSQL query to count the number of tables in the default_service schema. The query:

    SELECT COUNT(table_name) FROM information_schema.tables WHERE table_schema = 'default_service';

  • Function Node (request total row count): This node creates a PostgreSQL function that calculates the total row count across all tables in the default_service schema. It loops through each table and sums the row counts (a sketch of such a query appears after this list).

  • PostgreSQL Nodes: Send the SQL queries to the PostgreSQL database and retrieve the results, which are then forwarded to the WebSocket for display.

  • Change Nodes: Modify the data to prepare it for sending over WebSocket, such as extracting the row count and formatting the payload.
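
For reference, the sketch below shows how the total-row-count request from the function node above might be built. It is a minimal sketch, not the flow's actual code: the function name count_all_rows and the PL/pgSQL body are assumptions, and it presumes a PostgreSQL node that takes its SQL statement from msg.query (adjust the property, e.g. to msg.payload, to match the PostgreSQL node in use).

    // Node-RED function node: build the SQL that defines and then calls a
    // row-counting function; the PostgreSQL node executes it afterwards.
    msg.query = `
    CREATE OR REPLACE FUNCTION count_all_rows() RETURNS bigint AS $$
    DECLARE
        t RECORD;
        n bigint;
        total bigint := 0;
    BEGIN
        FOR t IN
            SELECT table_name FROM information_schema.tables
            WHERE table_schema = 'default_service'
        LOOP
            EXECUTE format('SELECT COUNT(*) FROM default_service.%I', t.table_name) INTO n;
            total := total + n;
        END LOOP;
        RETURN total;
    END;
    $$ LANGUAGE plpgsql;
    SELECT count_all_rows() AS total_rows;
    `;
    return msg;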

How it works:

Periodically, the flow sends queries to PostgreSQL to get the table count and total row count for the default_service schema. This data is processed and sent to a WebSocket client to display live statistics.
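
As a rough illustration of that processing step, a function node equivalent to the change nodes could look like the sketch below. It assumes the PostgreSQL node returns the result rows as an array of row objects in msg.payload and that the column is named total_rows; both are assumptions for illustration.

    // Extract the single count value and wrap it in a small object for the
    // JSON/WebSocket nodes downstream (the field name "rows" is hypothetical).
    const row = Array.isArray(msg.payload) ? msg.payload[0] : msg.payload;
    msg.payload = { rows: Number(row.total_rows) };
    return msg;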

Orion Context Broker Version Check

Purpose:

To check the Orion Context Broker version and ensure the broker is functioning as expected.

Nodes and Logic:

  • HTTP Request Node (request Orion version): Sends a GET request to the Orion Context Broker at http://localhost:1026/version to retrieve the broker version.

  • Function Node (check if return is not error string): Checks if the response from Orion is valid JSON (indicating a successful version request). If not, an error status is returned (a sketch of this check follows the list).

  • WebSocket Out Node: Sends the status of the Orion broker (whether the version request succeeded or failed) to a WebSocket client for real-time monitoring.
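
A minimal sketch of that validity check is shown below; the status strings, and the assumption that the HTTP response arrives in msg.payload as a string (or already-parsed object), are illustrative, and the real flow may report the state differently.

    // Node-RED function node: treat a JSON-parsable response as "Orion up",
    // anything else (e.g. an error string) as "Orion down".
    let ok = false;
    try {
        const body = typeof msg.payload === 'string' ? JSON.parse(msg.payload) : msg.payload;
        ok = body !== null && typeof body === 'object';
    } catch (e) {
        ok = false;
    }
    msg.payload = ok ? 'OK' : 'ERROR';   // hypothetical status values
    return msg;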

How it works:

The flow periodically checks the Orion broker’s status by requesting its version and sends the result (success or failure) to the WebSocket for display.
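
For reference, a healthy Orion instance answers the version request with JSON along these lines (the values shown are purely illustrative and the exact fields vary by release):

    {
      "orion": {
        "version": "3.10.1",
        "uptime": "0 d, 0 h, 5 m, 12 s",
        "git_hash": "...",
        "compile_time": "...",
        "release_date": "...",
        "doc": "https://fiware-orion.rtfd.io/"
      }
    }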

Cygnus Data Gathering

Purpose:

To query the Cygnus system for metadata and row count, and send this information to a WebSocket for display.

Nodes and Logic:

  • Function Node (request cygnus table count): Sends a query to PostgreSQL to count all tables in Cygnus.

  • PostgreSQL Nodes: Execute the SQL queries related to Cygnus, fetching table counts and other related data.

  • Function Node (concatenate cygnus data): Aggregates Cygnus data before sending it over WebSocket. It collects data from different PostgreSQL queries and combines them into a single payload (see the sketch after this list).

  • WebSocket Out Node: Sends the aggregated Cygnus data to the WebSocket for display.
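
A minimal sketch of such an aggregation step is shown below. The topic names and the flow-context key are hypothetical, and the actual flow may instead use a join node or different message properties.

    // Node-RED function node: collect the partial Cygnus results and emit one
    // combined payload once every branch has reported in.
    const agg = flow.get('cygnus_stats') || {};
    if (msg.topic === 'table_count') {
        agg.tables = msg.payload;
    } else if (msg.topic === 'row_count') {
        agg.rows = msg.payload;
    }
    flow.set('cygnus_stats', agg);
    if (agg.tables !== undefined && agg.rows !== undefined) {
        msg.payload = { tables: agg.tables, rows: agg.rows };
        return msg;   // forward the combined payload to the WebSocket branch
    }
    return null;      // still waiting for the other query to arrive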

How it works:

This part of the flow gathers relevant Cygnus data by querying PostgreSQL and sends the processed data to the WebSocket for live monitoring of Cygnus-related metrics.

Real-Time Data Transmission via WebSockets

Purpose:

To send live updates of PostgreSQL, Orion, and Cygnus data to the front-end display via WebSockets.

Nodes:

  • WebSocket Out Nodes: Multiple WebSocket nodes are used to send data related to PostgreSQL (/display_data/postgres), Orion (/display_data/orion), and Cygnus (/display_data/cygnus).

  • JSON Nodes: These nodes format the data into JSON before sending it to the WebSocket clients for easy parsing and display on the front-end.

How it works:

The flow gathers data from various sources (PostgreSQL, Cygnus, Orion), processes it, and sends it to WebSocket endpoints where front-end clients receive and display the data in real time.
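
On the receiving side, a front-end page can subscribe to one of these endpoints with a plain browser WebSocket. The sketch below assumes Node-RED is listening on its default port 1880 and reuses the hypothetical field names from the earlier examples.

    // Browser-side sketch: listen for the PostgreSQL stats pushed by the flow.
    const ws = new WebSocket('ws://localhost:1880/display_data/postgres');
    ws.onmessage = (event) => {
        const data = JSON.parse(event.data);                      // output of the JSON node
        console.log('tables:', data.tables, 'rows:', data.rows);  // hypothetical fields
    };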

Scheduled Data Collection

Purpose:

To periodically trigger the collection of data from PostgreSQL, Orion, and Cygnus and send it to the front-end display.

Nodes and Logic:

  • Inject Nodes: Set to trigger every 5 seconds, these nodes initiate the data collection process by sending requests to PostgreSQL, Orion, and Cygnus.
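
In an exported flow, such an inject node looks roughly like the heavily trimmed excerpt below; the id, name, and wiring are hypothetical, and a real export carries additional properties. The repeat property holds the interval in seconds.

    [
      {
        "id": "inject_every_5s",
        "type": "inject",
        "name": "poll every 5 seconds",
        "repeat": "5",
        "wires": [["request_cygnus_table_count"]]
      }
    ]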

How it works:

The inject nodes ensure the flow periodically queries the relevant databases and brokers, refreshing the data sent to the WebSocket clients.