Category Archives: MQTT

MonsterMQ 🧌 Clustering for Industrial IoT

Did you know that MonsterMQ includes clustering capabilities?

👀 Multiple broker instances can form a cluster, creating a distributed system.

See this LinkedIn post for a video.

Here's a high-level example:
👉 Each production line can run its own broker instance, keeping high data loads local.
👉 A site-level broker aggregates higher-level metrics like OEE or production summaries.
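The line/site split above can be sketched as a simple routing rule. This is purely illustrative: the topic prefixes and broker names are my assumptions, not a MonsterMQ convention.

```python
# Hypothetical sketch: decide which broker level a topic belongs to.
# Prefixes and broker names are illustrative, not a MonsterMQ convention.
def broker_for_topic(topic: str) -> str:
    """Return the broker level where a topic should primarily live."""
    # Site-level summaries (OEE, production orders) go to the site broker;
    # high-frequency process data stays on the line-local broker.
    site_level_prefixes = ("oee/", "production/")
    if topic.startswith(site_level_prefixes):
        return "site-broker"
    return "line-broker"

print(broker_for_topic("oee/line1"))          # site-broker
print(broker_for_topic("press/temperature"))  # line-broker
```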

💪 With MonsterMQ, production line brokers remain independent – if one goes down, the others are unaffected. Yet cross-line data access is seamless: for example, one line can subscribe to another's current production order or OEE. Only relevant data is transmitted between brokers, reducing overhead.

💪 Clustering is powered by Hazelcast, with communication handled by Vert.x.

โš ๏ธ Since MonsterMQ relies on a central #PostgreSQL database, this creates a central point of failure. To mitigate this, the PostgreSQL database must be made high available. Alternatively, you can use a clustered #CrateDB, which allows you to deploy one database node on each MonsterMQ node for improved resilience. Currently subscriptions are stored in memory on all nodes, which restricts scalability, but for factory use-cases it should not matter.

🧠 Added an MCP (Model Context Protocol) Server to MonsterMQ.com

🚀 Now you can chat with your MQTT data. 💥 Still early stages though – lots of room for improvement and more tools to add to the MCP server.

๐Ÿท๏ธ Add Metadata to topics: you can publish a retained subtopic containing metadata to your existing topics. This helps the LLM query that description and better understand the data. It is optional, but recommended.

Example topic: PV/Power/WattAct/<config>. It must contain a JSON object with these fields: { "Unit": "W", "Description": "Current active power output of the photovoltaic system" }
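Building that retained metadata payload can be sketched like this. I'm treating `<config>` as the literal subtopic name from the example; the actual publish call (e.g. via an MQTT client library, with the retained flag set) is omitted.

```python
import json

# Sketch: construct the metadata topic and payload from the example above.
# "<config>" is taken literally from the post; adjust to your convention.
def config_topic(topic: str) -> str:
    """The metadata lives on a retained subtopic of the data topic."""
    return topic + "/<config>"

metadata = {
    "Unit": "W",
    "Description": "Current active power output of the photovoltaic system",
}
payload = json.dumps(metadata)  # publish this retained on the config topic

print(config_topic("PV/Power/WattAct"))  # PV/Power/WattAct/<config>
```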

🧠 Memory Included. MonsterMQ can store the last values in memory or in a PostgreSQL table. You can also historize your data in Postgres, so the LLM has access to historical data as well.

🤖 Wanna Try It? Publish your data to test.monstermq.com. Add a <config> retained topic to your topics, and then you can use the MCP server at test.monstermq.com/mcp, for example in Claude Desktop.

See what Claude did when asked: “please find some power meters of different rooms and plot the last 5 minutes of the rooms' power consumption.”

Retained messages are the devil of MQTT 😈 😱

MQTT brokers handle retained messages differently – and that can have serious implications…

🔸 HiveMQ Community Edition doesn't support persisting retained messages to disk. After a restart, they're gone – so it's better to go with the Enterprise Edition. Unfortunately, I do not have access to the Enterprise version to perform further tests.

🔸 Mosquitto writes retained messages to disk only periodically – the default interval is 30 minutes! While the in-memory database is being written to disk, publishing pauses; the broker appears frozen during the write, and writing 1M retained messages to disk takes time. Plus, I got it to crash at about 4,460,000 retained values (reproducible), and after the crash it didn't start anymore – until I removed the database file and lost all retained messages.
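That 30-minute default is Mosquitto's `autosave_interval`. It can be shortened in mosquitto.conf to trade throughput for durability; the values and path below are illustrative:

```
# mosquitto.conf – persistence settings (illustrative values)
persistence true
persistence_location /var/lib/mosquitto/
# Write the in-memory database to disk every 60 seconds instead of the
# default 1800 seconds (30 minutes).
autosave_interval 60
```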

🔸 EMQX handled the test with a max of 2,000 value changes per second. It was already using about 250% CPU of four cores, and I occasionally received "Mnesia is overloaded" warnings. I don't know whether messages are being queued or dropped at that point. Writing to disk didn't seem fast. Additionally, a broker startup after storing 1 million retained messages took a while (about 30 seconds), likely because all values are loaded into memory.

🔸 BifroMQ handled the test at around 8,000 value changes per second (v/s), using only about 20% of four CPU cores. I noticed that throughput decreases as the number of published topics increases. Interestingly, it limits wildcard subscriptions to just 10 retained messages – to protect the system, but that number feels a bit low. I couldn't find a configuration setting to change it.

🔸 MonsterMQ handled the test at about 7,000 v/s, using 18% of four CPU cores for the Monster plus 70% for the PostgreSQL database. MonsterMQ stores retained messages in a PostgreSQL table (written in bulk). If you have a high volume of retained messages, it can lead to full queues – especially if your PostgreSQL instance can't keep up. In that case, the broker will not persist all values.

🧨 Be careful when using retained messages!

👉 Understand and test your broker's behavior before relying on them for data durability.

Just to note: These results are based on my own tests and specific use cases, and were performed on modest hardware. I was publishing retained messages to hierarchically structured topics (e.g., root/0/0/0/0/0/0/0/0/0/0), using numbers from 0 to 9, without any subscribers.
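The test topics described above can be generated along these lines. This is a sketch of my reading of the setup (ten levels deep, digits 0–9 at each level), not the actual test harness:

```python
from itertools import product

# Sketch of the topic generator for the retained-message test:
# hierarchical topics like root/0/0/0/... with digits 0-9 at each level.
def test_topics(depth=10, digits=10):
    """Yield one topic per digit combination, e.g. root/0/0/0."""
    for combo in product(range(digits), repeat=depth):
        yield "root/" + "/".join(map(str, combo))

# Full depth 10 gives 10^10 possible topics; depth 3 keeps the demo small.
topics = list(test_topics(depth=3))
print(len(topics))  # 1000
print(topics[0])    # root/0/0/0
```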

📊 MQTT vs. OPC UA: Bandwidth Usage – Unexpected Results!?

🤔 I have often heard claims that MQTT requires less bandwidth than OPC UA. In my own testing, I found the opposite to be true!

🧪 I took a tree of over 3,500 topics, changed every topic's value once per second, and replicated the exact same structure in OPC UA. In MQTT I used a simple JSON payload of TimeMS plus the value (just to note: OPC UA transmits more information than that). I also tried a version with just the plain value as a string. Then I compared the network I/O stats of client programs subscribing to all those tags, both for OPC UA and MQTT, and confirmed that both clients received all values.

👉 MQTT didn't show a bandwidth advantage – in fact, it used four to six times the bandwidth of OPC UA in this setup! See the results: 200 KB/s vs. 50 KB/s.

And, to be honest, it makes sense: every single value change in MQTT includes the full topic name (enterprise/site/area/cell/…), creating a high overhead compared to OPC UA.
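A back-of-the-envelope calculation illustrates that overhead. All byte counts below are rough assumptions for illustration, not measurements from my test:

```python
# Rough per-update byte estimate for MQTT: the full topic path plus a
# small JSON payload travels on every single value change.
topic = "enterprise/site/area/cell/line/machine/signal"
payload = '{"TimeMS": 1718000000000, "Value": 123.45}'
mqtt_bytes = len(topic) + len(payload) + 4  # +4: rough fixed-header allowance

# An OPC UA subscription notification carries a numeric client handle
# instead of the browse path: handle + timestamp + double + assumed framing.
opcua_bytes = 4 + 8 + 8 + 5

print(mqtt_bytes, opcua_bytes)             # 91 25
print(round(mqtt_bytes / opcua_bytes, 1))  # 3.6
```

Even with generous assumptions for OPC UA's framing, repeating a 45-character topic path on every update dominates the per-message cost.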

Exploring My Home Automation Architecture

My home automation setup has been an ongoing project, evolving over the last 10–15 years. It started with a simple goal: to track data from my photovoltaic (PV) system, power meters, and temperature sensors, all connected through Raspberry Pi devices. It began with Oracle as the database, and over time it has grown into a more complex architecture that incorporates multiple layers of data collection, processing, and visualization.

1. Data Collection with Raspberry Pi and MQTT

At the core of my setup are Raspberry Pi devices that connect various sensors, including those for monitoring power generation (PV) via Bluetooth, power consumption via meters that provide digital signals, and temperature. These Pi devices act as data collectors, feeding data into a local Mosquitto broker. A local broker on the device can serve as a short-term buffer before the data is synchronized to my central MonsterMQ broker, using a persistent session and QoS > 0.
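One common way to implement that buffering is Mosquitto's bridge feature, combining QoS 1 with a persistent bridge session. The hostname and topic pattern below are placeholders, not my actual configuration:

```
# mosquitto.conf on the Raspberry Pi: bridge to the central broker
connection central
address central-broker.local:1883
# Forward sensor data outbound with QoS 1, so messages are queued
# locally while the uplink is down.
topic sensors/# out 1
# Keep the bridge session across disconnects.
cleansession false
```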

2. MonsterMQ Broker as the Central Hub

The MonsterMQ broker is the central point where data from all sources is collected. It serves as a bridge, collecting data from the local Mosquitto brokers and preparing it for further processing and storage. Before building MonsterMQ, I used Mosquitto here as well; now I run my own broker, both to ensure it gets thoroughly tested and to leverage its features. Additionally, in the future I can use MonsterMQ to store incoming values directly in Apache Kafka. As a database engineer, I appreciate MonsterMQ because it allows me to view the broker's current state by querying a PostgreSQL database. This lets me see connected clients, their connection details, source IP addresses, and all subscriptions with their parameters.

3. Automation-Gateway for Data Flexibility

To expand the possibilities of what I can do with the data, I use the Automation-Gateway. This tool collects values from MonsterMQ and serves two primary functions:

  • Integration with Apache Kafka: By publishing data to Apache Kafka, I maintain a reliable stream that acts as an intermediary between my data sources and the storage databases. This setup provides resilience, allowing me to manage and maintain the databases independently while keeping the data history intact in Kafka.
  • OPC UA Server Exposure: The Automation-Gateway also exposes data as an OPC UA server, making it accessible to industrial platforms and clients that communicate over OPC UA. This can be achieved just with a simple YAML configuration file.

4. Experimental Integrations: Ignition and WinCC Unified

On top of this setup, I've added experimental connections to Ignition and WinCC Unified. Both of these platforms connect to the Automation-Gateway OPC UA server. Just for testing, those systems publish values to my public MQTT broker at test.monstermq.com. While these integrations aren't necessary, they're helpful for testing and exploring new capabilities.

5. Long-Term Data Storage with TimescaleDB, QuestDB, and InfluxDB

Data from Kafka is stored in multiple databases:

  • InfluxDB: My home automation started with Oracle and later moved to InfluxDB.
  • TimescaleDB: Since I am an advanced SQL user, I needed a database with strong SQL capabilities, so I added TimescaleDB and imported the historical data into it.

Number of records as of today: 1,773,659,197

Additionally, the Automation-Gateway now writes the data to QuestDB, which I use for experimental data logging and for exploring an alternative time-series database; it may eventually replace the other databases. I was blown away by how quickly I was able to import 1.5 billion historical data points into QuestDB.

These databases serve as long-term storage solutions, allowing me to create detailed dashboards in Grafana. By keeping Kafka as the layer between my data sources and the databases, I ensure flexibility for database maintenance, as Kafka retains the historical data.

6. Data Logging with MonsterMQ

The public MonsterMQ broker is configured to write data from topics below "grafana/#" directly into a TimescaleDB table. This setup allows you to see updates in Grafana whenever new data is published. In this specific Grafana dashboard configuration, if you publish a JSON object with a key "value" and a numeric value, such as {"value": 42}, it will appear on the dashboard almost instantly. Here is a public dashboard.

select 
  time, array_to_string(topic,'/') as topic, 
  (payload_json->>'value')::numeric as value 
from grafanaarchive
where $__timeFilter(time)
and payload_json->>'value' is not null
and payload_json->>'value' ~ '^[0-9]+(\.[0-9]+)?$'
order by time asc
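A payload that passes the query's numeric filter can be checked like this. The topic choice is up to you; only the JSON shape matters:

```python
import json
import re

# The SQL query above only keeps payloads whose "value" field looks
# numeric, so publish a JSON object of this shape under grafana/.
payload = json.dumps({"value": 42})

numeric = re.compile(r"^[0-9]+(\.[0-9]+)?$")  # same pattern as in the SQL
value = json.loads(payload)["value"]
print(bool(numeric.match(str(value))))  # True
```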

7. SparkplugB Decoding with MonsterMQ

The public MonsterMQ broker is configured to decode and expand SparkplugB messages. Expanded messages can be found under the topic "spBv1.0e". Ignition publishes some of my home automation data via SparkplugB to the public broker, and you're welcome to publish your own SparkplugB messages here as well.

Final Thoughts

This setup is the result of years of experimentation and adaptation to new tools. In theory, I could simplify parts of it, for example, by replacing more components with the Automation-Gateway. But I appreciate having Kafka as the buffer between data sources and databases – it offers flexibility for maintenance and helps preserve historical data.

Feel free to test the public MonsterMQ broker at test.monstermq.com. And if you're curious, publish a JSON object with a "value" key to grafana/something to see it immediately reflected in the Grafana dashboard!

Public MonsterMQ 👽 Broker for testing!

👉 I've just installed MonsterMQ on a public virtual machine hosted by Hetzner – thanks to Jeremy Theocharis' awesome post! You can try it out at test.monstermq.com via TCP or WebSockets on port 1883. No password, no security. If you want to leave me a message, use your name as the ClientId 😊

๐Ÿ” Want to take a look at the TimescaleDB behind it? Connect to the database on the default port 5432 using the (readonly) user “monster” with the password “monster”.

😲 I've intentionally set it to store all messages, not just retained ones, in a table "alllastval" for testing purposes.

📈 Additionally, messages published on topics matching "Test/#" will be archived in a history table "testarchive"!

โ„น๏ธ Keep in mind, itโ€™s hosted on a small machine, and every published value is being written and updated in a PostgreSQL table. So, please donโ€™t expect massive throughput or run performance tests.

I'd love for you to try it out. If you find any issues, let me know, or drop an issue on GitHub!

Publish OPC UA and MQTT Data to the Cloud with Automation-Gateway

Publish OPC UA and MQTT Data to the Cloud with Automation-Gateway – inspired by a user's request 💡

If you have a local OPC UA server or MQTT broker and want to bring that data to a cloud-based dashboard, Automation-Gateway.com makes it simple. You can easily publish your data to InfluxDB Cloud and visualize it in Grafana – all without complex setups.

I recently added support for InfluxDB V2 to the gateway, allowing you to configure an Influx token and bucket for data publishing. With just a few steps, your local OPC UA or MQTT data can be stored in the cloud and displayed in Grafana in real time.

MQTT Server Interface for WinCC OA? Made with Kotlin 😲


Starting with WinCC OA Version 3.20, you can write your business logic in JavaScript and run it using Node.js, providing direct access to the WinCC OA Runtime.

🙈 With that, I have developed a Kotlin program that acts as an MQTT broker. When you subscribe to a topic (where the topic name matches a datapoint name), the program will send value changes from the corresponding WinCC OA datapoint to your MQTT client.

โ“ But wait, Kotlin is like Java, it runs on the JVM, it is not JavaScript!

💡 Did you know that a Node.js runtime built with GraalVM exists? It allows you to mix Java and JavaScript. And it also works with WinCC OA.

🤩 You can use JVM-based languages and their huge ecosystem to develop business logic with WinCC OA. I have developed a Java library which makes it easier to use the WinCC OA JavaScript functions from Java.

👉 Here it is: https://github.com/vogler75/winccoa-graalvm – please note that the program is provided as an example; it lacks security features and has not been tested for production use. However, it can be extended and customized to meet specific requirements.

⚡ Please be aware that the GraalVM Node.js runtime is not officially supported by WinCC Open Architecture.

OPC UA Node Tree to MQTT

With just 20 lines of configuration you can publish an OPC UA tree of values to MQTT …

in this example to the HiveMQ Cloud … 👉 with the automation-gateway.com

It also supports PLC4X-connected devices/PLCs…

It can also publish values to Kafka or SQL databases…

git clone https://github.com/vogler75/automation-gateway.git
cd source\app
set GATEWAY_CONFIG=configs/config-opcua-mqtt.yaml
gradle run

Drivers:
  OpcUa:
  - Id: "demo"
    Enabled: true
    LogLevel: INFO
    EndpointUrl: "opc.tcp://192.168.1.3:62541"
    UpdateEndpointUrl: true
    SecurityPolicyUri: http://opcfoundation.org/UA/SecurityPolicy#None
Loggers:
  Mqtt:
    - Id: mqtt1
      Enabled: true
      Host: linux0.rocworks.local
      Port: 1883
      Ssl: false
      Topic: Enterprise/Site
      Logging:
        - Topic: opc/demo/path/Objects/Demo/SimulationMass/#