Category Archives: Allgemein

GraphQL on Unity and Result Event for Data and Errors…

The GraphQL for Unity asset can be used to execute GraphQL queries in Unity. The result is set on properties of Unity objects, and there is also a Unity Event on the GraphQL Query object through which a user-defined function of a GameObject can be triggered every time a query returns data or an error.

public class Sample1 : MonoBehaviour
{
    // This method is linked to the Unity Event of the GraphQL Query object
    public void ResultEvent(GraphQLResult result)
    {
        if (result.Errors.Count == 0)
        {
            Debug.Log("Data: " + result.Data.ToString());
        }
        else
        {
            Debug.Log("Error: " + result.Errors.ToString());
        }
    }
}

Create a GameObject with this class as a component, then you can drag and drop that GameObject onto the Query GameObject and select the “ResultEvent” method. Every time the query is executed, this function will be called with the result data (or the error data).

Dockerfile for Python 3.9 with OpenCV, MediaPipe, TensorFlow Lite and Coral Edge TPU

Dockerfile

FROM python:3.9-slim
RUN apt-get update && apt-get install -y curl gnupg libgl1-mesa-glx libglib2.0-0 && rm -rf /var/lib/apt/lists/*
RUN echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list 
RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && apt-get update && apt-get install -y python3-tflite-runtime && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt /
RUN pip install -r /requirements.txt
COPY * /app/

requirements.txt

opencv-python
mediapipe
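
To check that everything inside the image fits together, a small smoke test can be run in the container. This is only a minimal sketch (the file name smoke_test.py is just an example), and it assumes that the tflite_runtime package installed via apt is visible to the Python interpreter of the image:

# smoke_test.py - minimal sketch: verify that the libraries of the image can be imported
# (assumes the apt-installed tflite_runtime package is on the Python path)
import cv2
import mediapipe as mp
from tflite_runtime.interpreter import Interpreter

print("OpenCV version:   ", cv2.__version__)
print("MediaPipe version:", mp.__version__)
print("TFLite Interpreter class available:", Interpreter is not None)

If you place the script in the build context, the COPY * /app/ line copies it into the image and you can run it inside the container with python /app/smoke_test.py.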

Execute a GraphQL Query in Unity with C# Code

With GraphQL for Unity you can execute GraphQL queries the Unity way, with GameObjects. But the asset also lets you execute queries directly from C# code.

Here is a simple example:

// Link this field to the GameObject which holds the GraphQL connection (see note below)
public GraphQL Connection;

public void ScriptQuery()
{
    // Arguments are passed as GraphQL variables; "doit" and "Token" are just example names
    var query = "query($Token: String!) { doit(Token: $Token) { result } }";
    var args = new JObject // JObject comes from Newtonsoft.Json.Linq
    {
        { "Token", "123" }
    };
    // ExecuteQuery runs the query on the configured connection and calls the lambda with the result
    Connection.ExecuteQuery(query, args, (result) =>
    {
        Debug.Log(result.Result.ToString());
    });
}

Link the Connection variable to your GraphQL GameObject where the connection is set.

Note: the result callback is called asynchronously and is not executed in the game loop. If you want to modify Unity objects from the callback, you have to hand the result over to the main thread, for example by queuing it and processing the queue in Update().

Industrial Data in the Graph Database Neo4j…

The Frankenstein Automation Gateway now also supports writing OPC UA values to the graph database Neo4j.

At startup it can also write the OPC UA node structure into the graph database, so that the basic model of the OPC UA server is mirrored there. For that you have to add the “Schemas” section to the config file (see the example configuration file below). There you choose which RootNodes (and all nodes below them) of your OPC UA systems should be mirrored to the graph database.

Once you have the (simplified) OPC UA information model in the graph database, you can add on top of that your own knowledge graph data and create relations to OPC UA nodes of your machines to enrich the semantic data of the OPC UA model.

With that model you can leverage the power of your Knowledge Graphs in combination with live data from your machines and use Cypher queries to get the knowledge out of the graph.
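
As a minimal sketch of such a query, here is an example with the official neo4j Python driver, using the connection settings from the example configuration below. Note that the node label OpcUaNode and the property BrowsePath are only assumptions for illustration; the actual labels and properties depend on how the gateway mirrors the model into Neo4j.

# query_graph.py - minimal sketch with the neo4j Python driver (pip install neo4j);
# the label "OpcUaNode" and the property "BrowsePath" are assumptions for illustration
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://nuc1.rocworks.local:7687", auth=("neo4j", "manager"))

with driver.session() as session:
    result = session.run(
        "MATCH (n:OpcUaNode) "
        "WHERE n.BrowsePath STARTS WITH $prefix "
        "RETURN n.BrowsePath AS path LIMIT 10",
        prefix="Objects/Demo",
    )
    for record in result:
        print(record["path"])

driver.close()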

Here we see an example of the OPC UA server of the SCADA system WinCC Open Architecture. The first level of nodes below the “Objects” node represents the datapoint types (e.g. PUMP1), followed by the datapoint instances (e.g. PumpNr), and below those we see the datapoint elements (e.g. value => speed). A datapoint element is an OPC UA variable where we also see the current value from the SCADA system.

Example Gateway configuration file:

Database:
  Logger:
    - Id: neo4j
      Enabled: true
      Type: Neo4j
      Url: bolt://nuc1.rocworks.local:7687
      Username: "neo4j"
      Password: "manager"
      Schemas:
        - System: opc1  # Replicate node structure to the graph database
          RootNodes:
            - "ns=2;s=Demo"  # This node and everything below this node
        - System: winccoa1  # Replicate the nodes starting from "i=85" (Objects) node
      WriteParameters:
        BlockSize: 1000
      Logging:
        - Topic: opc/opc1/path/Objects/Demo/SimulationMass/SimulationMass_Float/+
        - Topic: opc/opc1/path/Objects/Demo/SimulationMass/SimulationMass_Double/+
        - Topic: opc/opc1/path/Objects/Demo/SimulationMass/SimulationMass_Int16/+
        - Topic: opc/winccoa1/path/Objects/PUMP1/#
        - Topic: opc/winccoa1/path/Objects/ExampleDP_Int/#


Docker CPU Limits…

If your Docker containers do not use all your CPUs, it may be that limits are set in /etc/systemd/system/docker.slice. To apply changed settings I had to reboot my machine (just restarting Docker didn’t change the behaviour).

cat /etc/systemd/system/docker.slice 

[Unit]
Description=Docker Systemd Slice
Before=slices.target

[Slice]
CPUQuota=200%
MemoryAccounting=true
CPUAccounting=true
MemoryLimit=1280M
#StartupCPUWeight=

Niryo with Unity3D and the Automation Gateway…

The Digital Twin is Alive 🙂 #Unity3D, the #Niryo Robot, and the Automation Gateway #Frankenstein with #GraphQL for #PLC4X …

#ModBus data from the Robot can now be used in #Unity for visualisation and also to control the Robot from Unity …

The Unity package GraphQL for OPCUA is no longer only for OPC UA; it can also handle the other types which are supported by the Automation Gateway, like the Plc option, which is based on PLC4X.

Display OPC UA data via GraphQL in a HTML page …

Here is a simple HTML page which fetches data from the OPC UA Automation Gateway “Frankenstein”. It uses HTTP and plain GraphQL queries to fetch the data from the Automation Gateway and displays it with Google Gauges. It is very simple and just polls the data periodically. GraphQL can also handle subscriptions, but for those you need to set up a WebSocket connection.

<html>
  <head>
   <script type="text/javascript" src="https://www.gstatic.com/charts/loader.js"></script>
   <script type="text/javascript">
      google.charts.load('current', {'packages':['gauge']});
      google.charts.setOnLoadCallback(drawChart);

      var data = null
      var options = null
      var chart = null

      function drawChart() {

        data = google.visualization.arrayToDataTable([
          ['Label', 'Value'],
          ['Tank 1', 0],
          ['Tank 2', 0],
          ['Tank 3', 0],
        ]);

        options = {
          width: 1000, height: 400,
          redFrom: 90, redTo: 100,
          yellowFrom: 75, yellowTo: 90,
          minorTicks: 5
        };

        chart = new google.visualization.Gauge(document.getElementById('chart_div'));

        chart.draw(data, options);
      }

      function refresh() {
        const request = new XMLHttpRequest();
        const url ='http://localhost:4000/graphql';
        request.open("POST", url, true);
        request.setRequestHeader("Content-Type", "application/json");
        const request_data = {
            "query": `{ 
              Systems {
                unified1 {
                  HmiRuntime {
                    HMI_RT_5 {
                      Tags {
                        Tank1_Level { Value { Value } }
                        Tank2_Level { Value { Value } }
                        Tank3_Level { Value { Value } }                          
                      }
                    }
                  }
                }
              }
            }`
        }
        request.send(JSON.stringify(request_data));

        request.onreadystatechange = function() {
          if (this.readyState==4 /* DONE */ && this.status==200) {
            const result = JSON.parse(request.responseText).data
            const x = result.Systems      
            data.setValue(0, 1, x.unified1.HmiRuntime.HMI_RT_5.Tags.Tank1_Level.Value.Value);
            data.setValue(1, 1, x.unified1.HmiRuntime.HMI_RT_5.Tags.Tank2_Level.Value.Value);
            data.setValue(2, 1, x.unified1.HmiRuntime.HMI_RT_5.Tags.Tank3_Level.Value.Value);
            chart.draw(data, options);
          } 
        }
      }

      setInterval(refresh, 250)
    </script>

  </head>
  <body>
    <div id="chart_div" style="width: 400px; height: 120px;"></div>
    <!--<button name="refresh" onclick="refresh()">Refresh</button>-->
  </body>
</html>

How to log OPC UA tag values to Apache Kafka…

In this article we use the Frankenstein Automation Gateway to subscribe to a publicly available OPC UA server (milo.digitalpetri.com) and log tag values to Apache Kafka. Additionally we show how you can create a stream in Apache Kafka based on the OPC UA values coming from the Milo OPC UA server and query that stream with KSQL.

Setup Apache Kafka

We have used the all-in-one Docker Compose file from Confluent to quickly set up Apache Kafka and KSQL. Be sure to set a resolvable hostname or the IP address of your server in the docker-compose.yml file, otherwise Kafka clients cannot connect to the broker.

KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://192.168.1.18:9092

Setup Frankenstein

Install Java 11 (for example Amazon Corretto) and Gradle for Frankenstein. Unzip Gradle to a folder and set your PATH variable to point to the bin directory of Gradle.

Then clone the source of Frankenstein and compile it with Gradle:

git clone https://github.com/vogler75/automation-gateway.git
cd automation-gateway/source/app
gradle build

There is an example config-milo-kafka.yaml file in the automation-gateway/source/app directory which you can use by setting the environment variable GATEWAY_CONFIG.

export GATEWAY_CONFIG=config-milo-kafka.yaml

In this config file we use the public Eclipse Milo OPC UA server. The Id of this connection is “milo”.

OpcUaClient:
  - Id: "milo"
    Enabled: true
    LogLevel: INFO
    EndpointUrl: "opc.tcp://milo.digitalpetri.com:62541/milo"
    UpdateEndpointUrl: false
    SecurityPolicyUri: http://opcfoundation.org/UA/SecurityPolicy#None
    UsernameProvider:
      Username: user1
      Password: password
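
As a quick check (independent of the gateway), you can verify that the endpoint and the credentials from this configuration work with a generic OPC UA client, for example the python-opcua package. This is just a minimal sketch:

# check_milo.py - minimal sketch (not part of the gateway): connect to the public Milo
# server with the python-opcua package (pip install opcua) and list the Objects folder
from opcua import Client

client = Client("opc.tcp://milo.digitalpetri.com:62541/milo")
client.set_user("user1")
client.set_password("password")
client.connect()
try:
    for child in client.get_objects_node().get_children():
        print(child.get_browse_name())
finally:
    client.disconnect()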

Here is the configuration of the Kafka logger, where you can configure which OPC UA tags should be published to Kafka. In this case we use an OPC UA browse path with a wildcard to include all variables below one node.

Database:
  Logger:
    - Id: kafka1
      Type: Kafka
      Enabled: true
      Servers: server2:9092
      WriteParameters:
        QueueSize: 20000
        BlockSize: 10000
      Logging:
        - Topic: opc/milo/path/Objects/Dynamic/+

Start Frankenstein

export GATEWAY_CONFIG=config-milo-kafka.yaml
gradle run
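
Once Frankenstein is running, you can check that values arrive on the Kafka topic “milo” with a small consumer, for example with the kafka-python package. This is only a minimal sketch: the bootstrap server is the advertised listener from above, and the assumed message layout (browse path as key, JSON value with sourceTime, value and statusCode) follows the KSQL stream definition below.

# consume_milo.py - minimal sketch using kafka-python (pip install kafka-python)
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "milo",                                 # topic name = Id of the OPC UA connection
    bootstrap_servers="192.168.1.18:9092",  # advertised PLAINTEXT_HOST listener from above
    auto_offset_reset="latest",
)

for msg in consumer:
    browse_path = msg.key.decode() if msg.key else None
    payload = json.loads(msg.value)         # JSON value with sourceTime, value, statusCode
    print(browse_path, payload.get("sourceTime"), payload.get("value"))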

Create a Stream in KSQL

Start a CLI session to KSQL on the host where the Kafka containers run:

docker exec -ti ksqldb-cli ksql http://ksqldb-server:8088

Create a stream for the Kafka “milo” topic

CREATE STREAM milo(
  browsePath VARCHAR KEY, 
  sourceTime VARCHAR, 
  value DOUBLE, 
  statusCode VARCHAR
) WITH (
  KEY_FORMAT='KAFKA',
  KAFKA_TOPIC='milo', 
  VALUE_FORMAT='JSON',
  TIMESTAMP='sourceTime',TIMESTAMP_FORMAT='yyyy-MM-dd''T''HH:mm:ss.nX'
);

Then you can execute a KSQL query to get the stream of values from the OPC UA server:

ksql> select browsepath, sourcetime, value from milo emit changes;
+---------------------------------------+---------------------------------------+---------------------------------------+
|BROWSEPATH                             |SOURCETIME                             |VALUE                                  |
+---------------------------------------+---------------------------------------+---------------------------------------+
|Objects/Dynamic/RandomInt32            |2021-05-02T11:29:04.405465Z            |1489592303                             |
|Objects/Dynamic/RandomInt64            |2021-05-02T11:29:04.405322Z            |-6.3980451035323023E+18                |
|Objects/Dynamic/RandomFloat            |2021-05-02T11:29:04.405350Z            |0.7255345                              |
|Objects/Dynamic/RandomDouble           |2021-05-02T11:29:04.405315Z            |0.23769088795602633                    |
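
The same push query can also be sent to the REST API of ksqlDB instead of the CLI. Here is a minimal sketch with the Python requests package; the address assumes that port 8088 of the ksqldb-server container is published on the local machine, so adjust host and port to your setup:

# ksql_query.py - minimal sketch: run the push query above via the ksqlDB REST API
import json
import requests

response = requests.post(
    "http://localhost:8088/query",          # assumption: ksqldb-server reachable on localhost
    headers={
        "Accept": "application/vnd.ksql.v1+json",
        "Content-Type": "application/vnd.ksql.v1+json; charset=utf-8",
    },
    data=json.dumps({
        "ksql": "SELECT browsepath, sourcetime, value FROM milo EMIT CHANGES LIMIT 5;",
        "streamsProperties": {"ksql.streams.auto.offset.reset": "latest"},
    }),
    stream=True,                            # the result rows are streamed back line by line
)

for line in response.iter_lines():
    if line:
        print(line.decode())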

Automation Gateway with Apache IoTDB…

The Frankenstein Automation Gateway can now write OPC UA tag values to Apache IoTDB. I did some rough performance tests with 50 OPC UA servers and one IoTDB… IoTDB is impressively fast. The data model and terminology are also interesting, and they seem to fit well to the hierarchical structure of OPC UA.

In this lab I connected 50 OPC UA servers (based on a .NET OPC UA server example) to Frankenstein. Each OPC UA server publishes 1000 tags of different types, so in total we have 50000 tags connected to Frankenstein. The publish rate can be adjusted by setting an OPC UA tag; of course we do that via GraphQL through Frankenstein (see the query below). On my commodity hardware I ended up writing about 250K values per second to IoTDB at a CPU load of ~200%, so I assume IoTDB is able to handle many more value changes per second.

I figured out that one DB logger inside Frankenstein is able to handle roughly 100000 events per second. We can spawn multiple DB loggers for scalability; Vert.x can then use multiple cores (Vert.x calls this the Multi-Reactor Pattern, to distinguish it from the single-threaded reactor pattern).

Just to note: only an in-memory buffer is implemented, so if the database is down, values will be lost once the buffer runs out of space. To handle such situations it would make sense to put Apache Kafka between the gateway and the database.

GraphQL Query to set the simulation interval:

query ($v: String) {
  Systems {
    opc1 { Demo { SimulationInterval { SetValue(Value: $v) } } }
    opc2 { Demo { SimulationInterval { SetValue(Value: $v) } } }
    opc3 { Demo { SimulationInterval { SetValue(Value: $v) } } }
    opc4 { Demo { SimulationInterval { SetValue(Value: $v) } } }
    opc5 { Demo { SimulationInterval { SetValue(Value: $v) } } }
    opc6 { Demo { SimulationInterval { SetValue(Value: $v) } } }
    opc7 { Demo { SimulationInterval { SetValue(Value: $v) } } }
    opc8 { Demo { SimulationInterval { SetValue(Value: $v) } } }
    opc9 { Demo { SimulationInterval { SetValue(Value: $v) } } }
    ...
  }
}
Query Variables: {"v": "250"}
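
For completeness, here is a minimal sketch that sends this query over HTTP with the Python requests package. The endpoint http://localhost:4000/graphql is only an assumption (it matches the HTML example earlier on this page), and only the first two systems are listed:

# set_interval.py - minimal sketch: send the GraphQL query above to the gateway
import requests

QUERY = """
query ($v: String) {
  Systems {
    opc1 { Demo { SimulationInterval { SetValue(Value: $v) } } }
    opc2 { Demo { SimulationInterval { SetValue(Value: $v) } } }
  }
}
"""

response = requests.post(
    "http://localhost:4000/graphql",        # assumption: GraphQL endpoint of the gateway
    json={"query": QUERY, "variables": {"v": "250"}},
)
print(response.json())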