Railgenius - consuming real-time telematics data using Railnova's Generic Kafka Streaming API

How to interface with Railnova Kafka to collect telematics data in real-time.


Introduction

Kafka is the standard message broker at Railnova for server-to-server data streams, and Railnova offers a Generic Kafka Streaming API to all Enterprise Customers under the following terms.

Why Kafka?

Compared to most messaging systems such as AMQP and MQTT, Kafka has better throughput, built-in partitioning, replication, and fault tolerance, making it a good solution for large-scale, highly available message processing applications. In the past we have hit performance bottlenecks with both MQTT and AMQP in server-to-server settings, as well as a lack of redundancy with single-node MQTT and AMQP architectures.

Another key feature of Kafka is message retention: consumers can pull messages on demand (within the retention period) and re-read specific messages multiple times, for example after a system breakdown or for replay purposes.
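As an illustration of such a replay, here is a minimal sketch using the confluent-kafka Python client. The broker settings, topic name, partition and timestamp below are placeholders; the real connection settings are described later in this article.

from confluent_kafka import Consumer, TopicPartition

# Placeholder configuration; real connection settings are covered below.
consumer = Consumer({
    "bootstrap.servers": "BROKER_HOST:9093",  # placeholder
    "group.id": "replay-example",
    "auto.offset.reset": "earliest",
})

# Replay everything published since a given timestamp (ms since epoch):
# ask the broker which offset corresponds to that time, then start there.
replay_from_ms = 1718000000000
partition = TopicPartition("output-sharing-username", 0, replay_from_ms)
offsets = consumer.offsets_for_times([partition], timeout=10.0)

# Offsets resolve to -1 if no message exists after the timestamp.
consumer.assign(offsets)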

Kafka is an open-source protocol and doesn't depend on a specific cloud vendor. Kafka messages can easily be consumed from Azure Event Hubs, AWS Kinesis or GCP Dataflow. You can also host a Kafka service directly in your own cloud environment, or use a managed provider such as Confluent or Aiven. You can view the Kafka documentation here and find Kafka clients in your favourite programming language here.

Pricing of the Generic Kafka Streaming API

The pricing of the Generic Kafka Streaming API service depends on the data volume and server resources consumed, so please contact our Sales team for a quote on the Generic Kafka Streaming API for your fleet data.

If you desire different service terms than the generic terms, we also offer custom APIs with custom SLAs and integration plans, subject to a specific commercial agreement. Please contact our Sales team for any questions related to custom API terms.

Broker information and connection

Our Kafka broker is accessible to all Railnova Enterprise Customers from the public internet, and authentication is done using the SASL protocol. We provide a test environment alongside the production environment. The host and password are provided on request; please contact support@railnova.eu for further information.

The following settings apply to both the test and the production environment:

  • Broker host: will be sent separately (one per environment)

  • User (for SASL): your company name

  • Password (for SASL): will be sent separately

  • Schema registry host: will be sent separately

  • Authentication protocol: SASL SCRAM or SASL PLAIN

  • Topic name: output-sharing-username

  • Topic retention on the generic Kafka service: 24 hours by default

The CA certificate (needed for SASL authentication) will also be sent separately.

Data will be available in a single topic, partitioned by asset number (locomotive or trainset). The data should be consumed with a single consumer group at a time.
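To give an idea of what a connection looks like, here is a minimal consumer sketch using the confluent-kafka Python client (pip install confluent-kafka). The broker host, password, CA file path and group id are placeholders for the values Railnova sends you; pick the sasl.mechanism matching the SASL variant (SCRAM or PLAIN) agreed with Railnova.

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "BROKER_HOST:9093",   # sent separately by Railnova
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "SCRAM-SHA-512",         # or SCRAM-SHA-256 / PLAIN, as agreed
    "sasl.username": "your-company-name",
    "sasl.password": "YOUR_PASSWORD",          # sent separately by Railnova
    "ssl.ca.location": "railnova-ca.pem",      # CA certificate, sent separately
    "group.id": "your-company-consumer",       # a single consumer group at a time
    "auto.offset.reset": "earliest",           # start at the oldest retained message
})

# Replace "username" with your own SASL user in the topic name.
consumer.subscribe(["output-sharing-username"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print("Consumer error:", msg.error())
            continue
        # msg.value() holds the Avro-encoded envelope; see the schema section below.
        print("partition", msg.partition(), "offset", msg.offset(), len(msg.value()), "bytes")
finally:
    consumer.close()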

Message format and schema

The schemas can be retrieved directly from our schema registry, using the same user and password as for the broker, or can be sent on request.

Messages are encoded using Avro and compressed with zstd.

  • compression.codec: zstd

  • Avro schema namespace: eu.railnova.supernova.analyst

  • Avro key schema name: AnalystKey

  • Avro value schema name: AnalystValue
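As a sketch of how these schemas can be used, here is what Avro deserialization might look like with the confluent-kafka Python client (pip install "confluent-kafka[avro]"). The registry host is a placeholder, the basic-auth format is an assumption based on the shared broker credentials, and zstd decompression is handled transparently by the Kafka client itself.

from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroDeserializer
from confluent_kafka.serialization import SerializationContext, MessageField

schema_registry = SchemaRegistryClient({
    "url": "https://SCHEMA_REGISTRY_HOST",                    # sent separately
    "basic.auth.user.info": "your-company-name:YOUR_PASSWORD",  # assumption: broker credentials
})

# Fetches and caches the AnalystValue schema from the registry as needed.
value_deserializer = AvroDeserializer(schema_registry)

def decode_value(msg):
    # Turn the raw Avro bytes of a consumed message into a Python dict.
    return value_deserializer(
        msg.value(),
        SerializationContext(msg.topic(), MessageField.VALUE),
    )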

Message Envelope

The envelope of the message is defined by the schema AnalystValue; its top-level keys, value types and descriptions are:

  • type (string): type of the message (telematic, fault-code, …)

  • timestamp (string): ISO 8601 encoded date-time in the UTC/Zulu time zone

  • asset (integer): asset ID in Railnova

  • asset_uic (string): UIC of the asset, as defined in the admin of Railnova

  • device (integer): device ID in Railnova (may be null if the data does not come from a Railster)

  • is_open (bool): used for the state of fault-codes and alerts

  • content (string): the payload of the message, encoded in JSON in the internal Railnova format (described below)

The mapping from asset and asset_uic to the asset name and additional data can be retrieved through our API on the asset endpoint.

The content payload

The key content contains a JSON-encoded object that is the payload of the message, specific to each message type.

The payload inside the content field might change in the future, as it depends on the incoming data from configurable IoT devices such as the Railster or on third-party data. No guarantee of schema continuity for the content payload is given under the Generic Kafka Streaming API terms.

The content payload can be explored via the "Telematics data" page at https://one.railnova.eu/telematics/#/
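Putting the pieces together, a consumer might unpack the envelope and its content field as follows; decode_value is the hypothetical helper from the deserialization sketch above, and the field names match the "position" example shown below.

import json

record = decode_value(msg)               # envelope dict, per the AnalystValue schema
message_type = record["type"]            # e.g. "telematic", "fault-code", ...
content = json.loads(record["content"])  # type-specific payload, JSON-encoded

if message_type == "position":
    # Field names as in the "position" example below.
    print(record["asset"], record["timestamp"],
          content.get("latitude"), content.get("longitude"), content.get("speed"))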

As an example, here is what you can typically expect in the content field of a message of type "position".

{
    "gps_time": "2020-06-15 00:00:07",
    "fix": 1,
    "longitude": 5021011,
    "course": 310,
    "location": "Bièvre",
    "period_km": 1.165,
    "latitude": 49946116,
    "speed": 101
}

Here is another example of a message from the Railster embedded rule engine:

{
    "ZSG_Catenary_Voltage": 1.6,
    "ZSG_IW_HB_Druck": 8.6,
    "ZSG_IW_HLDruck": 4.92,
    "ZSG_IW_IFahrSumme_scaled_A": 21.90234375,
    "ZSG_IW_IFahrdr_1_scaled_A": 0,
    "ZSG_IW_IFahrdr_scaled_A": 22.90234375,
    "ZSG_IW_Weg": 31019,
    "ZSG_IW_ZBK_Gesamt": 0,
    "ZSG_IW_ZBK_eigene_Lok": 0,
    "ZSG_IW_Zugnummer": 953,
    … a hundred more symbols
}

Here is another example of an alarm from the Railster embedded rule engine:

{
    "name": "Tueren auf Seite Rechts wurden Freigegeben bei Befehl Freigabe links"
}

Deprecation Policy under the Generic Kafka Streaming API

Under the Generic Kafka Streaming API terms, we want to ensure that our customers are informed of breaking changes and have time to update their systems in advance, while allowing normal feature and product development workflows at Railnova.

A feature release may deprecate certain features from previous releases (a "breaking change"). When releasing breaking changes, we will notify you before the release date and ensure backwards compatibility for 6 months.

Changes considered "breaking" under the generic Kafka API are:

  • changing or removing existing fields of the envelope structure (the AnalystValue schema),

  • removing a pre-existing message type.
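One way to stay robust between such notifications is to treat the envelope defensively: validate only the fields your consumer actually depends on, and ignore keys it does not recognise. A minimal sketch (the field selection is illustrative, not prescribed by the API):

REQUIRED_FIELDS = ("type", "timestamp", "content")

def validate_envelope(record: dict) -> None:
    # Fail loudly if a field this consumer depends on disappears,
    # but tolerate any new fields added in future releases.
    missing = [key for key in REQUIRED_FIELDS if key not in record]
    if missing:
        raise ValueError(f"envelope is missing required fields: {missing}")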

Code example

We provide an open-source code example in Python that will let you retrieve your first messages from our Kafka broker; follow the README.md on GitHub.

Support

Do you still have questions? Go to the Railnova platform and click "Contact us" for help!
