
Historian for Linux - User Guide v2.2.0

© 2019 General Electric Company


Contents
Historian for Linux - an Overview
  Licensing
  What is Historian for Linux?
  Historian Container Architecture
Setup Historian for Linux Containers
  Setup Historian for Linux on Predix Edge
  Setup Historian for Linux on Generic Linux Distributions (Ubuntu, CentOS)
Historian Database
  Historian Database Container Overview
  Installing Historian Database Container
  Migrating Data from Windows to Linux Historian
  Running the Historian Database Container
  Historian Database Container Environment Variables
Historian REST Query Service
  Historian REST Query Service Container Overview
  Installing Historian REST Query Service Container
  Running the Historian REST Query Service Container
  Historian REST Query Service Container Environment Variables
  Historian REST Query API Example
Historian Web Admin Service
  Historian Web Admin Service Overview
  Installing and Running the Historian Web Admin Service
  Historian Web Admin Service Container Environment Variables
  Accessing the Historian for Linux Web Admin Service
Historian Tuner
  Historian Tuner Container Overview
  Tuner App Operations with examples
  JSON File Content Example
  Installing and Running the Historian Tuner Container
  Historian Tuner Container Environment Variables
  Historian Tuner REST API
Historian Server to Server Collector
  Historian Server to Server Collector Overview
  Installing and Running Historian S2S Collector
  Historian S2S Collector Container Environment Variables
  Sample S2S Collector HistorianServers.reg file
  Important notes for S2S Collector operations
  Streaming data to Predix Time Series
Historian OPCUA DA Collectors
  Historian OPCUA DA Collector Overview
  Installing and Running Historian OPCUA DA Collector
  Historian OPCUA DA Collector Container Environment Variables
  Sample Historian OPCUA DA Collector HistorianServer.reg File
  Sample Historian OPCUA DA Collector ClientConfig.ini File
  Historian OPCUA DA Collector Capabilities
  Secured OPCUA Collector Connectivity
Security for Historian for Linux container Ecosystem
Key differences between Historian for Linux and Windows Historian
  Key differences between Windows and Linux Historian
Historian for Linux Client Libraries
  Historian for Linux Libraries
  Related Documentation
Troubleshoot Historian for Linux
  General Troubleshooting Tips

Copyright GE Digital
© 2019 General Electric Company.

GE, the GE Monogram, and Predix are either registered trademarks or trademarks of General Electric
Company. All other trademarks are the property of their respective owners.
This document may contain Confidential/Proprietary information of General Electric Company and/or its
suppliers or vendors. Distribution or reproduction is prohibited without permission.
THIS DOCUMENT AND ITS CONTENTS ARE PROVIDED "AS IS," WITH NO REPRESENTATION OR
WARRANTIES OF ANY KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
WARRANTIES OF DESIGN, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. ALL OTHER
LIABILITY ARISING FROM RELIANCE UPON ANY INFORMATION CONTAINED HEREIN IS EXPRESSLY
DISCLAIMED.
Access to and use of the software described in this document is conditioned on acceptance of the End
User License Agreement and compliance with its terms.



Historian for Linux - an Overview

Licensing
Historian for Linux is presently licensed separately from Predix Edge. This documentation reflects the
technical aspects of using Historian in the Predix Edge context. For information on purchasing Historian
for Linux perpetual licenses, please contact your GE Digital sales representative. Additional commercial
models providing a limited time series capability at the Edge are in development, but they are incomplete
and not yet ready for publication.

What is Historian for Linux?


Historian for Linux is a high-performance time series database designed to store and retrieve time-based
information at high speed. It runs on the Linux platform using Docker technology.
Historian for Linux is a collection of several Docker images, such as the Historian database, the Historian
REST Query Service (for data query), the Historian Web Admin Service, and various Historian collectors.
These Docker images run and serve different purposes around the Historian database.

Overview of Historian for Linux capabilities


Historian for Linux provides the following benefits:
• Time series data archiving.
• REST API for data query – The data query REST APIs are exactly the same as the Predix Time Series REST APIs.
• Web Admin Console – The Historian Web Admin Console acts as an admin console for the Historian for Linux database.
• OAUTH2 integration – The Historian for Linux data query REST API honors OAUTH2-based authentication and authorization. The Web Admin Console and Tuner also honor OAUTH2-based authentication and authorization.
• Collector Toolkit library – Used for implementing collectors that ingest data into Historian for Linux.
• User API Library – C library for programmatically adding, deleting, and configuring tags for collectors.
• Server to Server Collector – Streams data from one Historian to another Historian or to Predix Time Series, with support for data filtering.
• OPCUA DA Collector – Collects data from an OPCUA DA server and streams it to Historian.
• MQTT Collector – Subscribes to an MQTT broker and streams data to Historian. This collector integrates Historian for Linux with the data bus of Predix Edge.
• Configuration tuning – You can change configuration properties of the Historian database, such as the data collection rate and the archiving style (day-wise, hour-wise), via a JSON file with the help of the Tuner service.
Limitations
Input for the Historian collector should be in time series format.
Protocol adapters should use the "flat_to_timeseries" block to translate the data to the required format
before adding the data to the MQTT broker.
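
For reference, a minimal sketch of a Predix Time Series-format payload looks like the following (the tag name, timestamp, and value are illustrative; the third element of each datapoint is a quality indicator):

{
  "messageId": "1000",
  "body": [
    {
      "name": "machine-01.temperature",
      "datapoints": [
        [1549000000000, 23.7, 3]
      ],
      "attributes": { "source": "machine-01" }
    }
  ]
}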

Supported Operating System Platforms


Any x64-based Linux machine or Linux virtual machine with Docker installed.



Historian Container Architecture
Historian for Linux is built with a microservices architecture philosophy. Microservice architecture
is a minimalist approach to modular software development, where modularity is the degree to which a
system's components may be separated and recombined.
Each Historian container therefore performs its own job, and the containers talk to each other to solve
different use cases. For example, the Historian database container is responsible for storing time series
data to disk, while the Historian REST Query Service container exposes REST APIs for querying data from
the Historian database. Users can choose to install all of the Historian Docker images or only those that
best suit their use case.
The following diagram depicts the core components of Historian for Linux and illustrates how they
interact with each other.

Historian for Linux provides the following containerized components:



Docker 1: Historian Database
Historian Database is a C++ based native time series archiver based on the GE Historian archiving
engine. The TCP/IP server listens on port 14000, which is configurable using Docker's port-mapping
technology.
See Historian Database.
Docker 2: Historian REST Query Service
The Historian REST Query Service is a Java-based REST service that offers REST APIs for querying data
from the Historian database. It uses an OAUTH2 server for authentication and authorization. Its REST
APIs match the Predix Time Series data query APIs, which means that any analytics app built using the
Predix Time Series data query APIs can work seamlessly with Historian for Linux.
See Historian REST Query Service.
Docker 3: Historian Web Admin Service
The Historian Web Admin Service hosts a web-based admin console for the Historian database. Users
can view and edit the properties of tags, collectors, and data stores. It is a Tomcat-based web service
that listens on port 9443 and uses an OAUTH2 server for authentication and authorization.
It also provides the ability to:
• Start and stop collectors
• Configure collectors
• Browse, add, delete, and rename tags, data stores, and archive files
• View status of connected collectors
• View the most recently collected data
See Historian Web Admin Service.
Docker 4: Historian Tuner
Historian Tuner helps configure Historian database properties such as the data archiving style (day-wise,
hour-wise) and tag properties (such as collection rate and conditional collection filtering). Tuner expects
these configuration changes as a JSON payload in a file, and it offers a REST API for uploading the JSON
file from REST clients to the Tuner container. It uses an OAUTH2 server for authentication and
authorization.
See Historian Tuner.
Docker 5: Historian Server to Server (S2S) Collector
The Historian S2S Collector streams data from one Historian database (the source Historian) to another
Historian database (the destination Historian) or to Predix Time Series.
This collector is well suited to cases where multiple Historian databases are deployed and data for
specific tags needs to be streamed from one Historian database to another with some data filtering.
See Historian Server to Server Collector.
Docker 6: Historian OPCUA DA Collector
The Historian OPCUA Collector connects to an OPCUA DA (data access) server, collects polled and
asynchronous data, and streams it to the Historian database. This collector can connect securely to the
OPCUA DA server using certificate exchange.
See Historian OPCUA DA Collector.
Docker 7: Historian MQTT Collector
The Historian MQTT Collector connects to an MQTT broker and subscribes to a topic. The data should be
in the Predix time series data format as a JSON payload. The collector automatically adds tags to the
Historian database and streams the data to it. Thanks to this collector, Historian for Linux is well
integrated with the Predix Edge databus (an MQTT broker) for consuming data.



Setup Historian for Linux Containers

Setup Historian for Linux on Predix Edge


Historian for Linux containers can run on Predix Edge as well as on any generic Linux distribution
such as Ubuntu or CentOS.

Where do I get it?


The Historian for Linux Docker images are bundled in the application bundles shown in the table below. The
application bundles and sample configurations described in the table are stored in Artifactory. Use the
following information to ensure you can access the files.
For GE employees:
To access the Artifactory downloads via the links below, those using a GE email address must first log
into Artifactory.
Note: If you attempt to download an Artifactory file without first logging into Artifactory, you will be
asked to sign in first.

Procedure
1. Go to Artifactory.
2. Click the Log In button.
3. Click the SAML SSO icon.
4. Use your SSO to log in.
5. You can then return to the documentation link to download the file.

Next Steps
For Predix users:
To access Artifactory downloads from links in the Predix Edge documentation, you must first create an
account on predix.io. Your predix.io account sign in credentials will be used to access the Artifactory.
When you click an Artifactory download link, enter your predix.io username (email address) and password
in the Sign In dialog box.

Table 1: Historian for Linux Application Bundles

Application                               | Docker components                                           | Application Configuration
Historian Database and REST Query Service | 1. Historian Data Archiver  2. Historian REST Query Service | Sample
Historian Web Admin Service (UAA)         | Historian Web Admin Service                                  | Sample
Historian Web Admin Service               | Historian Web Admin Service                                  | Sample
Historian Tuner                           | Historian Tuner                                              | Sample
Historian S2S Collector                   | Historian Server to Server Collector                         | Sample
Historian OPCUA DA Collector              | Historian OPCUA DA Collector                                 | Sample
Historian MQTT Collector                  | Historian MQTT Collector                                     | Sample

Installing a Historian for Linux Application
1. Click the application link to download the application to your machine.
2. Upload the file to your Edge Manager Repository as a Predix Edge application.
3. Deploy the application to an enrolled Predix Edge device.
Configuring an Application
1. Download and extract the sample configuration ZIP for the application.
2. Modify the settings in the sample config file for your environment.
3. Re-zip the file.
4. Upload the new ZIP file to the Predix Edge Manager Repository as a Predix Edge configuration.
5. Deploy the configuration to the corresponding application running on your Predix Edge device.
Important: The Historian for Linux product license is deployed as a configuration of the Historian Database
application. ZIP your Historian for Linux product license and apply the configuration to the Historian
Database application to activate the license.
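
For example, the license might be packaged like this before uploading it to the Edge Manager Repository (the file name and archive layout are illustrative):

zip historian-license-config.zip historian-license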

Setup Historian for Linux on Generic Linux Distributions (Ubuntu, CentOS)

Where do I get it?


The Historian for Linux Docker images are bundled in the application bundles shown in the table below. The
application bundles and sample configurations described in the table are stored in Artifactory. Use the
following information to ensure you can access the files.
For GE employees:
If you attempt to download an Artifactory file without first logging into Artifactory, you will be asked to
sign in first.

Procedure
1. Go to Artifactory.
2. Click the Log In button.
3. Click the SAML SSO icon.
4. Use your SSO to log in.
5. You can then return to the documentation link to download the file.

Next Steps
For non-GE users:
Arriving soon...



Table 2: Historian for Linux Application Bundles

Application                               | Docker components                                           | Application Configuration
Historian Database and REST Query Service | 1. Historian Data Archiver  2. Historian REST Query Service | Sample
Historian Web Admin Service (UAA)         | Historian Web Admin Service                                  | Sample
Historian Web Admin Service               | Historian Web Admin Service                                  | Sample
Historian Tuner                           | Historian Tuner                                              | Sample
Historian S2S Collector                   | Historian Server to Server Collector                         | Sample
Historian OPCUA DA Collector              | Historian OPCUA DA Collector                                 | Sample
Historian MQTT Collector                  | Historian MQTT Collector                                     | Sample

Installing a Historian for Linux Application
1. Click the application link to download the application bundle to your machine.
2. Download all of the scripts: "install.sh", "run.sh", "stop.sh", "clean-data.sh", "apply-config.sh", and "uninstall.sh".
3. Copy the application bundles that you want to install, together with all of the above scripts, into the same directory on your Linux host.
4. Run "install.sh". This script extracts the Docker image from the bundle and loads it on your Linux host.
5. Run "run.sh". This script creates the necessary directories on your Linux host, starts the applications as Docker containers, and mounts the created directories into the containers.
6. Run "stop.sh" to stop the Docker containers.
7. Run "clean-data.sh" to clean the application's data.
8. Run "uninstall.sh" to remove the Docker image from the Linux host. This script does not clean the data.
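
Taken together, a typical install session looks like the following sketch, run from the directory containing the bundle and the scripts (this assumes the scripts locate the bundle in the current directory, as described above):

chmod +x install.sh run.sh stop.sh clean-data.sh apply-config.sh uninstall.sh
./install.sh      # extracts the Docker image from the bundle and loads it
./run.sh          # creates host directories and starts the containers
# ... later ...
./stop.sh         # stops the containers
./uninstall.sh    # removes the Docker image (data is retained)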
Configuring an Application
1. Download and extract the sample configuration ZIP for the application.
2. Modify the settings in the sample config file for your environment.
3. Re-zip the file and keep the ZIP file in the same directory as the apply-config.sh script on your Linux host.
4. Run "apply-config.sh". This script unzips and copies the configuration to the specific application directory, then restarts the application after applying the configuration.
Important: The Historian for Linux product license is deployed as a configuration of the Historian Database
application. ZIP your Historian for Linux product license and apply the configuration to the Historian
Database application to activate the license.



Historian Database

Historian Database Container Overview


Historian database is a C++ based native time series database. It is a TCP/IP server that listens on port
14000; the listening port number is configurable using Docker's port-mapping technology.
Time series data is archived in proprietary binary files called IHA files.
Metadata is stored in a proprietary binary file called the IHC file. This file stores information about
tags, connected collector properties, data store properties, and archive file properties.
The directory in which the IHC and IHA files are saved is volume-mounted on the host file system so that
data remains persistent.
Important: The Historian container must have a valid license file for all enabled features, including high
tag count support. If a valid license is not provided Historian switches to demo mode (which supports only
32 tags) while starting the Docker container. Use the HS_LICENSE_FILE_PATH environment variable
to supply the absolute file path of the valid license file.

Installing Historian Database Container


You must have an x64 Linux machine with the Docker engine installed.

About This Task


The Docker image for the Historian database container is bundled with the Historian REST Query service
and can be downloaded from Artifactory.

Migrating Data from Windows to Linux Historian

About This Task


You can migrate data and metadata from an existing Windows-based Historian to a Linux-based Historian
Docker container.

Procedure
1. Set the HS_MODE_OF_OPERATION environment variable value to reload.
Note: Keep all Windows-based IHA and IHC files in the archive path.
2. Rename the IHC file to the host name of the Docker container. If the name of the IHC file from the
Windows machine is WIN-2BLPS4FOACM_Config.ihc and the host name of the Historian
database Docker container is "machine-01", the IHC file should be renamed machine-01_Config.ihc.
Note: There is no need to rename IHA files.
3. Start the Historian database Docker container with the HS_MODE_OF_OPERATION environment
variable set to reload.
Note: There is no need to set the HS_MODE_OF_OPERATION environment variable again for future
container restarts.
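
As a concrete sketch (paths and hostnames are illustrative, and the environment file location is assumed; see the note in Historian Database Container Environment Variables):

# Copy the Windows IHA and IHC files into the mounted archive directory
cp /mnt/win-backup/*.iha ~/edgedata/
# Rename the IHC file to match the container hostname ("machine-01" here)
cp /mnt/win-backup/WIN-2BLPS4FOACM_Config.ihc ~/edgedata/machine-01_Config.ihc
# Set reload mode in the database environment file (location assumed)
echo "HS_MODE_OF_OPERATION=reload" >> /config/db/start-environments-db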



Running the Historian Database Container

About This Task


Historian database archives data in binary files that reside in a configured directory. You must create a
directory for the Historian database container on the host machine and mount it into the container's file
system, so that the data persists when the Historian database container is stopped.

Procedure
1. On the host machine, create the following directory structure:

mkdir -p ~/edgedata

2. Run the following Docker command:

docker run -d --network host -v ~/edgedata/:/data/ -h <Docker Hostname> --name=<Name of container> <Image Name>:<Tag Name>

Example:

docker run -d --network host -v ~/edgedata:/data -h machine-01 --name=machine-01 dtr.predix.io/predix-edge/ge-historian-amd64:alpine3.6.v2.2.0

You can also refer to How to deploy it, where a docker-compose.yml file can be used as guidance to run
this container on Predix Edge.
Important: The Historian database container must have a valid license file for all enabled features,
including high tag count support. If a valid license is not provided, Historian switches to demo mode
(which supports only 32 tags) while starting the Docker container. Use the
HS_LICENSE_FILE_PATH environment variable to supply the absolute file path of the valid license
file.
Important: To ensure that Historian can reload the existing IHC and IHA files when the Historian
Docker container is stopped and restarted, you must not change the hostname of the Historian
container. In the above example, the host name is "machine-01". This is because Historian database
uses the hostname to name its IHC and IHA files (metadata and data files).

Historian Database Container Environment Variables


Environment variables are passed to the Docker container using the docker run command or the
Docker Compose YML file. Environment variables act as command line arguments for the Docker
container, which uses their values to configure the software running inside it.
For example, the HS_ARCHIVER_CREATE_TYPE environment variable, if set to Days, instructs the
Historian database running inside the Docker container to set the archive type to Days. That is, these
environment variables are read inside the Docker container.
Note: Environment variables can only be set through the start-environments-db file for the Historian
database and the start-environments-qs file for the Historian REST Query Service. This file should be
present in the /config directory of the Docker container.



HS_ARCHIVER_CREATE_TYPE
Default: Days. Range: BySize/Days/Hours.
The archive file can be created in the following three ways:
• BySize – allows users to define the size of the archive file. A new archive file is created with an empty space value of HS_ARCHIVE_SIZE_IN_MB; when this file reaches the maximum size specified, a new archive file of the same size is created.
• Days – creates an archive file for archiving data for the number of days defined in HS_ARCHIVE_DURATION_IN_DAYS. After completion of the specified number of days, a new archive file is created at 12:00 AM GMT.
• Hours – creates an archive file for archiving data for the number of hours defined in HS_ARCHIVE_DURATION_IN_HOURS. After completion of the specified number of hours, a new archive file is created.
Applicable on start up or at creation of a new datastore.

HS_DEFAULT_CYCLIC_ARCHIVING
Default: false. Range: true/false.
If set to "true", cyclic archiving of data starts; after completion of the number of hours given by
HS_CYCLIC_ARCHIVE_DURATION_HOURS, data overwrite begins. Applicable to the Scada buffer
datastore. Setting this option to true sets the default datastore to the Scada buffer.

HS_CYCLIC_ARCHIVE_DURATION_HOURS
Default: 8760. Range: 0 to 8760.
Duration in hours after which cyclic archiving (data overwrite) starts.

HS_CREATE_OFFLINE_ARCHIVE
Default: false. Range: true/false.
Enables or disables the writing of past data back to January 1, 1970.

HS_ARCHIVE_ACTIVE_HOURS
Default: 744. Range: 1 to the number of hours back to January 1, 1970.
The number of hours in the past for which writes are allowed. Applicable on start up or at creation of a
new datastore.

HS_ARCHIVE_SIZE_IN_MB
Default: 100. Range: 1 - 99999.
Used when "BySize" archiving mode is selected; defines the size of the archive file (in MB) to create.
Applicable on start up or at creation of a new datastore.

HS_ARCHIVE_DURATION_IN_HOURS
Default: 90 * 24. Range: 1 to 90 * 24.
Used when "Hourly" archiving mode is selected; defines the number of hours an archive file is used for
archiving data. Upon completion of the specified number of hours, a new archive file is created.
Applicable on start up or at creation of a new datastore.

HS_ARCHIVE_DURATION_IN_DAYS
Default: 1. Range: 1 - 1440.
Used when "Daily" archiving mode is selected; defines the number of days an archive file is used for
archiving data. Upon completion of that number of days, a new archive file is created.
Applicable on start up or at creation of a new datastore.

HS_FREE_SPACE_REQUIRED_IN_MB
Default: 500. Range: 1 - 999999999.
Defines the free space required, in MB, for the archiver to work. Applicable on start up or at creation of
a new datastore.
Note: Set this value, in MB, to at least five times the integral multiple of the archive size.

HS_USE_ARCHIVE_CACHING
Default: true. Range: true/false.
Historian database supports data caching: when data queries are requested, results are cached in main
memory according to the available system RAM, and the cache is released if RAM usage reaches a certain
limit. This makes future queries for the same data faster. Applicable on start up or at creation of a new
datastore.

HS_LICENSE_FILE_PATH
Absolute path of the license file for Historian for Linux.
Important: The Historian database container must have a valid license file for all enabled features,
including high tag count support. If a valid license is not provided, Historian switches to demo mode
(which supports only 32 tags) while starting the Docker container. Use the HS_LICENSE_FILE_PATH
environment variable to supply the absolute file path of the valid license file.
For example, if you mount the host directory /data/edgedata to /data/ in the Historian database
container, you can keep the license file at /data/edgedata/historian-license on the host machine, but you
should set the environment variable as HS_LICENSE_FILE_PATH=/data/historian-license because, in the
Docker container context, the path is /data/historian-license.

HS_MODE_OF_OPERATION
Default: normal. Range: normal/reload.
Used for loading IHC/IHA files from a different Historian database (which can also be a Windows-based
Historian). Set the value to reload if you want to load existing IHC/IHA files from a Historian database
running on one machine (Windows or Linux) into a Historian database on another machine.

HS_NUMBER_OF_LOG_FILES
Default: 100. Range: 1 to 100.
Maximum number of log files. Once this value is exceeded, the oldest file is deleted to accommodate a
new one.

HS_SIZE_OF_EACH_LOG_FILE
Default: 10. Range: 1 to 10.
Maximum size of one log file in megabytes. If this value is exceeded, a new log file is created.

debug
Default: false. Range: true/false.
Turns debug logs on and off. When set to false, debug logs are turned off.

HS_ALLOW_HELD_VALUE_QUERY
Default: false. Range: true/false.
Used for querying the held sample when archive compression is enabled.
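
As an illustration, a start-environments-db file is assumed to hold simple KEY=value lines mirroring the variables above (confirm the exact format against the sample configuration bundle):

HS_ARCHIVER_CREATE_TYPE=Days
HS_ARCHIVE_DURATION_IN_DAYS=1
HS_FREE_SPACE_REQUIRED_IN_MB=500
HS_USE_ARCHIVE_CACHING=true
HS_LICENSE_FILE_PATH=/data/historian-license
debug=false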



Historian REST Query Service

Historian REST Query Service Container Overview


The Historian REST Query Service is meant for querying data from the Historian database. It offers
several APIs for querying data, such as the latest data point and the data points within a time range.
These APIs also let users select from many aggregation techniques, such as average, minimum, and
maximum. These REST APIs are exactly the same as the Predix Time Series REST APIs.
For more information, see the Predix Time Series service documentation at https://docs.predix.io/en-US/
content/service/data_management/time_series/ and the Time Series service API documentation at
https://www.predix.io/api.

Installing Historian REST Query Service Container


You must have an x64 Linux machine with the Docker engine installed.

About This Task


The Docker image for the Historian REST Query service container is bundled with the Historian database
and can be downloaded from Artifactory.

Running the Historian REST Query Service Container

About This Task


You can run the Historian REST Query Service in secure and unsecure mode. Secure mode means REST
clients have to pass OAUTH2 credentials. In unsecure mode, OAUTH2 authentication is disabled.

Procedure
To run the REST Query Service in unsecure mode, enter the following command:

docker run -d -v /data/qs:/data/ -v /config/qs:/config/ -p 8989:8989 <image_name>:<tag_name>

For example:

docker run -d -v /data/qs:/data/ -v /config/qs:/config/ -p 8989:8989 dtr.predix.io/predix-edge/ge-historian-rest-query-service-amd64:ubuntu16.04.v2.2.0

Note: You can also refer to How to deploy it? where a docker-compose.yml file can be used as guidance to
run this container on Predix Edge.
Important: Setting the DISABLE_REST_QUERY_SECURITY environment variable to false means
this service will run in secure mode. In secure mode, the user has to pass OAUTH2 credentials.



Historian REST Query Service Container Environment Variables
Note: Environment variables can only be set through the start-environments-qs file for the Historian
REST Query Service. This file should be present in the /config directory of the Docker container.

HISTORIAN_HOSTNAME
Default: localhost. Range: Historian database IP address.
Corresponds to the Historian database IP address.

HISTORIAN_MAX_TAG_QUERY
Default: 5000. Range: up to the number of tags in the Historian database.
Corresponds to the maximum number of tags retrieved from the Historian database.

HISTORIAN_MAX_DATA_QUERY
Default: 10000. Range: up to as many data points as you have in the Historian database.
Corresponds to the maximum number of data points retrieved for one tag from the Historian database.

DISABLE_REST_QUERY_SECURITY
Default: false. Range: true/false.
Toggles security.
• false – The service runs in secure mode, which means the user has to pass the OAUTH2 credentials.
• true – The service runs in unsecure mode.
See Running the Historian REST Query Service Container for examples.

ZAC_UAA_CLIENTID
Client ID of the OAUTH2 server. Used only if DISABLE_REST_QUERY_SECURITY is set to false.

ZAC_UAA_CLIENT_SECRET
Client secret of the OAUTH2 server. Used only if DISABLE_REST_QUERY_SECURITY is set to false.

ZAC_UAA_ENDPOINT
URL of the OAUTH2 server. Used only if DISABLE_REST_QUERY_SECURITY is set to false.

USE_PROXY
Default: false. Range: true/false.
Toggles whether or not a firewall is present between the OAUTH2 server and the Historian REST Query Service.
• false – There is no firewall between the OAUTH2 server and the Historian REST Query Service container.
• true – There is a firewall between the OAUTH2 server and the Historian REST Query Service container.

PROXYURL
Set if a proxy server is needed. Value: proxy URL and port number.

Historian REST Query API Example


The Historian REST Query Service container exposes port 8989 for querying data from the Historian
database.

Procedure
1. Create your query.
The header for your query must be in the following format:

Headers:
Authorization: Bearer <token from trusted issuer>
Predix-Zone-Id: <tenant>
Content-Type: application/json



Operation: Get values for a tag
    URL: http://<Predix Edge OS IP Address>:8989/v1/datapoints
    Method: POST
    Request Body: { "start": "1h-ago", "tags": [ { "name": "", "order": "desc" } ] }
    Request Headers: Predix-Zone-Id (can be any value; not validated)

Operation: Get all tags
    URL: http://<Predix Edge OS IP Address>:8989/v1/tags
    Method: GET
    Request Body: None
    Request Headers: Predix-Zone-Id (can be any value; not validated)

2. Retrieve a token from the OAUTH2 server by using the following REST API:

curl -u <Client-Id>:<Client-secret> https://<ip-address>:8080/uaa/oauth/token -d 'grant_type=client_credentials'

For example:

curl -u edgeclient:edgesecret https://10.10.10.10:8080/uaa/oauth/token -d 'grant_type=client_credentials'
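
With the token in hand, a data query can then be issued as in the following sketch (the IP address, token, and tag name are illustrative):

curl -X POST "http://10.10.10.10:8989/v1/datapoints" \
  -H "Authorization: Bearer <token>" \
  -H "Predix-Zone-Id: any-value" \
  -H "Content-Type: application/json" \
  -d '{ "start": "1h-ago", "tags": [ { "name": "machine-01.temperature", "order": "desc" } ] }'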

Refer to the sample Docker Compose YML file below to learn about the environment variables for
supplying the OAUTH2 URL, credentials, and proxy URL for the Historian REST Query Service container.

version: '3.0'
services:
  historian-rest-query:
    image: dtr.predix.io/predix-edge/ge-historian-rest-query-service-amd64:ubuntu16.04.v1.3.0
    environment:
      - HISTORIAN_MAX_DATA_QUERY=10000
      - HISTORIAN_MAX_TAG_QUERY=5000
      - HISTORIAN_HOSTNAME=localhost
      - DISABLE_REST_QUERY_SECURITY=false
      - ZAC_UAA_CLIENTID=edgeclient
      - ZAC_UAA_CLIENT_SECRET=edgesecret
      - ZAC_UAA_ENDPOINT=http://10.10.10.10:8080/uaa
      - USE_PROXY=false
      - PROXYURL=
      - USE_ZAC=false
      - ZAC_ENDPOINT=historian
      - ZAC_SERVICE_NAME=historian
    network_mode: "host"
    ports:
      - "8989:8989"
    depends_on:
      - historian
  historian-web-admin-console-uaa-security:
    image: dtr.predix.io/predix-edge/ge-historian-webadmin-uaa-amd64:alpine3.6.v1.1.0
    environment:
      - HISTORIAN_HOSTNAME=localhost
      - HISTORIAN_MAX_TAG_QUERY=5000
      - HV_UAA_CLIENT_ID=edgeclient
      - HV_UAA_CLIENT_SECRET=edgesecret
      - HV_UAA_SCHEME_AND_SERVER=http://10.10.10.10:8080/uaa
      - HV_USE_PROXY=false
      - HV_PROXY_PASSWORD=
      - HV_PROXY_USERNAME=
      - HV_PROXY_URL=
      - HV_SKIP_SSL_VALIDATION=true
    network_mode: "host"
    ports:
      - "9443:8443"
    logging:
      driver: "none"
    depends_on:
      - historian
    volumes:
      - /data/webAdminLogs:/data/opt/tomcat/logs/
  historian:
    image: dtr.predix.io/predix-edge/ge-historian-amd64:alpine3.6.v1.3.0
    environment:
      - HS_AUTO_CREATE_ARCHIVES=true
      - HS_OVERWRITE_OLD_ARCHIVES=false
      - HS_ARCHIVER_CREATE_TYPE=Days
      - HS_ARCHIVE_SIZE_IN_MB=100
      - HS_ARCHIVE_DURATION_IN_HOURS=2160
      - HS_ARCHIVE_DURATION_IN_DAYS=1
      - HS_FREE_SPACE_REQUIRED_IN_MB=500
      - HS_USE_ARCHIVE_CACHING=true
      - HS_LICENSE_FILE_PATH=/data/historian-license
      - debug=false
    network_mode: "host"
    expose:
      - "14000"
    volumes:
      - /data/edgedata/:/data/

Note: Set the proxy URL only if there is a firewall between the OAUTH2 server and the Historian Web
Admin/REST query service container.



Historian Web Admin Service

Historian Web Admin Service Overview


The Historian Web Admin Service is a web user interface that allows you to monitor and control the
Historian archiver.

Installing and Running the Historian Web Admin Service


To install the Historian Web Admin Service, you must have an x64 Linux machine with the Docker engine
installed.
The Docker image of the Historian Web Admin Service can be downloaded from Artifactory. The Historian
Web Admin Service uses an OAUTH2 server for authentication and authorization; it can also run without
OAUTH2 by setting a username and a password.

Procedure
1. To run a username and password based Historian Web Admin Service, enter:

docker run -d -p 9443:8443 -v /data/wa:/data/ -v /config/wa:/config/ <image_name>:<tag_name>

For example:

docker run -d -p 9443:8443 -v /data/wa:/data/ -v /config/wa:/config/ dtr.predix.io/predix-edge/ge-historian-webadmin-amd64:ubuntu16.04.v2.2.0

You can also refer to How to deploy it, where a docker-compose.yml file can be used as guidance to
run this container on Predix Edge.
2. To run the Historian Web Admin Service with an OAUTH2 server:

docker run -d -p 9443:8443 -v /data/wa:/data/ -v /config/wa:/config/ <image_name>:<tag_name>

For example:

docker run -d -p 9443:8443 -v /data/wa:/data/ -v /config/wa:/config/ dtr.predix.io/predix-edge/ge-historian-webadmin-uaa-amd64:ubuntu16.04.v2.2.0

Historian Web Admin Service Container Environment Variables

Note: Environment variables can only be set through the start-environments-wa file for the Historian Web
Admin Service. This file should be present in the /config directory of the Docker container.



HISTORIAN_HOSTNAME
Default: localhost. Range: IP address of Historian for Linux.
Corresponds to the Historian for Linux archiver IP address.

HISTORIAN_MAX_TAG_QUERY
Default: 5000. Range: up to as many tags as you have in the Historian for Linux archiver.
Corresponds to the maximum number of tags retrieved from the Historian for Linux archiver.

HWA_ADMIN_USERNAME
Default: test. Range: user name set by the admin.
Value set by the Docker admin while running the Historian Web Admin Service Docker image. This is used
to access the Historian Web Admin Service web pages.

HWA_ADMIN_PASSWORD
Default: password. Range: password set by the admin.
Value set by the Docker admin while running the Historian Web Admin Service Docker image. This is used
to access the Historian Web Admin Service web pages.

HV_UAA_CLIENT_ID
UAA client ID.

HV_UAA_CLIENT_SECRET
UAA client secret.

HV_UAA_SCHEME_AND_SERVER
UAA URL. For example: https://<UAA_zone_id>.predix-uaa.run.aws-usw02-pr.ice.predix.io

HV_USE_PROXY
Default: true. Range: true/false.
Determines whether there is a firewall between the UAA service and the Historian Web Admin Service.
• true – A firewall is present between the UAA service and the Historian Web Admin Service.
• false – A firewall is not present between the UAA service and the Historian Web Admin Service.

HV_PROXY_USERNAME
User name to access the proxy URL. Keep it empty if no user name is required to access the proxy.

HV_PROXY_PASSWORD
Password to access the proxy URL. Keep it empty if no password is required to access the proxy.

HV_PROXY_URL
Proxy URL with port number.

HV_SKIP_SSL_VALIDATION
Default: true. Range: true/false.
Switch to toggle SSL validation when accessing UAA.

Accessing the Historian for Linux Web Admin Service

Procedure
In a Web browser such as Chrome, IE, or Safari, enter the URL for the Historian Web Admin Service:
https://<ip_address>:9443/historian-visualization/hwa



Historian Tuner

Historian Tuner Container Overview


Historian Tuner is a useful app for designing and automating Historian database administrative
operations. It can perform operations such as:
• changing tag properties,
• changing data store properties,
• backing up archive files,
• restoring backed-up archive files,
• purging data.
All of the operations that the Tuner app performs can also be performed through the Historian Web Admin
Service. The difference is that the Tuner app accepts its inputs through a JSON file, and allowing users
to provide input in JSON file format empowers them to automate these operations and run them at periodic
intervals as needed.
The Tuner app exposes a REST API for uploading the JSON file. See the following illustration for an example.
If you want to back up the last 7 days of Historian data, you can write a JSON file with the content below:

{ "Historian Node": "10.181.213.175",


"Data Management": {
"Back Up": [
{
"Datastore Name": "User",
"Back Up Path":"/data/backup",
"Properties":
{
"Number Of Files":7
}
}
]
}
}
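
The file can then be uploaded to the Tuner's REST endpoint, as sketched below (the IP address and file name are illustrative; token retrieval works as described for the REST Query Service):

curl -H 'Authorization: Bearer <token>' -F 'uploadFile=@backup-last-7-days.json' http://10.181.213.175:9000/upload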

Tuner App Operations with examples


The following section lists the Tuner app operations, each with sample JSON file content, descriptions of
the JSON keys, and what the operation does.
To Create a Data Store


JSON File Content sample:

{
  "Historian Node": "10.181.213.175",
  "Data Management": {
    "Create Datastore": [
      {
        "Datastore Name": "Turbine-4",
        "Properties": {
          "Default Datastore": true,
          "Description": "Custom datastore for storing data of Turbine-1"
        }
      }
    ]
  }
}

JSON Key description

• Datastore Name: A sequence of characters surrounded with double quotation marks.
• Default Datastore: true/false. True sets the datastore as the default datastore; false sets it as a
normal (not default) datastore.
What can you do with the operation?
This creates a data store with the Default Datastore property set to true. You can create multiple data
stores by providing the proper details in the JSON file.

Purging of Data Store


JSON File Content sample:

{
"Historian Node": "10.181.213.175",
"Data Management": {
"Purge": [ { "Datastore Name": "Turbine-4" } ]
}
}
{
"Historian Node": "10.181.213.175",
"Data Management": {
"Purge": [
{
"Datastore Name": "Turbine-10",
"Properties": {
"Archive File Names": [
"Turbine-10_historian-archiver_Archive046.iha",
"Turbine-10_historian-archiver_Archive1543363199.iha"
]
}
}
]
}
}

JSON Key description


• Datastore name: Can be a sequence of characters surrounded with ". Valid Datastore name must be
provided.
• Archive File Name: Can be a sequence of characters surrounded with ". Valid Archive names must be
provided.
What can you do with the operation?
This deletes Turbine-10_historian-archiver_Archive046.iha and
Turbine-10_historian-archiver_Archive1543363199.iha.



Purging of Archives based on Time stamps

{
"Historian Node": "10.181.213.175",
"Data Management": {
"Purge": [
{
"Datastore Name": "User",
"Properties": {
"Start Time": 1543417800,
"End Time": 1543418220
}
}
]
}
}

JSON Key description


• Datastore name: Can be a sequence of characters surrounded with ". Valid Datastore name must be
provided.
• Start Time/End Time: Should be in epoch time format in seconds.
What can you do with the operation?
Data between these timestamps is deleted. Entire archives containing, or falling between, these
timestamps are deleted.

Backup of Archive files using File Names

{
"Historian Node": "10.181.213.175",
"Data Management": {
"Back Up": [
{
"Datastore Name": "User",
"Back Up Path": "/data/backup",
"Properties": {
"Archive File Names": [
"User_historian-archiver_Archive1543449599"
]
}
}
]
}
}

JSON Key description

• Datastore name: A sequence of characters surrounded with double quotation marks. A valid datastore
name must be provided.
• Backup path: Should be a valid path in the context of the Historian Docker container.
• Archive file name: Should be valid archive names. You can provide multiple archives separated by
commas.
What can you do with the operation?
This backs up the provided archive file to the /data/backup folder.



Backup of Archive Files using Number of Files

{
"Historian Node": "10.181.213.175",
"Data Management": {
"Back Up": [
{
"Datastore Name": "User",
"Back Up Path":"/data/backup",
"Properties":
{
"Number Of Files":2
}
}
]
}
}

JSON Key description

• Datastore name: A sequence of characters surrounded with double quotation marks. A valid datastore
name must be provided.
• Backup path: Should be a valid path in the context of the archiver Docker container.
• Number of files: Number of files to be backed up. Should be a numerical value.
What can you do with the operation?
This backs up the last two archive files to the backup folder.

Backup of Archive Files using Start time and End Time

{
"Historian Node": "10.181.213.175",
"Data Management": {
"Back Up": [
{
"Datastore Name": "User",
"Back Up Path":"/data/backup",
"Properties":
{
"Start Time" :1540511999,
"End Time" :1540598399
}
}
]
}
}

JSON Key description

• Datastore name: A sequence of characters surrounded with double quotation marks. A valid datastore
name must be provided.
• Backup path: Should be a valid path in the context of the archiver Docker container.
• Start/End Time: Should be an epoch timestamp.
What can you do with the operation?
Data between these timestamps is backed up. Entire archives containing, or falling between, these
timestamps are backed up.



Restore
{
"Historian Node": "10.181.213.175",
"Data Management": {
"Restore": [
{
"File Path": "/data/backup/User_historian-
archiver_Archive1543507756_Backup.zip",
"Archive Name": "User_historian-archiver_Archive1543507756",
"Datastore Name": "User"
}
]
}
}

JSON Key description

• File Path: Path of the backed-up file.
• Archive Name: Name of the archive to which the data is to be restored.
• Datastore Name: Name of the datastore to which the archive file is to be restored.
What can you do with the operation?
This restores the backed-up files into the specified datastore.

Data Store options for Archive Type Hours/Days


"Datastore Name": "ScadaBuffer",
"Properties": {
"Archive Type": "Hours",
"Archive Duration": 10,
"Archive Active Hours": 10,
"Archive Default Backup Path": "/data/archiver/backupfiles/",
"Datastore Duration": 4
}

JSON Key description

• Archive Type: Valid values are Hours/Days/BySize.
• Archive Duration: Can be a numerical value.
• Archive Active Hours: Can be a numerical value.
• Archive Default Backup Path: Should be a valid path.
• Datastore Duration: Can be a numerical value.
What can you do with the operation?
This sets the data store properties as mentioned in the config file.

Data Store options for Archive Type BySize.


"Datastore Name": "DHSSystem",
"Properties": {
"Archive Type": "BySize",
"Archive Default Size(MB)": 200,
"Archive Active Hours": 744,
"Archive Default Backup Path": "/data/archiver/backupfiles/"
}

JSON Key description

• Archive Default Size(MB): Can be a numerical value. The rest of the keys are as in the examples above.
What can you do with the operation?
This sets the data store properties as mentioned in the config file for Archive Type BySize.

Tag Options-Collection Properties


{
"Historian Node": "10.181.213.175",
"Config": {
"Tag Options": [
{
"Tag Pattern": "US-
TestTagsChange1.Objects.Demo.Dynamic.Scalar.Byte",
"Tag Properties": {
"Collection": {
"Collection": true,
"Collection Interval Unit": "sec",
"Collection Interval": 5,
"Collection Offset Unit": "sec",
"Collection Offset": 1,
"Time Resolution": "sec"
}
}
}
]
}
}

JSON Key description


• Collection: should be true/false.
• Collection Interval Unit: should be sec/min/hour or millisec
• Collection Offset Unit: should be sec or millisec.
• Collection Interval and Collection Offset: should be a numerical value.
Note: We can filter tags based on the tag names, Collector name and Data Store name. We just need
to replace Tag Pattern with Collector Name or Datastore Name.
What can you do with the operation?
This will set the tag properties as mentioned in the config file.

Tag Options-Compression Properties


{
"Historian Node": "10.181.213.175",
"Config": {
"Tag Options": [
{
"Tag Pattern": "US-
TestTagsChange1.Objects.Demo.Dynamic.Scalar.Byte",
"Tag Properties": {
"Collector Compression": {
"Collector Compression": true,
"Collector Deadband": "Percent Range",
"Collector Deadband Value": 80,
"Collector Compression Timeout Resolution": "min",
"Collector Compression Timeout Value": 10
}

}
}
]
}
}

JSON Key description


• Collector Compression: Should be true/false.
• Collector Deadband Value/Collector Compression Timeout Value: Should be numerical value.
• Collector Deadband: Percent Range/Absolute.
• Collector Compression Timeout Resolution: should be sec/min/hour or millisec.
Note: We can filter tags based on the tag names, Collector name and Datastore name. We just need
to replace Tag Pattern with Collector Name or Datastore Name.
What can you do with the operation?
This sets the compression properties as mentioned in the config file.

Tag Options-Scaling

{
"Historian Node": "10.181.213.175",
"Config": {
"Tag Options": [
{
"Tag Pattern": "US-
TestTagsChange1.Objects.Demo.Dynamic.Scalar.Byte",
"Tag Properties": {
"Scaling": {
"Hi Engineering Units": 100,
"Low Engineering Units": 0,
"Input Scaling": false,
"Hi Scale Value": 0,
"Low Scale Value": 0
}
}
}
]
}
}

Note: We can filter tags based on the tag names, Collector name and Data Store name. We just need to
replace Tag Pattern with Collector Name or Datastore Name.
What can you do with the operation?
This sets the scaling properties as mentioned in the config file.

Tag Options-Condition Based Collection

{
"Historian Node": "10.181.213.175",
"Config": {
"Tag Options": [
{
"Tag Pattern": "US-
TestTagsChange1.Objects.Demo.Dynamic.Scalar.Byte",
"Tag Properties": {

"Condition Based Collection": {
"Condition Based": true,
"Trigger Tag": "US-
TestTagsChange1.Objects.Demo.Dynamic.Scalar.Boolean",
"Comparison": ">=",
"Compare Value": "50000",
"End Of Collection Marker": true
}
}
}
]
}
}

JSON Key description


• Trigger Tag: Should be a valid tag name.
• Comparison: =,<,<=,>,>=,!=
• End of Collection Marker: true/false
Note: We can filter tags based on the tag names, Collector name and Data Store name. We just need
to replace Tag Pattern with Collector Name or Datastore Name.
What can you do with the operation?
This sets the condition-based collection properties as mentioned in the config file.

Tag Options- Using Tag Group


{
"Historian Node": "10.181.213.175",
"Config": {
"Tag Options": [
{
"Tag Group": [
"US-TestTagsChange1.Objects.Demo.Dynamic.Scalar.Boolean",
"US-TestTagsChange1.Objects.Demo.Dynamic.Scalar.Byte"
],
"Tag Properties": {
"Tag Datastore": "ScadaBuffer",
"Data Type": "Int16"
}
}
]
}
}

JSON Key description

Tag Group: Should be valid tag names. Any number of tags can be provided.
What can you do with the operation?
This sets the tag properties for the group of tags mentioned in the Tag Group section.

JSON File Content Example


{
"Historian Node": "10.181.212.175",

"Data Management": {
"Create Datastore": [
{
"Datastore Name": "Turbine-1",
"Properties": {
"Default Datastore": false,
"Description": "Custom datastore for storing data of
Turbine-1"
}
},
{
"Datastore Name": "Turbine-2",
"Properties": {
"Default Datastore": true,
"Description": "Custom datastore for storing data of
Turbine-2"
}
}
],
"Back Up": [
{
"Datastore Name": "User",
"Back Up Path": "/data/backup",
"Properties": {
"Archive File Names": [
"User_historian-archiver_Archive1540511999",
"User_historian-archiver_Archive1540598399"
]
}
},
{
"Datastore Name": "User",
"Back Up Path": "/data/backup",
"Properties": {
"Number Of Files": 2
}
},
{
"Datastore Name": "User",
"Back Up Path": "/data/backup",
"Properties": {
"Start Time": 1540511999,
"End Time": 1540598399
}
}
],
"Purge": [
{
"Datastore Name": "User",
"Properties": {
"Archive File Names": [
"User_historian-archiver_Archive1540511999",
"User_historian-archiver_Archive1540598399"
]
}
},
{
"Datastore Name": "Turbine-1"
},
{
"Datastore Name": "User",

"Properties": {
"Start Time": 1540511999,
"End Time": 1540598399
}
}
],
"Restore": [
{
"File Path": "/data/backup/User_historian-
archiver_Archive1540511999_Backup.zip",
"Archive Name": "User_historian-archiver_Archive1540511999",
"Datastore Name": "User"
},
{
"File Path": "/data/backup/User_historian-
archiver_Archive1540598399_Backup.zip",
"Archive Name": "User_historian-archiver_Archive1540598399",
"Datastore Name": "User"
}
]
},
"Config": {
"Datastore Options": [
{
"Datastore Name": "ScadaBuffer",
"Properties": {
"Archive Type": "Days",
"Archive Duration": 1,
"Archive Active Hours": 99999,
"Archive Default Archive Name": "ScadaBuffer_historian-
archiver_Archive",
"Archive Default Backup Path": "/data/archiver/archives/",
"Default Datastore": true,
"Datastore Duration": 48
}
},
{
"Datastore Name": "User",
"Properties": {
"Archive Type": "Hours",
"Archive Duration": 1,
"Archive Active Hours": 744,
"Automatically Create Archives": false,
"Overwrite Old Archives": true,
"Archive Default Backup Path": "/data/archiver/archives/"
}
},
{
"Datastore Name": "DS1",
"Properties": {
"Archive Type": "BySize",
"Archive Default Size(MB)": 100,
"Archive Active Hours": 744,
"Archive Default Backup Path": "<path>"
}
}
],
"Tag Options": [
{
"Tag Group": [ "Test-Boolean", "Test-Int16" ],
"Tag Properties": {

"Tag Datastore": "ScadaBuffer",
"Data Type": "Int16"
}
},
{
"Tag Pattern": "Demo.Dynamic.Scalar.*",
"Tag Properties": {
"Collection": {
"Collection": true,
"Collection Interval Unit": "min",
"Collection Interval": 10,
"Collection Offset Unit": "sec",
"Collection Offset": 0,
"Time Resolution": "sec"
},
"Condition Based Collection": {
"Condition Based": true,
"Trigger Tag": "SampleTrigger",
"Comparison": ">=",
"Compare Value": "50000",
"End Of Collection Marker": true
},
"Collector Compression": {
"Collector Compression": true,
"Collector Deadband": "Percent Range",
"Collector Deadband Value": 80,
"Collector Compression Timeout Resolution": "min",
"Collector Compression Timeout Value": 10
},
"Archive Compression": {
"Archive Compression": true,
"Archive Deadband": "Percent Range",
"Archive Deadband Value": 80,
"Archive Compression Timeout Resolution": "min",
"Archive Compression Timeout Value": 10
},
"Scaling": {
"Hi Engineering Units": 1000,
"Low Engineering Units": 0,
"Input Scaling": false,
"Hi Scale Value": 0,
"Low Scale Value": 0
},
"Tag Datastore": "DS1"
}
},
{
"Collector Name": "EdgeMQTT",
"Tag Properties": {
"Collection": {
"Collection": true,
"Collection Interval Unit": "sec",
"Collection Interval": 10,
"Collection Offset Unit": "sec",
"Collection Offset": 0,
"Time Resolution": "sec"
},
"Archive Compression": {
"Archive Compression": true,
"Archive Deadband": "Absolute",
"Archive Deadband Value": 5,

"Archive Compression Timeout Resolution": "min",
"Archive Compression Timeout Value": 15
},
"Tag Datastore": "TestDS"
}
},
{
"Datastore Name": "DS1",
"Tag Properties": {
"Collection": {
"Collection": true,
"Collection Interval Unit": "min",
"Collection Interval": 10,
"Collection Offset Unit": "sec",
"Collection Offset": "0",
"Time Resolution": "sec"
},
"Condition Based Collection": {
"Condition Based": true,
"Trigger Tag": "SampleTrigger",
"Comparison": ">=",
"Compare Value": "50000",
"End of Collection Marker": true
},
"Collector Compression": {
"Collector Compression": true,
"Collector Deadband": "Percent Range",
"Collector Deadband Value": 80,
"Collector Compression Timeout Resolution": "min",
"Collector Compression Timeout Value": 10
},
"Archive Compression": {
"Archive Compression": true,
"Archive Deadband": "Percent Range",
"Archive Deadband Value": 80,
"Archive Compression Timeout Resolution": "min",
"Archive Compression Timeout Value": 10
},
"Tag Datastore": "DS1"
}
}
]
}
}

Installing and Running the Historian Tuner Container


To install the Historian Tuner, you must have an x64 Linux machine with the Docker engine installed.

Procedure
The Docker image of Historian Tuner can be downloaded from Artifactory.

Next Steps
Run the Historian Tuner container:

docker run -d -p 9000:9000 -v /data/tuner:/data/ -v /config/tuner:/config/ <image_name>:<tag_name>

For example:

docker run -d -p 9000:9000 -v /data/tuner:/data/ -v /config/tuner:/config/ dtr.predix.io/predix-edge/ge-historian-tuner-amd64:ubuntu16.04.v2.2.0

Historian Tuner Container Environment Variables

Note: Environment variables can only be set through the start-environments-tuner file for Historian Tuner.
This file should be present in the /config directory of the Docker container.

HS_LOG_TO_FILE
Default: false. Range: true/false.
Set to true if logs need to be redirected to a file; otherwise logs go to the Docker console.

HS_NUMBER_OF_LOG_FILES
Default: 10. Range: 1 to 100.
Maximum number of log files. Once this value is exceeded, the oldest file is deleted to accommodate a
new one.

HS_SIZE_OF_EACH_LOG_FILE_IN_MB
Default: 10. Range: 1 to 10.
Maximum size of one log file in megabytes. If this value is exceeded, a new log file is created.

TUNER_SECURE
Default: true. Range: true/false.
Toggles security.

OAUTH2_CLIENT_ID
Default: empty. Range: valid value.
UAA client ID.

OAUTH2_CLIENT_SECRET
Default: empty. Range: valid value.
UAA client secret.

OAUTH2_URL
Default: empty. Range: valid value.
UAA URL.

https_proxy
Default: empty. Range: valid value.
Proxy URL with port number.
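
A start-environments-tuner file is assumed to follow the same KEY=value form as the other start-environments files (values are illustrative):

TUNER_SECURE=true
OAUTH2_CLIENT_ID=edgeclient
OAUTH2_CLIENT_SECRET=edgesecret
OAUTH2_URL=https://10.10.10.10:8080/uaa
HS_LOG_TO_FILE=false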

Historian Tuner REST API

URL: http://<Predix Edge OS IP Address>:9000/upload
Request Parameters: -F 'uploadFile=@<Absolute path of JSON file>'
Request Headers: Authorization: Bearer <token from trusted issuer>

For example:

curl -F 'uploadFile=@C:\workstation\historian-config.json' http://10.181.212.287:9000/upload



Historian Server to Server Collector

Historian Server to Server Collector Overview


The Historian Server to Server (S2S) Collector collects data and messages from a source Historian
server and streams them to a destination Historian or to the Predix Cloud.
GE recommends installing the S2S Collector on the source Historian machine. Configured this way, the
store-and-forward capability of data collectors preserves collected data even if the collector and the
destination archiver become disconnected. The S2S Collector can also run as a stand-alone component
where both the source and destination Historian databases are on remote machines.

Installing and Running Historian S2S Collector


You must have an x64 Linux machine with the Docker engine installed.

Procedure
1. Create a directory in the Host file system to store the persistent data of the collector.

mkdir -p /data/historian-s2s-collector-1
mkdir -p /config/historian-s2s-collector-1

The collector performs store and forward, so data is stored on disk.
2. Run the following docker command:

docker run -d -v /data/historian-s2s-collector-1:/data/ -v /config/historian-s2s-collector-1:/config/ -e HISTORIAN_REGFILE=/data/HistorianServers.reg <Image Name>

For example:

docker run -d -v /data/historian-s2s-collector-1:/data/ -v /config/historian-s2s-collector-1:/config/ -e HISTORIAN_REGFILE=/data/HistorianServers.reg dtr.predix.io/predix-edge/ge-historian-s2s-collector-amd64:ubuntu16.04.v2.2.0

The Docker image of the Historian S2S Collector can be downloaded from Artifactory. You can also refer to How to deploy it, where a docker-compose.yml file is provided as guidance for running this container on Predix Edge (see the compose sketch after these notes).
Important: To start multiple instances of the collector, each S2S collector docker must have a unique InterfaceName in its HistorianServers.reg file.
Important: If the destination Historian is running on Predix Edge, you must publish port 14000 of the Historian database docker of the destination Historian, because the S2S collector may be running on some other machine (the S2S collector and the destination Historian database may not join the same docker private network). You must also publish port 14000 of the Historian database docker of the source Historian if the S2S collector and the source Historian do not join the same docker private network.
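
The following is a minimal docker-compose.yml sketch for this collector; the service name and host paths are illustrative, and the image, environment variable, and volumes mirror the docker run example above:

version: "3"
services:
  historian-s2s-collector-1:
    image: dtr.predix.io/predix-edge/ge-historian-s2s-collector-amd64:ubuntu16.04.v2.2.0
    environment:
      - HISTORIAN_REGFILE=/data/HistorianServers.reg
    volumes:
      - /data/historian-s2s-collector-1:/data/
      - /config/historian-s2s-collector-1:/config/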



Historian S2S Collector Container Environment Variables
Note: Environment variables for the Historian S2S Collector can only be set through the start-environments-s2s file. This file must be present in the /config directory of the Docker container.

Environment Variable | Description

HISTORIAN_REGFILE | Path of the HistorianServers.reg file. This file contains configuration settings such as the hostname/IP address of the destination Historian ("HistorianNodeName"), the hostname/IP address of the source Historian ("General3"), and the name of the collector ("InterfaceName", which must be unique).

HS_NUMBER_OF_LOG_FILES | Maximum number of log files. Once this value is exceeded, the oldest file is deleted to accommodate a new one. Default: 100. Range: 1 to 100.

HS_SIZE_OF_EACH_LOG_FILE | Maximum size of one log file in megabytes. If this value is exceeded, a new log file is created. Default: 10. Range: 1 to 10.

Sample S2S Collector HistorianServers.reg file


[HKEY_LOCAL_MACHINE\Software\GE Digital\iHistorian\Services]
"DebugMode"=dword:00

[HKEY_LOCAL_MACHINE\Software\GE Digital\iHistorian\Services
\ServerToServerCollector]
"HistorianNodeName"="ip-of-dest-edge-machine"
"InterfaceName"="historian-s2s-collector-1"
"DefaultTagPrefix"="historian-s2s-collector-1."
"General3"="historian-archiver"

[HKEY_LOCAL_MACHINE\Software\Intellution, Inc.\iHistorian
\Services]
"LogFilePath"="."

Important notes for S2S Collector operations


When you add a tag by choosing from the S2S Collector browse list, only certain tag properties are copied
from the source tag to the destination tag. If you intend to copy raw samples from the source to the
destination, after you add the tag, be sure to set these properties to their desired values. See Tag
Properties Copied to the Destination Tag described below.
Important tag properties that do not automatically copy over when you add the tag include:
• Input scaling settings
Since the output of the source tag is the input to the destination tag, you actually want to match the
EGU limits on the source to input limits on the destination, if you are using Input Scaling.



• Timestamp resolution
Make sure that the timestamp resolution properties match. For example, if your source tag uses millisecond timestamp resolution, set your destination tag to millisecond timestamp resolution as well; do not leave the destination tag at second resolution.
The following table describes the tag properties in the Historian Administrator Tags screen that are
copied when the destination tag is created via select from the browse. If a property is not listed in this
table, it is not copied.
Tab Name | Properties Copied

General | Description, EGUDescription

Collection | Data Type, DataLength

Scaling | HiEGU, LoEGU, InputScaling, HiScale, LoScale

Compression | ArchiveCompression, ArchiveDeadband (%)

Streaming data to Predix Time Series


The Historian S2S Collector needs a list of tags in the form of an XML file. Data for these tags is streamed to Predix Time Series by the S2S Collector. This XML file is called the Offline configuration.
The Offline Configuration also lets you define the configuration properties of a collector (tag list, tag properties, and collector interface properties). This feature is particularly useful when collectors connect to Predix Time Series.
The path to the Offline Tag Configuration File is provided in the collector registry as:
OfflineTagConfigurationFile (Data Type: String).

Creating Offline Configuration XML file

About This Task


It is recommended that you add the Collector property section above the Tag property section in your
offline configuration XML file.

Procedure
1. Add the following collector interface properties to the top of your configuration XML file.
The following is an example for the Server to Server Collector interface properties:

<Import>
<Collectors>
<Collector Name="<Collector Name>">
<InterfaceType>ServerToServer</InterfaceType>
<InterfaceGeneral1>10</InterfaceGeneral1>
......
</Collector>
</Collectors>

2. Add your TagList and Tag properties to your XML file.

<Collectors>
...
</Collectors>

<TagList Version="1.0.71">

<Tag>
<Tagname>simCollector1</Tagname>
<SourceAddress>Result = CurrentValue("SJC1GEIP05.Simulation00002")</
SourceAddress>
...
</Tag>

<Tag>
<Tagname>simCollector2</Tagname>
<SourceAddress>Result = CurrentValue("SJC1GEIP05.Simulation00002")</
SourceAddress>
...
</Tag>
...
</TagList>
</Import>

3. Add the closing </Import> tag to the end of your XML file.

Sample OfflineConfiguration.xml file

<Import>
<TagList Version="1.0.71">
<Tag>
<Tagname>HistS2SInt16</Tagname>
<SourceAddress>Result = CurrentValue("US-TestTagsChange1.Objects.Demo.Dynamic.Scalar.Int16")</SourceAddress>
<DataType>SingleInteger</DataType>
<CollectionType>Unsolicited</CollectionType>
<TimeResolution>Milliseconds</TimeResolution>
<CollectionInterval>1000</CollectionInterval>
<Description>US-TestTagsChange1.Objects.Demo.Dynamic.Scalar.Int16</Description>
<HiEngineeringUnits>200000.00</HiEngineeringUnits>
<InputScaling>false</InputScaling>
<InterfaceCompression>1</InterfaceCompression>
<InterfaceDeadbandPercentRange>80.00</InterfaceDeadbandPercentRange>
<InterfaceCompressionTimeout>30000</InterfaceCompressionTimeout>
<TimeStampType>Source</TimeStampType>
<CalculationDependencies>
<CalculationDependency>US-TestTagsChange1.Objects.Demo.Dynamic.Scalar.Int16</CalculationDependency>
</CalculationDependencies>
<SpikeLogic>1</SpikeLogic>
<LoScale>0</LoScale>
<HiScale>32767.00</HiScale>
<NumberOfElements>0</NumberOfElements>
</Tag>
</TagList>
</Import>

Sample HistorianServers.reg file for streaming to Predix Time Series

[HKEY_LOCAL_MACHINE\Software\GE Digital\iHistorian\Services]
"DebugMode"=dword:00

[HKEY_LOCAL_MACHINE\Software\GE Digital\iHistorian\Services
\ServerToServerCollector]
"HistorianNodeName"="cloud|wss://gateway-predix-data-services.run.aws-usw02-pr.ice.predix.io/v1/stream/messages|configServer=none|identityissuer=<identity-issuer>|clientid=<client-id>|clientsecret=<client-secret>|zoneid=<zone-id>|dpattributes={"key1":"value1","key2":"value2","key3":"value3"}|proxy=<proxy>"
"InterfaceName"="historian-s2c-collector-1"
"General3"="historian-archiver"
"OfflineTagConfigurationFile"="/conf/S2S_Offline_Config.xml"

[HKEY_LOCAL_MACHINE\Software\Intellution, Inc.\iHistorian\Services]
"LogFilePath"="."

Predix Time Series Information Fields in HistorianServers.reg file


The following fields are added to the HistorianServers.reg file to provide the Predix Time Series ingestion URL and Predix UAA credentials.

Field | Description | Contact

Cloud Destination Address | The URL of a data streaming endpoint exposed by the Predix Time Series instance to which the data should go. Typically, it starts with "wss://". | Your Predix Time Series administrator can provide this URL.

Identity Issuer | The URL of an authentication endpoint for the collector to authenticate itself and acquire the necessary credentials to stream to the Predix Time Series. Typically, it starts with "https://" and ends with "/oauth/token". | Your Predix Time Series administrator can provide this URL.

Client ID | Identifies the collector when interacting with the Predix Time Series; equivalent to the user name in many authentication schemes. The client must exist in the UAA identified by the Identity Issuer, and the system requires that the timeseries.zones.{ZoneId}.ingest and timeseries.zones.{ZoneId}.query authorities are granted to the client for the Predix Zone ID specified. | Your Predix Time Series administrator can provide this information.

Client Secret | Stores the secret to authenticate the collector; equivalent to the password in many authentication schemes. | Your Predix Time Series administrator can provide this information.

Zone ID | Because the Predix system hosts many instances of the Time Series service, the Zone ID uniquely identifies the one instance to which the collector will stream data. | Your Predix Time Series administrator can provide this information.

Proxy | If the collector is running on a network where proxy servers are used to access web resources outside the network, then proxy server settings must be provided. This field identifies the URL of the proxy server to be used both for the authentication process and for streaming data. However, it does not affect the proxy server used by Windows when establishing secure connections; as a result, you should still properly configure the proxy settings for the Windows user account under which the collector service runs. | Your local IT administrator can provide the proxy server information.



Historian OPCUA DA Collectors

Historian OPCUA DA Collector Overview


The Historian OPCUA DA Collector collects data from any OPCUA-compliant OPCUA DA server. The collector automatically determines the capabilities of the OPCUA server to which it is connected, collects data, and ingests the data into the Historian database.

Installing and Running Historian OPCUA DA Collector

Before You Begin


You must have an x64 Linux machine with the Docker engine installed. The Docker image of Historian
OPCUA DA Collector can be downloaded from Artifactory.

Procedure
1. Create a directory in the Host file system to store the persistent data of the collector.

mkdir -p /data/opcuacollector
mkdir -p /config/opcuacollector
2. Run the following command to run the OPCUA collector docker.

docker run -d -e HISTORIAN_REGFILE=<Registry File Path> -e OPCUA_INI_FILE=<Client Configuration INI File path> <image_name>:<tag_name>

Example:

docker run -d -v /data/opcuacollector:/data/ -v /config/opcuacollector:/config/ -e HISTORIAN_REGFILE=/data/HistorianServers.reg -e OPCUA_INI_FILE=/config/ClientConfig.ini dtr.predix.io/predix-edge/ge-historian-opcua-collector-amd64:ubuntu16.04.v2.2.0

Historian OPCUA DA Collector Container Environment Variables


Note: Environment variables for the Historian OPCUA DA Collector can only be set through the start-environments-opcua file. This file must be present in the /config directory of the Docker container.



Environment Variable | Description

HISTORIAN_REGFILE | Path of the HistorianServers.reg file. This file contains configuration settings such as the hostname/IP address of the destination Historian ("HistorianNodeName"), the OPCUA server URI ("General1"), whether to enable a secure connection ("General2"), and the name of the collector ("InterfaceName", which must be unique).

OPCUA_INI_FILE | Path of the ClientConfig.ini file. It contains the client certificate details and their locations.

Sample Historian OPCUA DA Collector HistorianServers.reg File


[HKEY_LOCAL_MACHINE\Software\GE Digital\iHistorian\Services
\OPCUACollector]
"HistorianNodeName"="historian-archiver"
"InterfaceName"="OPCUAlinux_OPCUACollector"
"DefaultTagPrefix"="US-TestTagsChange1."
"General1"="opc.tcp://10.181.214.185:48010"
"General2"="true"

Sample Historian OPCUA DA Collector ClientConfig.ini File


[UaClientConfig]
ApplicationName=OPCUACollector

; TrustCertificate value (only used in secured connection):
; 0 (No trust)
; 1 (Trust temporarily)
; 2 (Default, trust permanently and copy the server certificate into the client trust list)
TrustCertificate=2
CertificateTrustListLocation=/data/pkiclient/trusted/certs/
CertificateRevocationListLocation=/data/pkiclient/trusted/crl/
IssuersCertificatesLocation=/data/pkiclient/issuers/certs/
IssuersRevocationListLocation=/data/pkiclient/issuers/crl/
ClientCertificate=/data/pkiclient/own/certs/domain.der
ClientPrivateKey=/data/pkiclient/own/private/domain.key
RetryInitialConnect=true
AutomaticReconnect=true

Note:
• Place the HistorianServers.reg file and the ClientConfig.ini file in the /config directory.
• If the user does not provide their own certificate and key pair, the Historian OPCUA DA Collector generates its own certificate.
• If the General2 key is set to false in the HistorianServers.reg file, the collector connects to the OPCUA DA server in unsecured mode (without any certificate exchange).



Historian OPCUA DA Collector Capabilities
The following table outlines the capabilities of the Historian OPCUA DA Collector.

Feature | Capability

Browse Source For Tags | Yes (on OPCUA servers that support browsing)

Browse Source For Tag Attributes | Yes

Polled Collection | Yes

Minimum Poll Interval | 100 ms

Unsolicited Collection | Yes. If you are using a Historian OPCUA DA Collector with unsolicited data collection and have collector compression disabled, all new values should produce an exception. When the Historian OPCUA DA Collector is doing unsolicited collection, the deadband percentage is determined by the collector deadband percent. You can only configure the collector deadband percent by enabling compression.

Timestamp Resolution | 100 ms

Accept Device Timestamps | Yes

Floating Point Data | Yes

Integer Data | Yes

String Data | Yes

Binary Data | Yes

Python Expression Tags | No

Note: You must set the Time Assigned by field to Source if you have unsolicited tags getting data from a Historian OPCUA DA Collector.

Tag Attributes Available on Browse


The following table outlines the tag attributes available when browsing.

Attribute | Capability

Tag name | Yes

Source Address | Yes

Engineering Unit Description | Yes; varies by OPCUA server vendor.

Data Type | Yes. See Selecting Data Types.

Hi Engineering Units | Yes; varies by OPCUA server vendor.

Lo Engineering Units | Yes; varies by OPCUA server vendor.

Hi Scale | Yes

Lo Scale | Yes

Is Array Tag | No

Note: While some of these attributes are queried on a browse, they are not shown in the browse
interface. These attributes are used when adding a tag, but you will not be able to see whether or not all
attributes come from the server.



Selecting Data Types
The following table lists the data types recommended for use with Historian.

OPCUA Data Type | Recommended Data Type in Historian

I2 - 16 bit signed integer | Single Integer

I4 - 32 bit signed integer | Double Integer

R4 - 32 bit float | Single Float

R8 - 64 bit double float | Double Float

UI2 - 16 bit unsigned single integer | Unsigned Single Integer

UI4 - 32 bit unsigned double integer | Unsigned Double Integer

UI8 - 64 bit unsigned quad integer | Unsigned Quad Integer

I8 - 64 bit quad integer | Quad Integer

BSTR | Variable String

BOOL | Boolean

I1 - 8 bit single integer | Byte

Note: The Historian OPCUA Collector requests data from the OPCUA DA server in the native data type.
Then the Historian OPCUA DA collector converts the received value to a Historian Data Type before
sending it to the Historian database.

OPCUA Group Creation


To increase performance, it is recommended that you limit the number of OPCUA groups created by the Historian system. To do this, group the Historian tags collected by the Historian OPCUA DA Collector into as few distinct collection intervals as possible.

Secured OPCUA Collector Connectivity


When secured connectivity is enabled between the OPCUA DA server and the collector, the client certificate must be added to the OPCUA DA server's trusted list of certificates as follows:
1. Start the Historian OPCUA DA Collector in secured mode. The collector will not start, because the server has not yet trusted the collector's client certificate.
2. Locate the OPCUA DA local or remote server installation location on the machine where the OPCUA server is running. Typically, this can be found under "C:\Program Files\<OPCUA Server Name>\" on a Windows host.
3. Locate the folder named "rejected" under the OPCUA DA server installation path. If you are unable to locate it, check the OPCUA DA server manual for assistance.
4. The client certificate is in the "rejected" folder. Copy and paste this certificate into the OPCUA server's trusted list of certificates. The OPCUA DA server manual indicates the folder where the trusted certificates are located.
5. Restart the Historian OPCUA DA Collector.
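
As an illustration of steps 3 and 4, the certificate move on the Windows host running the OPCUA server might look like the following; the pki folder layout here is hypothetical and varies by server vendor, so consult the server manual for the real paths:

copy "C:\Program Files\<OPCUA Server Name>\pki\rejected\OPCUACollector.der" "C:\Program Files\<OPCUA Server Name>\pki\trusted\certs\"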



Security for Historian for Linux container
Ecosystem

Security for Historian for Linux container Ecosystem


This section explains the security mechanism for the Historian for Linux container ecosystem. The main objective is to protect the Historian database, which is the custodian of the data. Security is implemented in two tiers:
• Tier 1- Docker private network.
To understand the security mechanism of this tier, you must first know the network ports on which the various Historian containers are LISTENING.

A Docker private network is a technology that enables a group of Docker containers to communicate with each other over the network. The ports on which the applications in this group are LISTENING are visible only to the member applications of that specific Docker private network. If a Docker container needs to make one of its ports reachable from outside the Docker private network, that port must be explicitly published.
In the diagram above there are four network servers in the Historian container ecosystem:
1. Historian database LISTENS on port 14000
2. Web admin service LISTENS on port 9443
3. Tuner LISTENS on port 9000
4. REST query service LISTENS on port 8989



Here, ports 9443, 9000, and 8989 are exposed outside of the docker private network so that clients of the Web Admin, Tuner, and REST Query services can interact with these applications from outside the machine or docker private network.
Only port 14000 is not exposed. The Historian database's port 14000 (its TCP/IP port) is secured via the docker private network: only members of this specific docker private network, such as the Tuner, Web Admin, REST Query service, and collectors (MQTT, OPCUA), can connect to port 14000 of the Historian database (see the docker network sketch at the end of this section).
• Tier 2 (OAUTH2 mechanism)
◦ As explained above, the ports 9443 (Web Admin), 9000 (Tuner), and 8989 (REST Query service) are not protected by the Tier-1 (Docker private network) security layer.
◦ To protect these ports, the Web Admin, Tuner, and REST Query services leverage the OAUTH2 authentication and authorization mechanism.

In the diagram above, it is clear that the Historian database does not have any OAUTH2 authentication or authorization mechanism of its own. It is through the Web Admin, Tuner, and REST Query service applications that the user interacts with the Historian database.
For example, it is via the REST Query service that REST clients query the Historian database. These applications validate the user's authentication and authorization; only if the user is valid do they connect to the Historian database on the user's behalf.
Hence, the Historian database is the ultimate resource we want to protect (the analogy here is to a vault in a bank), while the Web Admin, Tuner, and REST Query services act as resource owners (the guards of the vault) guarding the Historian database.
To provision the Tier-2 (OAUTH2 mechanism) security layer, users must set up an OAUTH2 server on their premises, or can leverage the Predix UAA service (an OAUTH2 server on Predix Cloud).
Note: The Historian for Linux product does not provide an OAUTH2 server.
The Web Admin, Tuner, and REST Query services offer Docker environment variables through which users can provide OAUTH2 credentials to these docker containers, so that the applications can validate tokens against the specified OAUTH2 server.
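
As referenced in the Tier-1 discussion above, the following is a minimal sketch of how a docker private network keeps port 14000 internal. The network name and image names are placeholders, not the product's actual deployment commands:

# Create a private network for the Historian containers
docker network create historian-net

# The database joins the network; port 14000 is NOT published
docker run -d --network historian-net --name historian-archiver <historian-database-image>

# The REST Query service joins the same network; it can reach
# historian-archiver:14000 internally, while only its own port 8989 is published
docker run -d --network historian-net -p 8989:8989 <rest-query-image>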



Docker environment variables for providing OAUTH2 credentials to the Tuner, Web Admin, and REST Query services:

For Tuner
----------------------
TUNER_SECURE=true
OAUTH2_CLIENT_ID=my-uaa-client
OAUTH2_CLIENT_SECRET=my-uaa-secret
OAUTH2_URL=https://28649aab-0fd3-456c-baea-335d1b907668.predix-uaa.run.aws-usw02-pr.ice.predix.io
https_proxy=http://my-proxy.ge.com:80

For REST Query Service
----------------------
DISABLE_REST_QUERY_SECURITY=false
ZAC_UAA_CLIENTID=my-uaa-client
ZAC_UAA_CLIENT_SECRET=my-uaa-secret
ZAC_UAA_ENDPOINT=https://28649aab-0fd3-456c-baea-335d1b907668.predix-uaa.run.aws-usw02-pr.ice.predix.io
USE_PROXY=true
PROXYURL=http://my-proxy.com:80

For Web Admin
----------------------
HV_UAA_CLIENT_ID=my-uaa-client
HV_UAA_CLIENT_SECRET=my-uaa-secret
HV_UAA_SCHEME_AND_SERVER=https://28649aab-0fd3-456c-baea-335d1b907668.predix-uaa.run.aws-usw02-pr.ice.predix.io
HV_USE_PROXY=true
HV_PROXY_URL=http://my-proxy.com:80



Key differences between Historian for Linux and
Windows Historian

Key differences between Windows and Linux Historian


The following table lists the key differences between Windows and Linux Historian:

Feature | Historian for Linux | Windows Historian

Predix Time Series style REST APIs | Yes | No

Tuner App (for configuring the Historian database) | Yes | No

Array data types | No | Yes

User defined data types (custom structures) | No | Yes

Enumeration data type | No | Yes

Collector redundancy | No | Yes

Alarm & Event Archiver | No | Yes

Diagnostic Manager for detecting faulty collectors & clients | No | Yes

Proficy common licensing | No | Yes

Expose data as per OPCHDA Server standards | No | Yes

Mirroring | No | Yes

Alerts & messages are not verbal | No | Yes

Historian Windows style REST APIs | No | Yes

Collector portfolio of Windows and Linux Historian


Important: All Windows Historian collectors can connect to the Linux Historian database and vice versa. The list below shows which collectors can run on Windows and/or Linux operating systems.

Collectors | Linux Host | Microsoft Windows Host

OPCUA Collector | Yes | Yes

Server-to-Server Collector | Yes | Yes

Server-to-Cloud Collector | Yes | Yes

MQTT broker Collector | Yes | No

Windows Performance Collector | Not applicable | Yes

OPC Collector | No | Yes

OPC Historical Data Access Collector | No | Yes

Calculation Collector | No | Yes

OSI PI Collector | No | Yes

iFIX Collector | No | Yes

CygNet Collector | No | Yes

Wonderware (Schneider Electric) Collector | No | Yes

File Collector | No | Yes

Machine Edition View Data Collector | No | Yes

Clients that cannot run on Linux Host


Important: Clients listed below can connect to Linux Historian database and operate but they cannot run
on Linux Host.
• VB Admin Console
• ihSQL Client
• Excel Add-in
• Trend Client



Historian for Linux Client Libraries

Historian for Linux Libraries

The libraries described below are stored in Artifactory. Use the following information to ensure you can access the files.
For GE Employees
To access the Artifactory downloads via the links below, users with a GE email address must first log in to Artifactory.
Note: If you attempt to download an Artifactory file without first logging into Artifactory, you will be
asked to Sign in, which will not work.
1. Go to Artifactory.
2. Click the Log In button.
3. Click the SAML SSO icon.
4. Use your SSO to log in.
5. You can then return to the documentation link to download the file.
For Predix Users
To access Artifactory downloads from links in the Predix Edge documentation, you must first create an
account on predix.io. Your predix.io account sign in credentials will be used to access Artifactory.
When you click an Artifactory download link, enter your predix.io username (email address) and password
in the Sign In dialog.

Collector Toolkit
The Collector Toolkit is a C++ based library you can use to write custom Historian collectors. You have to
write the source code that handles interactions with the respective sources (for example, OPCUA, Modbus, and
so on). Collector interaction with GE Historian, and features like store and forward and auto-reconnect to
Historian, are automatically handled by this library.
The library is compiled in the following Linux distributions:
• Ubuntu 16.04 x64
You can download the libraries from the following locations:
• Ubuntu 16.04 x64

User API
The User API is a C library used for adding, deleting, and updating tags for a collector. Usually you
manage tags using the Historian Web Admin console, but alternatively, you can use this library to do this
programmatically.
The library is compiled in the following Linux distributions:
• Ubuntu 16.04 x64
You can download the libraries from the following locations:
• Ubuntu 16.04 x64



Related Documentation

Historian
• Historian Collector Toolkit
• Historian User API

Predix Time Series Service


• Predix Time Series Service
• Predix Time Series Service API Documentation

Predix UAA Service


• Predix User Account and Authentication Security Service



Troubleshoot Historian for Linux

General Troubleshooting Tips


The following are general issues you may experience when using Historian for Linux.

Historian for Linux Starts in Demo Mode


Cause
The Historian for Linux container must have a valid license file for all enabled features, including high tag
count support. If a valid license is not provided, Historian switches to demo mode (which supports only 32
tags) when the Docker container starts.
Solution
Use the HS_LICENSE_FILE_PATH environment variable to supply the absolute file path of the valid
license file.
For example, if you mount the host directory /data/edgedata to /data/ of the Historian archiver docker,
you can keep the license file at /data/edgedata/historian-license on the host machine, but you
should set the environment variable as HS_LICENSE_FILE_PATH=/data/historian-
license because, in the Docker container context, the path is /data/historian-license.
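
A minimal sketch of this setup, assuming the host paths above; the image name is a placeholder:

# Host file: /data/edgedata/historian-license
# Seen inside the container as: /data/historian-license
docker run -d -v /data/edgedata:/data/ -e HS_LICENSE_FILE_PATH=/data/historian-license <historian-database-image>:<tag>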

Web Admin Service Fails to Start


Cause
The Web Admin Service is a web application hosted in a Tomcat Server inside a Docker container.
Sometimes the host machine has its own Tomcat Server running and bound to a standard Tomcat port
like 8080 or 8443, so when the Web Admin Docker container is started, a port conflict occurs between the
Tomcat Server running inside the Docker container and the Tomcat Server running directly on the host
machine.
Solution
Stop the Tomcat service of the host machine.
As a protective measure, in the docker-compose.yml file of the Historian Web Admin, port 9443 of the host
machine is mapped to port 8443 of the Historian Web Admin docker, so that the Web Admin docker does not
conflict with port 8443 of any other service on the host machine.
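
The mapping described above corresponds to a ports entry like the following sketch in the Web Admin docker-compose.yml; the service and image names are illustrative:

services:
  historian-webadmin:
    image: <historian-webadmin-image>:<tag>
    ports:
      - "9443:8443"   # host port 9443 -> container (Tomcat) port 8443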

Historian S2S Collector Fails to Connect to Destination & Source Historian


Cause
The Historian Archiver docker is a TCP/IP server listening on port 14000. The S2S collector is a Historian
client that connects to port 14000 of the machine where the Historian Archiver docker is running. Usually,
port 14000 of the Historian Archiver is not published; as a result, Historian client dockers (such as the S2S
Collector) that do not join the docker network of the Historian Archiver cannot reach port 14000.
Solution
Publish port 14000 of the Historian Archiver if you want Historian clients to connect to the Archiver from
outside the Historian Archiver's docker network.
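
For example, the following sketch publishes the port when starting the Historian database container; the image name is a placeholder, and the volume option mirrors the install sections earlier in this guide:

# Publish port 14000 so collectors outside the docker network can connect
docker run -d -p 14000:14000 -v /data/edgedata:/data/ <historian-database-image>:<tag>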



Detailed Logs of Historian Collector, S2S Collector & Historian Archiver
Cause
Historian Archiver, Historian Collector, and S2S Collector logs do not flow to "docker logs", which makes it
hard to analyze the operational logs of these components.
Solution
Historian Archiver logs are available in "<mounted host's data directory with docker container>/archiver/
Logs". Historian Collector logs are available in "<mounted host's data directory with docker container>".
Historian S2S Collector logs are available in "<mounted host's data directory with docker container>". We
are working on bringing the operational logs of these components to docker logs in a future release.
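
For example, with the volume mounts used earlier in this guide, the logs can be followed from the host; the exact log file names vary, so the paths and globs below are illustrative:

# Archiver logs (host side of its /data/ mount)
tail -f /data/edgedata/archiver/Logs/*.log

# S2S collector logs (host side of its /data/ mount)
tail -f /data/historian-s2s-collector-1/*.log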

