Historian for Linux User Guide v2.2.0
Historian Server to Server Collector Overview
Installing and Running Historian S2S Collector
Historian S2S Collector Container Environment Variables
Sample S2S Collector HistorianServers.reg file
Important notes for S2S Collector operations
Streaming data to Predix Time Series
Historian OPCUA DA Collectors
Historian OPCUA DA Collector Overview
Installing and Running Historian OPCUA DA Collector
Historian OPCUA DA Collector Container Environment Variables
Sample Historian OPCUA DA Collector HistorianServer.reg File
Sample Historian OPCUA DA Collector ClientConfig.ini File
Historian OPCUA DA Collector Capabilities
Secured OPCUA Collector Connectivity
Security for Historian for Linux container Ecosystem
Key differences between Historian for Linux and Windows Historian
Historian for Linux Client Libraries
Historian for Linux Libraries
Related Documentation
Troubleshoot Historian for Linux
General Troubleshooting Tips
Copyright GE Digital
© 2019 General Electric Company.
GE, the GE Monogram, and Predix are either registered trademarks or trademarks of General Electric
Company. All other trademarks are the property of their respective owners.
This document may contain Confidential/Proprietary information of General Electric Company and/or its
suppliers or vendors. Distribution or reproduction is prohibited without permission.
THIS DOCUMENT AND ITS CONTENTS ARE PROVIDED "AS IS," WITH NO REPRESENTATION OR
WARRANTIES OF ANY KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
WARRANTIES OF DESIGN, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. ALL OTHER
LIABILITY ARISING FROM RELIANCE UPON ANY INFORMATION CONTAINED HEREIN IS EXPRESSLY
DISCLAIMED.
Access to and use of the software described in this document is conditioned on acceptance of the End
User License Agreement and compliance with its terms.
Licensing
Historian for Linux is presently licensed separately from Predix Edge. This documentation reflects the
technical aspects of using Historian in the Predix Edge context. For information on purchasing Historian
for Linux perpetual licenses, contact your GE Digital Sales representative. We are developing additional
commercial models that provide a limited time-series capability at the edge, but these are not yet ready
for publication.
Procedure
1. Go to Artifactory.
2. Click the Log In button.
3. Click the SAML SSO icon.
4. Use your SSO to log in.
5. You can then return to the documentation link to download the file.
Next Steps
For Predix users:
To access Artifactory downloads from links in the Predix Edge documentation, you must first create an
account on predix.io. Your predix.io account sign-in credentials are used to access Artifactory.
When you click an Artifactory download link, enter your predix.io username (email address) and password
in the Sign In dialog box.
Next Steps
To download and install a Historian for Linux application:
1. Click the application link to download the application to your machine.
2. Upload the file to your Edge Manager repository as a Predix Edge application.
3. Deploy the application to an enrolled Predix Edge device.
Configuring an Application
To configure a Historian for Linux application:
1. Download and extract the sample configuration ZIP for the application.
2. Modify the settings in the sample config file for your environment.
3. Re-zip the file.
4. Upload the new ZIP file to the Predix Edge Manager repository as a Predix Edge configuration.
5. Deploy the configuration to the corresponding application running on your Predix Edge device.
Important: The Historian for Linux product license is deployed as a configuration of the Historian database
application. ZIP your Historian for Linux product license and apply it as a configuration to the Historian
database application to activate the license.
Procedure
1. Go to Artifactory.
2. Click the Log In button.
3. Click the SAML SSO icon.
4. Use your SSO to log in.
5. You can then return to the documentation link to download the file.
Next Steps
For non-GE users:
Coming soon.
Next Steps
To download and install a Historian for Linux application:
1. Click the application link to download the application to your machine.
2. Download all scripts: install.sh, run.sh, stop.sh, clean-data.sh, apply-config.sh, and uninstall.sh.
3. Copy the application bundles you want to install, together with all of the above scripts, into the same
directory on your Linux host.
4. Run install.sh. This script extracts the Docker image from the bundle and loads it on your Linux host.
5. Run run.sh. This script creates the necessary directories on your Linux host, starts the applications as
Docker containers, and mounts the directories created on the Linux host into the Docker containers.
6. Run stop.sh to stop the Docker containers.
7. Run clean-data.sh to clean the data of the application.
8. Run uninstall.sh to remove the Docker image from the Linux host. This script does not clean the data.
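The steps above can be sketched as a shell session. The working directory is an assumption; the script names come from the list above, and the scripts themselves ship with the application bundle:

```shell
# Hypothetical session; ~/historian-install is a placeholder directory
# holding the downloaded bundle and all six scripts.
cd ~/historian-install
./install.sh        # extract and load the Docker image from the bundle
./run.sh            # create host directories and start the containers
./stop.sh           # later: stop the containers
./clean-data.sh     # optional: wipe the application data
./uninstall.sh      # remove the image (data is kept)
```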
Configuring an Application
To configure a Historian for Linux application:
1. Download and extract the sample configuration ZIP for the application.
2. Modify the settings in the sample config file for your environment.
3. Re-zip the file and keep the ZIP file in the same directory as the apply-config.sh script on your Linux
host.
4. Run apply-config.sh. This script unzips and copies the configuration to the specific application
directory, then restarts the application.
Important: The Historian for Linux product license is deployed as a configuration of the Historian database
application. ZIP your Historian for Linux product license and apply it as a configuration to the Historian
database application to activate the license.
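A minimal sketch of the license step above: the license file name is an assumption, and apply-config.sh is the script from the configuration steps, run from the directory holding the ZIP:

```shell
# Hypothetical file names: "historian.lic" stands in for your real
# license file; the ZIP name is arbitrary.
zip historian-license-config.zip historian.lic
# Place the ZIP next to apply-config.sh and run it (see the steps above):
./apply-config.sh
```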
Procedure
1. Set the HS_MODE_OF_OPERATION environment variable value to reload.
Note: Keep all Windows-based IHA and IHC files in the archive path.
2. Rename the IHC file to the host name of the Docker container. If the name of the IHC file from the
Windows machine is WIN-2BLPS4FOACM_Config.ihc and the host name of the Historian
database Docker container is "machine-01," the IHC file should be renamed as machine-01_
Config.ihc.
Note: There is no need to rename IHA files.
3. Start the Historian database Docker container with the HS_MODE_OF_OPERATION environment
variable set to reload.
Note: There is no need to set the HS_MODE_OF_OPERATION environment variable again for future
container restarts.
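The rename described in step 2 can be sketched in shell; the archive directory path is an assumption, and the file name and host name are taken from the example above:

```shell
# Build the new IHC file name from the Docker container's host name.
src="WIN-2BLPS4FOACM_Config.ihc"
container_hostname="machine-01"
dst="${container_hostname}_Config.ihc"
echo "$dst"                              # prints machine-01_Config.ihc
# mv "/archives/$src" "/archives/$dst"   # actual rename; /archives is an assumed path
```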
Procedure
1. On the host machine, create the following directory structure:
mkdir -p ~/edgedata
2. Run the following Docker command:
Example:
You can also refer to How to deploy it where a docker-compose.yml file can be used as guidance to run
this container on Predix Edge.
Important: The Historian database container must have a valid license file for all enabled features,
including high tag count support. If a valid license is not provided, Historian switches to demo mode
(which supports only 32 tags) when the Docker container starts. Use the
HS_LICENSE_FILE_PATH environment variable to supply the absolute file path of the valid license
file.
Important: To ensure that Historian can reload the existing IHC and IHA files when the Historian
Docker container is stopped and restarted, you must not change the hostname of the Historian
container. In the above example, the host name is "machine-01." This is because the Historian database
uses the hostname in the names of its IHC and IHA files (metadata and data files).
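The docker run command referenced in step 2 is not reproduced here; a representative invocation, in which the image name, mount point, and license path are assumptions while the fixed hostname and HS_LICENSE_FILE_PATH usage follow the notes above, might look like:

```shell
# Sketch only: <historian-database-image> is a placeholder, not the
# official image name. The hostname must stay stable across restarts.
docker run -d \
  --hostname machine-01 \
  -v ~/edgedata:/data \
  -e HS_LICENSE_FILE_PATH=/data/historian.lic \
  -p 14000:14000 \
  <historian-database-image>
```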
HS_ARCHIVER_CREATE_TYPE: The way the archive file is created. Default: Days. Valid values:
BySize/Days/Hours.
HS_DEFAULT_CYCLIC_ARCHIVING: If this is set to "true", cyclic archiving of data starts. That is, after
completion of the number of hours given by the value of HS_CYCLIC_ARCHIVE_DURATION_HOURS, data
overwrite starts. Default: false. Valid values: true/false.
HS_CREATE_OFFLINE_ARCHIVE: Enables or disables the writing of past data back to January 1, 1970.
Default: false. Valid values: true/false.
HS_ARCHIVE_ACTIVE_HOURS: The number of hours in past time for which to allow writes. Default: 744.
Valid values: 1 to the number of hours back to January 1, 1970.
HS_FREE_SPACE_REQUIRED_IN_MB: Defines the free space required, in MB, for the archiver to work.
Applicable on start up or at creation of a new datastore. Note: Set this value, in MB, to be at least five
times the integral multiple of the archive size. Default: 500. Valid values: 1 - 999999999.
HS_NUMBER_OF_LOG_FILES: Maximum number of log files. Once this value is exceeded, the oldest file is
deleted to accommodate a new one. Default: 100. Valid values: 1 to 100.
debug: Used for turning debug logs on and off. When set to false, debug logs are turned off. Default:
false. Valid values: true/false.
HS_ALLOW_HELD_VALUE_QUERY: Used for querying the held sample when archive compression is
enabled. Default: false. Valid values: true/false.
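As an illustration, a few of these variables could be set in a docker-compose service definition. The service and image names are placeholders, and the values shown are the defaults listed above, not recommendations:

```yaml
# Fragment only; not a complete compose file.
historian-database:
  image: <historian-database-image>
  environment:
    - HS_ARCHIVER_CREATE_TYPE=Days
    - HS_DEFAULT_CYCLIC_ARCHIVING=false
    - HS_ARCHIVE_ACTIVE_HOURS=744
    - HS_FREE_SPACE_REQUIRED_IN_MB=500
    - HS_NUMBER_OF_LOG_FILES=100
```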
Procedure
To run the REST Query Service in unsecure mode, enter the following command:
For example:
Note: You can also refer to How to deploy it? where a docker-compose.yml file can be used as guidance to
run this container on Predix Edge.
Important: Setting the DISABLE_REST_QUERY_SECURITY environment variable to false means
this service runs in secure mode. In secure mode, the user must pass OAUTH2 credentials.
Used only if DISABLE_REST_QUERY_SECURITY is set to false.
Procedure
1. Create your query.
The header for your query must be in the following format:
Headers:
Authorization: Bearer <token from trusted issuer>
Predix-Zone-Id: <tenant>
Content-Type: application/json
Get values for a tag
  Method/URL: POST http://<Predix Edge OS IP Address>:8989/v1/datapoints
  Payload: { "start": "1h-ago", "tags": [ { "name": "", "order": "desc" } ] }
  Predix-Zone-Id: can be any value; not validated.
Get all tags
  Method/URL: GET http://<Predix Edge OS IP Address>:8989/v1/tags
  Payload: None
  Predix-Zone-Id: can be any value; not validated.
2. Retrieve a token from the OAUTH2 server by using the following REST API:
For example:
Refer to the sample Docker Compose YML file below to learn about the environment variables for
supplying the OAUTH2 URL, credentials, and proxy URL for the Historian REST query service container.
version: '3.0'
services:
historian-rest-query:
image: dtr.predix.io/predix-edge/ge-historian-rest-query-service-
amd64:ubuntu16.04.v1.3.0
environment:
- HISTORIAN_MAX_DATA_QUERY=10000
- HISTORIAN_MAX_TAG_QUERY=5000
- HISTORIAN_HOSTNAME=localhost
- DISABLE_REST_QUERY_SECURITY=false
- ZAC_UAA_CLIENTID=edgeclient
- ZAC_UAA_CLIENT_SECRET=edgesecret
- ZAC_UAA_ENDPOINT=http://10.10.10.10:8080/uaa
- USE_PROXY=false
- PROXYURL=
- USE_ZAC=false
- ZAC_ENDPOINT=historian
- ZAC_SERVICE_NAME=historian
network_mode: "host"
ports:
- "8989:8989"
depends_on:
- historian
historian-web-admin-console-uaa-security:
Note: Set the proxy URL only if there is a firewall between the OAUTH2 server and the Historian Web
Admin/REST query service container.
Procedure
1. To run a username and password based Historian Web Admin Service, enter:
For example:
You can also refer to How to deploy it? where a docker-compose.yml file can be used as guidance to
run this container on Predix Edge.
2. To run Historian Web Admin Service with OAUTH2 server:
For example:
HWA_ADMIN_USERNAME: Value set by the Docker admin while running the Historian Web Admin Service
Docker image. This is used to access the Historian Web Admin Service web pages. Default: test. Valid
values: user name set by the admin.
https://<UAA_zone_id>.predix-uaa.run.aws-usw02-pr.ice.predix.io
Procedure
In a Web browser such as Chrome, IE, or Safari, enter the URL for the Historian Web Admin Service:
https://<ip_address>:9443/historian-visualization/hwa
{
"Historian Node": "10.181.213.175",
"Data Management": {
"Create Datastore": [
{
"Datastore Name": "Turbine-4",
"Properties": {
{
"Historian Node": "10.181.213.175",
"Data Management": {
"Purge": [ { "Datastore Name": "Turbine-4" } ]
}
}
{
"Historian Node": "10.181.213.175",
"Data Management": {
"Purge": [
{
"Datastore Name": "Turbine-10",
"Properties": {
"Archive File Names": [
"Turbine-10_historian-archiver_Archive046.iha",
"Turbine-10_historian-archiver_Archive1543363199.iha"
]
}
}
]
}
}
{
"Historian Node": "10.181.213.175",
"Data Management": {
"Purge": [
{
"Datastore Name": "User",
"Properties": {
"Start Time": 1543417800,
"End Time": 1543418220
}
}
]
}
}
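In the time-range purge above, "Start Time" and "End Time" are Unix epoch seconds (UTC). When composing such a request, GNU date (standard on most Linux hosts) can convert the values for verification:

```shell
# Convert the epoch values from the purge example above to UTC timestamps.
date -u -d @1543417800 +%Y-%m-%dT%H:%M:%SZ   # Start Time: 2018-11-28T15:10:00Z
date -u -d @1543418220 +%Y-%m-%dT%H:%M:%SZ   # End Time:   2018-11-28T15:17:00Z
```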
{
"Historian Node": "10.181.213.175",
"Data Management": {
"Back Up": [
{
"Datastore Name": "User",
"Back Up Path": "/data/backup",
"Properties": {
"Archive File Names": [
"User_historian-archiver_Archive1543449599"
]
}
}
]
}
}
{
"Historian Node": "10.181.213.175",
"Data Management": {
"Back Up": [
{
"Datastore Name": "User",
"Back Up Path":"/data/backup",
"Properties":
{
"Number Of Files":2
}
}
]
}
}
{
"Historian Node": "10.181.213.175",
"Data Management": {
"Back Up": [
{
"Datastore Name": "User",
"Back Up Path":"/data/backup",
"Properties":
{
"Start Time" :1540511999,
"End Time" :1540598399
}
}
]
}
}
Tag Options-Scaling
{
"Historian Node": "10.181.213.175",
"Config": {
"Tag Options": [
{
"Tag Pattern": "US-
TestTagsChange1.Objects.Demo.Dynamic.Scalar.Byte",
"Tag Properties": {
"Scaling": {
"Hi Engineering Units": 100,
"Low Engineering Units": 0,
"Input Scaling": false,
"Hi Scale Value": 0,
"Low Scale Value": 0
}
}
}
]
}
}
Note: You can filter tags based on tag names, collector name, and datastore name. Just replace "Tag
Pattern" with "Collector Name" or "Datastore Name".
What can you do with the operation?
This sets the scaling properties as specified in the config file.
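Before uploading such a config through the Tuner, a quick local syntax check can save a round trip. Here the scaling example above is validated with Python's json.tool; the temp file path is arbitrary:

```shell
# Write the scaling config from the example above to a temp file,
# then check that it parses as JSON.
cat > /tmp/historian-config.json <<'EOF'
{
  "Historian Node": "10.181.213.175",
  "Config": {
    "Tag Options": [
      {
        "Tag Pattern": "US-TestTagsChange1.Objects.Demo.Dynamic.Scalar.Byte",
        "Tag Properties": {
          "Scaling": {
            "Hi Engineering Units": 100,
            "Low Engineering Units": 0,
            "Input Scaling": false,
            "Hi Scale Value": 0,
            "Low Scale Value": 0
          }
        }
      }
    ]
  }
}
EOF
python3 -m json.tool /tmp/historian-config.json > /dev/null && echo "valid JSON"
```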
{
"Historian Node": "10.181.213.175",
"Config": {
"Tag Options": [
{
"Tag Pattern": "US-
TestTagsChange1.Objects.Demo.Dynamic.Scalar.Byte",
"Tag Properties": {
Procedure
The Docker image of Historian Tuner can be downloaded from Artifactory.
Next Steps
Running the Historian Tuner Container
Note: For Historian Tuner, environment variables can be set only through the start-environments-tuner
file. This file must be present in the /config directory of the Docker container.
Upload a configuration file
  Method/URL: POST http://<Predix Edge OS IP Address>:9000/upload
  Payload: -F 'uploadFile=@<Absolute path of JSON FILE>'
  Header: Authorization: Bearer <token from trusted issuer>
For example:
curl -F 'uploadFile=@C:\workstation\historian-config.json' http://10.181.212.287:9000/upload
Procedure
1. Create a directory in the Host file system to store the persistent data of the collector.
mkdir -p /data/historian-s2s-collector-1
mkdir -p /config/historian-s2s-collector-1
The collector performs store and forward, so data is stored in the disk.
2. Run the following docker command:
For example:
The Docker image of Historian S2S Collector can be downloaded from Artifactory. You can also refer to
How to deploy it where a docker-compose.yml file can be used as guidance to run this container on
Predix Edge.
Important: To start multiple instances of the collector, each S2S collector Docker container must have a
unique InterfaceName in its HistorianServers.reg file.
Important: If the destination Historian is running on Predix Edge, you must publish port 14000 of the
Historian database Docker container of the destination Historian. This is because the S2S collector may
be running on some other machine (the S2S collector and the destination Historian database may not
join the same Docker private network). You must also publish port 14000 of the Historian database
Docker container of the source Historian if the S2S collector and the source Historian do not join the
same Docker private network.
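The docker command in step 2 is not reproduced here; a representative invocation, with the image name as a placeholder and the mount points taken from the directories created in step 1, might be:

```shell
# Sketch only: <historian-s2s-collector-image> is a placeholder. The
# HistorianServers.reg file must already be in the /config mount.
docker run -d \
  -v /data/historian-s2s-collector-1:/data \
  -v /config/historian-s2s-collector-1:/config \
  <historian-s2s-collector-image>
```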
HS_SIZE_OF_EACH_LOG_FILE: Maximum size of one log file, in megabytes. If this value is exceeded, a new
log file is created. The default value is 10, and the range is from 1 to 10.
[HKEY_LOCAL_MACHINE\Software\GE Digital\iHistorian\Services
\ServerToServerCollector]
"HistorianNodeName"="ip-of-dest-edge-machine"
"InterfaceName"="historian-s2s-collector-1"
"DefaultTagPrefix"="historian-s2s-collector-1."
"General3"="historian-archiver"
[HKEY_LOCAL_MACHINE\Software\Intellution, Inc.\iHistorian
\Services]
"LogFilePath"="."
General: Description, EGUDescription
Scaling: HiEGU, LoEGU, InputScaling, HiScale, LoScale
Compression: ArchiveCompression, ArchiveDeadband(%)
Procedure
1. Add the following collector interface properties to the top of your configuration XML file.
The following is an example for the Server to Server Collector interface properties:
<Import>
<Collectors>
<Collector Name="<Collector Name>">
<InterfaceType>ServerToServer</InterfaceType>
</Collector>
...
</Collectors>
<TagList Version="1.0.71">
<Tag>
<Tagname>simCollector1</Tagname>
<SourceAddress>Result = CurrentValue("SJC1GEIP05.Simulation00002")</
SourceAddress>
...
</Tag>
<Tag>
<Tagname>simCollector2</Tagname>
<SourceAddress>Result = CurrentValue("SJC1GEIP05.Simulation00002")</
SourceAddress>
...
</Tag>
...
</TagList>
</Import>
3. Add the closing </Import> tag to the end of your XML file.
<Import>
<TagList Version="1.0.71">
<Tag>
<Tagname>HistS2SInt16</Tagname>
<SourceAddress>Result = CurrentValue("US-
TestTagsChange1.Objects.Demo.Dynamic.Scalar.Int16")</SourceAddress>
<DataType>SingleInteger</DataType>
<CollectionType>Unsolicited</CollectionType>
<TimeResolution>Milliseconds</TimeResolution>
<CollectionInterval>1000</CollectionInterval>
<Description>US-
TestTagsChange1.Objects.Demo.Dynamic.Scalar.Int16</Description>
<HiEngineeringUnits>200000.00</HiEngineeringUnits>
<InputScaling>false</InputScaling>
<InterfaceCompression>1</InterfaceCompression>
<InterfaceDeadbandPercentRange>80.00</
InterfaceDeadbandPercentRange>
<InterfaceCompressionTimeout>30000</
InterfaceCompressionTimeout>
<TimeStampType>Source</TimeStampType>
<CalculationDependencies>
<CalculationDependency>US-
TestTagsChange1.Objects.Demo.Dynamic.Scalar.Int16</
CalculationDependency>
</CalculationDependencies>
<SpikeLogic>1</SpikeLogic>
[HKEY_LOCAL_MACHINE\Software\GE Digital\iHistorian\Services]
"DebugMode"=dword:00
[HKEY_LOCAL_MACHINE\Software\GE Digital\iHistorian\Services
\ServerToServerCollector]
"HistorianNodeName"="cloud|wss://gateway-predix-data-services.run.aws-
usw02-pr.ice.predix.io/v1/stream/messages|configServer=none|
identityissuer=<identity-issuer>|clientid=<client-id>|
clientsecret=<client-secret>|zoneid=<zone-id>|
dpattributes={"key1":"value1","key2":"value2","key3":"value3"}|
proxy=<proxy>"
"InterfaceName"="historian-s2s-collector-1"
"General3"="historian-archiver"
"OfflineTagConfigurationFile"="/conf/S2S_Offline_Config.xml"
[HKEY_LOCAL_MACHINE\Software\Intellution, Inc.\iHistorian\Services]
"LogFilePath"="."
Cloud Destination Address: The URL of a data streaming endpoint exposed by the Predix Time Series
instance to which the data should go. Typically, it starts with "wss://". (Your Predix Time Series
administrator can provide this URL.)
Identity Issuer: The URL of an authentication endpoint for the collector to authenticate itself and
acquire the credentials necessary to stream to the Predix Time Series. Typically, it starts with https://
and ends with "/oauth/token". (Your Predix Time Series administrator can provide this URL.)
Client ID: This field identifies the collector when interacting with the Predix Time Series. This is
equivalent to the user name in many authentication schemes. The client must exist in the UAA identified
by the Identity Issuer, and the system requires that the timeseries.zones.{ZoneId}.ingest and
timeseries.zones.{ZoneId}.query authorities are granted to the client for the Predix Zone ID specified.
(Your Predix Time Series administrator can provide this information.)
Client Secret: This field stores the secret used to authenticate the collector. This is equivalent to the
password in many authentication schemes. (Your Predix Time Series administrator can provide this
information.)
Zone ID: Because the Predix system hosts many instances of the Time Series service, the Zone ID
uniquely identifies the one instance to which the collector will stream data. (Your Predix Time Series
administrator can provide this information.)
Proxy: If the collector is running on a network where proxy servers are used to access web resources
outside of the network, then proxy server settings must be provided. This field identifies the URL of the
proxy server to be used for both the authentication process and for streaming data. However, it does
not affect the proxy server used by Windows when establishing secure connections. As a result, you
should still properly configure the proxy settings for the Windows user account under which the
collector service runs. (Your local IT administrator can provide the proxy server information.)
Procedure
1. Create a directory in the Host file system to store the persistent data of the collector.
mkdir -p /data/opcuacollector
mkdir -p /config/opcuacollector
2. Run the following command to run the OPCUA collector docker.
Example:
OPCUA_INI_FILE: Path of the ClientConfig.ini file, which contains client certificate details and their
locations.
Note:
• Place the HistorianServer.reg file and ClientConfig.ini file in the Config directory.
• If the user does not provide their own certificate and key pair, the Historian OPCUA DA Collector
generates its own certificate.
• If the General2 key value is set to false in the HistorianServers.reg file, the collector connects to the
OPCUA DA server in unsecure mode (without any certificate exchange).
Feature: Capability
Browse Source For Tags: Yes (on OPCUA Servers that support browsing)
Note: You must set the Time Assigned by field to Source if you have unsolicited tags getting data from a
Historian OPCUA DA Collector.
Attribute: Capability
Hi Scale: Yes
Lo Scale: Yes
Is Array Tag: No
Note: While some of these attributes are queried on a browse, they are not shown in the browse
interface. These attributes are used when adding a tag, but you will not be able to see whether or not all
attributes come from the server.
BOOL: Boolean
Note: The Historian OPCUA Collector requests data from the OPCUA DA server in the native data type.
Then the Historian OPCUA DA collector converts the received value to a Historian Data Type before
sending it to the Historian database.
A Docker private network is a technology that enables a group of Docker containers to communicate
with each other over a network. The ports on which the applications in this group are listening are
visible only to member applications of that specific Docker private network. If a Docker container needs
to make one of its ports reachable from outside the Docker private network, that port must be
published.
In the diagram above, there are four network servers in the Historian container ecosystem:
1. The Historian database listens on port 14000.
2. The Web Admin service listens on port 9443.
3. The Tuner listens on port 9000.
4. The REST query service listens on port 8989.
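Under this model, a deployment would publish only the three guard-service ports and keep 14000 internal to the private network. A sketch, with container, network, and image names as placeholders:

```shell
# Sketch only: all names are illustrative. Only the guard services
# publish ports to the host; database port 14000 stays inside the
# private network.
docker network create historian-net
docker run -d --name historian-archiver --network historian-net \
  <historian-database-image>                      # 14000 not published
docker run -d --name historian-rest --network historian-net \
  -p 8989:8989 <rest-query-image>
docker run -d --name historian-web-admin --network historian-net \
  -p 9443:9443 <web-admin-image>
docker run -d --name historian-tuner --network historian-net \
  -p 9000:9000 <tuner-image>
```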
As the diagram above shows, the Historian database does not have any OAUTH2 authentication or
authorization mechanism of its own. The user interacts with the Historian database through the
Web Admin, Tuner, and REST Query service applications. For example, it is via the REST Query service
that REST clients query the Historian database. That is, this group of applications validates the user's
authentication and authorization; only if the user is valid do these applications connect to the Historian
database on the user's behalf. Hence, the Historian database is the ultimate resource we want to
protect (the analogy here is to a vault in a bank), while the Web Admin, Tuner, and REST Query service
act as resource owners (the guards of the vault) protecting the Historian database.
To provision the Tier-2 (OAUTH2 mechanism) security layer, users must set up an OAUTH2 server on
their premises, or they can leverage the Predix UAA service (an OAUTH2 server on Predix Cloud).
Note: Historian for Linux product does not provide OAUTH2 Server.
The Web Admin, Tuner, and REST Query service offer Docker environment variables through which
users can provide OAUTH2 credentials to these Docker containers, so that the applications can validate
tokens against the specified OAUTH2 server.
For Tuner
----------------------
TUNER_SECURE=true
OAUTH2_CLIENT_ID=my-uaa-client
OAUTH2_CLIENT_SECRET=my-uaa-secret
OAUTH2_URL=https://28649aab-0fd3-456c-baea-335d1b907668.predix-uaa.run.aws-usw02-pr.ice.predix.io
https_proxy=http://my-proxy.ge.com:80

For Web-admin
----------------------
HV_UAA_CLIENT_ID=my-uaa-client
HV_UAA_CLIENT_SECRET=my-uaa-secret
HV_UAA_SCHEME_AND_SERVER=https://28649aab-0fd3-456c-baea-335d1b907668.predix-uaa.run.aws-usw02-pr.ice.predix.io
HV_USE_PROXY=true
HV_PROXY_URL=http://my-proxy.com:80
Mirroring: No (Historian for Linux); Yes (Windows Historian)
The libraries available below are stored in Artifactory. Use the following information to ensure you can
access the files.
For GE Employees
To access the Artifactory downloads via the links below, those using a GE email address must first log
in to Artifactory.
Note: If you attempt to download an Artifactory file without first logging in to Artifactory, you will be
asked to sign in, which will not work.
1. Go to Artifactory.
2. Click the Log In button.
3. Click the SAML SSO icon.
4. Use your SSO to log in.
5. You can then return to the documentation link to download the file.
For Predix Users
To access Artifactory downloads from links in the Predix Edge documentation, you must first create an
account on predix.io. Your predix.io account sign in credentials will be used to access Artifactory.
When you click an Artifactory download link, enter your predix.io username (email address) and password
in the Sign In dialog.
Collector Toolkit
The Collector Toolkit is a C++ library you can use to write custom Historian collectors. You write the
source code that handles interaction with the respective source (for example, OPCUA, Modbus, and
so on); collector interaction with GE Historian, along with features such as store-and-forward and
auto-reconnect to Historian, is handled automatically by the library.
The library is compiled in the following Linux distributions:
• Ubuntu 16.04 x64
You can download the libraries from the following locations:
• Ubuntu 16.04 x64
User API
The User API is a C library used for adding, deleting, and updating tags for a collector. Usually you
manage tags using the Historian Web Admin console, but alternatively, you can use this library to do this
programmatically.
The library is compiled in the following Linux distributions:
• Ubuntu 16.04 x64
You can download the libraries from the following locations:
• Ubuntu 16.04 x64
Historian
• Historian Collector Toolkit
• Historian User API