Compose Setup and User Guide
Qlik Compose™
December 2024
Last updated: December 17, 2024
Copyright © 1993-2024 QlikTech International AB. All rights reserved.
HELP.QLIK.COM
© 2024 QlikTech International AB. All rights reserved. All company and/or product names may be
trade names, trademarks and/or registered trademarks of the respective owners with which they
are associated.
Contents
1 What's new? 8
1.1 Features and enhancements introduced in Compose December 2024 8
Support for OAuth authentication with Databricks 8
Newly supported database versions 8
Security enhancements 9
1.2 Features and enhancements first introduced in Compose November 2023 Service Release 1 9
Support for bulk generation of data warehouse and workflow tasks 9
Support for basic validations when generating data mart tasks 9
Support for editing data warehouse tasks 9
Other enhancements 10
1.3 Features and enhancements introduced in Compose November 2023 Initial Release 10
Support for choosing the task mode for new data warehouse tasks 10
Support for using a separate schema for data mart tables in Amazon Redshift 10
Azure Synapse Analytics enhancements 11
Snowflake enhancements 11
Other enhancements to Data Warehouse projects 12
Enhancements to Data Lake projects 13
2 Introduction 14
2.1 Data warehouse projects 14
Data warehouse projects architecture 14
Key features 15
2.2 Data lake projects 15
Easy data structuring and transformation 15
Continuous updates 15
Historical data store 15
Data lake project architecture 16
3 Qlik Compose installation and setup 17
3.1 Preparing your system for Compose 17
Hardware prerequisites 17
Software and network prerequisites 18
Required permissions for the Compose service 18
Reserved system names 18
3.2 Installing or upgrading Compose 18
Installation Instructions 19
Upgrade Instructions 19
3.3 Installing and upgrading Compose silently 20
Silently installing Compose 20
Silently upgrading Compose 21
Silently uninstalling Compose 21
3.4 Determining the required number of database connections 22
3.5 Accessing Qlik Compose 22
4 Security considerations 24
4.1 Setting up HTTPS for the Compose console 24
Checking if an SSL certificate is installed 24
Using the self-signed certificate 25
1 What's new?
This section describes the new and enhanced features in Compose December 2024.
In addition to these release notes, customers who are not upgrading from the latest GA
version are advised to review the release notes for all versions released since their
current version.
Customers should also review the Replicate release notes in Qlik Community for information
about the following:
If you have a Compose task that connects to Databricks Compute Platform using OAuth
authentication, and you want to monitor the task in Qlik Enterprise Manager, you will
need a patched version of Qlik Enterprise Manager. Without the patch, all Compose
tasks will be disabled and a JSON deserialization error will be displayed. To obtain the
patch, contact Qlik Support. Alternatively, you can monitor your tasks directly in
Compose.
Security enhancements
The following components that ship with Compose have been updated to address known
vulnerabilities:
Other enhancements
Compose CLI data mart processing enhancement
Support for the --timeout -1 parameter was added to the Compose CLI
mark_reload_datamart_on_next_run command. This parameter overrides the server call's default
timeout in seconds and can be used to prevent timeouts when processing very large data marts.
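For illustration, assuming the command is run through ComposeCli.exe from the Compose bin
directory (as with the other CLI commands in this guide), the parameter is simply appended to the
existing invocation; the bracketed text below is a placeholder for the command's other parameters,
not literal syntax:
Example:
ComposeCli.exe mark_reload_datamart_on_next_run [other parameters] --timeout -1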
UI enhancement
You can now sort columns in the Manage Data Storage Tasks and Monitor Details windows.
Snowflake enhancements
l To align with the updated behavior of Snowflake on AWS auto-increment columns, newly
added auto-increment columns will use the new ORDERED modifier, as needed.
l It is now possible to limit the number of data warehouse task runs checked by the data mart
task. To do this, set the following Compose environment variables:
qlk__MissingSatIDsLatestRuns
qlk__PersistDenormForFctT
qlk__PersistPreselForFctT
l HEAP staging tables support: Two environment variables have been added:
"qlk__FullLoadStagingTablesAsHeap" and "qlk__CDC_StagingTablesAsHeap". Set these variables
to 'true' or '1' to create the staging tables as HEAP tables for Full Load or CDC tasks,
respectively.
l Added the ability to set the statistics threshold for data mart ETL: There are now two
statistics thresholds for Synapse that can be set by the user using the following system
environment variables (see the example after this list for one way to set them):
1. For the data warehouse ETL, use "qlk__UpdateStatisticsPercentageDwh".
This is used for updating the statistics of the Hub and satellite tables.
2. For the data mart ETL, use "qlk__UpdateStatisticsPercentageDma".
This is used for updating the statistics of the fact and dimension tables.
Notes:
l Values should be between 0 and 100. A value less than 0 will be converted to 0; in this
case, the command to update the statistics will be skipped.
l A value exceeding 100 will be converted to 100.
l If a value cannot be interpreted as an integer, or the variable is not present, the
default value (20) will be used.
l The "UpdateStatisticsPercentage" system environment variable is no longer
supported.
l The JDBC and ODBC additional properties will no longer be overridden: On the first
deployment, Compose copies all the connection parameters including JDBC and ODBC
additional properties. On subsequent deployments, the parameters will not be overridden in
the target environment.
l Improved performance: Revised ELT statements to reduce the number of statements and
improve performance running against Synapse including:
l Skipping statements when not needed (based on run-time metadata)
l Combining multiple statements into a single one
l Managing Staging table (create/insert/index) based on runtime metadata
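As a minimal sketch of setting one of the qlk__ system environment variables mentioned above, the
standard Windows setx command can be used with the /M switch from an elevated command prompt.
This assumes the variable is read when the Compose service starts, so restart the Qlik Compose
service afterwards; the value 30 is only an example:
setx qlk__UpdateStatisticsPercentageDwh 30 /M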
Snowflake enhancements
In some environments, using the 'DISTINCT' keyword for Snowflake might cause
performance degradation. If this is the case, you can suppress the 'DISTINCT'
keyword by setting the environment variable "qlk__DisableCteDistinct" to either
'1' or 'true'.
l Reduced Snowflake storage costs by adding support for Transient Tables: In previous
versions, Compose would create TSTG and TTMP objects in Snowflake during ELT
processes, which would increase customers' data storage costs. From this version, Compose
will create Snowflake Transient Tables for temporary data storage during ETL processes,
thereby significantly reducing costs.
l Key pair authentication: Snowflake key pair authentication is now supported.
Key pair authentication is supported in both standard and advanced mode, and
with both JDBC and ODBC.
l Data mart obsolete indication: Optimized implementation of the data mart obsolete
indication.
l Transactional data mart performance: Performance improvements were made to
transactional data marts.
l Optimized the method for updating Type 2 dimensions: Before generating the ETL for
this, you first need to set the system environment variable 'qlk__NewPreselectDim' to either
'1' or 'true'.
l Expressions: Added the option to evaluate NULL when testing an expression.
l Migration performance: Improved performance with Qlik Compose migration operations.
l Data mart export/import: Exporting and importing data marts now includes the "Table
Creation Modifiers" column. This will enable you to customize the fact or dimension table
creation modifiers.
Notes:
l If the column value is empty, the project default will be used.
l The project default value is not included in the export/import.
l Optimization of dropping and creating tables in an empty schema: From this version,
when a schema does not exist, Compose will try to create it (and return an error if it fails).
Additionally, if the new schema is empty, Compose will not try to drop tables from the
previous schema.
l Mappings for target columns not mapped to source: A new option has been added to the
Task Settings: When a data warehouse column is unassigned. The new option enables you
to set unassigned columns to NULL or to use a previous column value.
2 Introduction
Qlik Compose provides an all-in-one, purpose-built automation solution for creating an agile data
warehouse and/or ingesting data from multiple sources to your data lake for further downstream
processing. To this end, Qlik Compose offers two project types: Data Warehouse and Data Lake.
This introduction takes a closer look at how these projects can help your organization overcome
the hurdles typically faced when setting up and maintaining an agile data warehouse, or when
ingesting data from multiple sources into a single analytics-ready storage system.
Qlik Compose data warehouse projects allow you to automate these traditionally manual,
repetitive data warehouse tasks: design, development, testing, deployment, operations, impact
analysis, and change management. Qlik Compose automatically generates the task statements,
data warehouse structures, and documentation your team needs to efficiently execute projects
while tracking data lineage and ensuring integrity. Using Qlik Compose, your IT teams can respond
fast – in days – to new business requests, providing accurate time, cost, and resource estimates.
Then once projects are approved, your IT staff can finally deliver completed data warehouses, data
marts, and BI environments in far less time.
Key features
The comprehensive set of automation features in our Qlik Compose solution simplifies data
warehousing projects. It eliminates the cumbersome and error-prone manual coding required by
the many repetitive steps of legacy data warehouse design and implementation. In addition, our solution
includes the operational features your business needs for ongoing data warehouse and data mart
maintenance.
Continuous updates
Be confident that your ODS and HDS accurately represent your source systems.
l Use change data capture (CDC) to enable real-time analytics with less administrative and
processing overhead.
l Efficiently process initial loading with parallel threading.
l Leverage time-based partitioning with transactional consistency to ensure that only
transactions completed within a specified time are processed.
l New rows are automatically appended to HDS as data updates arrive from source systems.
l New HDS records are automatically time-stamped, enabling the creation of trend analysis
1. Land: The source tables are loaded into the Landing Zone using Qlik Replicate or other third-
party replication tools.
When using Qlik Replicate to move the source table to the Landing Zone, you can define
either a Full Load replication task or a Full Load and Store Changes task to constantly
propagate the source table changes to the Landing Zone in write-optimized format.
2. Store: After the source tables are present in the Landing Zone, Compose auto-generates
metadata based on the data source(s). Once the metadata and the mappings between the
tables in the Landing Zone and the Storage Zone have been finalized, Compose creates and
populates the Storage Zone tables in read-optimized format, ready for consumption by
downstream applications.
It should be noted that even though setting up the initial project involves both manual and
automatic operations, once the project is set up, you can automate the tasks by designing a
Workflow in Compose and/or utilizing the Compose scheduler.
Note that as Qlik Replicate serves as a data (and metadata) provider for Qlik Compose, you also
need to install Replicate in your organization. For a description of the Replicate installation
procedure, refer to the Qlik Replicate Setup and User Guide.
In this section:
Before installing Compose, make sure that the following prerequisites have been met:
Hardware prerequisites
The following lists the required hardware, per component, for Basic, Large, and Extra-Large
deployments:
l Memory: 8 GB (Basic System), 16 GB (Large System), 32 GB (Extra-Large System)
l Network: 1 Gb (Basic System), 10 Gb (Large System), two 10 Gb (Extra-Large System)
On Windows Server 2012 R2, TLS 1.2 should be turned on by default. If it is not,
refer to the Microsoft online help for instructions on how to turn it on.
For information on supported databases and browsers, see Support matrix (page 405).
Thus, a table named qlK__MyTable or a column named QLK__MyColumn would not be permitted.
Installation Instructions
For best performance when using cloud-based databases (such as Snowflake) as your
data source or data warehouse, it is strongly recommended to install Qlik Compose on a
machine (such as Amazon EC2) located in the same region as your database instance.
To install Compose:
As part of the installation, a new Windows Service called Qlik Compose is created.
6. Open the Qlik Compose console as described in Accessing Qlik Compose (page 22).
When you first open the Qlik Compose Console, you will be prompted to register
an appropriate license. Register the license that you received from Qlik.
Upgrade Instructions
Depending on your existing Compose version, you may also need to perform additional
version-specific upgrade tasks. It is therefore strongly recommended to review the
release notes for the new version before upgrading.
Compose runs a check to verify the termination of tasks and processes before
running an upgrade. If any processes are found to be still running, the installation
will be aborted.
Before commencing the installation, make sure that the prerequisites have been met.
See Preparing your system for Compose (page 17).
The following topics describe how to silently install, upgrade, and uninstall Compose:
1. From the directory containing the Compose setup file, run the following command (note that
this will also install Compose):
Qlik_Compose_<version.number>.exe /r /f1<my_response_file>
where:
<my_response_file> is the full path to the generated response file.
Example:
Qlik_Compose_<version.number>.exe /r /f1C:\Compose_install.iss
2. To change the default installation directory, open the response file in a text editor and edit
the first szDir value as necessary.
3. To change the default data directory, edit the third szDir value as necessary.
4. Save the file as <name>.iss, e.g. Compose_install_64.iss.
Syntax:
<Compose_setup_file> /s /f1<my_response_file> [/f2<LOG_FILE>]
Example:
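The command below is an illustrative sketch based on the syntax above; the file names and paths
are placeholders, not required values:
Qlik_Compose_<version.number>.exe /s /f1C:\Compose_install.iss /f2C:\Compose_install.log
If the operation completes successfully, the log file specified with /f2 should contain the
following: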
[ResponseResult]
ResultCode=0
1. Open a command prompt and change the working directory to the directory containing the
Compose setup file.
2. Issue the following command (where <my_response_file> is the path to the response file you
created earlier):
Syntax:
<COMPOSE_KIT> /s /f1<my_response_file> [/f2<LOG_FILE>]
Example:
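For illustration, using placeholder paths:
<COMPOSE_KIT> /s /f1C:\Compose_install.iss /f2C:\Compose_silent.log
If the operation completes successfully, the log file specified with /f2 should contain the
following: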
[ResponseResult]
ResultCode=0
The process is the same as for silently installing Compose. For instructions, see Silently installing
Compose (page 20).
1. For each task, determine the number of connections it can use during runtime. This value
should be specified in the Advanced tab in the Manage Data Warehouse Tasks Settings
window (Data Warehouse projects) or in the Manage Storage Tasks Settings window (Data
Lake projects). When determining the number of required connections, various factors need
to be taken into account including the number of tables, the size of the tables, and the
volume of data. It is therefore recommended to determine the required number of
connections in a Test environment.
2. Calculate the number of connections needed by all tasks that run in parallel. For example, in a
Data Lake project, if three data storage tasks run in parallel, and each task requires 5
connections, then the number of required connections will be 15.
Similarly, in a Data Warehouse project, if a workflow contains two data warehouse tasks that
run in parallel and each task requires 5 connections, then the minimum number of required
connections will be 10. However, if the same workflow also contains two data mart tasks
(that run in parallel) and the sum of their connections is 20, then the minimum number of
required connections will be 20.
3. Factor in the connections required by the Compose Console. To do this, multiply the
maximum number of concurrent Compose users by three and then add to the sum of Step 2
above. So, if the number of required connections is 20 and the number of concurrent
Compose users is 4, then the total would be:
20 + 12 = 32
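As a rule of thumb derived from the steps above:
Total required connections = (connections required by tasks running in parallel, from Step 2) +
(3 x maximum number of concurrent Compose users)
In the example above, this gives 20 + (3 x 4) = 32.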
The person logged in to the computer where you are accessing the Console must be an
authorized Qlik Compose user. For more information, see Managing user permissions
(page 392).
1. To access the Qlik Compose Console from the machine on which it is installed, select All
Programs > Qlik Compose > Qlik Compose Console from the Windows Start menu. To
access the Qlik Compose Console from a remote browser, type the following address in the
address bar of your Web browser:
https://<ComputerName>/qlikcompose/
Where <ComputerName> is the name or IP address of the computer on which Compose is
installed.
2. If no server certificate is installed on the Compose machine, a page stating that the
connection is untrusted will be displayed. This is because when Compose detects that no
server certificate is installed, it installs a self-signed certificate. Since the browser has no
way of knowing whether the certificate is safe, it displays this page. For more information,
see Setting up HTTPS for the Compose console (page 24).
3. When prompted, enter your domain username and password.
4 Security considerations
During normal operation, Qlik Compose needs to access databases and storage systems for the
purpose of reading and writing data and metadata.
This section describes the procedure you should follow to ensure that any data handled by Qlik
Compose will be completely secure.
In this section:
As Compose uses the built-in HTTPS support in Windows, it relies on the proper setup of the
Windows machine it runs on to offer HTTPS access. In most organizations, the IT security group is
responsible for generating and installing the SSL server certificates required to offer HTTPS. It is
strongly recommended that the machine on which Compose is installed already has a valid SSL
server certificate installed and bound to the default HTTPS port (443).
https://<ComputerName>/qlikcompose/
Where <ComputerName> is the name or IP address of the computer on which Compose is installed.
It should be noted that browsers do not consider the certificate to be valid because it was not
signed by a trusted certificate authority (CA). When connecting with a browser to a server that uses
a self-signed certificate, a warning page is shown such as this one in Chrome:
The warning page informs you that the certificate was signed by an unknown certificate authority.
All browsers display a similar page when presented with a self-signed certificate. If you know that
the self-signed certificate is from a trusted organization, then you can instruct the browser to trust
the certificate and allow the connection. Instructions on how to trust the certificate vary between
browsers and even between different versions of the same browser. If necessary, refer to the help
for your specific browser.
Some corporate security policies prohibit the use of self-signed certificates. In such
cases, it is incumbent upon the IT Security department to provide and install the
appropriate SSL server certificate (as is the practice with other Windows products such
as IIS and SharePoint). If a self-signed certificate was installed and needs to be
removed, then the following command can be used:
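The original command is not reproduced here; as an illustrative sketch only, a certificate binding
on the default HTTPS port could be removed with the standard Windows netsh utility as shown below.
Verify the correct procedure for your environment before running it:
netsh http delete sslcert ipport=0.0.0.0:443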
Note that after the self-signed certificate is deleted, connections to the Qlik Compose
machine will not be possible until a valid server certificate is installed. Should you want
to generate a new self-signed certificate (to replace the deleted certificate), simply
restart the Qlik Compose service.
See also Setting up HTTPS for the Compose console (page 24).
Before starting, make sure that the following prerequisites have been met:
l The replacement certificate must be a correctly configured SSL PFX file containing both the
private key and the certificate.
l The common name field in the certificate must match the name browsers will use to access
the machine.
Syntax:
netsh http add sslcert ipport=0.0.0.0:443 certhash=[YOUR_CERTIFICATE_THUMBPRINT_WITHOUT_SPACES] appid={4dc3e181-e14b-4a21-b022-59fc669b0914}
Example:
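For illustration only (the thumbprint below is a placeholder and must be replaced with your
certificate's actual thumbprint, without spaces):
netsh http add sslcert ipport=0.0.0.0:443 certhash=a1b2c3d4e5f60718293a4b5c6d7e8f9012345678 appid={4dc3e181-e14b-4a21-b022-59fc669b0914}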
Syntax:
netsh http add sslcert ipport=[::]:443 certhash=[YOUR_CERTIFICATE_THUMBPRINT_WITHOUT_SPACES] appid={4dc3e181-e14b-4a21-b022-59fc669b0914}
Example:
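For illustration only (placeholder thumbprint, as above):
netsh http add sslcert ipport=[::]:443 certhash=a1b2c3d4e5f60718293a4b5c6d7e8f9012345678 appid={4dc3e181-e14b-4a21-b022-59fc669b0914}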
Under normal circumstances, you should not need to set the hostname. However, on some
systems, connecting using HTTPS redirects to localhost. If this occurs, set the hostname of the
Compose machine by running the command shown below.
Command syntax
ComposeCtl.exe configuration set --address address
Where:
--address is the host name of the Compose server machine.
Example
ComposeCtl.exe configuration set --address MyHostName
Command syntax
ComposeCtl.exe configuration set --https_port port_number
Where:
--https_port is the HTTPS port number of the Compose server machine. The default HTTPS port is
443.
Example
ComposeCtl.exe configuration set --https_port 442
You can force the Compose Web UI and/or the Compose REST API connections to use HSTS (HTTP
Strict Transport Security). To do this, run the commands described below.
All commands should be run as Admin from the product bin folder.
Enabling HSTS
Command syntax
ComposeCtl.exe configuration set --static_http_headers header_list --rest_http_headers header_list
Parameters
Parameter Description
Example
ComposeCtl.exe configuration set --static_http_headers "Strict-Transport-Security:max-
age=31536000; includeSubDomains;" --rest_http_headers "Strict-Transport-Security":"max-
age=31536000; includeSubDomains;"
Disabling HSTS
You can also revert to regular HTTPS connections.
Command syntax
ComposeCtl.exe configuration set --static_http_headers ""|--rest_http_headers ""
Parameters
Parameter Description
--rest_http_headers: Use this parameter to revert the headers required to connect using the API.
Example
Disable static_http_headers
Disable rest_http_headers
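Based on the syntax above, the following illustrative commands clear each header set in turn:
ComposeCtl.exe configuration set --static_http_headers ""
ComposeCtl.exe configuration set --rest_http_headers ""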
Using Kerberos SSO, users can seamlessly log into Compose and administrators can completely
externalize and centrally manage users or group memberships using their existing Kerberos
infrastructure.
If the Kerberos protocol fails, Compose will try to log in using NTLM authentication. If
NTLM authentication is not enabled in the system, an error will be returned.
The master key is encrypted by a user key, which in turn, is derived from a master password
entered by the user. By default, the Master User Password is randomly generated by Compose. The
best practice, however, is to change the Master User Password, as this will allow Compose projects
and configuration settings to be imported to another machine without needing to re-enter the
project credentials.
It may also be convenient to use the same Master User Password within a trusted environment. In
other words, if the same administrators control both the production and the testing environments,
using the same Master User Password in both environments will facilitate the transfer of projects
with credentials between the testing and production environments.
The Master User Password must be a minimum of 32 characters. You can either use your
own password or run the genpassword utility described below to generate a password for
you. Note also that the password can only contain alphanumeric characters (i.e. it
cannot contain special keyboard characters such as # or @).
<product_dir>\bin
If you add the --prompt parameter to the command and omit the --password
parameter, the CLI will prompt you for the password. When you enter the
password, it will be obfuscated. This is especially useful if you do not want
passwords to be retained in the command prompt history.
Syntax:
ComposeCtl.exe masterukey set --prompt
If you add the --prompt parameter to the command and omit the --password and --
current-password parameters, the CLI will prompt you for the required passwords.
When you enter the passwords, they will be obfuscated. This is especially useful if
you do not want passwords to be retained in the command prompt history.
Syntax:
ComposeCtl.exe masterukey set --prompt
In this section:
For information on which endpoints can be used in a Replicate task that lands data for Compose,
see Supported data warehouses (page 406).
Configuring multiple Replicate tasks with the same landing zone is not supported.
The steps below highlight the settings that are required when using Qlik Replicate with Compose.
For a full description of setting up tasks in Qlik Replicate, please refer to the Qlik Replicate Help.
Prerequisites
l When Oracle is defined as the source endpoint in the Replicate task, full supplemental
logging should be defined for all source table columns that exist on the target and any source
columns referenced in filters, data quality rules, lookups, and expressions.
l When using Replicate November 2023 or later and Amazon Redshift as your data warehouse,
you must define a global transformation rule in Replicate that converts BOOLEAN data types
to VARCHAR(1). Otherwise, an error will occur during the data warehouse task. For
information on defining global transformation rules, see Starting the Global Transformation
Rules wizard in the Replicate help.
after image only. Note that this should only be done if the Replicate task is dedicated for use
with Compose.
l As Compose requires a full after-image to be able to perform Change Processing, the
following Replicate source endpoints are not directly supported (as they do not provide a full
after-image):
l SAP HANA (log based)
l Salesforce
1. Open Qlik Replicate and in the New Task dialog, do one of the following:
l To enable Full Load and Change Processing replication, enable the Full Load and
Store Changes options (the Apply Changes option should not be enabled).
l To enable Full Load only replication, enable the Full Load replication option only.
l To enable Change Processing replication only, make sure that only the Store Changes
option is enabled. Note that this option should only be selected if the Full Load tables
and data already exist in the landing zone.
l To enable Change Processing for lookup tables that already exist in the landing zone
and are not part of the Compose model, enable the Apply Changes option only. Note
that such a task should be defined in addition to the Full Load and Store Changes
replication task described above. For more information on updating standalone lookup
tables, see Using lookup tables that do not have a task for CDC mapping (page 213).
2. Open the Manage Endpoint Connections window and define a source and target endpoint.
The target endpoint must be the database where you want Compose to create the data
warehouse.
3. Add the endpoints to the Qlik Replicate task and then select which source tables to replicate.
4. This step is not relevant if you selected the Apply Changes or Full Load replication option
only. In the Task Settings' Store Changes Settings tab, make sure that Store Changes in is set
to Change tables.
5. In the Task Settings’ Target Metadata tab, specify a Target table schema name.
6. If a Primary Key in a source table can be updated, it is recommended to turn on the DELETE
and INSERT when updating a primary key column option in Replicate's task settings'
Change Processing Tuning tab. When this option is turned on, history of the old record will
not be preserved in the new record. Note that this option is supported from Replicate
November 2022 only.
7. Run the task. Wait for the Full Load replication to complete and then continue the workflow in
Compose as described in the Data warehouse project tutorial (page 107) below and in Adding
and managing data warehouse projects (page 36).
Replicate allows you to define global transformations that are applied to source/Change
tables during task runtime. The following global transformations, however, should not be
defined (as they are not compatible with Compose tasks):
In this section:
l Data Warehouse - for ingesting data from multiple sources and creating analytics-ready data
marts.
l Data Lake - for ingesting data from multiple sources and moving it to a storage system for
analytics.
This topic guides you through the steps required to set up a data warehouse project. For
instructions on setting up a Data Lake project, see Adding data lake projects (page 290).
You can set up as many projects as you need, although the ability to actually run tasks is
determined by your Compose license.
The following names are reserved system names and cannot be used as project
names: CON, PRN, AUX, CLOCK$, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7,
COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8 and LPT9.
3. Select Data Warehouse as your project type and then click Finish.
4. The project panels will be displayed.
5. Add at least one source database and a data warehouse as described in Setting up Landing
Zone and Data Source connections (page 133) and Setting up a data warehouse connection
(page 111) respectively.
6. Create a model as described in Creating and managing the model (page 156).
7. Set up the data warehouse as described in Creating and managing the data warehouse
(page 194).
8. Set up the data mart as described in Creating and managing data marts (page 230).
Project management actions are performed in the main Compose window. To switch
from a specific project to the main window, click the downward arrow to the right of the
project name and then select All Projects from the drop-down menu.
See also: Project deployment (page 45) (Data Warehouse projects) and
Project deployment (page 299) (Data Lake projects).
Project settings
You can change the project settings according to your needs.
1. Open your project as described in Managing and monitoring projects (page 292).
2. Click the downward arrow to the right of the project name and select Settings from the drop-
down menu.
The Settings window opens, displaying the following tabs:
l General tab (page 39)
l Naming tab (page 40)
l Environment tab (page 41)
l Table creation modifiers tab (page 43)
General tab
In this tab, the following settings are available:
Miscellaneous
l Generate DDL scripts but do not run them: By default, Compose executes the CREATE,
ADJUST and DROP statements immediately upon user request. When you select this option,
Compose will only generate the scripts but not execute them. This allows you to review and
edit the scripts before they are executed.
For example, if you want your data warehouse/storage tables to contain partitions, you will
need to edit the CREATE statement to create the partitions.
You can view, copy and download the DDL scripts as described in Viewing and downloading
DDL scripts (page 100).
When this option is selected, you need to do the following to see the results:
l After running the scripts, clear the metadata cache as described in Clearing
the data warehouse metadata cache (page 229).
l When this option is selected, you need to press [F5] (i.e. refresh the page)
in order for the web console to display the updated list of tables. This can
be done either before running the scripts (recommended) or after running
the scripts. Note that until you refresh the browser, the information in the
web console will only be partially updated.
l Ignore Mapping Data Type Validation: By default, Compose issues a validation error when a
landing table is mapped to a logical entity with a different data type. You can select this
option to allow the mapping of different data types. Note that you should only select this
option if you need to map landing table data types to compatible (though not identical)
logical entity data types.
l Write metadata to the TDWM tables in the data warehouse:
When this option is selected (the default unless Amazon Redshift is the data warehouse),
Compose writes the metadata for the data warehouse tables to the following tables:
<schema>.TDWM_Tables and <schema>.TDWM_COLUMNS.
Centralizing the metadata in two dedicated tables makes it easier for external metadata tools
to analyze the metadata. The metadata is also written to the local Compose repository, so
clearing this option (if performance issues are encountered) will not affect Compose
functionality in any way.
l Do not display the default workflows in the monitor: Select this option if you want to
prevent the default workflows from being displayed in the monitor.
Dates
l Lowest Date: The value stored in the "From Date" column. This is the date when the version
started.
l Highest Date: The value stored in the "To Date" column. This is the date when the version
ended.
For existing objects, Compose will not be able to determine a source record's timestamp if both of
the following are true:
Naming tab
In this tab, you can change the default "From Date" and "To Date" column names, as well as the
prefixes and suffixes used to identify tables, views, and columns.
If you change the prefix or suffix of existing tables (e.g. data warehouse tables), you
need to drop and create the data warehouse and data mart tables.
l Suffix for Replicate Change Tables: The suffix used to identify Replicate Change Tables in the
landing zone of the data warehouse.
l Prefix for data warehouse tables: The prefix used to identify tables in the Data Warehouse.
l Prefix for data warehouse views: The prefix used to identify views in the Data Warehouse.
l Suffix for archived Replicate Change Tables: The suffix used to identify archived Change Tables
in the specified database.
l Prefix for data mart tables: The prefix used to identify tables in the data mart.
l Suffix for exception mart tables: The suffix used to identify error tables in the data
warehouse. These tables contain data that was rejected by a data quality rule.
l Suffix for hub tables: The suffix used to identify hub tables in the Data Warehouse. Hub tables
contain History Type 1 columns. History Type 1 columns do not contain any version history, as
opposed to History Type 2 columns, which do.
l Suffix for satellite tables: The suffix used to identify satellite tables in the Data Warehouse.
Satellite tables contain History Type 2 columns. History Type 2 columns keep a history of the
data version by adding a new row whenever the data is updated.
l "From Date" column name: The name of the "From Date" column. This column is added to tables
that contain attributes (columns) with a History Type 2. The column is used to delimit the
range of dates for a given record version.
l "To Date" column name: The name of the "To Date" column. This column is added to tables that
contain attributes (columns) with a History Type 2. The column is used to delimit the range of
dates for a given record version.
Environment tab
In this tab, you can:
l Specify information about your environment, part of which will be displayed as a banner at
the top of the window when you open the project.
l Determine the number of database connections to open concurrently.
l Environment type: Select one of the following types according to your environment type:
Development, Test, Acceptance, Production, Other. This information will not be displayed
in the banner.
l Environment title: Specify a title for your environment. The title will be displayed in the
banner at the top of the console.
l Project title: Specify a title for your project. The project title will be shown in the console
banner. If both an Environment Title and a Project Title are defined, the project title will be
displayed to the right of the environment title.
l The Project title option requires Compose August 2021 Patch Release 12 or
later.
l When a project is deployed to a new environment, the environment title and
environment type in the new environment will not be overridden.
The following image shows the banner with both an Environment title and a Project title:
The banner text is shown without the Environment title and Project title labels.
This provides greater flexibility as it allows you add any banner text you like,
regardless of the actual label name. For example, specifying Project
owner: Mike Smith in the Project title field, will display that text in the banner.
The environment properties can be exported and imported to a new project, but cannot
be imported to an existing project.
Task recovery
You can set the SQL state classes and error codes that will cause a task to be retried when they occur.
l Maximum retry count: The number of times to retry a task before exiting with failure.
Increasing the number of retries will impact system resources. Therefore, only increase the
default value if you expect tasks to recover after the default number of retries.
l Interval between retry attempts (sec): The time to wait between retry attempts. Increasing
the interval will consume more system resources. Therefore, only increase the default value
if it is critical that the task recover as soon as possible.
l Retry on these SQL state classes: The default is 08 (connection exceptions). You can add
additional classes as desired. Classes should be separated with a comma.
Example: 08,22,2F
l Retry also on these error codes: The default is 1205 (which occurs when a table is locked
by another process). You can add additional error codes as desired. Error codes should be
separated with a comma.
Example: 1205,2020,233
In the Table creation modifiers tab, you can append table creation modifiers as SQL parts to the
CREATE TABLE statement. You can set table creation modifiers for both data warehouse tables and
for data mart tables. In the data warehouse, separate modifiers can be set for Hub and Satellite
tables while in the data mart, separate modifiers can be set for fact and dimension tables. Once set,
all tables will be created using the specified modifiers, unless overridden at the entity level.
1. Select the Custom option for any of the available table types.
2. Click the Edit button to open the Table Creation Modifier editor.
3. Enter the SQL parts you want to append to the CREATE TABLE statement.
4. Optionally, but strongly recommended, validate the SQL in an external validation tool that
supports your specific database and version.
Compose does not provide any way of validating your SQL. Therefore, make sure
to validate the SQL before deploying in a production environment.
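For illustration only, a table creation modifier for a Snowflake-based data warehouse might append
a clustering clause such as the one below (the column name is hypothetical). Compose appends the
text to the generated CREATE TABLE statement, so always validate the clause against your specific
platform and version:
CLUSTER BY ("OrderDate")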
Limitations
If you change an existing table creation modifier, you will not be prompted to adjust affected tables
when validating the model. If you want to apply the change to existing tables, dropping and
recreating all tables might not be an issue in a development environment. However, in a production
environment (where dropping and recreating all tables might not be a viable option), you will need
to adjust the tables outside of Compose.
For an explanation of how to define table creation modifiers for individual data warehouse tables,
see Defining Table Creation Modifiers (page 183)
For an explanation of how to define table creation modifiers for individual fact tables, see Example
of a Valid Table Creation Modifier (page 247).
For an explanation of how to define table creation modifiers for individual dimension tables, see
Example of a Valid Table Creation Modifier (page 252).
Resetting projects
You can reset projects as required. This can be useful in the project development stage as it allows
you to easily delete unwanted project elements.
To reset a project:
1. Open your project as described in Managing and monitoring projects (page 292).
2. Click the downward arrow to the right of the project name and select Reset Project from the
drop-down menu.
The Reset Project window opens.
3. Select which elements to reset according to your project type.
l Model (Entities, Relationships, Attribute Domains), mappings, and data mart
definitions
For more information on models, see Creating and managing the model (page 156) .
l Reusable transformations
For information on the reusable transformations, see Defining reusable
transformations (page 193).
l Global mappings
For more information on global mapping, see Managing global mappings (page 163).
l Data warehouse and data mart tables
For more information on data warehouses and data marts, see Creating and managing
the data warehouse (page 194) and Creating and managing data marts (page 230)
respectively.
l Command tasks
For more information on command tasks, see Creating and managing
command tasks (page 265).
l DDL Scripts
For more information on DDL scripts, see Project settings (page 38) and Viewing and
downloading DDL scripts (page 100).
l Drop Archive Tables
For more information on Archive Tables, see Defining landing zones (page 142)
4. Click Reset Project and then click Yes when prompted to confirm your request.
Project deployment
Project deployment packages can be used to back up projects or migrate projects between
different environments (e.g. testing to production). As a deployment package is intended to be
deployed in a new environment, it contains the Data Warehouse and data source definitions, but
without any passwords. The deployment package also does not contain any data from the Data
Warehouse or data mart, only the metadata. The deployment package also contains the project
metadata and mapping information, which should be consistent with the landing zone tables in the
new environment.
For a complete list of objects contained in the deployment package, see Exporting a project (page
79).
A ZIP file containing a JSON file (i.e. the project settings) and a readme.txt file will be saved to your
browser's default download location. The ZIP file name is in the following format: <Project_Name>_
deployment_<Date>__<Time>.zip
The readme.txt file contains the following information about the deployment package: project
name, export date, exporter user name, deployment version, and description.
Deploying packages
This section explains how to deploy a project deployment package. You can only deploy packages to an existing project.
Therefore, before deploying a project, create a new project with the user name and password
required for connecting to the Data Warehouse database and the Landing Zone database (if
defined) in the new environment. In addition, the Landing Zone databases in the target project must
have the same display name (defined in the Compose console) as the corresponding databases in
the source project. Note that as database settings are usually environment specific, the database
settings in the target project will not be overwritten by those of the source project.
Deploying a project between different database types is not supported. For example,
you cannot create a package in SQL Server and deploy it to an Oracle database.
When deploying, Compose does not override existing connection parameters. This enables you to
easily migrate projects from test to production, for example, without needing to change user
names, passwords or IP addresses.
If preferred, you can create an empty project and provide the required credentials after
the deployment completes. In this case, an error message prompting you for the missing
credentials will be displayed after the deployment completes.
To deploy a project:
1. Copy the ZIP file created in Creating deployment packages (page 46) to a location that is
accessible from the Compose machine.
2. Open Compose and choose one of the following methods:
l In the main Compose window, select the desired project. Then, click the Deployment
toolbar button and select Deploy from the drop-down menu.
l In the project window, select Deployment > Deploy from the project drop-down
menu.
The Deploy window opens.
3. Either drag and drop the file on the window.
OR
Click Select and browse to the location of the deployment package. In the Open window,
either double-click the deployment package ZIP file or select the file and click OK.
The package details will be displayed.
4. Click Deploy to deploy the package. When prompted to replace the existing project, confirm
the operation. The project will be deployed.
When deploying a project defined with multiple Replicate Servers to any of the following:
Then the Landing Zone settings from the source project will be used, but the missing
databases will be created without a password and Replicate Server. These will need to
be configured manually.
Migrating Compose objects as CSV files provides a level of granularity that is not available when
using the standard project export and deployment options. Instead of importing the entire project,
you can import specific objects and then apply periodic updates as needed.
In this section:
Overview
The ability to export object definitions to a CSV file and then import them to another environment
provides many benefits, enabling:
l Migration of data from a custom database table and/or Excel sheets to Compose
l Data to be reviewed by business analysts who are not able to (or do not want to) access
Compose
l Synching with third-party tools that output data in CSV format
l Comparison of versions in order to review changes
l Resolving of object-specific issues in a development environment and then deploying to the
production environment, even when they are not completely in sync
l Sharing of resources between projects in the same environment
l Granular version management of specific objects such as mappings
Manually typing these definitions into Compose would be a laborious and time-consuming
undertaking; using the CLI however, this can be done in a matter of minutes. Following the initial
import, customers who need to apply selective updates to the target environment (such as adding
attributes with their descriptions), can do so using the Compare and Apply CLI commands.
1. Run the export command to output the source object(s) (to CSV files).
2. Run the import command to bring the objects into the target environment.
3. Following changes to the source environment, run the export command to output the source
object(s) to CSV files.
4. Run the Compare command to see the differences between the exported objects and the
corresponding target objects.
Alternatively, run the Compare command on the source environment to see the differences
between previous and current source project versions; then determine which changes need
to be migrated to the target project. This approach is useful if changes were made directly to
the target project as it allows you to retain the custom changes while still applying changes
to other objects.
5. Review the changes and make any edits as necessary.
6. Run the Apply command to apply the changes to the target environment.
7. Periodically repeat steps 3-6 as necessary.
An understanding of the CSV file structure and their impact on the target environment is crucial, not
only for customers who wish to create these files manually, but also for ensuring the import/apply
operations succeed with the expected results. For this reason, this section first discusses the CSV
file structure of the supported objects and only then provides instructions for performing the actual
CLI operations.
Stored objects
Several objects may contain multi-line values, commas, and other such complexities.
Example:
${x1}*${x2}
Example:
x1:unit price;x2:quantity
Migrating models
Migrating a Compose Model allows you to:
Model objects
A Compose Model consists of the following objects:
l Entities (entities.csv)
l Attributes (attributes.csv)
l Attributes Domain (attributesDomain.csv)
l Relationships (relationships.csv)
During the export process, each of these objects is exported to a separate CSV file.
You can either import the Model in its entirety or only specific elements, according to your needs.
You can also manually create a CSV file containing a Model element (or edit an existing file) and
then import it to a Compose project.
l The Model must be valid before you can export it to or import it from a CSV file.
For details, see Validating the model (page 165).
l CSV files must be in a valid format. For details, see Valid CSV File Formats.
l Only a user with Model privileges can import the Model.
Non-privileged users can import just the mappings. For details, see the SCOPE
parameter in the command for importing a model.
l Replacing an existing object is not supported. For example, if the Products entity
already exists in the Model, you cannot import an entities.csv file that contains an
entity called Products.
For example, the attribute named CustomerDesignatedID will be replaced by the relationship
where:
l ID is the name of the attribute in the parent entity and
l Customer is the prefix of the relationship and
l Designated is the prefix of the parent attribute.
Note that attributes marked as relationships will be skipped when imported from
attributes.csv as they must derive their data type from the Attributes Domain.
If Attribute Domain is missing in the attributes.csv file, there must be a data type.
Attribute domains that differ but have the same name will be appended with the suffix _01.
Column Name | Required | If column is missing | If value is missing | Comments
History Type | No | Key - Type 1, Not key - Type 2 | Reject | Values are Type 1 or Type 2. Yes (Type 2) and No (Type 1) are also allowed.
Expression | No | No expressions in any attribute | No expression in any attribute | -
Example data type values: Varchar(50), Decimal(10,2)
Relationships
Relationships CSV mapping rules
Column Name | Required | If column is missing | If value is missing | Comments
The specific attribute value may be empty as well.
Migrating mappings
Migrating mappings allows you to:
l Export mapping metadata and mappings from a Compose project to CSV files. Mapping
metadata will be exported to mappingsMetadata.csv while mappings will be exported to
mappings.csv. The former shows the table mappings while the latter show the column
mappings.
l Import new mappings that do not exist in the current Compose project.
l Reuse the same mappings across several projects or Compose installations.
l The export of mappings is allowed for users with the Viewer security role.
l The order of writing the mapping metadata is according to metadata name alphabetically
(e.g. Map_Orders appears after Map_Customers).
l The order of writing a mapping is according to target columns (same as Model ordinal).
l Source columns which are not mapped to anything will not appear in the exported file.
l All target columns will appear in the mappings even if they were not mapped.
l If required, you can import one CSV file at a time: mappingsMetadata.csv or mappings.csv.
l Column order has no meaning; only column names (case insensitive).
l When importing metadata, Compose validates that the targets exist in the Model. Source
columns are not validated on import.
l If the source schema, table, view or query don't exist, they will be validated after the import.
l Importing mappings will fail in the following scenarios:
l If the target entity doesn't exist in the Model.
l If the compose database object doesn't exist.
l If the mapping attribute doesn't exist in the Model.
Northwind on MySQL
Schema1.Orders
Lookup Condition Value | No | No lookups | No expression in that attribute | -
x:$Lookup$.a;y:$Landing$.CustomerID
Lookup Result Value | No | No lookups | No expression in that attribute | -
Migrating tasks
You can migrate data warehouse tasks, data mart tasks, and custom ETLs (tasks) from one
environment to another, while preserving custom objects in the target environment. This is
especially useful for customers who wish to incrementally update production environments with
new versions from the test environment.
l <specified export folder>/taskCustomEtl.csv - Lists any enabled custom ETLs defined for
the data warehouse task.
l <specified export folder>/taskSettings.csv - Contains details of the task settings defined
for each of the data warehouse tasks.
l <specified export folder>/tasks.csv - Lists the data warehouse tasks.
l <specified export folder>/taskDataWarehouseTables.csv - Lists the data warehouse tables
and properties.
l <specified export folder>/taskMappings.csv - Lists the mappings used in the task.
l <specified export folder>/SQL/DW_Custom_ETL_<custom ETL name>.SQL - One SQL file
for each custom ETL.
Considerations
Export considerations
l Parameters will be written to CSV files in alphabetical order (as they appear in the web
console).
Import considerations
l Importing tasks or custom ETLs will override any existing objects with the same names.
l Importing logical entities or mappings that do not exist in the target project will result in
failure.
Tasks
Header name | Mandatory | If column is missing | If column exists but value is empty | Comments
Change Tables Only
Task mappings
Header name | Mandatory | If column is missing | If column exists but value is empty | Comments
l For each custom ETL, Compose will export/import an SQL file to:
<specified export folder>/SQL
l The file name will be DW_Custom_ETL_<custom ETL name>.SQL
l If you wish to edit the file name, make sure that it only contains the following
characters: A-Z, 0-9, underscore (_), or space. On import, any other character will
be replaced with an underscore.
l For each custom ETL, Compose will export/import an SQL file to:
<specified export folder>/<data mart name>/SQL
l The file name will be DM_Custom_ETL_<custom ETL name>.SQL
l If you wish to edit the file name, make sure that it only contains the following
characters: A-Z, 0-9, underscore (_), or space. On import, any other character will
be replaced with an underscore.
Example:
datamarts.csv
datamart1\facts.csv
datamart1\FactDimensionsLinks.csv
datamart1\dimensions.csv
datamart1\factattributes.csv
datamart1\dimensionattributes.csv
datamart2\facts.csv
datamart2\factdimensions.csv
datamart2\dimensions.csv
datamart2\factattributes.csv
datamart2\dimensionattributes.csv
When exporting a data mart, the following objects will not be included: View schema,
database name, and schema name. As these objects are environment-specific, they
need to be set up manually after importing the data mart to the target environment
(unless you wish to use the defaults from the data warehouse).
Facts CSV
The facts.csv consists of one row per fact table.
OrderDetail.OrderHeader.ModifiedDate
Root entity | Yes | Reject | Reject | The root entity used. For example, if the fact is a denormalization of order details and orders, it will contain "orders".
Fact As Type 1 | No | Enable the option | Enable the option | Boolean: Accepts the values TRUE or FALSE.
Dimensions CSV
The dimensions.csv consists of one row per dimension.
Must be formatted as an expression.
Root entity | Yes | Reject | Reject | The root entity used. For example, if the fact is a denormalization of order details and orders, it will contain "orders".
Referenced dimension name | Only for referenced dimensions | The data mart does not contain a referenced dimension | Means that it is not a referenced dimension | Name in the referencing data mart.
On export, the order is determined by the attributes order. On import, the order is determined by
the read order.
Header name | Required | If column is missing | If column exists but value is empty | Comments
On export, the order is determined by the attributes order. On import, the order is determined by
the read order.
Expression Params | No | All attribute-parameter mapping is trivial (same name) | All attribute-parameter mapping is trivial (same name) | See Stored objects (page 49).
For information on reusable transformations, see Defining reusable transformations (page 193).
Reusable transformations
The ReusableTransformation.csv file includes a row per reusable transformation parameter and is
described in the table below. Note that some reusable transformations may have no parameters.
General guidelines
It's important to take note of the following:
l On export:
l The order in which the parameters will be written is determined by their order in the
web console
l The order of reusable transformations in the CSV file is alphabetical
l Reusable transformations are considered part of the model, which means users need
the import model permission in order to import them.
l Column order has no meaning; only column names (case insensitive)
l All CSVs are optional
l The CSV files do not contain or rely on internal object IDs, rather, they rely on object
names.
l On import, existing reusable transformations will be overridden by the information from the
CSV file.
To import specific entities, use the CSV mechanism described in Migrating models (page
50).
Exporting objects
Run the following command from the Compose bin directory:
Command syntax
ComposeCli.exe export_csv --project project_name --outfolder folder
Parameters
Parameter Description
--outfolder The name of the target folder for the CSV files.
Example
ComposeCli.exe export_csv --project myproject --outfolder c:\MyCFDWProject
Importing objects
As CSV files do not have versions, Compose cannot know which Compose version the
CSV file being imported originated from. If the default behavior has changed between
versions, the rule is that the default of the new version will always be applied. For
example, in the Compose May 2021 version, the option to update the fact with changes
to Type 2 data warehouse entities is now enabled by default. In previous versions, facts
were not updated with changes to Type 2 data warehouse entities and there was no
option to change this behavior. Therefore, continuing with our example, if you want this
option to be disabled by default, you would need to add the column for that setting (Fact
As Type 1) to the fact.csv file and set the value to "FALSE" before importing.
Before performing any import operations, it is strongly recommended to review the topic(s) that
discuss CSV file structure and the impact of missing columns and/or values on the target
environment. For example, before importing Data Marts, review the Migrating Data Marts topic.
Command syntax
ComposeCli.exe import_csv --project project_name [--infolder folder] [--scope model|mappings|datawarehouse|DatawarehouseTasks|datamarts] [--filetype objecttype] [--infile filename]
Parameters
Parameter Description
--infolder The full path to the folder containing the CSV files. This parameter
is only required if you want to import all files.
--filetype The type of CSV file to import. When this parameter is omitted, all
objects will be imported.
l AttributesDomain
l Entities
l Attributes
l Relationships
l MappingsMetadata
l Mappings
l ReusableTransformation
l ReusableTransformationParams
l DataMarts
l Tasks
l TaskSettings
l TaskMappings
l TaskDataWarehouseTables
l TaskCustomETLs
l CustomETLs
--infile The full path to the file to import when using non-default file
names. This must be specified together with the --filetype
parameter described above.
Examples
Import all CSV files
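The following invocation is illustrative only; the project name and folder path are placeholders. It imports all CSV files found in the specified folder:
ComposeCli.exe import_csv --project myproject --infolder c:\MyCFDWProject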
Comparing objects
Comparing the model to be imported with the existing project model allows you to view and
optionally edit the proposed changes before applying them.
When you run the Compare command, the structure of the CSV output files will be identical to the
export output files, but with the addition of Change Type and Action columns at the end. Note that
as the Apply command only supports ADD operations, the only value in the Change Type column
will be ADD.
If a column or value was deleted in the source environment, the Action column will contain the
value "IGNORE". This tells Compose not to delete the corresponding column/value in the target
environment when the Apply command is run. Although you can delete the "IGNORE" value before
running the Apply command, doing so will have no effect, as the Apply command does not support
DELETE operations.
Compare guidelines:
Command syntax
ComposeCli.exe compare_csv --project project_name --infolder folder --changes_folder folder [--create_files_even_when_no_diff]
Parameters
Parameter Description
Example
ComposeCli.exe compare_csv --project ProjectEmpty --infolder "C:\1" --changes_folder "C:\2" --create_files_even_when_no_diff
Applying objects
Once you are satisfied with the proposed changes, you can then apply them to your project.
Apply guidelines:
l If a column is missing on ADD, then the action will be as described in the topic discussing the
object elements.
l By default, all changed object definitions will be added. To filter out rows or columns, edit the
outputted CSV file as needed. For example, to only apply changes to a specific Data Mart,
delete all of the other Data Marts' rows.
l Any non-standard field name headings should be renamed in the CSV to the Compose
standard.
l Column order is insignificant as the columns will be ordered by name (case insensitive).
l For Boolean fields, accepted values are Yes/No, True/False, 1/0 (case insensitive)
l Data type is Compose's logical type.
l Relationship details override the underlying attributes information (e.g. History Type, Is Key,
etc.).
l If the source schema, table, view or query does not exist, it will be validated after the Apply
operation.
Command syntax
ComposeCli.exe apply_csv --project project_name --changes_folder folder
Parameters
Parameter Description
Example
ComposeCli.exe apply_csv --project ProjectEmpty --changes_folder "C:\1"
Under normal circumstances, use the deployment options described in Project deployment (page
45) to export and import projects. For deployment automation or control by another tool, you can
use the command line interface (CLI) to perform such tasks.
To export or import a project or project configuration, you first need to change the
default Master User Password.
For more information on changing the master user password, see Changing the master
user password (page 30).
See also: Moving projects from the test environment to the production environment
(page 86) and Import/export scenarios - When is a password required? (page 87)
Before running any command, you must run the Connecting to Qlik Compose server (page 78)
command.
To get help when using the command line, you can run the Help command. For example, for help
about exporting a project, issue the following command:
In this section:
Command syntax
ComposeCli.exe connect [--url connection_url]
Where:
--url is the connection URL to the system where the server is running.
Example
ComposeCli.exe connect --url https://mymachine.mydomain/qlikcompose
Exporting a project
You can use the export_project_repository CLI to export a project. The following objects are exported:
l Databases
l Model definitions (entities and attributes)
l Mappings
l Custom ETLs
l Data warehouse tasks
l Data mart definitions
Existing data warehouse tables and generated tasks are not exported. Notifications and
schedules are also not exported as they are considered to be environment-specific.
Command syntax
ComposeCli.exe export_project_repository --project project_name --outfile output_file [--is_without_credentials or --without_environment_specifics] [--password password] [--master_user_password master_user_password]
Parameters
Parameter Description
--outfile The path to and name of the output file. This file is in
JSON format (e.g. C:\file.json).
Example
Export project without a password
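An illustrative invocation; the project name and output path are placeholders. Using --is_without_credentials exports the project without its encrypted fields, so no password needs to be specified:
ComposeCli.exe export_project_repository --project MyProject --outfile C:\file.json --is_without_credentials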
Importing a project
You can use the import_project_repository CLI to import a project. If you import to an existing
project, all of the project settings, except the project configuration items, will be overridden. For
information on the project configuration items, see Exporting the project configuration (page 84).
The following objects are imported:
l Databases
l Model definitions (entities and attributes)
l Mappings
l Custom ETLs
l Data warehouse tasks
l Data mart definitions
Command syntax
ComposeCli.exe import_project_repository --project project_name --infile input_file [--password password] [--is_without_credentials] [--override_configuration] [--dont_backup_existing_project] [--autogen]
Parameters
Parameter Description
--infile The full path to the input file, including the file name.
This file is in JSON format (e.g. C:\file.json).
<product_dir>\data\projects\<project_name>_backup_<timestamp>
Example
ComposeCli.exe import_project_repository --project MyProject --infile file.json --password
MyPassword --override_configuration --autogen
For information about migrating projects, see Moving projects from the test environment to the
production environment (page 86).
Command syntax:
ComposeCli.exe export_project_repository_config --project project_name --outfile output_file
[--is_without_credentials] [--password password] [--master_user_password master_user_password]
Parameters
Parameter Description
--outfile The path to and name of the output file. This file is in JSON format
(e.g. C:\file.json).
--is_without_credentials Use this parameter to specify that you want to export the project
settings without the encrypted fields. When importing to a new
project, you will need to manually enter the project passwords (in
the Compose database connection settings) after the import
completes. In addition to eliminating the need to specify a
password when exporting or importing the project, the is_without_
credentials parameter also allows the project to be used in every
Compose installation, regardless of its master user password. It is
also useful in the event that you would like to keep the existing
passwords in the target environment (e.g. when exporting from a
testing environment to an existing project in the production
environment).
--master_user_password The master user password defined for the source machine. When
used, this parameter must be used together with the password
parameter. Use the master_user_password parameter if you want
to encrypt the credentials in the exported project, but do not want
the source master password to be used in a different environment.
In such a case, when you import the project to an environment that
has a different master password, you will only need to specify the
password qualifier.
Example
Export project configuration without a password
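An illustrative invocation; the project name and output path are placeholders. As with project export, --is_without_credentials omits the encrypted fields so that no password needs to be specified:
ComposeCli.exe export_project_repository_config --project MyProject --outfile C:\config.json --is_without_credentials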
Before you can import the project configuration, you must first run the import_project_
repository command described in Importing a project (page 81).
Command syntax:
ComposeCli.exe import_project_repository_config --project project_name --infile input_file [--password password] [--is_without_credentials]
Parameters
Parameter Description
--infile The full path to the input file, including the file name. This file is in
JSON format (e.g. C:\file.json).
--is_without_credentials Use this parameter to specify that you want to import the project
settings without the encrypted fields. In this case, you will need to
manually enter the project passwords in the Compose database
connection settings.
Example
ComposeCli.exe import_project_repository_config --project MyProject --infile file.json --password MyPassword
The data source and data warehouse display names must be identical in both the testing
and the production environments.
To perform the initial migration from the testing environment to the production environment:
1. Export the project from the test environment as described in Exporting a project (page 79).
2. Import the test project to the production environment as described in Importing a project
(page 81).
3. Edit the connection settings to point to the production data source and data warehouse.
For more information, see Setting up Landing Zone and Data Source connections (page
133) and Setting up a data warehouse connection (page 111) respectively.
4. Configure notifications and scheduling as needed.
For more information, see Scheduling tasks (page 272) and Notifications (page 274)
respectively.
1. Export the project from the test environment as described in Exporting a project (page 79).
2. Import the test project to the production environment as described in Importing a project
(page 81).
In all scenarios, if you import a project to an existing project, the credentials of the
existing projects are preserved (as they are part of the project configuration).
Example 1: Moving a project or project configuration between two Compose machines without
retaining the project credentials.
This is useful when importing to a new project that will have different project credentials.
In such a scenario, simply add the is_without_credentials parameter to either the export or the
import command.
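For instance, a sketch of an import that discards the credentials (the project name and file path are placeholders):
ComposeCli.exe import_project_repository --project MyProject --infile C:\file.json --is_without_credentials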
Example 2: Moving a project or project configuration between two Compose machines that
have the same Master User Password.
In such a scenario, neither the export command nor the import command need to include a
password. If you do not want the source and target projects to have the same credentials (for
database connectivity, etc.), then you also need to specify the is_without_credentials parameter in
either the export or the import command.
Example 3: Moving a project or project configuration between two Compose machines that
have a different Master User Password, but without revealing the Master User Password of
the source machine.
In such a scenario, the export command must include the password and master_user_password
parameters while the import command must include the password parameter. The same password
(specified with the password parameter) must be used for both export and import.
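A sketch of this scenario with placeholder project, file, and password values:
ComposeCli.exe export_project_repository --project MyProject --outfile C:\file.json --password TransferPassword --master_user_password SourceMasterPassword
ComposeCli.exe import_project_repository --project MyProject --infile C:\file.json --password TransferPassword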
Example 4: Moving a project or project configuration between two Compose machines that
have a different Master User Password.
In such a scenario, the export command does not need to include a password, but the import
command should specify the Master User Password of the source machine (using the password
parameter).
Environment variables allow developers to build more portable expressions, custom ETLs, and
Compose configurations, which is especially useful when working with several environments such
as DTAP (Development, Testing, Acceptance and Production). Different environments (for example,
development and production) often have environment-specific settings such as database names,
schema names, and Replicate task names. Variables allow you to easily move projects between
different environments without needing to manually configure the settings for each environment.
This is especially useful if many settings are different between environments. For each project, you
can use the predefined environment variables or create your own environment variables.
Database and schema name variables are supported with the following objects:
l Reusable transformations
l Custom ETLs (Data warehouse and Data marts)
l Mappings lookups
l Mappings and model expressions
l Data mart settings
In this section:
l Exported CSV objects that are associated with predefined environment variables (for
example, a mapping database and schema) cannot contain user-defined environment
variables.
l $$${database.Data Warehouse.userName}
l $$${database.Data Warehouse.encryptedPassword}
l $$${database.Data Warehouse.odbcString}
l $$${database.Data Warehouse.jdbcString}
l $$${database.Data Warehouse.warehouse}
l $$${database.Data Warehouse.database}
l $$${database.Data Warehouse.datawarehouseSchema}
l $$${database.Data Warehouse.datamartSchema}
For an explanation of data warehouse settings, see Setting up a data warehouse connection (page
111)
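For illustration, a custom ETL could reference these predefined variables instead of hard-coded database and schema names. The statement below is a sketch only; the staging table name is hypothetical:
UPDATE "$$${database.Data Warehouse.database}"."$$${database.Data Warehouse.datawarehouseSchema}"."TSTG_Employees"
SET "Title" = 'Manager'
WHERE "EmployeeID" < 150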
As there can be several landing zones, the middle part of the variable name (landing1
below) is the landing zone name and should be replaced with the actual name of your
Landing Zone.
l $$${database.landing1.archiveSchema}
l $$${database.landing1.replicateTask}
l $$${database.landing1.source.userName}
l $$${database.landing1.source.encryptedPassword}
l $$${database.landing1.source.odbcString}
l $$${database.landing1.source.jdbcString}
l $$${database.landing1.source.database}
l $$${database.landing1.source.datawarehouseSchema}
For an explanation of landing zone settings, see Defining landing zones (page 142).
As there can be several data marts, the middle part of the variable name (datamart1
below) is the data mart name and should be replaced with the actual data mart name.
For an explanation of data mart settings, see Modifying data mart settings (page 262).
As there are usually multiple mappings and columns, the middle part of the variable
name (mapping1.column1 below) are the mapping and column names and should be
replaced with the actual mapping and column name.
l $$${mapping.mapping1.column1.lookup.schema}
For an explanation of lookup settings, see Using lookup tables (page 212).
As there are usually multiple mappings, the middle part of the variable name (mapping1
below) is the mapping name and should be replaced with the actual mapping name.
l $$${mapping.mapping1.schema}
For an explanation of mappings settings, see Editing column mappings (page 207).
l Before running any environment_variables commands, run the Connect command to establish
a connection to the Compose Server. For more information on this command, see
Connecting to Qlik Compose server (page 78).
l Variable names should be specified in the CLI without dollar signs or curly brackets. So, for
example, assuming the name of your landing database is MyLanding,
$$${database.MyLanding.database} should be specified as database.MyLanding.database.
l When setting a variable with the Compose CLI, any variable names and values with spaces
(both user-defined and predefined) must be specified in quotation marks. This means, for
example, that all data warehouse variables must be specified with quotation marks as their
names always contain a space (for example, "database.Data Warehouse.serverName").
l When setting a Boolean value, use --boolVal instead of --val. For example, to set
database.Data Warehouse.connectionInputModeStandard to "True", specify:
--var "database.Data Warehouse.connectionInputModeStandard" --boolVal true
1. In the source environment:
a. Create a deployment package or export the project using the Compose CLI.
See also: Project deployment (page 45) and Exporting a project (page 79).
2. In the target environment:
a. Deploy the project if you created a deployment package or import the project if you
exported it using the Compose CLI.
See also: Project deployment (page 45) and Importing a project (page 81).
b. Copy the JSON file created in the source environment to your preferred location.
c. Edit the JSON file and replace the variable values with the values you want to
propagate to the target environment.
See also: Working with environment variables (page 88).
d. Run the following command to propagate the JSON file variables to the Compose user
interface (replacing projectName with the name of your Compose project and
JsonFileLocation with the full path of your edited JSON file):
ComposeCli environment_variables --command setALL --project projectName --jsonFile JsonFileLocation
Example:
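An illustrative invocation; the project name and JSON file path are placeholders:
ComposeCli environment_variables --command setALL --project MyProject --jsonFile "C:\Compose\EnvVariables.json"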
You can also set any variable by running the set command. This is
especially useful if you do not want the JSON file to include passwords. In
this case, you would need to run the following command (shown as an
example):
ComposeCli environment_variables --command set --project MyProject --var encryptedPassword --val g56g56y563%
When you set a password with the CLI, Compose encrypts the password
first and then sets it.
e. Run the following command to apply the predefined environment variables and
complete the process (replacing projectName with the name of your Compose project):
ComposeCli environment_variables --command applyPredefined --project projectName
User-defined variables should have the following format in the Compose user interface:
$$${myVariable}
l Expressions - in model, mappings (column expressions, data quality and filters) or data mart
l Lookup conditions and expressions
l Custom ETLs
UPDATE
"ROSIE"."DWH2"."TSTG_Employees"
SET
"Title" = $$${myVar}
WHERE "EmployeeID" < 150
Method 1:
Set each variable individually by running the following command (replacing
projectName with the name of your Compose project, varName with the variable name,
and value with the variable value):
ComposeCli environment_variables --command set --project projectName --var
varName --val value
Example:
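An illustrative invocation; the project name, variable name, and value are placeholders (values containing spaces must be enclosed in quotation marks):
ComposeCli environment_variables --command set --project MyProject --var myVar --val "Senior Manager"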
Method 2:
Add the user-defined variables directly to the JSON file as described in Manually
editing the JSON file (page 97) below.
c. Generate the task(s) with the user-defined variables.
d. If you set the user-defined variables with the CLI, run the following command to write
the user-defined variables to a JSON file (replacing projectName with the name of your
Compose project and JsonFileLocation with the full path of your JSON file):
ComposeCli environment_variables --command writeCLISet --project projectName --jsonFile JsonFileLocation
e. Create a deployment package or export the project using the Compose CLI.
See also: Project deployment (page 45) and Exporting a project (page 79).
2. In the target environment:
a. Deploy the project if you created a deployment package or import the project if you
exported it using the Compose CLI.
See also: Project deployment (page 45) and Importing a project (page 81).
b. Copy the JSON file created in the source environment to your preferred location.
c. Edit the JSON file and replace the variable values with the values you want to appear
in the target environment.
See also: Working with environment variables (page 88).
d. Run the following command to propagate the JSON file variables to the Compose user
interface (replacing projectName with the name of your Compose project and
JsonFileLocation with the full path of your edited JSON file):
ComposeCli environment_variables --command setALL --project projectName --jsonFile JsonFileLocation
Each time Compose writes to the JSON file, it overwrites the existing content. Therefore,
when working with both user-defined and predefined variables, you need to specify the
path to two different JSON files. For convenience, you can then merge the two files into
a single JSON file while taking care to use the format described in Manually editing the
JSON file (page 97) below.
When working with both user-defined and predefined environment variables, the flow is as follows:
c. In the Compose user interface, add user-defined variables to supported objects (for
example, a Custom ETL in a data warehouse task), and save your settings.
UPDATE
"ROSIE"."DWH2"."TSTG_Employees"
SET
"Title" = $$${myVar}
WHERE "EmployeeID" < 150
Method 1:
Set each variable individually by running the following command (replacing
projectName with the name of your Compose project, varName with the variable name,
and value with the variable value):
ComposeCli environment_variables --command set --project projectName --var
varName --val value
Example:
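An illustrative invocation; the project name, variable name, and value are placeholders:
ComposeCli environment_variables --command set --project MyProject --var myVar --val Manager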
Method 2:
Add the user-defined variables directly to the JSON file as described in Manually
editing the JSON file (page 97) below.
e. Generate the task with the user-defined variable(s).
f. If you set the user-defined variables with the CLI, run the following command to write
the user-defined variables to a JSON file (replacing projectName with the name of your
Compose project and JsonFileLocation with the full path of the JSON file that you want
to contain your user-defined variables):
ComposeCli environment_variables --command writeCLISet --project projectName --jsonFile userDefinedJsonFileLocation
g. Create a deployment package or export the project using the Compose CLI.
See also: Project deployment (page 45) and Exporting a project (page 79).
2. In the target environment:
a. Deploy the project if you created a deployment package or import the project if you
exported it using the Compose CLI.
See also: Project deployment (page 45) and Importing a project (page 81).
b. Copy the JSON file created in the source environment to your preferred location.
c. Edit the JSON file and replace the variable values with the values you want to appear
in the target environment.
See also: Working with environment variables (page 88).
d. Run the following command to propagate the JSON file variables to the Compose user
interface (replacing projectName with the name of your Compose project and
JsonFileLocation with the full path of your edited JSON file):
ComposeCli environment_variables --command setALL --project projectName --jsonFile JsonFileLocation
Removing a predefined variable will reset it to its previous value. To prevent errors when you
remove user-defined variables, make sure to also edit or remove the custom ETL or expression in
which the variable is used.
Example:
The JSON file is split into two sections: “predefinedVariables” and “userDefinedVariables”. When
you edit the JSON file, make sure to put predefined variables in the “predefinedVariables” section
and user-defined variables in the “userDefinedVariables” section. In addition, make sure to use
standard JSON escaping conventions, as shown in the following example:
{
"predefined": {
"database.Data Warehouse.serverName": "myhostname"
},
"userDefined": {
"variable": "value",
"var2": "val2"
}
}
Editing and saving the JSON file does not automatically set and apply the variables. To
do this, run the 'SetAll' and 'ApplyPredefined' commands. If the JSON file also contains
user-defined variables, you will need to generate the associated tasks as well. See
below for details.
1. Run the following command to propagate the JSON file variables to the Compose user
interface (replacing projectName with the name of your Compose project and
JsonFileLocation with the full path of your edited JSON file):
ComposeCli environment_variables --command setALL --project projectName --jsonFile
JsonFileLocation
Example:
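An illustrative invocation; the project name and JSON file path are placeholders:
ComposeCli environment_variables --command setALL --project MyProject --jsonFile "C:\Compose\variables.json"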
Running the setAll command will remove any existing user-defined environment
variables from the user interface, and reset any modified predefined environment
variables that are not included in the JSON file to their previous values.
2. If the JSON file contains predefined environment variables, run the following command
(replacing projectName with the name of your Compose project):
ComposeCli environment_variables --command applyPredefined --project projectName
3. If the JSON file contains user-defined environment variables, generate the associated task(s).
Command syntax
ComposeCli.exe generate_project --project project_name [--database_already_adjusted] [--stopIfDatamartsNeedRecreation]
Parameters
Parameter Description
Example
ComposeCli.exe generate_project --project MyProject --stopIfDatamartsNeedRecreation
Certain limitations apply when adjusting the data mart. For more information, see Auto-adjust
limitations and considerations (page 260).
You can export a project to a zip file for record keeping and sharing offline. The project is exported
as HTML files which can be easily printed to PDF using the print toolbar button in the HTML page.
1. Open the project as described in Managing and monitoring projects (page 292).
2. Click the downward arrow to the right of the project name and select Generate Project
Documentation from the drop-down menu.
A zip file with the name of the project and a timestamp of when the documentation was
generated will be created (e.g. MyComposeProject_documentation_03_22_2016__15_01_10.zip).
Depending on your browser settings, the file will either be automatically downloaded
to your browser’s Downloads folder or you will be prompted to save it.
3. To view the documentation, extract the contents of the zip file and then open the index.html
file.
A browser tab will open displaying the documentation categories in the left pane.
4. Navigate through the documentation using the tree in the left pane and the breadcrumbs
above the documentation.
DDL scripts must be run from the data warehouse database and schema, or from the data
warehouse database and data mart schema, depending on the DDL script and the
platform type (for example, in Oracle there are no schemas, just the database).
For more information on the Create DDL scripts only option, see Project settings (page 38).
1. Open your project as described in Managing and monitoring projects (page 292).
2. Click the downward arrow to the right of the project name and select Show DDL Scripts
from the drop-down menu.
The DDL Script Files window opens.
3. To view a script, select the desired script in the Script Files pane on the left. The script will
be displayed on the right.
4. To download a script, select the desired script in the Script Files pane on the left. Then click
the download button in the top right of the window.
5. To search for an element in the script, start to type in the search box. All strings that match
the search query will be highlighted in blue.
You can navigate between search query matches using the arrows to the right of the search
box. Use the right and left single arrows to navigate matches sequentially. Use the right and
left double arrows to jump to the last and first match respectively.
6. To reset the search, either delete the search query or click the "x" to the right of the search
box.
Project versioning
Compose provides built-in project version control using the Git engine. Version control enables
Compose developers to commit project revisions to both a local and a remote Git repository. If a
mistake is made, Compose developers can easily roll back to earlier versions of the project while
minimizing disruption to all team members.
Revisions only store metadata and mapping information. After you revert to a saved
revision, you will need to recreate the data warehouse and data mart tables.
To prevent conflicts, each Compose project should have its own Git repository.
1. From the project drop-down menu, select Version Control > Settings.
The Version Control Settings - Git window opens.
The Local Commits area shows the local root folder where project revisions are committed.
The first time a project revision is committed, Compose creates a JSON file with the current
project settings. The <project_name>.json file is archived to a ZIP file (<project_name>_deployment.zip),
which is located in a project-specific folder under the source-control folder.
2. To enable commits to a remote Git database, select Enable remote commits and then
provide the following information:
l URL - The address of the remote Git database.
l User name - Your user name for accessing the remote Git database.
l Password - Your Personal Access Token for accessing the remote Git database.
Committing projects
You can commit a project using the console or using the CLI:
1. From the project drop-down menu, select Version Control > Commit.
The Commit - <Project_Name> window opens.
2. Enter a message in the Message box and optionally select the Remote push check box.
Note that the Remote push check box will be disabled if the Enable remote commits option
described above is not selected.
Command syntax
ComposeCli.exe commit --project project_name [--message message] [--remote]
Parameters
Parameter Description
Example
ComposeCli.exe commit --project MyProject --remote
1. From the project drop-down menu, select Version Control > Revisions history.
The Revision History - <Project_Name> window opens.
By default, the last 10 revisions are shown. You can change this number by selecting one of
the available options from the Show drop-down list.
2. Optionally, use the Search box to find a specific revision.
3. Select the desired revision and then click the Deploy to Revision toolbar button.
4. When prompted to confirm the operation, click Yes.
The existing project will be replaced.
5. Click Close to close the Revision History - <Project_Name> window.
1. From the project drop-down menu, select Version Control > Revisions history.
The Revision History - <Project_Name> window opens.
By default, the last 10 revisions are shown. You can change this number by selecting one of
the available options from the Show drop-down list.
2. Optionally, use the Search box to find a specific revision.
3. Select the desired revision and then click the Download Revision as Package toolbar
button.
The package will be saved as a ZIP file in your browser's default download location.
As a prerequisite to creating a diagnostics package, the project must have at least one
database connection configured.
In this section:
High-level flow
Setting up a data warehouse project typically consists of the following stages (simplified):
1. In Qlik Replicate, define a task that replicates the source tables to a landing zone in the data
warehouse.
2. In Compose:
a. Configure access to your data warehouse.
b. Configure access to your data sources.
c. Use the "Discover" option to auto-generate a model from the source tables or import
an existing model that was created in ERwin. You can even create the model manually
if you prefer.
d. Once your model is ready, create the data warehouse tables and populate them with
the source data.
e. Create a data mart from the data warehouse tables.
f. Populate the data mart tables.
Console elements
This section will familiarize you with the elements that comprise the Qlik Compose UI.
To open Qlik Compose, from the Windows Start menu, select All Programs > Qlik Compose > Qlik
Compose Console.
Management view
The Qlik Compose Console opens in Management view.
Designer view
When you add a new project or open an existing project, the console switches to Designer view. If
you are in Monitor view (see below), you can switch back to Designer view by clicking the Designer
tab in the top right of the console.
For more information, see Creating and managing the model (page 156).
l Data Warehouse - Create the data warehouse tables, generate the task statements, and run
data warehouse tasks.
For more information, see Creating and managing the data warehouse (page 194).
l Data Mart - Define data marts, create the data mart tables, generate the task statements,
and run data mart tasks.
For more information, see Creating and managing data marts (page 230).
In Designer view, each of the panels has a bar below the panel name. The bar can be empty, half-
filled or completely filled, according to the current configuration status of the panel properties, as
follows:
Monitor view
To switch to Monitor view, click the Monitor tab in the top right of the console.
In Monitor view, you can view the status of data warehouse and data mart tasks and schedule their
execution, either individually or as a workflow.
For more information, see Controlling and monitoring tasks and workflows (page 267).
l Qlik Compose installed according to the instructions in Qlik Compose installation and setup
(page 17).
l The Northwind.MDF sample database attached to Microsoft SQL Server.
An easy-to-follow set of instructions for downloading and installing Northwind.MDF can be
found at the following website:
http://businessimpactinc.com/install-northwind-database/
l Define an empty database on Microsoft SQL Server (e.g. northwind_dwh) and make a note of
its name. This will serve as the target Data Warehouse for the Northwind.MDF source tables.
l Microsoft SQL Server Native Client 11.0 installed on the Compose machine.
1. Define and run a replication task in Qlik Replicate as described in Defining a Qlik Replicate
task (page 34).
2. Open Qlik Compose.
3. Add a data warehouse project as described in steps 1-3 of Adding data warehouse projects
(page 36).
4. In the Databases panel, perform the following steps to define your data warehouse:
a. Click Manage. The Manage Databases window opens.
b. Click the Add New Database link or the New toolbar button. The New Data
Warehouse window opens.
c. In the New Data Warehouse window:
l In the Name field, specify a display name for your data warehouse.
l From the Type drop-down list, select Microsoft SQL Server.
l In the Server Name field, specify the Microsoft SQL Server name using the
following format:
l To connect to a named Microsoft SQL Server instance: computer_name\db_server_name
l To connect to the default Microsoft SQL Server instance: computer_name
l In the User Name and Password fields, enter your credentials for logging in to
the server specified in the Server Name field.
l In the Database Name field, specify the name of the database specified in the
target endpoint of the Qlik Replicate task.
l In the Data Warehouse Schema field, specify dbo or your preferred schema.
l In the Data Mart Schema field, specify dbo or your preferred schema.
You can specify different schemas for the data warehouse and data
mart tables, but for the purpose of this quick start, we’ll use the same
schema.
except for the Schema, these should be the same as the data warehouse
connection details.
l Click Test Connection to verify that Compose is able to establish a connection
to the specified database and then click OK to save your settings.
l Click OK to save your settings.
5. In the Model panel, perform the following steps to create the model for data warehouse
generation:
a. From the drop-down menu in the top right corner of the Model panel, select Discover.
The Discover window opens.
b. Select the source database (i.e. the database without the "_landing" suffix). This is the
source endpoint in the Qlik Replicate task. The Source Table/View Selection -
<Data_Source_Name>_Landing window opens.
c. In the Source Table/View Selection window:
l Select the Tables option.
l Click the Search button.
l From the Results list, select which tables to discover and then click OK. The
Generating Model from <db_name> window opens.
d. Wait for the model generation to complete and then click Close.
6. In the Data Warehouse panel, perform the following steps to populate the Data Warehouse
with the source data:
a. Click Create. The Creating Data Warehouse window opens. Wait for the Data
Warehouse to be created and then click Close.
b. Click Manage. The Manage Data Warehouse Tasks window opens.
c. Click Generate. The Generating Statements for Task: <Name> window opens. Wait
for the ETL instruction set to be generated and then click Close.
d. Click Run. The Manage Data Warehouse Tasks window switches to Monitor view and
Qlik Compose starts to populate the Data Warehouse with data (this may take a few
seconds).
e. Wait for the Data Warehouse to be populated and then close the Manage Data
Warehouse Tasks window.
7. In the Data Mart panel, perform the following steps to create a data mart with a star schema:
a. Click New. The New Data Mart window opens. Leave the default name.
b. Make sure the Start New Star Schema Wizard check box is selected, and click
OK. The New Star Schema wizard opens. Leave the default name.
c. Select Transactional as the star schema type and then click Next.
d. In the Facts screen, select Order Details. Then click Next.
e. In the Dimensions screen, clear all the check boxes and then select Customers and
Products only, as shown below.
h. Click Create Tables. The Creating Data Mart: <Data Mart Name> window opens.
Wait for the Data Mart tables to be created and then close the window.
i. Click Generate. The Generating Statements for Data Mart: <Data Mart Name>
window opens. Wait for the generation of the task statements to complete and then
close the window.
j. Click Run.
The Manage Data Marts window switches to Monitor view and Qlik Compose
populates the Data Mart with data. Leave the Manage Data Marts window open in
Monitor view for now (The two buttons at the top right of the window allow you to
switch between Designer and Monitor views).
8. To display the data in a pivot table:
a. Click the Pivot toolbar button. The Select Columns for Pivot Table window opens.
b. From the drop-down list at the top of the window, select the Pivot Table columns as
follows:
l In the 1Fct_Order Details table, select Quantity.
l In the 1Dim_Customers table, select Country.
l In the 1Dim_Products table, select ProductName.
c. Click OK. A Pivot Table is created with your selected columns.
d. Drag the Quantity box to the space above the table and the ProductName box to the
space on the left.
e. Select Heatmap from the drop-down list below the Customize Columns button.
Your pivot table should now look like this:
The data warehouse contains the landing zone tables (the target of the Qlik Replicate task), the
logical entities, the actual data warehouse tables and the data mart tables.
For more information on the data warehouse structure in a Qlik Compose project, see
Introduction (page 14).
Note that Qlik Compose will not let you add data sources before you add a data warehouse. This is
because the server connection settings for the source landing zone are derived from the data
warehouse settings.
For all supported data warehouse types, each data warehouse schema (or database if
there are no schemas) should be used exclusively for a single data warehouse. In other
words, using the same schema for different projects, data warehouses and landing
zones is not allowed. Data mart schemas, however, can be shared by different data
marts.
For more information on adding data sources, see Setting up Landing Zone and Data Source
connections (page 133).
For instructions on adding a data warehouse, see the following according to your data warehouse
type.
Although the procedures in this section specifically refer to Microsoft SQL Server, they
are equally applicable to Microsoft Azure SQL Managed Instance and Microsoft Azure
SQL Database.
Prerequisites
Before you can use Microsoft SQL Server as a data warehouse in a Qlik Compose project, make
sure that the following prerequisites have been met:
Client
Microsoft SQL Server Native Client must be installed on the Qlik Compose machine.
Permissions
To use Microsoft SQL Server as a data warehouse in a Qlik Compose project, the Compose user must
be granted the following privileges in the Microsoft SQL Server database:
l The Qlik Compose user must have at least the db_owner user role on the Microsoft SQL Server
database.
l The Qlik Compose user must be granted the CREATE VIEW permission on the Microsoft SQL
Server database.
l A Microsoft SQL Server system administrator must provide this permission for all Qlik
Compose users.
l The Microsoft SQL Server instance is set up to allow Windows log on.
l The Compose user is specified as the "Log on as" user for the Qlik Compose Server service
account.
OR
Microsoft SQL Server is configured to allow login for the Qlik Compose Server service
account.
For information on how to view the data type that is mapped from the source, see the section for
the source database you are using.
Qlik Compose data types | Microsoft SQL Server data types
Bigint | BIGINT
Integer | INT
Date | DATETIME2
Datetime | DATETIME2
GUID | UNIQUEIDENTIFIER
When using Microsoft Azure SQL Database as the data warehouse, the data warehouse
must be located on the same database as the landing zone, although it should use a
different schema.
1. Open your project and click Manage in the bottom left of the Databases panel.
The Manage Databases window opens.
2. Click the New toolbar button or click the Add new database link in the middle of the
window.
The New Data Warehouse dialog box opens.
3. From the Type drop-down list, select the desired data warehouse.
If you choose this option, you also need to make sure that:
l The Microsoft SQL Server instance is set up to allow Windows log on.
l The Qlik Compose for Data Warehouses user is specified as the "Log on as" user for the Qlik
Compose for Data Warehouses service account.
-OR-
l Microsoft SQL Server is configured to allow login for the Qlik Compose for Data Warehouses
service account.
Prerequisites
Before you can use Oracle as a data warehouse in a Qlik Compose project, make sure that the
following prerequisites have been met:
Client
Before you can use Oracle as a data warehouse in a Qlik Compose project, make sure that the following client
prerequisites have been met:
l The Oracle database should be configured with the required permissions (see below) and
accessible from the Compose machine.
l Install Oracle Data Access Components (x64) on the computer where Qlik Compose is
located. Then, add the full path of the Oracle Data Access DLL to the system environment
variables.
The default path should be:
<ORACLE_PRODUCT_CLIENT_DIR>\ODP.NET\bin\4\
l The path to the Oracle Data Access DLL also needs to be specified in both
the machine.conf file and the Global Assembly Cache (GAC). In addition,
make sure that the Oracle.DataAccess.dll file exists in the following
location: C:\Windows\Microsoft.NET\assembly\GAC_64.
For more information, see Oracle Help Center.
l The Qlik Compose service needs to be restarted after installing the
required components.
l Install Oracle Instant Client for Microsoft Windows (x64) 19.0 or later on the computer where
Qlik Compose is located.
l If you want to use an Oracle TNS name in the connection settings, you first need to set the
ORACLE_HOME environment variable.
Example:
<ORACLE_PRODUCT_CLIENT_DIR>\product\<version>\client_1
Permissions
To use Oracle as a data warehouse in a Qlik Compose project, the Compose user must be granted the
following privileges in the Oracle database:
For information on how to view the data type that is mapped from the source, see the section for
the source database you are using.
Qlik Compose data types | Oracle data types
Date | DATE
Datetime | DATE
1. Open your project and click Manage in the bottom left of the Databases panel.
The Manage Databases window opens.
2. Click the New toolbar button or click the Add new database link in the middle of the
window.
The New Data Warehouse dialog box opens.
3. From the Type drop-down list, select the desired data warehouse.
Prerequisites
Before you can use Snowflake as a data warehouse in a Qlik Compose project, make sure that the
following prerequisites have been met:
Client
l Download and install Snowflake ODBC driver for Windows 2.18.1 or later.
l The Qlik Compose machine must be set to the correct time (UTC).
Permissions
The user specified in the Snowflake data warehouse settings must be associated with a role that
grants the following privileges:
Only required for user-specified schemas that do not yet exist on the target.
l Tables:
l CREATE
l SELECT
l INSERT
l UPDATE
l DELETE
l TRUNCATE
l REFERENCES (for current and future tables)
l DROP (for user-initiated Drop and Create operations)
l Views:
l SELECT
l CREATE (for current and future views)
l DROP (for user-initiated Drop and Create operations)
l External functions called by user-defined ETLs:
l USAGE
Limitations
The following limitations apply when using Snowflake as a data warehouse in a Compose project.
l Variant, object and array columns are not supported when creating the model using the
discovery method. Compose will ignore such columns during discovery and issue a warning.
l When discovering from the landing zone, Snowflake converts all numeric data types to NUMBER
(38,0), so discovering from the landing zone is not recommended. For example, discovering a table with INTEGER i
and DOUBLE d columns in the landing zone would return NUMBER (38,0) for both, whereas
discovering these columns in the source would return more accurate data types.
l When ingesting data from a Replicate source that may have BIT fields (such as Microsoft SQL
Server), it is recommended to define a global data type transformation in Replicate to
convert BIT to STRING (1). Otherwise, Compose will convert BIT to VARCHAR (1) (as it does
not support BOOLEAN), which may cause a data type mismatch in the landing zone.
For information on how to view the data type that is mapped from the source, see the section for
the source database you are using.
Qlik Compose data types | Snowflake data types
BIGINT | INTEGER
INTEGER | INTEGER
DOUBLE
DATE | DATE
TIME | TIME
BYTE | BINARY
1. Open your project and click Manage in the bottom left of the Databases panel.
The Manage Databases window opens.
2. Click the New toolbar button or click the Add new database link in the middle of the
window.
The New Data Warehouse dialog box opens.
3. From the Type drop-down list, select the desired data warehouse.
You can connect to Snowflake with ODBC, using a proxy server and
entering the appropriate ODBC environment parameters. For details,
see ODBC Configuration and Connection Parameters - Snowflake
Documentation.
You can connect to Snowflake with JDBC, using a proxy server and
entering the appropriate JDBC connection string:
jdbc:snowflake://<Snowflake server URL>:443/?&user=<snowflake
user name>&warehouse=<Snowflake Warehouse
name>&useProxy=true&proxyHost=<Proxy server
name>&proxyPort=<Proxy server listening port>&proxyUser=<proxy
server user name>&proxyPassword=<proxy server user's password>
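For illustration, a connection string with hypothetical account, user, warehouse, and proxy values might look as follows:
jdbc:snowflake://myaccount.snowflakecomputing.com:443/?&user=compose_user&warehouse=COMPOSE_WH&useProxy=true&proxyHost=proxy.example.com&proxyPort=8080&proxyUser=proxy_user&proxyPassword=proxy_password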
l Database Name: The database in which to create the data warehouse tables.
l Data Warehouse Schema: The schema in which to create the data warehouse
tables.
l Data Mart Schema: The schema in which to create the data mart.
5. Click Test Connection to verify that Compose is able to establish a connection with the
specified data warehouse.
6. Click OK to save your settings.
The database is added to the list on the left side of the Manage Databases window.
Prerequisites
Before you can use Amazon Redshift as a data warehouse in a Qlik Compose project, make sure
that the following prerequisites have been met:
Driver
l Install and configure the latest Amazon Redshift 64-bit ODBC Driver.
l Install the Amazon Redshift JDBC Driver redshift-jdbc42-2.1.0.30 or later on the Compose
machine.
1. You can download the redshift-jdbc42-2.1.0.30 JAR file from:
Amazon Redshift JDBC Driver » 2.1.0.30
2. Copy it to:
<Compose_Installation_Folder>\java\jdbc
The Qlik Compose service needs to be restarted after copying the driver.
Permissions
Qlik Replicate performs the following operations on the replicated tables within Amazon Redshift:
If the user is the 'DB Owner', these permissions are in place by default. Otherwise, the user must be
granted these permissions to achieve successful replication.
Port
Make sure that port 5439 (the Amazon Redshift Cluster port) is open for inbound connections from
Qlik Compose.
For information on how to view the data type that is mapped from the source, see the section for
the source database you are using.
Qlik Compose data types | Amazon Redshift data types
INTEGER | INT4
BIGINT | INT8
DATE | DATE
DATETIME | TIMESTAMP
1. Open your project and click Manage in the bottom left of the Databases panel.
The Manage Databases window opens.
2. Click the New toolbar button or click the Add new database link in the middle of the
window.
The New Data Warehouse dialog box opens.
3. From the Type drop-down list, select the desired data warehouse.
You must include the name of the Amazon Redshift database in the
connection string.
If this value is changed, existing tables will not be affected (i.e. the
change will only take effect if new columns are added to the model
and the data warehouse tables are updated accordingly).
5. Click Test Connection to verify that Compose is able to establish a connection with the
specified data warehouse.
6. Click OK to save your settings.
The database is added to the list on the left side of the Manage Databases window.
Prerequisites
Before you can use Microsoft Azure Synapse Analytics as a data warehouse in a Qlik Compose
project, make sure that the following prerequisites have been met:
Using Microsoft Entra ID (Active Directory) authentication with the JDBC driver
When connecting via the JDBC driver and authenticating using Microsoft Entra ID (formerly known
as Active Directory authentication), the following files need to be added to the [COMPOSE-
INSTALL-DIR]\java\jdbc folder. After adding the files, you need to restart the Qlik Compose service.
l mssql-jdbc-12.8.1.jre11.jar
l msal4j-1.15.1.jar
l jackson-databind-2.13.4.2.jar
l jackson-annotations-2.13.4.jar
l jackson-core-2.13.4.jar
l oauth2-oidc-sdk-11.9.1.jar
l jcip-annotations-1.0-1.jar
l content-type-2.3.jar
l lang-tag-1.7.jar
l nimbus-jose-jwt-9.37.3.jar
l json-smart-2.5.0.jar
l accessors-smart-2.5.0.jar
l asm-9.3.jar
l slf4j-api-1.7.36.jar
Permissions
The user specified in the Microsoft Azure Synapse Analytics connection settings must be granted
the following permissions.
The Compose user must be granted the db_owner user role on the specified target database.
The Compose user must also be granted SELECT access (by adding the user to the master database and
then to the db_readers role, for example).
For information on how to view the data type that is mapped from the source, see the section for
the source database you are using.
Qlik Compose data types | Microsoft Azure Synapse Analytics data types
BIGINT | BIGINT
INTEGER | INTEGER
DATE | DATE
TIME | TIME
BIGINTAUTOINC | BIGINT
When using Microsoft Azure Synapse Analytics as the data warehouse, the data
warehouse database must be the same as the database that you will later define for the
landing zone, although it should use a different schema.
1. Open your project and click Manage in the bottom left of the Databases panel.
The Manage Databases window opens.
2. Click the New toolbar button or click the Add new database link in the middle of the
window.
The New Data Warehouse dialog box opens.
3. From the Type drop-down list, select the desired data warehouse.
Identifier labels
Several statements are tagged with an identifier label for troubleshooting 'problem queries' and
identifying possible ways to optimize database settings. The addition of labels to ELT queries
enables fine-grained workload management and workload isolation via Synapse WORKLOAD
GROUPS and CLASSIFIERS.
Hubs | CMPS_HubIns
Satellites | CMPS_SatIns
Prerequisites
Before you can use Google Cloud BigQuery as a data warehouse in a Qlik Compose project, make
sure the prerequisites described below have been met.
Permissions
When you create your Service Account Key for Google Cloud, make sure to select BigQuery >
BigQuery Data Owner as the Role. Leave the default key type (JSON) unchanged.
As part of the Service Account Key creation process, a JSON file containing the connection
information will be downloaded to your computer. You will need to copy the contents of this file to
the Service account key field in the Data Warehouse settings.
Client Prerequisites
Both the Simba ODBC driver and the Simba JDBC driver need to be installed on the Compose
machine.
To do this:
When installing driver versions later than 1.2.22.1026, after extracting the files to
the jdbc folder, you must delete the gson-<version>.jar file from the folder.
Otherwise, an error will occur.
l The dataset(s) specified in the connection settings must already exist before loading data
into Google Cloud BigQuery.
l When ingesting data from a Replicate source that may have BIT columns (such as Microsoft
SQL Server), it is recommended to define a global data type transformation in Replicate to
convert BIT to STRING (1). Otherwise, Compose will convert BIT to VARCHAR (1) (as it does
not support BOOLEAN), which may cause a data type mismatch in the Landing Zone.
l When discovering from a BigQuery landing database, BOOLEAN and FLOAT columns are not
supported and will be ignored. If you need such columns to be ingested to the data
warehouse, the following workarounds are available:
l Discover from the source database
l Convert these data types (which are not supported in Compose) to another type such
as VARCHAR (1) or INT
l The data warehouse data set and landing data set must be in the same region.
l As strings do not have length in BigQuery, when discovering from the Landing Zone,
Compose will assume a default length of VARCHAR(32767). From a practical perspective,
since these strings will also be created on BigQuery, they will have no runtime length either.
To keep things orderly however, best practice is to change strings of known length to their
actual expected length.
l Commonly used BigQuery functions were added to the Compose Expression Builder.
BigQuery SQL commands that are not listed in the Compose Expression Builder can be
entered manually if required.
l BigQuery does not support altering tables via standard DDL operations. To work around this
limitation, Compose creates a script that copies the data to a new table. After the data is
copied to the new table, make sure to delete the old table.
l Aggregated fact and state oriented data mart are not supported.
l Stored procedures in custom ETLs are not supported.
l Clustering keys are not supported.
For information on how to view the data type that is mapped from the source, see the section for
the source database you are using.
Qlik Compose data types | Google Cloud BigQuery data types
BYTES | BYTES
DATE | DATE
TIME | TIME
STRING
BIGINT | INTEGER
NUMERIC
If not, then:
VARCHAR | STRING
INTEGER | INTEGER
1. Open your project and click Manage in the bottom left of the Databases panel.
The Manage Databases window opens.
2. Click the New toolbar button or click the Add new database link in the middle of the
window.
The New Data Warehouse dialog box opens.
3. From the Type drop-down list, select the desired data warehouse.
Managing databases
You can edit and delete databases as required. The table below describes the available options.
Edit a database | In the left side of the Manage Databases window, select the database that you want to edit and then click the Edit toolbar button.
Delete a database | In the left side of the Manage Databases window, select the database that you want to delete and then click the Delete toolbar button.
For a list of the pros and cons of discovering the source database as opposed to the landing zone,
see Discovering the Source Database or Landing Zone (page 158).
l ID
l BIR_MAPPING_NR - internal mapping identifier used in staging tables for ETL
l ROWNR - internal row identifier used in staging tables for ETL
l RUNNO_INSERT - The task run number for INSERT operations.
l RUNNO_UPDATE - The task run number for UPDATE operations.
l OBSOLETE__INDICATION - Used to mark OBSOLETE records in data mart objects. See also:
The "Obsolete" indicator (page 264)
l TR_ID - The unique Transaction ID for a fact table record.
l BID_OCCS - Internal column used in ETL processing.
l FD - This column is added to tables that contain attributes (columns) with a History Type 2.
The column marks the start of the date range for a given record version. The column name
can be changed in the project settings.
If you change the "From Date" name in the project settings, the new name will
become a reserved word.
l TD - This column is added to tables that contain attributes (columns) with a History Type 2.
The column marks the end of the date range for a given record version. The column name
can be changed in the project settings.
If you change the "To Date" name in the project settings, the new name will
become a reserved word.
l FKNR - Foreign key number column used in logging tables to report missing references
captured via the data warehouse ETL
l _OID
l _VID
Permissions
This section lists the required permissions for the source landing zone and the source database
defined in a Qlik Replicate task.
l Read metadata
l Select from tables
l Create tables (for error marts)
l Insert into tables (for error marts)
For information on how to view the data type that is mapped in the data warehouse, see the section
for the data warehouse database you are using.
Oracle data types Qlik Compose data types
CHAR Varchar
NCHAR(40) Varchar(80)
VARCHAR(2) VARCHAR
NUMBER Decimal
FLOAT Decimal(38,12)
REAL Decimal(38,12)
DATE Date
TIMESTAMP(6) Date
l BLOB
l CLOB
l NCLOB
l BFILE
l BINARY_FLOAT
l BINARY_DOUBLE
l INTERVAL YEAR (2) TO MONTH
l INTERVAL DAY (6) TO SECOND (5)
l RAW
l ROWID
l UROWID
l LONG
For information on how to view the data type that is mapped in the data warehouse, see the section
for the data warehouse database you are using.
Microsoft SQL Server data types Qlik Compose data types
char Varchar
nchar Varchar
bit Integer
tinyint Integer
smallint Integer
INT Integer
BIGINT Bigint
decimal Decimal
numeric Decimal
smallmoney Decimal(11,4)
money Decimal(20,4)
float Decimal(38,12)
real Decimal(18,6)
datetime Date
datetime2 Date
smalldatetime Date
BINARY BYTE
date Date
time Varchar(16)
uniqueidentifier GUID
l BLOB
l CLOB
l NCLOB
l VARCHAR (MAX)
l TEXT
l NVARCHAR (MAX)
l NVARCHAR (LENGTH)
l NTEXT
l VARBINARY
l IMAGE
l DATETIMEOFFSET
l TIMESTAMP
l SQL_VARIANT
l XML
For information on how to view the data type that is mapped in the data warehouse, see the section
for the data warehouse database you are using.
MySQL data types Qlik Compose data types
BIGINT Bigint
binary BYTE
bit bigint
char Varchar
date Date
datetime Date
DECIMAL Decimal
double Decimal(38,12)
FLOAT Decimal(38,12)
int integer
MEDIUMINT integer
MEDIUMTEXT Varchar(16777215)
nchar(36) Varchar(36)
NUMERIC Decimal
REAL Decimal(38,12)
set('a','b','c','d') Varchar(7)
SMALLINT integer
TEXT Varchar(65535)
time Date
timestamp Date
TINYINT integer
TINYTEXT Varchar(255)
year integer
l GEOMETRY
l GEOMETRYCOLLECTION
l JSON
l linestring
l LONGblob
l LONGTEXT
l mediumblob
l MULTILINESTRING
l MULTIPOINT
l MULTIPOLYGON
l point
l polygon
l tinyblob
l BIT(64)
l BLOB()
l BIGBLOB
l MEDIUMBLOB
l TINYBLOB
l BLOB
l varbinary (20)
For information on how to view the data type that is mapped in the data warehouse, see the section
for the data warehouse database you are using.
Snowflake data types Qlik Compose data types
NUMBER DECIMAL
FLOAT DECIMAL
VARCHAR VARCHAR
BINARY BYTE
DATE DATE
TIME TIME
TIMESTAMP_NTZ DATETIME(9)
TIMESTAMP_LTZ DATETIME(9)
TIMESTAMP_TZ DATETIME(9)
OBJECT N/A
ARRAY N/A
For information on how to view the data type that is mapped in the data warehouse, see the section
for the data warehouse database you are using.
SMALLINT INTEGER
INTEGER INTEGER
BIGINT BIGINT
DECIMAL DECIMAL
BOOLEAN INTEGER
CHAR VARCHAR
DATE DATE
TIMESTAMP DATETIME
For information on how to view the data type that is mapped in the data warehouse, see the section
for the data warehouse database you are using.
IBM DB2 for LUW data types Qlik Compose data types
DATE DATE
TYPE_TIMESTAMP DATE
TIMESTAMP DATE
TYPE_TIME DATE
TYPE_DATE DATE
DECIMAL DECIMAL
SMALLINT INTEGER
INTEGER INTEGER
BIGINT BIGINT
WVARCHAR VARCHAR
BINARY BYTE
For information on how to view the data type that is mapped to the data warehouse, see the section
for the data warehouse you are using.
Microsoft Azure Synapse Analytics data types Qlik Compose data types
DATE DATE
DATETIME DATETIME
DATETIME2 DATETIME
SMALLDATETIME DATETIME
TIME TIME
CHAR VARCHAR
NCHAR VARCHAR
DECIMAL DECIMAL
BIT INTEGER
TINYINT INTEGER
SMALLINT INTEGER
INT INTEGER
BIGINT BIGINT
VARBINARY BYTE
BINARY BYTE
For information on how to view the data type that is mapped in the data warehouse, see the section
for the data warehouse database you are using.
Google Cloud BigQuery data types Qlik Compose data types
DATE Date
TIME Time
TIMESTAMP Datetime
INT64 BigInt
NUMERIC(p,s) Decimal(p,s)
FLOAT64 Decimal(38,12)
STRING Varchar
Before you can define a landing zone in Qlik Compose, you first need to define a data warehouse.
For more information on adding data warehouses, see Setting up a data warehouse connection
(page 111).
Content Type - Choose whether the content in the landing zone is Full Load, Change
Processing, or Full Load and Change Processing (according to the Qlik Replicate task definition).
See also After applying changes below.
Designated By - Select whether the landing zone is a Database or a Schema. This should
reflect the target endpoint settings in the Qlik Replicate task.
When Oracle is the Data Warehouse, this field is read-only since the Oracle landing zone is
always designated by Schema.
For more information, see Defining a Qlik Replicate task (page 34).
Database Name - This field is not applicable when Oracle is the Data Warehouse.
If the landing zone is designated by a Database, specify the database name. This must be the
same as the target database defined in the Qlik Replicate task.
When Microsoft Azure Synapse Analytics is the data warehouse, the landing zone database
must be the same as the database defined for the data warehouse, although it should use a
different schema.
For more information, see Defining a Qlik Replicate task (page 34).
Schema Name - If a schema name was specified in the Qlik Replicate task settings, specify the
same schema name here.
When Oracle is the Data Warehouse, this must be the same as the schema defined in the
Oracle target connection string in the Qlik Replicate task.
For more information, see Defining a Qlik Replicate task (page 34).
Error Mart Schema Name - Specify the schema where you want the data mart exception tables
to be created. Data that is rejected by data quality rules will be copied to tables in the specified
schema.
For more information on error marts, see Defining and managing data quality rules (page 217).
After applying changes - Replicate creates Change Tables in the landing zone in which
subsequent changes to the original Full Load data are stored. If you selected Change
Processing or Full Load and Change Processing as the Content Type, you can determine
what to do with the Change Tables after the changes have been applied to the data warehouse
tables.
Choose one of the following:
l Delete from Change Tables - Deletes the changes from the Change Tables
l Keep in Change Tables - Keeps the changes in the Change Tables. This is useful if you
do not want all of the changes to be applied at the same time.
For more information, see Working with the Keep in Change Tables option (page 145).
l Archive the Change Tables - If you select Archive the Change Tables, you also need to
specify a Database name and Schema name in the relevant fields.
Discover the VARIANT data type as (applies to Snowflake only) - As Compose does not
support mapping directly to the Snowflake VARIANT data type, you need to choose whether
VARIANT columns will be created as JSON (the default) or XML in the Snowflake database.
Associate with Replicate Task - Select this to associate your Compose project with the related
Replicate task. Replicate tasks replicate the relevant tables from the source database to the
landing zone in your data warehouse. Specifying the Replicate task name will enable you to
both discover the source tables' primary keys, and monitor and control that task from within
Compose.
However, before you can specify a Replicate task name, you first need to define the connection
settings to at least one Replicate Server machine. To do this, click the Replicate Server
Settings link below the Associate with Replicate task field and then configure the settings as
described in Replicate Server settings (page 385).
Once you have configured connectivity to at least one Replicate Server, you can then proceed
to select a Replicate task.
To select a Replicate task:
1. Click the browse button to the right of the Associate with
Replicate task field.
The Select Replicate Task window opens.
2. Select a Replicate Server from the Server drop-down list.
The Replicate Tasks list is populated with all tasks defined on the
selected server.
3. Select the task that is replicating the source tables to the landing
zone and then click OK.
The name of the selected task is shown as read-only in the Associate
with Replicate task field.
4. If you want to generate the model by discovering the source database in the Replicate task,
leave the New Data Source window open for now as you will need to define connectivity to
the source database in the Replicate task.
For instructions on how to do this, see Defining Replicate data source connections (page
149).
Otherwise, click OK to save your settings.
l Use the changes in multiple Compose projects that share the same landing
l Leverage Change Table data across multiple mappings and/or tasks in the same project
l Preserve the Replicate data for auditing purposes or reprocessing in case of error
l Reduce cloud data warehouse costs by eliminating the need to delete changes after every
ETL execution
To facilitate this functionality, Compose keeps a "watermark" per table as a way of tracking which
data has been consumed and which data is yet to be consumed. The watermarks can be reset if
needed, as described in Deleting changes and resetting watermarks (page 148) below.
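Conceptually, consuming only the changes that arrived after the stored watermark resembles the following query. This is an illustrative sketch only, not Compose's internal implementation; the table name is invented, and header__change_seq is assumed here to be the Replicate Change Table sequence column:
SELECT *
FROM landing.Inventory__ct
WHERE header__change_seq > '20240601000000000000001'  -- last consumed watermark for this table
ORDER BY header__change_seq;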
Use case
I have a table named Inventory in my landing that I would like to load into two separate tables in my
data warehouse for the purpose of tracking and analyzing changes. The tracking table needs to be
updated every 15 minutes while the analysis table needs to be updated once a day.
1. Set up a connection to my landing zone making sure to select the Keep in Change Tables
option.
2. Discover the source tables from the landing zone as described in Discovering the Source
Database or Landing Zone (page 158).
3. Duplicate the Inventory table in my model so that I have two tables, and then rename the
tables as follows: Inventory_Frequent (for tracking) and Inventory_Snapshot (for analytics).
For instructions on how to duplicate entities, see Managing entities (page 170)
Make sure when duplicating the tasks to select Full Load Only as the task type for
Full Load tasks and Change Tables Only as the task type for CDC tasks. See also
Adding, editing, and duplicating tasks (page 205).
7. Verify the correct mappings are selected and delete any redundant mappings that were
created when the tasks were duplicated.
8. Generate and run the source_Frequent and source_Snapshot Full Load tasks.
9. Generate the source_Frequent_cdc and source_Snapshot_cdc tasks.
10. Schedule the source_Frequent_cdc task to run every 15 minutes and schedule the source_
Snapshot_cdc task to run at 20:00 every day.
Command syntax
Example
After resetting the watermark, on the next CDC run all of the Change Table records will
be processed again.
Command syntax
Parameter Description
--landing - The name of the landing in Compose containing the Change Tables whose
watermarks you want to reset.
--table - The logical name (i.e. without the _ct suffix) of a specific Change Table whose
watermark you want to reset. When omitted, watermarks for all Change Tables will be reset.
Example
1. Open your project and click Manage in the Databases panel. The Manage Databases
window opens.
2. Click the New toolbar button. The New Data Source window opens.
3. In the New Data Source window, select the Source database connection option.
4. Continue from one of the following topics as appropriate:
l Using Oracle as a source (page 149)
l Using Microsoft SQL Server as a source (page 151)
l Using MySQL as a source (page 153)
l Using IBM DB2 for LUW as a source (page 155)
Prerequisites
Before you can use Oracle as a source in a Qlik Compose project, make sure that the following
prerequisites have been met:
l The Oracle database should be configured with the required Permissions (page 134) and
accessible from the Compose machine.
l Install Oracle Data Access Components (x64) on the computer where Qlik Compose is
located. Then, add the full path of the Oracle Data Access DLL to the system environment
variables.
The default path should be: <ORACLE_PRODUCT_CLIENT_DIR>\ODP.NET\bin\4\
The path to the Oracle Data Access DLL also needs to be specified in both the
machine.conf file and the Global Assembly Cache (GAC). In addition, make sure
that the Oracle.DataAccess.dll file exists in the following location:
C:\Windows\Microsoft.NET\assembly\GAC_64.
The Qlik Compose service needs to be restarted after installing the required
components.
l Install Oracle Instant Client 19.0 or later (Windows x64) on the computer where Qlik Compose
is located.
l If you want to use an Oracle TNS name in the connection settings, you first need to set the
ORACLE_HOME environment variable.
Example:
<ORACLE_PRODUCT_CLIENT_DIR>\product\<version>\client_1
1. In the New Data Source window, enter the information as described in the table below.
Data source fields
Field Description
Port - If you specified a TNS name in the Server Name field, make sure that this field is empty.
Optionally, change the default port.
User Name - Specify your user name for accessing the Oracle database.
The specified user must have read/write privileges on the Oracle database.
SID - If you specified a TNS name in the Server Name field, make sure that this field is empty.
Otherwise, specify the Oracle SID.
2. Click Test Connection to verify that Compose is able to establish a connection with the
specified database.
3. Click OK to save your settings.
The database is added to the list on the left side of the Manage Databases window.
Prerequisites
Before you can use Microsoft SQL Server as a source in a Qlik Compose project, make sure that the
following prerequisites have been met:
l Microsoft SQL Server should be configured with the required Permissions (page 134) and
accessible from the Compose machine.
l Microsoft SQL Server Native Client must be installed on the Qlik Compose machine.
l Qlik Compose supports the following Microsoft SQL Server editions.
l Enterprise Edition
l Standard Edition
l Workgroup Edition
l Developer Edition
l The Microsoft SQL Server instance is set up to allow Windows log on.
l The Compose user is specified as the "Log on as" user for the Qlik Compose Server service
account.
OR
Microsoft SQL Server is configured to allow login for the Qlik Compose Server service
account.
When using Microsoft Azure SQL Database as the data warehouse, the data warehouse database
must be the same as the database that you will later define for the landing zone, although it should
use a different schema.
1. In the New Data Source window, enter the information as described in the table below.
Data source fields
Field Description
Server Name - Specify the name or IP address of the Microsoft SQL Server machine.
Windows authentication / SQL Server authentication - Choose how you want Compose to log
in to the Microsoft SQL Server database. If you choose Windows authentication, see Working
with Windows authentication below.
User Name - Specify your user name for accessing the Microsoft SQL Server database.
The specified user must have read/write privileges on the Microsoft SQL Server database.
Password - Specify your password for accessing the Microsoft SQL Server database.
Database Name - Specify the name of the Microsoft SQL Server database.
2. Click Test Connection to verify that Compose is able to establish a connection with the
specified database and/or landing zone.
3. Click OK to save your settings.
The database is added to the list on the left side of the Manage Databases window.
If you choose this option, you also need to make sure that:
l The Microsoft SQL Server instance is set up to allow Windows log on.
l The Qlik Compose for Data Warehouses user is specified as the "Log on as" user for the Qlik
Compose for Data Warehouses service account.
-OR-
l Microsoft SQL Server is configured to allow login for the Qlik Compose for Data Warehouses
service account.
method, see Discovering the Source Database or Landing Zone (page 158).
Prerequisites
Before you can use MySQL as a source in a Qlik Compose project, make sure that the following
prerequisites have been met:
l The MySQL database should be configured with the required Permissions (page 134) and
accessible from the Compose machine.
The following MySQL editions are supported:
l MySQL Community Edition
l MySQL Standard Edition
l MySQL Enterprise Edition
l MySQL Cluster Carrier Grade Edition
l MySQL ODBC 64-bit client must be installed on the same computer as Qlik Compose.
Cluster prerequisites
To be able to discover clustered (NDB) tables, the following parameters must be configured in the
MySQL my.ini (Windows) file.
Cluster parameters
ndb_log_bin=on - This ensures that changes in clustered tables will be logged to the binary
log.
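For example, in the my.ini file this setting would typically appear under the [mysqld] section (the section name follows standard MySQL conventions and is assumed here):
[mysqld]
# Log changes to clustered (NDB) tables to the binary log
ndb_log_bin=on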
1. In the New Data Source window, enter the information as described in the table below.
Data source fields
Field Description
Server Name - Specify the name or IP address of the MySQL server machine.
User Name - Specify your username for accessing the MySQL database.
The specified user must have read/write privileges on the MySQL database.
2. Click Test Connection to verify that Compose is able to establish a connection with the
specified database and/or landing zone.
3. Click OK to save your settings.
The database is added to the list on the left side of the Manage Databases window.
Prerequisites
Before you begin to work with an IBM DB2 for LUW database as a source in Qlik Compose, make
sure the following prerequisites have been met:
l The IBM DB2 for LUW database should be configured with the required Permissions (page
134) and accessible from the Compose machine.
l The IBM Data Server Driver for ODBC and CLI version 10.5 must be installed on the Qlik
Compose machine.
1. In the New Data Source window, enter the information as described in the table below.
Data source fields
Field Description
Server Name - Specify the name or IP address of the IBM DB2 for LUW server machine.
User Name - Specify your username for accessing the IBM DB2 for LUW database.
The specified user must have read/write privileges on the IBM DB2 for LUW database.
Password - Specify your password for accessing the IBM DB2 for LUW database.
Database Name - Specify the name of the IBM DB2 for LUW database.
2. Click Test Connection to verify that Compose is able to establish a connection with the
specified database and/or landing zone.
3. Click OK to save your settings.
The database is added to the list on the left side of the Manage Databases window.
Managing databases
You can edit and delete databases as required. The table below describes the available options:
Edit a database - In the left side of the Manage Databases window, select the database that you
want to edit and then click the Edit toolbar button.
Delete a database - In the left side of the Manage Databases window, select the database that you
want to delete and then click the Delete toolbar button.
The model serves as the basis for data warehouse generation in Compose. There are three ways of
creating the model: use Compose to derive a tentative model by reverse engineering the source
database(s) (a process also known as "discovering"); import a model created in ERwin; or create
the model manually in Compose.
If you change the "From Date" name in the project settings, the new name will
become a reserved word.
l TD - This column is added to tables that contain attributes (columns) with a History Type 2.
The column is used to delimit the range of dates for a given record version. The column name
can be changed in the project settings.
If you change the "To Date" name in the project settings, the new name will
become a reserved word.
l FKNR - Foreign key number column used in logging tables to report missing references
captured via the data warehouse ETL
For information about importing a model created in ERwin, see Importing the model from ERwin
(page 162).
Discovery factors: Discover the Source Database versus Discover the Landing Zone defined for
the Qlik Replicate task
2. Select whether to discover the source database or the landing zone and then click OK.
Note that the suffix "_landing" denotes the landing zone whereas the actual source
database appears without the suffix.
When discovering directly from Microsoft SQL Server source, TIME will be
discovered as STRING(16). As well as being mapped this way in Replicate,
this will also maintain accuracy when a TIME column is defined with high
precision.
You can select multiple tables/views by holding down the [Shift] (sequential
selection) or [Ctrl] (non-sequential selection) button.
If you add a table that already exists in the model with the same name, then the
new table is added with the name: source_table_name_01 (or source_table_
name_02 if the name source_table_name_01 already exists, and so on).
If the table contains attribute domains that differ from existing ones but have the
same name, they will also be appended with the _01 suffix.
If you are aware of external changes to the metadata or if you notice any data synchronization
anomalies, Compose enables you to clear the metadata cache, either using the UI or using the CLI.
You can clear the Landing Zone metadata cache using either the Compose web console or the
Compose CLI.
Method 1:
Method 2:
1. Click the Manage button at the bottom left of the Data Warehouse panel.
The Manage Data Warehouse Tasks window opens.
2. In the Mappings tab, click Clear Landing Cache.
For information on clearing the Data Warehouse cache, see Clearing the data warehouse metadata
cache (page 229).
Command syntax:
ComposeCli.exe clear_cache --project project_name [--type landing|storage] [--landing_zone
source_name]
Parameters
--type - The cache to clear. Possible values:
l landing
l storage
Example
ComposeCli.exe clear_cache --project MyProject --type landing --landing_zone MySource1
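Based on the syntax above, clearing the data warehouse (storage) metadata cache for the same project would presumably look as follows (shown for illustration only; see page 229 for the documented procedure):
ComposeCli.exe clear_cache --project MyProject --type storage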
1. Open the Manage Model window as described in Managing the model (page 169).
2. In the Entities toolbar, click the Import from Project button.
The Import from Project wizard opens.
3. In the Entities tab:
a. Select a project from the Import from Project drop-down list.
b. Optionally, search for specific entities.
c. Select which entities to import or select Select All to import all entities.
4. Click Next to select which mappings to import.
To create new entities and mappings if the selected entities and mappings
already exist, clear the Replace existing entities and mappings check box.
The new entities/mappings will be named <existing_name>_IMPORTED (or <existing_
name>_IMPORTED_<n+> if the entity/mapping is imported more than once).
If you do not wish to import any mappings, clear the Mappings check box before
clicking Finish.
For more information on creating the ETL mapping(s) in the Data Warehouse panel, see Creating
and managing the data warehouse (page 194).
When you import from ERwin, you can then select the Use Global Mappings check box to apply
these mappings. See also Importing the model from ERwin (page 162).
You can add, edit, and remove entity and attribute mappings. If needed, you can also change the
source database referenced for the tables (if you have several different sources defined).
1. In the Model panel, from the drop-down menu in the top right, select Global Mappings.
The Global Mappings window opens in the Tables to Entities tab.
2. Import the ERwin entities:
1. Click the Import Entities to Mappings toolbar button.
The Import Entities window opens.
2. In the File Path field, enter the full path to the ERwin.xml file (on the Compose Server
machine) that includes the entities you want to import.
3. Click OK.
3. Verify that Qlik Compose is using the desired source database. The database name is
displayed in green at the bottom right of the toolbar.
To select a different source database:
1. Click Change Source Database.
2. In the Set Source Database window, select a different database and then click OK.
4. Add new entities, edit existing entities, or remove entities as described in the following table:
Entity management options
To Do This
Add a new entity -
1. In the Tables to Entities tab, click the New toolbar button.
2. Next to the Entity Name field, click the browse button.
The Unmapped Entities window opens, listing only entities that have not yet been mapped.
3. Select an entity and click OK.
4. Next to the Table Name field, click the magnifying glass icon.
The Find Table for [Entity Name] window opens for the selected entity.
5. From the Tables drop-down list on the left, select the table to map to.
6. Click OK. Qlik Compose populates the Table Schema field automatically, based on the
table you selected.
7. Repeat these steps for all unmapped entities.
Edit an entity -
1. Move the mouse cursor over the entity and click the Edit button (pencil icon) that appears
on the right.
2. Make the required changes and click OK.
Search for an entity - In the Search look-up field, start typing. Qlik Compose only displays
entities that match the search string.
Add a new attribute -
1. In the Columns to Attributes tab, click the New toolbar button.
2. Provide a name and description (optional) for the attribute and the column.
3. Click OK.
Search for an attribute - In the Search look-up field, start typing. Only attributes that match the
search string will be displayed.
When searching for an attribute based on the attribute name, you must add the prefix "name:".
For example, if you want to search for an attribute that contains "ar" in its name, type "name: ar"
in the Search look-up field.
Edit an attribute -
1. Move the mouse cursor over the attribute and click the Edit button (pencil icon) that
appears on the right.
2. Make the required changes and click OK.
6. Click Close.
Model limitations
l When Amazon Redshift is the data warehouse type, attribute names that contain the open
parenthesis character "(" are not supported. If any of your attribute names contain the "("
character, you should remove it before creating the data warehouse tables.
For information on renaming attribute names, see Add an attribute to all Satellite tables and
the Hub table (page 173).
l Discovering new tables does not affect existing entities in the model, even if there is a
relationship between the new entity and one of the existing entities. For example, in the
source database, Table 1 has a Foreign Key that points to Table 2. If Table 1 is added to the
model and then Table 2 is added later, Table 1 will not be updated to contain the required
Foreign Key.
l The data warehouse needs to be "adjusted" when deleting a relationship/attribute from the
model and then adding the same relationship/attribute back to the model. However, the
"Adjust" operation deletes the data from the corresponding data warehouse column.
Validating the model does not recalculate expressions for historical data that has
changed. Changes in a dimension expression or lookup of a column in a dimension are
not updated retroactively. In order to update historical data, you would need to reload
the data which could take a long time depending on the number of records and their
history.
1. Either click the Validate button in the bottom right of the Model panel.
OR
Select Validate from the drop-down menu in the top right of the Model panel.
The Validate Model window opens.
a. If the model is valid, a message will confirm the model’s validity. If the model is not
valid, a list of invalid tables/views will be displayed.
A message indicating why the entity is invalid will be displayed in the Message
column.
b. To resolve the issue, click the Edit Entities button to the right of the entity.
The Edit Model window opens showing the invalid entity.
2. Resolve the issue (in this case, by adding a Business Key) and then click Close.
A message will confirm the model’s validity.
3. Click Close to close the Validate Model window.
Either click the Display button in the bottom right of the Model panel.
-OR-
Select Display from the drop-down menu in the top right of the Model panel.
Diagram tab
In the Diagram tab, the following options are available:
You can select multiple entities by clicking them while holding down the [Ctrl] keyboard
button.
l Zoom - Increase or decrease the magnification using the slider at the top right of the screen.
Click the button to the right of the slider to restore the default size.
l Search - The ability to search for entities is particularly useful in a large model. To search for
an entity, type a search string in the Search box. Compose lists the names of entities that
match the search string. Select the desired entity.
l Drag the diagram - In addition to zooming, you can also drag the diagram by clicking the
space around the diagram and dragging. This is useful for very large diagrams where
zooming out would render the text unreadable. The guide at the bottom right of the window
shows you which part of the diagram is currently displayed.
l Show/Hide all attributes for a selected entity - Select an entity and then select/clear the
Attributes check box in the top left of the window.
l Show/Hide all business keys in the model - Select/Clear the Keys check box in the top left
of the window.
l Show/Hide relationship attributes - Right-click an entity and select this option to
show/hide the entity's relationship attributes.
l Show/Hide business keys - Right-click an entity and select this option to show/hide the
entity's business keys.
l Change the Diagram Direction - Select one of the available options from the Direction
drop-down list at the top of the window.
l Set as relationship source - See Creating and managing relationships (page 178).
l Hide this node - Right-click an entity and select this option to show/hide the entity. To show
the entity, click the Hidden Nodes box in the left of the window.
l Hide selected nodes - Right-click an entity and select this option to show/hide selected
entities. To show the hidden entities, click the Hidden Nodes box in the left of the window.
l Hide non-selected nodes - Right-click an entity and select this option to show/hide non-
selected entities. To show the hidden entities, click the Hidden Nodes box in the left of the
window.
l Invert selection - Right-click an entity and select this option to highlight all entities except
the selected entity.
l Select all - Right-click an entity and select this option to highlight all entities in the model.
l Select path - To highlight the path to which an entity belongs, either hover your mouse
cursor over the entity or right-click the entity and select Select Path.
l Select path and hide all other nodes - Right-click an entity and select this option to
highlight the entity’s neighbors.
l Edit - Either double-click the entity or right-click an entity and select the Edit option to edit
the entity’s attributes.
l Lineage - Right-click an entity and select this option to show/hide the entity’s lineage. For
more information on lineages, see Lineage and impact analysis (page 181).
l Search for an entity or attribute - To search for a specific entity or attribute, enter a part of
the name in the Search box. Entities that match the search string will be highlighted.
l Expand/Collapse - Click the arrow to the left of a table to see its attributes or related tables.
To show or hide all sub-tables and table attributes, click the Expand All/Collapse All buttons
at the top of the Tree View tab.
l Lineage - To see an entity or attribute's lineage, hover your mouse cursor over the table or
attribute. For example, clicking the lineage button next to the City attribute will open a window
showing that attribute's lineage.
For more information on lineages, see Lineage and impact analysis (page 181).
l In the Manage Model window - Editing the model in the Manage Model window is
preferable if you need to make several changes to the model as it provides access to all of
the model’s entities and attributes. To display the results of your changes, open the Model
Display window as described in Displaying the model (page 166).
l From the Model Display - Editing the model from the Model Display window is convenient if
you only need to edit one or two entities. Another advantage of this method is that it allows
you to see the result of your changes (in the entity relationship diagram) immediately.
1. Click the Manage button at the bottom left of the Model panel or click the Entities link in the
Model panel.
The Manage Model window opens.
2. Edit the model according to the descriptions below.
To open the Manage Model window from the Model Display window:
1. Open the Model Display window as described in Displaying the model (page 166).
2. Double-click the entity you want to edit.
The Manage Model window opens.
3. Edit the model according to the descriptions below.
All editing tasks are performed in the Logical Model tab, except for the following tasks which are
performed in the Physical Model tab:
For more information, see Defining Table Creation Modifiers (page 183).
Managing entities
You can add, edit and remove entities from your model as described in the table below.
All of the options available in the toolbar are also available from the drop-down menu in
the toolbar. This is useful when you reduce the window size, since some of the toolbar
buttons - or all of them depending on how small you make the window - will be hidden.
The only button that will not be hidden regardless of the eventual window size is the
drop-down menu button.
Add an entity -
1. Click the New Entity button in the Entities toolbar.
2. Provide a name and description (optional) for the entity and then click OK.
Edit an entity -
1. Select the entity you want to edit and then click the Edit button in the Entities toolbar.
2. Edit the entity's name and description (optional) and then click OK.
Duplicate an entity -
1. Select the entity you want to duplicate and then select Duplicate from the drop-down menu
in the Entities toolbar.
2. Edit the entity's name and description (optional) and then click OK.
The duplicated entity is added to the Entities list.
Import entities from another project - See Importing entities and mappings from another project
(page 161).
Import entities from ERwin - See Importing the model from ERwin (page 162).
Managing attributes
You can add, edit and remove attributes as required. All attributes in the model belong to the
Attributes Domain. When adding a new attribute, you can either select an existing attribute from the
Attributes Domain or create a new Attributes Domain. Both of these options are described in the
table below.
To Do This
Create a new attribute domain and add it to the model -
1. Click the New Attribute button in the Attributes toolbar.
The New Attribute window opens.
2. To designate the attribute as a business key, select the Key check box.
3. Click the plus sign to the right of the Attribute domain drop-down list.
The New Attribute Domain window opens.
a. Specify a Name for the attributes domain.
b. From the Type drop-down list, select one of the available data types.
c. If the selected data type requires further configuration, additional fields will be
displayed. For example, when Decimal is selected, the Length and Scale fields will be
displayed. Set the values as desired.
d. Optionally, specify a Description.
e. Click OK to add the newly created attribute domain to the Attribute domain field and
close the New Attribute Domain window.
4. Continue from Step 5 in Add an existing attribute domain above.
You can also add new attribute domains via the Manage Attribute Domains window. For
more information, see Managing the Attributes Domain (page 177).
Add an attribute to all Satellite tables and the Hub table - You can use the Add to all Satellites
and Hub option to define the same Primary Index for the Hub table and all Satellite tables.
Select the desired attribute and then click the Add to all Satellites and Hub toolbar button. The
attribute is added to the Hub table and to all the Satellite tables.
Edit an attribute -
Method 1:
1. Select the attribute you want to edit and then click the Edit button in the Attributes toolbar.
The Edit - AttributeName window opens.
2. Continue from Step 2 of Add an attribute from the attributes domain above.
Method 2:
Bulk edit history types and satellite numbers - See Bulk Editing History types and Satellite
numbers (page 181).
Change the attribute order - Select the attribute you want to move and use the Move Up/Move
to Top and Move Down/Move to Bottom toolbar buttons to move the attribute.
Search for an attribute - In the Search lookup field, start typing. Only attributes that match the
search string will be displayed.
Create an expression for an attribute - See Add an attribute from the attributes domain or Edit
an attribute above.
Export the attributes to a CSV file - Select an entity from the Entities list on the left of the
Manage Model window and then select Export to CSV from the drop-down menu in the
Attributes toolbar. Depending on your browser settings, you will either be prompted to
download the <entityname>_Attributes.csv file or it will be downloaded to your default
Downloads location.
The CSV format differs slightly from the CSV format when Exporting and importing projects
using the CLI (page 78).
Assuming that the Northwind sample database is the model’s source, this could easily be done as
follows:
1. Add the TotalPrice attribute domain to the model as described in Managing attributes (page
171).
2. After finalizing the model, create the data warehouse tables as described in Creating the
data warehouse tables (page 196).
3. Click the OrderDetails mapping as described in Editing column mappings (page 207).
Note that the TotalPrice attribute has no mapping as it was added after the Northwind
source was discovered:
4. Open the Expression Builder by clicking the fx icon to the right of the TotalPrice column
name. Then, in the Expression Builder, add the Quantity and UnitPrice columns to create the
following expression:
Quantity*UnitPrice
For more information on creating expressions, see Creating expressions (page 186).
5. Click OK to close the Expression Builder and save the expression.
The Quantity and UnitPrice landing zone columns are now mapped to the TotalPrice data
warehouse column. Notice that the mapping lines are gray, indicating that the mapping is the
result of an expression.
Hovering the mouse cursor over the gray lines highlights the derived column (TotalPrice) and
the columns from which its data is derived (Quantity and UnitPrice).
1. From the drop-down menu in the top right of the Model panel, select Attributes Domain.
2. Add, delete and edit attributes as described in the table below.
To Do This
Edit an attribute domain -
1. Select the desired attribute and then click the Edit toolbar button.
The Edit: Name window opens.
2. Edit the attribute as described in steps 2-6 of Add an attributes domain above.
Note that the Edit: Name window also contains a Used in Entities list. Knowing which entities
the attribute is used in may affect the type of changes you make, as the planned changes may
not be appropriate for all entities.
Remove an attribute -
1. Select the attribute you want to delete and then click the Delete toolbar button.
2. When prompted to confirm the deletion, click Yes.
l If your model is derived from the landing zone (as opposed to the source database(s)), the
model will be created without any relationships
l Ensure data integrity between related entities
You can create relationships from the Manage Model window or from the Display Model window.
Both of these methods are described below.
5. If the originating entity contains attributes that were foreign keys in the source database, you
can replace these attributes with Business Key attributes of the associated entity.
To do this:
Since the history type for Business Keys must be type 1, the option to change the
history type is unavailable when the Business Key check box is selected.
Since the satellite number for Business Keys must be "0", the option to change the
satellite number is unavailable when the Business Key check box is selected.
3. If you selected Method 2, continue below. If you selected Method 1, continue from Step 4 in
Adding Relationships via the Manage Model window above. If you selected Method 3,
continue from Step 5 in Adding Relationships via the Manage Model window above.
4. Right-click the relationship target entity and select Relationship Target for Relationship
Source Name.
The Add Relationship: Name window opens with the relationship target entity already
selected.
5. If the originating entity contains attributes that were foreign keys in the source database, you
can replace these attributes with Business Key attributes of the associated entity.
To do this:
a. Select the Replace Existing Attribute(s) check box.
The left column shows the Business Key Attributes of the Associated Entity.
b. From the Attributes of Originating Entity drop-down list on the right, select an
attribute from the originating entity that was meant to be a foreign key.
6. If you want the relationship attribute to be a Business Key, select the Business Key check
box. This option will only be displayed if the entity target can be designated as a Business
Key.
7. Set the History Type.
Since the history type for Business Keys must be type 1, the option to change the
history type is unavailable when the Business Key check box is selected.
Since the satellite number for Business Keys must be "0", the option to change the
satellite number is unavailable when the Business Key check box is selected.
Example:
The Orders entity contains two attributes that are related to the People entity: the Customer and
Seller attributes. Therefore, Mike wants to create two relationships from the Orders entity to the
People entity. The primary key of the People table consists of the FirstName and LastName
attributes. As there are two relationships, the primary key columns of the People entity will be
added twice to the Orders entity. To prevent duplication errors, Mike adds the Customer_ and
Seller_ prefixes to the relationship attributes in the Orders entity, which results in the physical
columns Customer_FirstName, Seller_FirstName, Customer_LastName, and Seller_LastName.
Deleting relationships
1. Click the Manage button in the bottom left of the Model panel.
The Manage Model window opens.
2. Select the relationship attribute you want to delete.
3. Click the Delete button in the Attributes toolbar.
The Delete Relationship window opens.
4. To restore an attribute that was replaced when the relationship was created, select the
Restore original attribute(s) check box. For more information about replacing attributes,
see Step 5 in Adding relationships via the Manage Model window above.
5. Click Yes to delete the relationship attribute.
1. Select the attributes whose History type and/or Satellite number you want to change and
click the Bulk Edit toolbar button.
2. In the Bulk Edit window, change the History type and/or Satellite number as required.
3. Click OK to close the Bulk Edit window and save your settings.
Top-level entities in the data mart fact will not be shown in the lineage. For example, if
both the Orders and Order Details entities are used in a Fact, the Model lineage for
Orders will show Order Details but not Orders.
1. Click the Manage button in the bottom left of the Model panel.
The Manage Model window opens.
2. Display the lineage as described below:
Lineage procedures
To Do This
Show an entity's lineage - Select the entity and select Show Lineage from the drop-down menu
in the Entity toolbar.
Show an attribute's lineage - Select the attribute and click the Show Lineage button in the
Attribute toolbar.
The Date entity contains a record for every day. Dates in the Date entity range from January 1st
1900 to December 31st 2099.
The Time entity contains all the hours and minutes in a 24 hour period. When you create the data
warehouse tables, the Date and Time entities are automatically populated with relevant data. You
can view this data as described in Viewing the data warehouse tables (page 197).
Both the date and the time values are presented in multiple formats (e.g. 12 hour format or 24 hour
format), allowing you to choose which format will be displayed in your BI reports. Other formats
include abbreviated forms of date and time, different month/year/day formats (e.g. 12/31/2017 as
opposed to 2017-12-31), and so on.
You can either add the entities to a new project (before you create the Data Warehouse tables) or
to an existing project. If you add them to an existing project’s model, you will also need to validate
and adjust the Data Warehouse as described in Validating the data warehouse (page 228).
You can even add custom date and time attributes to the entities from the tables in your landing
zone. For example, if one of your source tables lists all the working days and non-working days, you
can add an "Is Working Day" attribute to the Date entity and then load it from the relevant source
table. Just like regular entities, Compose knows how to merge the incoming data of working and
non-working days into the existing Date entity.
For an explanation of how to add attributes to an entity, see Managing attributes (page 171).
You cannot add relationships to the Date and Time entities. However, every date and time attribute
has an implicit relationship to the Date and Time dimensions, which allows you to select the relevant
dimension when creating your star schema in the data mart.
For information on working with Date and Time dimensions in the data mart, see Creating and
managing data marts (page 230).
For all of the supported data sources except Oracle, you can add both Date and Time
entities to your model. If you are using Oracle as your data source, you can only add the
Date entity to your model. This is because Oracle does not have a data type specifically
for Time.
You can also delete the Date and/or Time entities if you no longer require them
and add them again later.
The available options are located below the Columns list on the right of the tab, and are as follows:
l Project settings default - When this option is selected (the default), the settings from the
project settings' Table creation modifiers tab (page 43) will be used.
l Custom - This option is useful for appending additional table properties to the default
Compose CREATE TABLE statement (see the example below). Leveraging this option requires
SQL coding knowledge.
l Custom distribution keys - This option is useful if you only need to define custom
distribution keys for individual entities. Although this can also be done using the Custom
option (see above), the Custom distribution keys option is more convenient as it does not
require any prior SQL knowledge.
Compose does not provide any way of validating your SQL. Therefore, make sure
to validate the SQL before deploying in a production environment.
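For example, when Amazon Redshift is the data warehouse, a custom modifier appended to the generated CREATE TABLE statement might look like the following. This is an illustrative sketch only; the column names are hypothetical and the exact clauses depend on your data warehouse type:
-- Appended after the column list of the generated CREATE TABLE statement
DISTSTYLE KEY
DISTKEY (customer_id)
SORTKEY (order_date)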
Set a distribution style - From the Distribution Style drop-down, select Even, Key or All.
Delete a distribution key - Select the distribution key and then click the Delete button. The key
is deleted.
Change the position of a distribution key - Select the distribution key and then click the "Up" or
"Down" buttons to move the key to the desired position.
Set a distribution method - From the Distribution Method drop-down, select Hash, Round
Robin or Replicate.
Delete a distribution key - Select the distribution key and then click the Delete button. The key
is deleted.
Change the position of a distribution key - Select the distribution key and then click the "Up" or
"Down" buttons to move the key to the desired position.
Creating expressions
Compose allows you to create data transformations in several different places according to your
needs. A transformation can either be a filter (i.e. excluding certain data) or an expression (i.e.
manipulating a single record). The table below lists the places where transformations can be
created and provides reasons for creating the transformation in each of the specified places.
Replicate -
l Filtering large amounts of data that is not needed for the data warehouse (in the present or
the future)
l Obfuscation due to regulatory reasons or internal policies
l Data type conversion (e.g. converting a source data type that is not supported on the data
warehouse platform)
Applied before the data reaches the landing zone.
Model -
l The default location if you are not sure where to put it
l General business logic
l Needed for several sources or several data marts
Applied as an update to the staging tables after creating the mappings.
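To illustrate the filter/expression distinction described above, the following are generic SQL sketches (the column names are hypothetical and are not taken from the product documentation):
-- Filter: exclude data that is not needed, e.g. keep only active customers
Status = 'ACTIVE'
-- Expression: manipulate a single record, e.g. normalize a name column
UPPER(LTRIM(RTRIM(FirstName)))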
The Expression Builder can be opened in several places, depending on your needs. For more
information about where to create a transformation, see the table in Creating expressions (page
186).
Expression builder
l Tabs on the left of the Expression Builder: These tabs contain elements that you can add
to an expression. Select elements and add them to the Build Expression pane to create an
expression. For more information, see Building expressions (page 188).
The following tabs are available:
l Parameters - Only displayed when opening the Expression Builder from within the
Reusable Transformations > Edit Transformation window.
The Operators and Functions displayed in the Expression Builder use SQL
format. As SQL support and implementation differ for each data
warehouse (i.e. database) type and version, the data warehouse being
used in your Compose project will determine which Operators and
Functions are available. For example, functions introduced with
Microsoft SQL Server 2017 will not work if the database being used for the
data warehouse is an earlier version such as Microsoft SQL Server 2016.
l Build Expression Pane: The Build Expression pane is where you build your expression. You
can add elements, such as columns or operators to the panel as well as type all or part of the
expression. For more information, see Building expressions (page 188).
l Parse Expression Pane: This pane displays the parameters for the expression. After you
build the expression, click Parse Parameters to list the expression parameters. You can then
edit the parameters, enter a value for each of the parameters and associate attributes with
them. For more information, see Parsing expressions (page 189).
l Test Expression Pane: This panel displays the results of a test that you can run after you
provide values to each of the parameters in your expression. For more information, see
Testing expressions (page 190).
Building expressions
The first step in using the Expression Builder is to build an expression in the Build Expression pane.
To add operators to your expression, you can use the Operator tab on the left or the
Operator buttons located above the Build Expression pane or any combination of
these.
To build an expression:
1. Hover the mouse cursor over the element that you want to add to your expression
(expressions usually start with an Input Column) and click the arrow that appears to its right.
2. Add Operators, additional Input Columns, and Functions as required.
Example:
To create an expression that combines the FirstName and LastName columns, add both Input Columns to the Build Expression pane with a concatenation operator between them.
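The resulting expression might look like one of the following sketches. The concatenation operator shown is an assumption and depends on the data warehouse type:
FirstName + ' ' + LastName    -- Microsoft SQL Server style
FirstName || ' ' || LastName  -- Oracle, Snowflake, and standard SQL style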
Parsing expressions
When you add operators to the expression, the expression's parameters are usually added
automatically to the Parse Expression pane. However, when you complete your expression or edit
it, you may need to parse the expression to see all of the parameters.
l Click the Parse Expression button below the Build Expression pane.
If the expression is not valid, a red error message will appear at the bottom of the Expression
Builder window.
If the expression is valid, the expression parameters and attributes (Input Columns) will be
displayed in the Parse Expression pane. See Testing an expression (page 191).
Testing expressions
You test your expression to check that results are as expected. The following figure is an example
of an expression that has been evaluated and tested.
Certain expressions may fail during runtime, even though clicking Test Expression in the
Expression Builder indicated that they were valid.
This is because clicking Test Expression runs a query whereas during runtime, the
expression is run as a sub-query. This issue arises partly because the rules that govern
queries are slightly different from the rules that govern sub-queries.
Testing an expression that contains an analytic function will validate the syntax without
actually executing the function. Additionally, the test will only be performed on a single
record.
Compose does not check the data types of columns used in an expression for
compatibility. For example, if a column of type integer is used in an expression for a
column of type varchar, the expression will not be executed successfully.
Testing an expression
To test an expression:
6. This step is only available for transformations created in the Edit Mappings window. When
you create a transformation in the Edit Mappings window, an additional button called Show
Data appears to the left of the Test Expression button. You can click this button to see how
your expression translates into actual data.
For example, clicking the Show Data button for the expression UnitPrice*Quantity will open
the following window.
For more information on the Edit Mappings window, see Editing column mappings (page
207) in Creating and managing the data warehouse (page 194).
1. From the drop-down menu in the top right of the Model panel, select Reusable
Transformations.
The Reusable Transformations window opens.
The window is split into the following panes:
l Upper pane - Lists the reusable transformations that have been defined.
l Lower pane - Provides additional information about transformation instances such as
where they are in use (e.g. mappings, model, etc.) and the expression that was
created using the transformation.
Select a transformation to see the additional information.
2. Click the New Transformation toolbar button.
The New Transformation window opens.
1. In the Name field, specify a name for the transformation.
2. In the Category field, specify a category name. If the category name already exists it
will be displayed below the field when you start to type the name. To group the new
transformation in the same category, simply select the existing name (unless of course
you wish to create a new category with a similar name).
In the Expression Builder, transformations are grouped according to their category
name, making it easier to find the transformation you want to use. Therefore, when
specifying a category name, it is recommended to choose a name that reflects the
purpose of the transformation. For example, if you create several transformations that
concatenate data, it would make sense to group those transformations under a
category called "Join".
3. To add a parameter to the transformation, click the New button to the right of the
Parameters heading.
A new row is added to the Parameters list.
4. Specify a name for the parameter, select an appropriate data type, and optionally
provide a description.
Once a transformation has been defined, it will be available for selection as needed in the
Expression Builder’s Transformations tab.
Delete a transformation - Select the transformation and then click the Delete toolbar button.
When prompted to confirm the action, click OK.
Edit a transformation - Double-click the transformation or select the transformation and then
click the Edit toolbar button. Continue as described in Defining reusable transformations (page
193).
Delete a parameter - Open the Edit Transformation window as described in Defining reusable
transformations (page 193). Then, select the parameter you want to delete and click the Delete
button above the Parameters list.
In this section:
l How Compose handles missing references in the data warehouse (page 195)
l Creating the data warehouse tables (page 196)
l Generating data warehouse tasks (page 198)
l Controlling data warehouse tasks (page 200)
If a record references another record which does not exist yet, then Compose will do the following:
l Insert a placeholder for the missing reference record. The placeholder record will only
include the business key and surrogate key. The rest of the columns will be set to NULL.
The fact being processed can already include a valid reference to the surrogate
key of the reference record.
Example:
If the "Orders" table references "SuperGlue" in the "Products" table, but "SuperGlue" does not exist
in that table, Compose will mark "SuperGlue" as a missing reference, insert a record with the key
value "SuperGlue" (assuming that the product name is the business key) to the "Products" table,
and insert NULL values in the remaining "Products" table columns.
When the missing reference eventually arrives, it will be mapped to the record created for it and the
NULL values will be replaced by the actual values.
If the record is defined as history type 2, the record with the NULL values will remain as a
historical record.
In addition, Compose automatically creates views for the TDWH tables in the following format:
<schema_name>.VDWH_<entity_name>[satellite_number_if_several]
Example:
dbo.VDWH_Customers02
For each entity, Compose creates a single view containing both the satellite data and the
associated hub data (or only the hub data if the entity has no satellites). If an entity has several
satellites, then Compose will create a view for each of the satellite tables. In such a case, the view
name will be suffixed with the user-defined satellite number as in the example above.
Compose for Data Warehouses adds RUNNO_INSERT and RUNNO_UPDATE columns to both the Data Warehouse tables and the data mart tables. These columns contain the ETL task run number, which can be used (in the Run Details window or in the Details tab) to find out more information about the task (e.g. the number of rows updated or inserted per table). Note that in hub tables and type 1 dimensions, the RUNNO_UPDATE number will usually be higher than the RUNNO_INSERT number as these tables do not contain any history. In satellite tables or type 2 dimension tables however, the RUNNO_INSERT number and the RUNNO_UPDATE number will always be the same as a new row is inserted for each update (i.e. history is retained).
Data Warehouse views that contain both hub and satellite data will contain two RUNNO_INSERT and two RUNNO_UPDATE columns. The hub table RUNNO columns are appended with an "_H" (e.g. RUNNO_INSERT_H) while the satellite table RUNNO columns are appended with an "_S" (e.g. RUNNO_UPDATE_S).
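For example, to see which Customers rows were inserted or updated in the satellite by a particular run, you could query the corresponding view (the view name follows the example above; run number 25 is illustrative):
SELECT *
FROM dbo.VDWH_Customers02
WHERE RUNNO_INSERT_S = 25 OR RUNNO_UPDATE_S = 25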
1. Click the Create button in the bottom right of the Data Warehouse panel. The Creating Data
Warehouse window opens.
A progress bar indicates the current progress. For each stage of the Data Warehouse
generation process, a corresponding message appears in the Messages list.
When creating tables in a Microsoft SQL Server data warehouse, you may
encounter the following error:
Data warehouse creation failed. Error: Cannot create a row of size 11272
which is greater than the allowable maximum row size of 8060.
The statement has been terminated.
This is a well-documented Microsoft SQL Server limitation. To work around this
limitation you need to split the offending table(s) into smaller tables.
2. When the "Data warehouse created successfully" message appears, click Close.
When you click the link, the Data Warehouse Tables window opens showing a list of all the tables
in your data warehouse.
Compose for Data Warehouses adds RUNNO_INSERT and RUNNO_UPDATE columns to both the Data Warehouse tables and the data mart tables. These columns contain the ETL task run number, which can be used (in the Run Details window or in the Details tab) to find out more information about the task (e.g. the number of rows updated or inserted per table). Note that in hub tables and type 1 dimensions, the RUNNO_UPDATE number will usually be higher than the RUNNO_INSERT number as these tables do not contain any history. In satellite tables or type 2 dimension tables however, the RUNNO_INSERT number and the RUNNO_UPDATE number will always be the same as a new row is inserted for each update (i.e. history is retained).
Data Warehouse views that contain both hub and satellite data will contain two RUNNO_INSERT and two RUNNO_UPDATE columns. The hub table RUNNO columns are appended with an "_H" (e.g. RUNNO_INSERT_H) while the satellite table RUNNO columns are appended with an "_S" (e.g. RUNNO_UPDATE_S).
Apart from the Date and Time tables which are automatically populated on creation, the
other tables will be empty until you run the data warehouse task.
See Controlling data warehouse tasks (page 200) below for information on running a
data warehouse task.
In the <Table Name> window, you can perform the following tasks:
l Choose how many rows to display from the Rows drop-down list.
l Click the Column Settings button to choose which columns will be displayed and
the order in which they will be displayed.
You can either generate individual tasks or you can generate multiple tasks concurrently.
1. Click the Manage button in the bottom left of the Data Warehouse panel. The Manage Data
Warehouse Tasks window opens.
2. If you have more than one task, in the left pane, select the task that you want to generate.
3. Do one of the following:
l To generate the task with basic validations, click the Generate toolbar button.
By default, Compose generates the task with basic validations. Basic validations are
suitable for most tasks, but are especially useful for tasks with numerous expressions
and lookups, as generating such tasks with all validations could take a long time.
l To generate the task with all validations, click the inverted triangle to the right of the
Generate button and select With all validations from the drop-down menu.
All validations includes validations that access the database to verify the existence of columns used in expressions and lookups. As selecting With all validations will significantly lengthen the time it takes to generate the task, you should only select it if it's critical to verify the existence of such columns before the task starts.
The Generating task for <Name> progress window opens. When the "Generate task
finished successfully" message is displayed, close the window.
Only mappings selected in the Manage Data Warehouse Tasks window will be
generated.
1. Click the Manage button in the bottom left of the Data Warehouse panel. The Manage Data
Warehouse Tasks window opens.
2. Click the inverted triangle to the right of the Generate toolbar button and select Bulk from
the drop-down menu.
The Bulk Generate dialog opens.
3. Select the tasks you want to generate.
4. Optionally, change the task validation level:
l With basic validations: By default, Compose generates tasks with basic validations.
Basic validations are suitable for most tasks, but are especially useful for tasks with
numerous expressions and lookups, as generating such tasks with all validations could
take a long time.
l With all validations: This includes validations that access the database to verify the existence of columns used in expressions and lookups. As selecting All validations will significantly lengthen the time it takes to generate the task, you should only select it if it's critical to verify the existence of such columns before the task starts.
5. Click OK to start the task generation.
Only mappings selected in the Manage Data Warehouse Tasks window will be
generated.
Ingesting a historical record deletes any history that is later than the ingested record.
For example, if a data warehouse contains the following historical records:
2012 - Boston
2014 - Chicago
2015 - New Jersey
Ingesting the record 2013 - New York will delete the 2014 and 2015 records.
Data warehouse tasks can be run manually, scheduled to run periodically or run as part of a
workflow. The section below describes how to run a data warehouse task manually. For information
on scheduling data warehouse tasks or including them in a workflow, see Controlling and
monitoring tasks and workflows (page 267).
Data warehouse tasks cannot run in parallel with data mart tasks. Data warehouse tasks
that update the same tables cannot run in parallel.
1. Click the Manage button in the bottom left of the Data Warehouse panel. The Manage Data Warehouse Tasks window opens.
2. If you have more than one task, in the left pane, select the task that you want to run.
3. Click the Run toolbar button. The window switches to Monitor view and a progress bar
shows the current progress in terms of percentage.
You can stop the task at any time by clicking the Abort toolbar button. This may be
necessary if you need to urgently edit the task settings due to some unforeseen
development. After editing the task settings, simply click the Run button again to restart the
task.
Aborting a task may leave the data warehouse tables in an inconsistent state.
Consistency will be restored the next time the task is run.
4. When the progress reaches 100% completed, close the Manage Data Warehouse Tasks
window.
Other monitoring information such as the task details (i.e. the number of rows inserted/updated)
and the task log files can be accessed by clicking the Run Details and Log buttons respectively.
Once the data has been successfully loaded into the data warehouse tables, you can proceed to the final part of the Compose workflow - defining and populating data marts. For more information, see Creating and managing data marts (page 230).
Common Table Expressions (CTEs) and some special clauses are not supported.
1. Click the Manage button in the bottom left of the Data Warehouse panel. The Manage Data
Warehouse Tasks window opens.
2. Select one of the following tabs according to your needs:
l Pre Loading ETL - to define an ETL that will manipulate the data before it is loaded
from the landing tables to the data warehouse staging tables. When enabled, the Pre-
loading ETL will be run even if there are no mappings or Replicate-generated source
data associated with it, which is particularly useful for customers wanting to perform
transformations on data generated by third-party tools.
l Multi Table ETL - to define an ETL for multiple tables.
l Single Table ETL - to define an ETL for a single table.
l Post Loading ETL - to define an ETL that will be executed after the data has been
loaded from the staging tables to the data warehouse.
3. If you selected Single Table ETL, select an entity in the Entity column and then click the New
button above the Entity list. For Multi Table and Post Loading ETLs, just click the New
button.
4. Specify a name for your ETL and then click OK.
If you selected Single Table ETL, the ETL is added as a link to the User Defined ETL column.
If you selected Multi Table ETL or Post Loading ETL, the ETL is added as a link in their
respective tabs.
5. Click the link to open the Edit ETL Instructions window.
6. If you selected Single Table ETL, select a column and click the arrow to the right of the
selected column to add it to the ETL.
If you selected Multi Table ETL or Post Loading ETL, select a table and a column and then
click the arrow to the right of the selected table/column to add it to the ETL. Repeat as
necessary.
7. Use the Select, Delete, Insert and Update toolbar buttons at the top of the window to add
SQL statements to your ETL.
8. To run the ETL as a stored procedure (that already exists in the data warehouse):
a. Select the Execute as Stored Procedure check box.
b. Click the Stored Procedure toolbar button.
c. Replace STORED_PROCEDURE with the name of your stored procedure and replace (PARAM1, PARAM2) with any parameters that it needs. Note that parameters must be separated by a comma, as shown in the example below.
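For example, assuming a stored procedure named usp_CleanupStaging that accepts two parameters (both the procedure name and the parameter values here are illustrative), you would end up with:
usp_CleanupStaging ('Orders', 2024)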
1. Click the Manage button in the Model panel. The Manage Model window opens.
2. Select Employees from the Entities list on the left.
3. Click the + (plus) toolbar button to add a new Attribute. A new row is added to the Attributes
table.
4. Type any letter in the Column Name column to bring up the "Add New" option. Click the "Add
New" option when it appears.
14. The next stage is to define an ETL that will map the First Name and Last Name source
columns to the Full Name data warehouse column.
15. Close the Edit Mappings - Map_Employees_1 window and then select the Single Table ETL
tab on the left.
16. Select Employees in the Entity column and then click the New button above the column. The
Add New Single Table ETL window opens.
17. Specify a name or leave the default name and then click OK.
18. Click the Edit button (represented by a pencil icon) at the end of the Employees row. The
Edit Single Table ETL: <Name> window opens.
19. In the editing pane on the right, enter the following instruction:
UPDATE dbo.TSTG_EMPLOYEES SET
FullName = LASTNAME + ' ' + FIRSTNAME
After Compose has finished populating the Data Warehouse, you can open the
table in Microsoft SQL Server Management Studio and verify that the new column
has been added with the correct data.
You can update custom ETLs using the Compose CLI. This functionality can be incorporated into a
script to easily update Custom ETLs.
Syntax:
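A sketch of the command, based on the parameter descriptions below:
ComposeCli.exe update_custom_etls --project project_name --infolder folder_path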
Where:
l project is the name of the project with the custom ETLs you want to update
l infolder is the full path to the folder containing the custom ETL files
Example:
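For instance (the project name and folder path are illustrative):
ComposeCli.exe update_custom_etls --project MyProject --infolder "C:\Compose\CustomETLs"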
The file names in the input folder must be identical to the custom ETL names in the
specified project. Otherwise, an error will occur. The file extension (for example, .txt) is
not important, but the file must be in SQL format.
While it is possible to include single-table logic in the Multi Table ETL script, the advantage of defining it as a Single Table ETL script is that it can run in parallel with the ETLs for other tables.
↓
Single Table ETL (Source of data: staging tables)
↓
Post Loading ETL (Source of data: data warehouse tables)
Within each of the above groups, the scripts are executed according to their numeric order (from
lowest to highest), which is set by the user-defined Sequence Number. The execution order of
several scripts in a group with the same sequence number will be random.
Managing tasks
A task contains the mappings between the columns in the landing zone tables and the columns in
the logical entities. The same mappings can be used by several tasks. You can create new tasks,
duplicate tasks and edit existing tasks as required.
You must regenerate the task and then run a data warehouse task whenever the
mappings are modified or whenever custom ETLs are added or modified. Populating the
data warehouse can either be done manually as described in Controlling data warehouse
tasks (page 200) or automatically as described in Scheduling tasks (page 272).
If you have already run the data mart tasks, then you also need to regenerate the data
mart ETLs and run the tasks again as described in Creating and managing data marts
(page 230).
For more information on global mappings, see Managing global mappings (page 163).
One possible reason to duplicate a task is if your model contains different types of tables and you
want to manage them in separate ETLs.
Adding tasks
1. Click the Manage button at the bottom left of the Data Warehouse panel. The Manage Data
Warehouse Tasks window opens.
2. Click the New Task toolbar button. The New Task dialog opens.
3. Specify a name for the task.
Do not select a task type that conflicts with your Replicate task. For instance, do
not select Change Processing if your Replicate task is Full Load only.
6. Click OK to create the task. Select the task name in the left pane and continue from Editing
column mappings (page 207).
Editing tasks
1. Click the Manage button at the bottom left of the Data Warehouse panel. The Manage Data
Warehouse Tasks window opens.
2. In the left pane, double-click the task you want to edit or hover your mouse cursor over the
task and click the Edit button.
3. Edit the task as described in steps 3-5 of Adding tasks (page 206) above and then click OK.
4. Generate the task as described in Generating data warehouse tasks (page 198).
Duplicating tasks
1. Click the Manage button at the bottom left of the Data Warehouse panel. The Manage Data
Warehouse Tasks window opens.
2. Select the task you want to duplicate and then click the Duplicate toolbar button. The
Duplicate window opens.
3. Specify a Name for the new task.
4. Select a Landing Zone.
5. Optionally change the default Schema.
6. Select one of the available task types.
Do not select a task type that conflicts with your Replicate task. For instance, do
not select Change Tables Only if your Replicate task is Full Load only.
7. Click OK.
8. Select the task name in the left pane and continue from Editing column mappings (page 207).
3. In the Mappings column, click the mapping that you want to edit. The Edit Mapping: Name
window opens.
4. Edit the mapping as described below.
The mapping procedure differs depending on whether you are in Standard View or
Compact View. For information on changing the view, see Changing the view (page
208).
In Standard View:
1. Hover the mouse cursor over the source column name as shown in the image below. A gray
dot appears to the right of the column name.
2. Drag the mouse cursor from the gray dot to the desired column in the logical entity.
3. When the dotted line turns green (as shown below), release your mouse button.
Note that if the dotted line turns red (instead of green), you will not be able to map the
source column with the desired data warehouse column. A red dotted line indicates that the
source and data warehouse column data types are incompatible with each other.
In Compact View:
Auto-generating mapping
Click the Auto-Map toolbar button.
Changing to a more compact view is recommended for source tables that have numerous
columns. In compact view, the table columns are organized in rows (instead of a single list), making
it easier to locate source columns and map them to the desired data warehouse columns. You can
also use the search box to filter out all columns that do not match the search string.
For information on creating mappings in Compact view, see Map a column in a landing zone table to
a column in a staging area table.
To set a query:
1. Click the Set Query button. The Edit Mapping Select Query: <Mapping Name> window
opens.
2. Hover the mouse cursor over a table and/or a column and then click the arrow to the right of
the highlighted table/column to add it to the Query.
3. Use the Select button at the top of the window to add select statements to your query.
Optionally use the Undo, Redo and Clear buttons as required.
4. Click OK to save your settings and close the window.
The query results will be displayed on the left of the Edit Mappings: <Name> window.
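For example, a source query that joins two landing tables into a single input set for the mapping might look like this (the table and column names are illustrative):
SELECT o.OrderID, o.OrderDate, c.CompanyName
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerID = o.CustomerID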
When creating a filter for a table, the expression should return 1 for data that you
want to include and 0 for data that you want to exclude.
The filter will be applied after any Data Cleansing rules that are defined.
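For example, a filter that keeps only rows that have a shipped date might look like this (the column name is illustrative):
CASE WHEN ${ShippedDate} IS NOT NULL THEN 1 ELSE 0 END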
Each platform has its own variation of SQL syntax. Therefore, make sure the syntax you
use conforms to the SQL syntax supported by your data warehouse.
When mapping "From Date" columns in the Landing Zone to the "FD" Staging column,
make sure that the dates in the Landing Zone columns are not earlier than the "Lowest
Date" set in the Project Settings. Otherwise, any data with a "From Date" earlier than the
"Lowest Date" will be ignored.
If some of the dates in the Landing Zone column are earlier than the "Lowest Date" and
cannot be changed in the source, either change the "Lowest Date" set in the Project
Settings - or - use a transformation in Replicate to convert the source dates to dates that
are within"Lowest Date" to "Highest Date" range defined in your project.
1. Click the Manage button in the Data Warehouse panel. The Manage Data Warehouse Tasks
window opens.
2. In the left pane, select the task you want to add, delete, or rename.
3. Select the Mappings tab.
1. In the Logical Entities column, select the logical entity that you want to map.
2. Click the New button above the Logical Entities column. The New Mapping window opens.
3. Optionally change the default mapping name.
4. Click OK to save the mapping.
5. Enable the mapping.
Deleting a mapping
To delete a mapping:
1. In the Mappings column, hover the mouse cursor over the mapping you want to delete.
2. Click the Delete (x) button that appears to its right.
3. Click OK when prompted to confirm the deletion.
Renaming a mapping
To rename a mapping:
1. In the Mappings column, hover the mouse cursor over the mapping you want to rename.
2. Click the Rename (A) button that appears to its right. The Rename window opens.
3. Specify a new name for the mapping and then click OK.
Since Compose randomly chooses which record to add to the data warehouse, you may
want to run a data warehouse task first to see if there are any duplicate record errors. In
the event that there are, you can then modify the data source to remove records that
have the same business key.
You should also select the Handle Duplicates check box in the following situations:
l The Data Warehouse task type is either Full Load and Change Tables or Change Tables
Only.
This is because the Change Tables may contain two records with the same business key:
The old record and the updated record. When the Handle Duplicates check box is selected,
the updated record will always be inserted/updated to/in the data warehouse.
l When a single table in the data warehouse is derived from multiple landing zone tables, the
same business key will be set for each of the mappings. To prevent an error from occurring,
you need to select the Handle Duplicates check box.
To do this:
1. In the Manage Data Warehouse Tasks window, select the desired mapping.
2. Click the Null Updates toolbar button.
3. Select one of the available options. For a description of the options, see Handling Null
Updates.
1. Click the link to the desired task in the Data Warehouse panel. The Manage Data
Warehouse Tasks window opens.
2. In the Mappings column, click the mapping for the logical entity containing the result column
(with the data that you want to replace). The Edit Mapping - Name window opens.
3. Hover the mouse cursor over the relevant data warehouse column and then click the Lookup
button that appears to the right of the column name. The Select Lookup Table window
opens.
a. From the Database drop-down list, select the database containing the lookup table.
b. From the Schema drop-down list, select the schema containing your source lookup
tables.
c. Select either Table or View according to the lookup table type.
d. From the Table drop-down list, select the lookup table.
The right side of the Select Lookup Table window displays the lookup table columns
and their data types. To view the data in the lookup table, click the Show Lookup Data
button.
e. After you have selected the lookup table, click OK.
4. After selecting the lookup table, the Lookup Transformations - Table Name.Column Name
window opens. The window is divided into the following panes:
l Upper pane: The upper part of the right pane (Condition) displays the condition
expression, which stipulates the condition(s) for performing the lookup.
l Lower pane: The lower part of the right pane (Result Column) displays the column
result expression, which stipulates what data to replace in the target column.
5. To change the lookup table, click the Change Lookup Table button above the lookup table
columns and then perform steps a. to d. above.
6. To view the lookup table or landing table data, click the Show Lookup Data or Show Landing
Data buttons respectively.
7. To specify condition(s) for performing the lookup, click the Create Expression button (which
changes to Edit Expression after an expression has been created) above the Condition
expression. The Condition Expression - Column Name window opens.
You can create an expression using the landing and lookup table columns on the left.
For an example, see Lookup example (page 213). For information on creating expressions,
see Creating expressions (page 186).
8. To specify what data to replace or add if the lookup conditions are met, click the Create
Expression button (which changes to Edit Expression after an expression has been created)
above the Result Column expression. The Result Expression - Column Name window
opens.
You can create an expression using the landing and lookup table columns on the left.
For an example, see Lookup example (page 213). For information on creating expressions,
see Creating expressions (page 186).
9. To preview the results, click the Preview Results button.
10. Click OK to save your settings and close the Lookup Transformations - Table
Name.Column Name window.
Using lookup tables that do not have a task for CDC mapping
When the Store Changes option is enabled in the Replicate task, Replicate creates Change Tables
in the landing zone. These tables contain only the changes to the original data. The Compose CDC task reads the changes from the Change Tables and applies them to the target tables. However, if
the landing zone contains dedicated lookup tables (i.e. tables that are not associated with any
Compose task), Compose will not be able to apply changes to these tables.
There are two ways of handling such a scenario, both of which are described below.
Method 1
Define another Replicate task with the Apply Changes replication option enabled.
Method 2
1. Discover the landing site and add all the lookup tables to the Compose model without any
relation to/from other entities.
2. Either, define lookups from the data warehouse hub tables to the newly added entities.
OR
Create relationships from the data warehouse hub tables to the newly added entities.
Creating relationships may not be a viable option when the lookup tables are
complex.
3. Define a new data warehouse Change Tables Only task that updates the lookup tables.
4. Ensure that the new task runs before the data warehouse task.
The advantage of this method is twofold: a.) All the tables used in the mappings are managed by
Compose, and b.) Only one Replicate task needs to be defined (which also means that the database
transaction logs are read only once). The disadvantage is that you need to ensure that the task that
updates the lookup entities always runs before any data warehouse task.
Lookup example
The following example shows how a lookup table is used to concatenate a Dutch translation of the
category name (located in the lookup table) to the original category name located in the landing
table.
Example:
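Conceptually, the lookup performs a join and concatenation similar to the following (the schema, table, and column names are illustrative):
SELECT l.CategoryName + ' - ' + lu.CategoryName_NL
FROM landing.Categories l
JOIN lookup_schema.Categories_NL lu ON lu.CategoryID = l.CategoryID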
Assuming the result column name is "Split Name", clicking the Preview Results button would
display the following table:
Note that dropping and recreating tables will delete all of the data in the tables and should only be performed in the absence of a better option.
In some scenarios, you need to edit the CREATE table statements before they are run.
This can be done using the Generate DDL scripts but do not run them in Project settings
(page 38). For example, if your data warehouse tables contain partitions, you will need to
edit the script to maintain the partitions.
1. In the Data Warehouse panel, select the Drop and Recreate Tables item from the menu in
the top right corner. The Drop and Recreate Tables window opens.
2. You can select to drop and/or recreate one or more of the following tables:
l Data Warehouse & Data Marts - The data warehouse tables are derived from the
model whereas the data mart tables are derived from the data warehouse tables.
l Logging - These tables are generated when the task runs and contain logging
information. By default, these tables are prefixed with the string "TLOG".
l Intermediate - These tables are temporary tables that are created when the task runs.
By default, these tables are prefixed with the string "TTMP".
l Error Mart - These are the data mart exception tables. Data that is rejected by data
quality rules will be copied to tables in the specified error mart schema. See also Error
Mart.
l Archive Tables - These are the tables that are created when the option to archive
Change Tables after the changes have been applied (to the data warehouse tables) is
selected. For more information, see Defining landing zones (page 142).
3. Click OK to perform the drop and/or recreate operation.
Data profiling
Data profiling is an analysis of the candidate data sources for a data warehouse to clarify the
structure, content, relationships and derivation rules of the data. In short, data profiling helps you
understand your data and model it correctly.
Qlik Compose enables you to profile the data in the landing zone tables before it is loaded into the
data warehouse. If you discover a problem with certain data, then you can either manually adjust
the source tables or create a rule for handling the data in question.
1. Click the Manage button at the bottom of the Data Warehouse panel.
2. In the Manage Data Warehouse Tasks window, click the link in the Mapping column for the
table you want to profile.
3. In the Edit Mappings - <Name> window, click the Data Profiler toolbar button. The Profile
<Table Name> (Landing Zone) window opens. The following columns are displayed:
l Column Name - The name of the table column
l Nulls - The number of null values in the column
l Count - The number of rows in the column.
l Count Distinct - The number of unique rows in the column.
l Duplicates - The number of duplicate values in the column.
Note that although Compose calculates the number of duplicate values by subtracting
Count Distinct from Count, the actual number of records displayed when you click
the Duplicates number will be higher. This is because Compose has no way of
knowing which of the records that share the same column value are legitimate
duplicates (if any). It therefore displays all records that share the same value so you
can decide which of them to delete (if any).
For example, in the Employees table, there may be several employees that live in
London (the City column). Therefore duplicates of "London" are perfectly acceptable.
However, two employees with the same phone number and a different address, for
example, may indicate that the phone number in one of the records was entered
incorrectly.
Duplicate values are quite common and usually do not indicate a problem. Where this
feature is particularly useful however, is for detecting duplicate Primary Key candidate
columns.
l Data Type - The column data type
l Max - The highest data value
l Max Length - The longest data value
l Min - The lowest data value
l Min Length - The shortest data value
4. For more information about a value, click the link in the column. A window opens showing the
record(s) containing the value. To add a Data Quality rule, click the Data Quality button and
continue as described in Defining and managing data quality rules (page 217).
5. To only show columns that are mapped to a logical entity column, select the Only show
mapped columns check box.
6. To change the number of rows sampled, select a different value from the Rows to sample drop-down list. Note that the table may contain fewer rows than the selected value. The
Sampled records value is the actual number of rows sampled.
7. To see all the table data, click the Show Data button.
The table's Full Load data will always be shown, even for a mapping in a Change
Processing (CDC) task.
8. To recalculate the data, click the Recalculate button. This is useful if the data in the landing
zone tables is being constantly updated (for example, due to a Replicate Change Processing
task).
9. To search for a particular value, start typing the value in the Search box. Only values that
match the search term will be shown.
Compose provides two ways of ensuring data quality: Data validation and data cleansing. As
opposed to data validation which usually results in data being rejected, data cleansing provides a
means of replacing, modifying, or deleting incomplete, incorrect or inaccurate data.
Data that is rejected by a rule will be copied to Error Mart tables in the Error Mart schema defined in
the Landing Zone database settings.
Details about rejected data can be viewed in the monitor's Error Mart tab. For more information,
see Viewing information in the monitor (page 267).
Data Cleansing rules will be applied before any filters that are defined.
To add a rule:
1. Click the Manage button at the bottom of the Data Warehouse panel.
2. In the Manage Data Warehouse Tasks window, click the link in the Mapping column for the
relevant table.
3. In the Edit Mappings - <Name> window, click the Data Quality toolbar button. The Data
Quality Rules - <Table Name> window opens.
4. To add a new rule, click the New toolbar button. A row is added to the rules table.
5. In the Name column, specify a name for the rule.
6. From the drop-down list in the Column column, select the column to which the rule will be
applied.
7. Hover the mouse-cursor over the Condition column and then click the fx button that appears
on the right.
8. In the Edit Condition Rule window, create a condition (using an expression) that the data in
the column must meet in order to be considered valid. For more information on creating
expressions, see Opening the expression builder (page 187).
See also Simple Example Rule (page 218) below.
9. From the drop-down list in the If Condition is False column, select Cleanse Silently.
10. Hover the mouse-cursor over the Correction column and then click the fx button that
appears on the right.
11. In the Edit Correction Rule window, create an expression to cleanse the data. For more
information on creating expressions, see Opening the expression builder (page 187).
See also Simple Example Rule (page 218) below.
12. In the Description column, enter a description for the rule.
13. In the Enabled column, select or clear the check box to enable (the default) or disable the
rule respectively.
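For example, a cleansing rule that silently trims stray spaces from a Country column could use the following condition and correction expressions (the column name is illustrative):
Condition: ${Country} = LTRIM(RTRIM(${Country}))
Correction: LTRIM(RTRIM(${Country}))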
To add a rule:
1. Click the Manage button at the bottom of the Data Warehouse panel.
2. In the Manage Data Warehouse Tasks window, click the link in the Mapping column for the
table you want to profile.
3. In the Edit Mappings - <Name> window, click the Data Quality toolbar button. The Data
Quality Rules - <Table Name> window opens.
The default rule rejects primary keys that have a null value and reports the rows.
4. To add a new rule, click the New toolbar button. A row is added to the rules table.
5. In the Name column, specify a name for the rule.
6. Hover the mouse-cursor over the Rule column and then click the fx button that appears on
the right.
7. In the Edit Data Quality Rule window create a rule using an expression. For more information
on creating expressions, see Opening the expression builder (page 187).
See also Simple Example Rule (page 218) below.
8. From the drop-down list in the Error Action column, select one of the following actions
(performed when the data does not meet the rule conditions):
l Reject and report - Reject the data and send a report
l Reject silently - Reject the data without sending a report
l Reject and abort - Reject the data and abort the data warehouse task
l Accept and report - Accept the data and send a report
l When the "report" option is selected, the row is reported to the <landing_
table_name>__ex table in the data warehouse error mart.
l When there are multiple data validation rules, Compose will stop evaluating
the data after the first error (and report only that error). Once the error is
fixed, additional data evaluation errors may be reported for the remaining
rules, each time the data is loaded.
A rule that is defined to reject or accept a non-null value (e.g. 2) in a given column
will also reject/accept NULL values that appear in the same column, but in
different records. To prevent this from happening, add the following condition to
the rule: "and column value is not null"
Example: LEN(${CName})<2 and (${CName} is not null)
Simple Example Rule:
${UnitsInStock}>1
This rule requires the value in the UnitsInStock column to be greater than 1.
To change the order of Data Quality rules, select the rule that you want to move and then use the
arrows above the rules list to change the position of the rule.
There are two ways you can view missing references in Compose. Either via the Monitor tab in the
Manage Data Warehouse Tasks window or by switching the console to Monitor view and selecting
the Missing References tab. The instructions below cover both of these methods.
To check for missing references in the Manage Data Warehouse Tasks window:
1. Click the Manage button in the lower left corner of the Data Warehouse panel.
2. Select the desired task in the left side of the Manage Data Warehouse Tasks window.
3. Switch to Monitor view by clicking the Monitor tab in the top right of the Manage Data
Warehouse Tasks window.
4. Click the View Missing References toolbar button. The Missing References - <Task Name> window opens.
The following information is displayed:
l General information: The run number of the task, when it started and ended, the total
number of inserts and updates, and the number of reported rows (if any).
l Missing references information:
l Missing Records from Entity - The name of the entity with missing reference
and the number of missing references.
To see the missing record keys for the entity, click the number in parentheses
to the right of the entity name.
The Missing Record Keys for Entity - <Entity Name> window opens showing
the list of missing keys and the number of times each key is referenced per
entity.
l Referenced from Entity - The entities that are referencing the entity with
missing references.
l Via Relationship - The name of the relationship in the Model.
5. To close the window, click Close.
To see the missing record keys for the entity, click the number in parentheses
to the right of the entity name.
The Missing Record Keys for Entity - <Entity Name> window opens showing
the list of missing keys and the number of times each key is referenced per
entity.
l Referenced from Entity - The entities that are referencing the entity with
missing references.
l Via Relationship - The name of the relationship in the Model.
4. To close the window, click Close.
Orders contains seven records pointing to Mr. Brown and one record pointing to Mr. Smith.
Disputes contains four records referencing Mr. Brown. Mr. Brown and Mr. Smith are "missing" from
Customers.
Clicking the number to the right of Customers (in the Missing Records from Entity column) would
open the following window:
See also: How Compose handles missing references in the data warehouse (page 195).
1. Click the Manage button at the bottom left of the DATA WAREHOUSE panel.
The Manage Data Warehouse Tasks window opens.
2. Click the Task Statements toolbar button.
3. The Task Statements - <Name> window opens in List View. Navigate through the
commands using the scroll bar or find specific commands using the Search box.
OR
Click the Item View button and navigate through the commands using the navigation
buttons at the bottom of the Task Statements - <Name> window.
To jump to a specific command, type the command number in the Go To field at the
bottom of the window and then press [Enter].
1. In List View, click the Export to CSV File button located to the left of the search field.
2. A file named "<name>_ETL_Instructions.csv" will be saved to your default Downloads
location or you will be prompted to save it (according to your browser settings).
1. Click the Manage button in the bottom left of the DATA WAREHOUSE panel.
2. Select a task in the left panel.
3. Click the Settings toolbar button.
A window opens displaying the following tabs: General, Advanced, and Consolidation.
General Tab
In the General tab, the following settings are available:
l Log level: Select the log level granularity, which can be any of the following:
l INFO (default) - Logs informational messages that highlight the progress of the ETL
process at a coarse-grained level.
l VERBOSE - Logs fine-grained informational events that are most useful to debug the
ETL process.
l TRACE - Logs finer-grained informational events than the VERBOSE level.
The log levels VERBOSE and TRACE impact performance. Therefore, you should only select
them for troubleshooting if advised by Qlik Support.
l Default History Resolution: Choose the granularity of the "From Date" column value when a
new history record is inserted:
l Minutes to update with the date and time. This is the default. When this option is
selected, a new record will be inserted each time the data is updated.
l Days to update with the date only. When this option is selected, only one record (the
most recently updated) will be inserted at the end of the day.
These settings will be applied, regardless of the original source column (when
mapped) or Change Table [header__] timestamp column (when not mapped)
granularity. So, for instance, if a source column with date and time granularity is
mapped to the "From Date" column and Days is selected, then only one record
(the most recently updated) will be inserted at the end of the day.
When creating a new project, the default behavior is to write NULL instead
of keeping the values unchanged between the two mappings.
l Set the target value to null: Select this if you want the source and target values to
correspond. This can be useful, for example, when a person moves address and one of
the column values (e.g. "State") changes to null.
When ingesting changes from an Oracle source, this option requires full
supplemental logging for all source table columns that exist on the target
and any source columns referenced in filters, data quality rules, lookups,
and expressions.
Advanced Tab
In the Advanced tab, the following settings are available:
l Sequential Processing: Select this option if you want all the data warehouse tasks to run
sequentially, even if they can be run in parallel. This may be useful for debugging or profiling,
but it may also affect performance.
l Maximum number of database connections: Enter the maximum number of connections
allowed. The default size is 10.
For more information, see Determining the required number of database connections (page
22).
l JVM memory settings: Edit the memory for the java virtual machine (JVM) if you experience
performance issues. Xms is the minimum memory; Xmx is the maximum memory. The JVM
starts running with the Xms value and can use up to the Xmx value.
l Position in default workflow: Select where you want the data warehouse tasks to appear in
the default workflow. For more information on workflows, see Workflows (page 276).
l Optimize for initial load: Optimizes initial load in certain cases. Only select this option if the source tables do not reference missing records, do not use lookups, do not map different source records to the same record, do not contain Type 1 self-references, and do not contain historical records. Note also that when this option is selected, the following features are not supported:
l Data quality rules
l Derived attributes
l Consolidation of uniform sources (see Consolidation below)
l The Handle duplicates option
In the event that the task is used for incremental loading (using query-based change
processing), clear the check box after the initial load task completes and regenerate the task.
l Write task statement duration to the TLOG_PROCLOG table in the data warehouse: This
option is useful for troubleshooting performance issues with ETL processes as it records the
duration of each task statement in a special table (named TLOG_PROCLOG) in the data
warehouse. You can then use this information to locate task statements with abnormal
duration times and modify them accordingly.
l Do not create indexes for data warehouse tables: During the task, Compose creates an
internal index for each of the Data Warehouse tables (for query optimization). When running
several consecutive tasks (e.g. via a workflow) with a large volume of tables, this process
can be extremely time-consuming. In such a scenario, best practice is to select the check
box for each of the tasks, except the last one.
l Do not truncate staging tables: Select this option if you want the ETL process to preserve
the staging tables. Only use for debugging.
l Stop processing after populating the staging tables: Select if you do not want to proceed
to populating the warehouse. Only use for debugging.
l Do not drop temporary tables: Select this option if you want to keep the temporary tables
created during the ETL process. Only use for debugging.
Consolidation Tab
When the Consolidate uniform sources option is enabled, Compose will read from the selected
data sources and write the data to one consolidated entity. This is especially useful if your source
data is managed across several databases with the same structure, as instead of having to define
multiple data warehouse tasks (one for each source), you only need to define a single task that
consolidates the data from the selected data sources.
Editing the list of data sources requires you to regenerate the task.
The list of selectable data sources reflects the list of Source Databases that appears in
the Databases panel in Designer view.
To facilitate downstream processing, you might want to add a record identifier column
(for example, SourceID) to the primary key of all your entities. However, if one entity
references another (for example, Orders → Customers), a naming conflict will arise as
the new column (SourceID) will then appear in the referencing entity (Orders) twice. To
prevent such conflicts from occurring, you should add the column to each entity with a
unique prefix derived from the entity name. So, continuing with the Orders →
Customers relationship example, the column name in the Orders entity should be
orders_SourceID while the column name in the Customers entity should be customers_
SourceID.
Prerequisites
l The structure of the tables in the selected sources must be identical.
l Source type can be Table or View, but not Query.
The source data does not have to reside in tables only or in views only; it can be
ingested from a combination of views and tables. For example, the source data
might be ingested from tables A, B, and C in Landing 1, and views A, B, and C in
Landing 2.
Error Mart window showing the number of rows reported for each error mart
For more information on error marts, see Viewing information in the monitor (page 267).
Monitor showing a consolidation task with the total number of rows inserted from all data sources
For a data warehouse to be considered valid, the tables defined in the data warehouse need to be
identical to the physical tables in terms of metadata. Depending on the change, this may require
adjusting the physical tables or dropping and recreating them (via Compose).
If the data warehouse is not valid, any tasks that you attempt to run will fail.
Changes to Distribution Keys cannot be validated (or adjusted). Such changes need to
be applied manually to the Data Warehouse tables.
Sometimes, however, the differences between the model and the data warehouse cannot be
resolved automatically. In such cases, you need to drop and recreate the tables as described in
Dropping and recreating tables (page 214).
1. Click the Validate button at the bottom right of the Data Warehouse panel. The Validating
the Data Warehouse progress window opens.
If any differences are detected, the following message will be displayed: The data
warehouse is different from the model.
2. Click Close. The Model and Data Warehouse Comparison Report window opens.
3. Review the report and then click Adjust Automatically to resolve the differences
automatically or Generate Adjust Script to generate a script with the adjust commands.
l If you clicked Adjust Automatically, the Adjust Data Warehouse progress window
opens.
When the "The data warehouse was adjusted successfully." message is displayed, you
can close the window. Note that adjusting the data warehouse may require you to
update the data mart. In such a case, an appropriate message will be displayed for
each of the data marts that require updating.
l If you clicked Generate Adjust Script, the Generate DDL Scripts window opens
showing the progress of the script generation.
The generated scripts will be saved to:
<product_dir>\data\projects\<project_name>\ddl-scripts
Once the script(s) have been generated, you can close the Generate DDL Scripts
window.
After you close the Generate DDL Scripts window, the DDL Script Files window
opens automatically displaying the generated scripts. The DDL Script Files provides a
read-only view that allows you to review the scripts and download them.
The scripts need to be executed directly in your data warehouse. Make sure that any
modifications that you make to the scripts are done prior to executing them.
When you run the adjust scripts, backup tables are created from the
existing tables. The backup table names are appended with an "_old" suffix
and must be deleted manually after the script completes.
Search for "TODO" in the script to locate the part of the script that needs
modifying.
If you are aware of external changes to the metadata or if you notice any data synchronization
anomalies, Compose enables you to clear the metadata cache, either using the UI or using the CLI.
Clearing the data warehouse metadata cache with the web console
1. In the Data Warehouse panel, select Clear Metadata Cache from the menu in the top right
corner.
For information on clearing the Landing Zone metadata cache, see Clearing the Landing Zone
metadata cache (page 160).
The storage value for the --type parameter described below refers to the data
warehouse metadata cache.
You can also clear the metadata cache using the CLI.
Command syntax:
ComposeCli.exe clear_cache --project project_name [--type landing|storage] [--landing_zone
source_name]
Parameters:
l --project: The name of the project.
l --type: The type of metadata cache to clear. Possible values are landing and storage.
l --landing_zone: The name of the landing zone source (relevant when clearing the landing cache).
Example
ComposeCli.exe clear_cache --project MyProject --type landing --landing_zone MySource1
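Similarly, to clear the data warehouse metadata cache for the same project:
ComposeCli.exe clear_cache --project MyProject --type storage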
In this section:
l If you edit an expression or a column lookup in a dimension, the changes will not
be applied to existing data. To apply such changes, you need to reload the data
(which could take some time, depending on the number of records and whether
there are a lot of historical records).
l Data warehouse tasks cannot run in parallel with data mart tasks.
1. Click the New button located at the bottom of the Data Mart panel.
OR
Click the Manage button and then click the New button located at the top of the Manage
Data Marts window. The New Data Mart window opens.
2. Optionally change the default name and provide a description.
3. Make sure that the Start New Star Schema Wizard check box is selected (the default) and
then click OK. The New Star Schema wizard opens.
4. Provide a name and description (optional) for the star schema.
5. Select one of the available fact types:
l Transactional - A star schema with a transactional fact table allows you to retrieve
the desired data, even if a dimension table contains multiple versions of the same
record. To use an example from the automotive industry, selecting "OrderDate" as the
Transaction Date would allow you to generate a report for the number of customers
who bought cars in New York between 2013 and 2016, even if a customer moved to a
different city (which would also result in a new record being added to the Customers
dimension).
l Aggregated - A star schema with an aggregated fact table allows you to make
aggregate calculations based on the fact table attributes. For instance, you could
create an aggregated fact that shows the total freight costs per shipping region and
product category. Additionally, the presence of a transaction date in the fact table
makes it possible to retrieve the desired data, even if a dimension contains multiple
versions of the same record. To use an example from the shipping industry, a shipper
could use an aggregated fact to generate a report for the total cost of shipping rice to
Australia from 2015-2016.
l State Oriented - A star schema with a state oriented fact supports Type 2 columns in
the fact table. This is useful in cases where the fact is not a singular event in time, but
rather, consists of multiple "states" or events that occur over time. Typical examples of
facts with multiple states are insurance claims or flight reservations. There are also
cases when the same entity is treated as both a fact and a dimension - for example,
Customers. In such cases, a report could be generated that relates to the state of the
fact, such as the time a claim was submitted to the time it was approved.
6. Click Next.
7. In the Facts screen, choose one fact for the star schema and then click Next. The
Dimensions screen is displayed. The left pane lists the dimensions that can be selected
while the right pane displays a diagram of the star schema with the selected dimensions. You
can view a dimension’s lineage by selecting the desired dimension and then clicking the
Lineage button. For more information on lineage, see Lineage and impact analysis (page
181).
The left pane of the Dimensions screen contains the following areas:
l Existing Dimensions - Lists the dimensions that already exist in your data mart. Note
that only dimensions that are relevant to the selected fact table will be displayed.
l Create New Dimensions - Lists all of the dimensions that can be added to the star
schema.
l Date Dimensions - Lists all of the Date dimensions that can be added to the star
schema. Note that these dimensions will only be available for selection if you added
the Date and Time entities to your model. For an explanation of how to do this, see
Adding Date and Time entities to your model (page 182).
l Time Dimensions - Lists all of the Time dimensions that can be added to the star
schema. Note that these dimensions will only be available for selection if you added
the Date and Time entities to your model. For an explanation of how to do this, see
Adding Date and Time entities to your model (page 182).
When adding dimensions using the wizard, if a root dimension already exists in
the data mart, any dimensions selected under the root dimension will be ignored.
Workaround: Edit the dimension and delete or add columns as required.
8. Choose which dimensions to include in the star schema and then click Next.
9. If you chose Star Schema with State Orientation as your star schema type, click Finish.
Otherwise, continue from Step 10 below.
10. In the Transaction Date screen, choose which Transaction Date to include in the data mart
fact table. Selecting a Transaction Date enables you to retrieve the required data, even if the
Dimension table contains multiple versions of the same record.
For example, a car salesman wants to know how many customers bought cars in New York
between 2013 and 2015. Selecting OrderDate as the Transaction Date for the Customers
Dimension would make it possible to retrieve this information, even if a customer moved to a
different city (which would also result in a new record being added to the data mart).
11. If you chose Transactional as your star schema fact type, click Finish. If you chose
Aggregated as your star schema fact type, continue from Step 12 below.
12. In the Aggregated Fact screen:
a. Select one or more columns from the Fact table on the left of the screen.
You can select multiple columns by holding down the [Shift] (sequential
selection) or [Ctrl] (non-sequential selection) buttons while selecting the
columns.
b. To add the column(s) to the Group By list on the right, either drag the columns to the
list or click the arrowhead button to the left of the Group By list. Note that each
dimension has a default "Group By" column that cannot be deleted.
c. To add the column(s) to the Aggregations list on the right, either drag the columns to
the list or click the arrowhead button to the left of the Aggregations list.
d. To add new columns to the Group By or Aggregations list, click the New button
above the list. In the New column window, specify a Name, Type, Description and
Aggregation (when adding a new aggregation column) and then click OK. The column
is added to the list.
e. To add an expression, hover the mouse cursor over the table cell in the Expression
column and then click the fx button that appears on the right. The Edit Expression:
<Name> window opens.
For more information on creating expressions, see Creating expressions (page 186).
f. To delete a column, select the column in the list and then click the Delete button
above the list.
You can select multiple columns for deletion by holding down the [Shift]
(sequential selection) or [Ctrl] (non-sequential selection) buttons while
selecting the columns.
14. Click the Create Tables toolbar button. The Creating Data Mart: Data Mart Name in Target
progress window opens. Wait for the "Create Data Mart tables finished successfully."
message to be displayed and then click Close.
After the data mart tables are created, the Create Tables button changes to Drop
and Recreate tables.
l To generate the task with basic validations, click the Generate toolbar button.
By default, Compose generates the task with basic validations. Basic validations are
suitable for most tasks, but are especially useful for tasks with numerous expressions
and lookups, as generating such tasks with all validations could take a long time.
l To generate the task with all validations, click the inverted triangle to the right of the
Generate button and select With all validations from the drop-down menu.
All validations include checks that access the database to verify the existence of
columns used in expressions and lookups. As selecting With all validations will
significantly lengthen the time it takes to generate the task, you should only select it if
it is critical to verify the existence of such columns before the task starts.
The Generating Statements for Task: Data Mart Name window opens. Wait for the
"Generating Statement for Data Mart No. <number> finished successfully." message to be
displayed and then click Close.
16. Click the Run toolbar button. The window switches to Monitor view and a progress bar
shows the current progress in terms of percentage.
When the Total ETL reaches 100 percent, data mart population is complete.
You can stop the task at any time by clicking the Abort toolbar button. This may be
necessary if, for example, you need to edit the task settings urgently. After editing the
task settings, simply click the Run button again to restart the task.
Other monitoring information such as the run details (i.e. the number of rows
inserted/updated) and the task log files can be accessed by clicking the Run Details and Log
buttons respectively.
Should any errors occur, you can click the link at the end of the Failed bar for additional
information that may help you troubleshoot the problem.
Once your data mart has been loaded with data, you can check that the required data is
available for your BI tools. For more information, see Displaying data in a pivot table (page
237).
Indicates that although the structure for the star schema has been
defined, all or part of the dimension(s) and/or fact table do not
physically exist in the data warehouse. Click Create Tables to create
the tables and/or click Validate to see what needs to be adjusted.
Small squares indicate that there are denormalized tables under the
root dimension table. Each square represents a denormalized table, so
in the image on the left, the Orders root dimension has four
denormalized tables. To view the denormalized table names, hover the
mouse cursor over each of the squares.
Aggregation example
In the following example, Mike, the organization’s data scientist, wants to create an aggregation
table that shows the total freight costs per shipping region and product category; for example, the
total cost of shipping rice to Australia in 2015.
To achieve this objective, he adds the CategoryName and ShipRegion attributes to the Group By list
and then adds the Freight attribute to the Aggregations list. As Mike is interested in the total
freight cost, he selects SUM as the Aggregation Type.
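Conceptually, the aggregated fact that Compose builds in this example corresponds to a query along the
following lines. This is only an illustrative sketch: the table names (prefixed "Fct_" and "Dim_" as
described later in this section) and the join column are assumptions, not the exact objects Compose
generates.
SELECT c.CategoryName,
       f.ShipRegion,
       SUM(f.Freight) AS TotalFreight
FROM Fct_Orders f
JOIN Dim_Categories c ON f.Category_OID = c.Category_OID
GROUP BY c.CategoryName, f.ShipRegion;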
Displaying data in a pivot table
1. Click the Manage button at the bottom of the Data Marts panel.
2. In the Manage Data Marts window, either:
Switch to Monitor view (by clicking the monitor icon) in the top right corner.
OR
Remain in Design view and select a star schema.
3. Click the Pivot toolbar button. If you clicked the Pivot toolbar button in Monitor view and
your data mart contains several star schemas, you will be prompted to select a star
schema. The Select columns for Pivot table window opens. The drop-down list at the top of
the window contains the Fact table and the Dimensions tables that were used to create the
star schema.
The Fact table name is prefixed with "Fct_" while dimension table names are prefixed with
"Dim_".
4. Make sure that "Fct_<FactName>" is selected in the drop-down list and then select which
fact column to add to the pivot table.
5. From the drop-down list, select a dimension and then select which dimension columns to add
to the pivot table.
If you added the Date and Time dimension tables to your data mart, you will be
able to select columns from these dimensions as well.
6. Optionally, repeat Step 5 to add columns from different dimensions to the pivot table.
When the same column is included in two different dimensions, the pivot table
may show incorrect data.
7. Click OK.
8. To form the actual table, drag columns to the gray area below the column names (the X-axis)
and to the gray area on the left of the window (the Y-axis). In the following example, the
ShippedDate column has been dragged to the X-axis while the OrderID column has been
dragged to the Y-axis.
In this example, the QTR column was selected from the Date dimension, allowing orders to
be grouped by quarter.
9. Change the table format, set aggregation, or perform additional actions as described in the
table below.
Additional actions
Set the table format: From the upper drop-down list in the left of the pivot table window, choose
one of the following:
l Table
l Table bar chart
l Heatmap
l Row heatmap
l Col heatmap
l Treemap
Set aggregation options: From the lower drop-down list in the left of the pivot table window,
choose one of the available options.
Note that additional drop-down lists may be displayed depending on the
selected aggregation option. For example, when Sum over Sum is
selected, two additional drop-down lists (containing column names) will
appear below the aggregation options. The Sum over Sum aggregate is
calculated by selecting one column from each of the drop-down lists.
Change the columns: Click the Customize Columns button and continue from Step 3 above.
Data marts pointing to different databases cannot contain tables with the same name.
To add a dimension:
1. Select the dimension(s) you want to add to the data mart. Then click OK. The dimension(s)
are added to the Dimensions list.
2. If you already created the data mart tables (as described in Adding data marts and star
schemas (page 231)), you need to create the new dimension table(s) in the data mart. To do
this, perform the validation process described in Validating and adjusting the data mart
(page 258).
Otherwise, perform steps 14 and 15 in Adding data marts and star schemas (page 231). If you
also want to run a data mart task, perform step 16 as well.
Importing dimensions
You can import dimensions from other data marts in the same project. This is especially useful if:
l Several developers are working on the same data mart, developing different complex
dimensions
l You need to use a dimension from another data mart and modify it slightly
To import dimensions:
1. Open the Manage Data Marts window and click the Import or Reference Dimensions
toolbar button.
2. From the Source data mart drop-down list, select the data mart containing the dimensions
to import.
3. Select Import the selected dimensions.
4. Select which dimensions to import and then click OK.
Only dimensions that do not already exist in the current data mart (with the same
name) are available for selection.
Referencing dimensions
The ability to reference dimensions improves data mart design efficiency and execution flexibility
by facilitating the reuse of data sets. Reuse of dimension tables across data marts allows you to
break up fact tables into smaller units of work for both design and data loading, while ensuring
consistent dimension data across the data marts.
Throughout this section, a dimension that references another dimension will be referred
to as a "referencing dimension", whereas a dimension that is referenced by another
dimension will be referred to as a "referenced dimension".
To reference dimensions:
1. Open the Manage Data Marts window and click the Import or Reference Dimensions
toolbar button.
2. In the Import or Reference Dimensions window, select the Source data mart and then
select Reference the selected dimension.
3. Select which dimensions you want to reference, then click OK.
The dimensions are added to the data mart.
4. To add the newly added dimension to the star schema, right-click the dimension and then
select Add to Star Schema.
The Add Dimension <name> to Star Schema window opens.
5. Select which star schema(s) you want to add the dimension to and then click OK.
After adding the referencing dimension to the star schema, you might see a
validation icon next to the star schema name. This means that you need to validate
and adjust the data mart containing the referenced dimension.
Best practices
l To prevent data inconsistencies, make sure that the source data marts (i.e. the data marts
containing the original dimensions) are processed before any data marts referencing those
dimensions.
In some cases, it is okay to use circular references. If, for example, both Data Mart
A and Data Mart B are incrementally updated, then any updates to Data Mart A
will use the current version of Data Mart B, and vice versa.
l Conformed referenced dimensions that are used by one or more data marts should be
grouped into a single data mart, without fact tables.
l Transactional fact tables should be grouped into data marts, based on processing
requirements.
l Aggregate and State-oriented star schemas (fact tables) are typically processed during
batch windows as they require complete rebuilds. It is therefore recommended practice to
separate Aggregate and State-oriented fact tables from Transactional fact tables. Doing so
allows Transactional fact tables to be processed incrementally throughout the day as
required, while allowing Aggregate and State-oriented fact tables to be processed during
batch windows.
Editing star schemas
To edit a star schema:
1. Click the Manage button in the bottom left of the Data Mart panel. The Manage Data Marts
window opens.
2. In the left pane, select the data mart containing the star schema you want to edit.
3. Expand the list of star schemas and select the star schema you want to edit. Then either
click the Edit button in the lower toolbar or right-click the star schema and select Edit.
The Edit Star Schema - Name window opens. The following tabs are displayed:
l General tab: In the General tab, you can edit the star schema name, the fact table
name, the fact view name and the description.
The following option is also available for transactional and aggregated facts:
l Update fact with changes to Type 2 data warehouse entities - Select this
option (the default) if you want the fact table to always be updated with the last
record version of any Type 2 data warehouse entities the star schema contains.
Example:
The last record version of Orders and Order Details will always be used, and
Address will be updated according to the Order Date.
See also: Data mart views (page 255).
l Logical Attributes tab: In the Logical Attributes tab, you can add and delete columns,
edit a column’s properties, view a column’s lineage, change the column order, and
define filters.
Edit the Logical Attributes tab according to the table below.
l Physical Table tab: The Physical Table tab provides a preview of the actual
"physical" columns that will be created in the database. All editing tasks are performed
in the Logical Attributes tab, except for defining table creation modifiers which is
performed in the Physical Table tab.
For an explanation of how to define table creation modifiers, see Example of a Valid
Table Creation Modifier (page 247).
l Transaction Date tab: The Transaction Date tab enables you to change the
transaction date that you selected when you created the star schema.
For more information on transaction dates, see the Transaction Date screen.
This tab will not be displayed if your Star Schema Type is "State Oriented".
Delete a column: Select the column(s) you want to delete (multi-selection is supported) and click
the Delete toolbar button.
Create a filter: Click the Filter toolbar button. The Expression Builder opens with the heading:
Edit Filter - TableName.
The assumption is that columns that are used in the filters do not
change between different versions of the record. If this is not the
case, the Full rebuild option should be selected in the Data Mart
settings. This assumption is also true for relationships; for example, if
a Sales record relates to Product which relates to Country, and the
filter is applied to the product's country, then the assumption is that
the sale cannot change its product so that it is filtered in or out based
on a new country.
Create or edit an expression: Hover the mouse cursor over the desired table column and then
click the fx button that appears to the right of the Expression column. The Expression Builder
opens with the heading: Edit Expression - Column Name.
Change the column order: Select the column(s) you want to move and then click the Move
Down/Move to Bottom or Move Up/Move to Top buttons as desired.
The available options are located below the Columns list in the Physical Table tab, and are as
follows:
l Project settings default - When this option is selected (the default), the settings from the
project settings' Table creation modifiers tab (page 43) will be used.
l Custom - This option is useful if you need to append table creation modifiers to the
default CREATE TABLE statement Compose uses for fact tables. Leveraging this option
requires SQL coding knowledge.
l Custom distribution and sort keys - This option is useful if you only need to define custom
distribution keys or sort keys for the fact table. Although this can also be done using the
Custom option (see below), the Custom distribution and sort keys option is more
convenient as it does not require any prior SQL coding knowledge.
1. Open the star schema and select the Physical Table tab.
2. Select the Custom option.
3. Click the Edit button to open the Table Creation Modifier editor.
4. Enter the SQL parts you wish to append to the CREATE TABLE statement.
5. Optionally, but strongly recommended, validate the SQL in an external validation tool that
supports your specific database and version.
Compose does not provide any way of validating your SQL. Therefore, make sure
to validate the SQL before deploying in a production environment.
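For illustration only, the following is a minimal sketch of the kind of SQL that might be appended
as a table creation modifier for a fact table on Microsoft Azure Synapse Analytics. The distribution
column name is hypothetical, and the exact clause depends on your database platform and design;
Compose appends this text to its default CREATE TABLE statement, so it must be valid in that
position for your target database.
WITH
(
DISTRIBUTION = HASH(OrderID),
CLUSTERED COLUMNSTORE INDEX
)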
Setting and managing custom distribution keys for Amazon Redshift tables
Set and manage distribution keys for Amazon Redshift Data Warehouse according to the table
below.
Set a distribution style: From the Distribution Style drop-down, select Even, Key or All.
Delete a distribution key: Select the distribution key and then click the Delete button. The key is
deleted.
Change the position of a distribution key: Select the distribution key and then click the "Up" or
"Down" buttons to move the key to the desired position.
Setting and managing custom sort keys for Amazon Redshift tables
You can define one or more of the physical table columns as sort keys. Amazon Redshift stores your
data on disk in sorted order according to the sort key. The Amazon Redshift query optimizer uses
sort order when it determines optimal query plans. For guidelines on choosing sort keys, visit
Choose the best sort key - Amazon Redshift.
Set and manage sort keys for Amazon Redshift Data Warehouse according to the table below.
Add a sort key:
1. Select the Sort Keys tab below the Columns list.
2. From the Sort key style drop-down list, choose one of the following styles:
l None to disable the sort keys
l Compound to use all of the columns listed in the sort key definition, in the order they
are listed
l Interleaved to give equal weight to each column in the sort key
3. Click the Add Sort Key button.
A new row is added to the Sort Keys list. The Position column indicates the order of the
column.
4. From the drop-down list in the Column column, select the desired column.
The column is added to the list.
5. Click OK to save your settings and close the Edit Dimension/Edit Star Schema window.
Change the position of a sort key: Select the sort key you want to move and then click the up or
down arrows to promote or demote the key.
Delete a sort key: Select the sort key you want to delete and then click the Delete button.
For more information about sort keys, visit: Choosing sort keys - Amazon Redshift.
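To make the effect of these settings concrete, the following is a rough sketch of the kind of Amazon
Redshift DDL that results from choosing the Key distribution style with a compound sort key. The
table and column names are hypothetical, and this is not the exact statement Compose generates.
CREATE TABLE dim_customers (
customer_id INTEGER,
region VARCHAR(50),
valid_from DATE
)
DISTSTYLE KEY
DISTKEY (customer_id)
COMPOUND SORTKEY (region, valid_from);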
Editing dimensions
You can edit a dimension according to your needs. Editing options include adding columns, adding
attributes and defining filters.
To edit a dimension:
1. Click the Manage button in the bottom left of the Data Mart panel. The Manage Data Marts
window opens.
2. In the left pane, select the data mart containing the star schema you want to edit.
3. Expand the list of dimensions and select the dimension you want to edit. Then either click the
Edit button in the lower toolbar or right-click the dimension and select Edit.
The Edit Conformed Dimension - Name (or Edit Dimension - Name if the dimension has not
yet been added to the data mart) window opens. The following tabs are displayed:
l General tab: In the General tab, you can edit the dimension name, the dimension table
name, the dimension view name and the description. You can also change the
dimension’s history type by selecting Type 1 or Type 2 from the History Type drop-
down list. For more information on changing the history type, see Understanding
dimension history types (page 254).
See also Data mart views (page 255).
l Logical Attributes tab: In the Logical Attributes tab, you can add and delete columns,
edit a column’s properties, view a column’s lineage, change the column order, and
define filters.
Edit the Logical Attributes tab according to the table below.
l Physical Table tab: The Physical Table tab provides a preview of the actual
"physical" columns that will be created in the database. All editing tasks are performed
in the Logical Attributes tab, except for defining table creation modifiers which is
performed in the Physical Table tab.
For an explanation of how to define table creation modifiers, see Example of a Valid
Table Creation Modifier (page 247).
4. Edit the Logical Attributes tab according to Editing star schemas (page 243).
You can apply or revert your changes at any time, simply by clicking the Apply or
Cancel buttons respectively.
5. Click OK to close the window and save your settings or Cancel to close the window without
saving your settings.
Delete a column: Select the column(s) you want to delete (multi-selection is supported) and click
the Delete toolbar button.
Create a filter: Click the Filter toolbar button. The Expression Builder opens with the heading:
Edit Filter - TableName.
The assumption is that columns that are used in the filters do not
change between different versions of the record. If this is not the
case, the Full rebuild option should be selected in the Data Mart
settings. This assumption is also true for relationships; for example, if
a Sales record relates to Product which relates to Country, and the
filter is applied to the product's country, then the assumption is that
the sale cannot change its product so that it is filtered in or out based
on a new country.
Create or edit an expression: Hover the mouse cursor over the desired table column and then
click the fx button that appears to the right of the Expression column. The Expression Builder
opens with the heading: Edit Expression - Column Name.
Change the column order: Select the column(s) you want to move and then click the Move
Down/Move to Bottom or Move Up/Move to Top buttons as desired.
The available options are located below the Columns list in the Physical Table tab, and are as
follows:
l Project settings default - When this option is selected (the default), the settings from the
project settings' Table creation modifiers tab (page 43) will be used.
l Custom - This option is useful if you need to append table creation modifiers to the
default CREATE TABLE statement Compose uses for dimension tables. Leveraging this
option requires SQL coding knowledge.
l Custom distribution and sort keys - This option is useful if you only need to define custom
distribution keys or sort keys for the dimension table. Although this can also be done using
the Custom option (see below), the Custom distribution and sort keys option is more
convenient as it does not require any prior SQL coding knowledge.
Compose does not provide any way of validating your SQL. Therefore, make sure
to validate the SQL before deploying in a production environment.
For example, appending a WITH (HEAP) table creation modifier to the default CREATE TABLE
statement would produce a statement similar to the following (the table and column names are
illustrative):
CREATE TABLE MyDimension (
column1 int,
column2 varchar(50)
)
WITH (HEAP)
Setting and managing custom distribution keys for Amazon Redshift tables
Set and manage distribution keys for Amazon Redshift Data Warehouse according to the table
below.
Set a distribution style: From the Distribution Style drop-down, select Even, Key or All.
Delete a distribution key: Select the distribution key and then click the Delete button. The key is
deleted.
Change the position of a distribution key: Select the distribution key and then click the "Up" or
"Down" buttons to move the key to the desired position.
Setting and managing custom sort keys for Amazon Redshift tables
You can define one or more of the physical table columns as sort keys. Amazon Redshift stores your
data on disk in sorted order according to the sort key. The Amazon Redshift query optimizer uses
sort order when it determines optimal query plans. For guidelines on choosing sort keys, visit
Choose the best sort key - Amazon Redshift.
Set and manage sort keys for Amazon Redshift Data Warehouse according to the table below.
Add a sort key:
1. Select the Sort Keys tab below the Columns list.
2. From the Sort key style drop-down list, choose one of the following styles:
l None to disable the sort keys
l Compound to use all of the columns listed in the sort key definition, in the order they
are listed
l Interleaved to give equal weight to each column in the sort key
3. Click the Add Sort Key button.
A new row is added to the Sort Keys list. The Position column indicates the order of the
column.
4. From the drop-down list in the Column column, select the desired column.
The column is added to the list.
5. Click OK to save your settings and close the Edit Dimension/Edit Star Schema window.
Change the position of a sort key: Select the sort key you want to move and then click the up or
down arrows to promote or demote the key.
Delete a sort key: Select the sort key you want to delete and then click the Delete button.
For more information about sort keys, visit: Choosing sort keys - Amazon Redshift.
The following figure shows the Customer_VID and Customer_OID (object identifier) columns in the
Customers dimension table:
If a dimension table is defined as history type 1 and one or more of the columns in the
corresponding data warehouse table are defined as type 2, records in the dimension table will
simply be replaced with the most up-to-date record in the corresponding data
warehouse table.
Bulk operations
You can perform the following bulk operations on one or more data marts:
1. Click the Bulk Operations toolbar button in the Manage Data Marts window. The Bulk
Operations window opens.
2. Select which operations to perform and on which data marts to perform them.
3. Click OK. The Preparing All Data Marts window opens, displaying the progress of the
selected operations.
4. When the "<n> data marts were prepared successfully" message is displayed, click Close.
l Allow authorized employees to utilize the data mart for analytics while preventing changes to
the actual data.
When you create a view for a fact or dimension table, you also need to set the schema in which the
view will be created.
Additionally, the schema permissions should only allow authorized employees to access the
view(s).
For instructions on setting the view schema, see Modifying data mart settings (page 262).
See also Editing star schemas (page 243) and Editing dimensions (page 249).
Deleting a dimension
1. Select the dimension and then click the lower Delete toolbar button.
OR
Right-click the dimension and select Delete.
2. Click OK when prompted to confirm the deletion.
Common Table Expressions (CTEs) and certain special clauses are not supported.
ETL actions
Define a Pre Loading or Post Loading ETL:
1. Select either the Pre Loading or Post Loading tab as appropriate.
2. Click the New button above the User Defined ETL column.
The Add New Pre/Post Loading window opens.
3. Specify a name for the ETL and then click OK.
The name is added as a link to the User-Defined ETL column.
4. Click the link.
The Edit ETL Instruction: Name window opens.
5. To define an ETL:
a. Select a table and/or column and click the arrow to the right of the selected
table/column to add it to the ETL.
b. Use the Select, Delete, Insert and Update quick access buttons at the top of the
window to add SQL statements to your ETL.
c. To save your ETL, click OK.
d. To enable/disable the ETL, select or clear the check box to the left of it as required.
Rename a Pre Loading or Post Loading ETL:
1. Select either the Pre Loading or Post Loading tab as appropriate.
2. At the end of the row containing the ETL name, click the Rename button.
3. Rename the ETL and then click OK to save the new name.
Edit a Pre Loading or Post Loading ETL:
1. Select either the Pre Loading or Post Loading tab as appropriate.
2. At the end of the row containing the ETL name, click the Edit button.
3. Edit the ETL as described in Step 5 of Define a Pre Loading or Post Loading ETL.
You can update custom ETLs using the Compose CLI. This functionality can be incorporated into a
script to easily update Custom ETLs.
Syntax:
Where:
l project is the name of the project with the custom ETLs you want to update
l infolder is the full path to the folder containing the custom ETL files
Example:
The file names in the input folder must be identical to the custom ETL names in the
specified project. Otherwise, an error will occur. The file extension (for example, .txt) is
not important, but the file must be in SQL format.
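For reference, an invocation of this command would take roughly the following shape. The command
name shown (update_custom_etls) and the folder path are assumptions; the --project and --infolder
parameters are those described above.
ComposeCli.exe update_custom_etls --project MyProject --infolder "C:\Compose\CustomETLs"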
l Click the Task statements toolbar button. The ETL - Name window opens in List View.
Navigate through the statements using the scroll bar or find specific statements using the
Search box.
OR
l Click the Item View button and navigate through the statements using the navigation
buttons at the bottom of the ETL - Name window
To jump to a specific statement, type the statement number in the Go To field at the
bottom of the window and then press [Enter].
1. In List View, click the Export to CSV File button located to the left of the search field.
2. A file named "<name>_ETL_Instructions.csv" will be saved to your default Downloads
location or you will be prompted to save it (according to your browser settings).
Additionally, the generated task statements must reflect the current state of the data mart. So, for
example, if a filter or expression was added/edited, you will need to regenerate the task statements
before running the data mart task.
If the data mart is not valid, any tasks that you attempt to run will fail.
Situations in which you need to validate the data mart and/or regenerate the task statements
include:
Note that clicking the Validate button only verifies that the table metadata is valid. In certain cases,
even if the metadata is valid, Compose will prompt you to regenerate the task statements (by
clicking the Generate button).
When you validate a data mart, Compose presents you with a list of operations that it needs to
perform for the data mart to be valid. Examples of such operations include adding dimension and
fact tables, deleting the fact table when the transaction date column has been deleted from the
model, and so on. You can either click Adjust Automatically or Drop and Recreate Tables to
approve the operations or click Cancel to continue working with the data mart in its present state.
1. Click the Validate toolbar button in the Manage Data Marts window. The Validating the
Data Mart progress window opens.
If any differences are detected, the following message will be displayed:
Data mart validation failed. The data mart is different from the model.
2. Click Close. The Model and Data Mart Comparison Report window opens.
3. Review the report and then click Adjust Automatically or Drop and Recreate Tables to
resolve the differences.
Either the Adjust Data Mart progress window opens or, if you clicked Drop and Recreate
Tables, you are prompted to confirm the drop and recreate operation. When you confirm the
drop and recreate operation, the Creating Data Mart: Name window is displayed.
4. When the "The data mart was adjusted successfully." or "The data mart has been created
successfully." (in the case of drop and recreate) message is displayed, close the window.
5. Click the Generate toolbar button to regenerate the task statements.
When a dimension’s history type has been changed directly in the data mart, the data
mart validation will be successful, but you also need to drop and recreate the tables by
clicking the Create Tables toolbar button. For information on changing history types,
see the General tab in the Edit Dimension window.
You can also adjust the data mart automatically using the generate_project CLI. For more
information, see Generating projects using the CLI (page 98).
l If a new data warehouse attribute was added to a dimension or to a fact by directly editing
them in Compose:
l All columns are supported except Transaction Date columns, which cannot be added
automatically.
l For existing records, the newly added column will be set to the database default value,
usually NULL. If you want to load historical data for this column, you need to drop and
recreate the data mart and then reload it. For information on reloading the data mart, see
Reloading the data mart (page 260).
l If a logical attribute was dropped from a dimension or from a fact in the data mart, the data
mart adjust will:
l Drop it in the relevant tables, except Transaction Date columns which cannot be
dropped automatically.
l When there is an external dependent object that prevents deletion of the column (for
example, a View is defined on top of the data mart table), Compose will report the
error in the adjust execution messages. You then need to drop that object and run the
adjust again.
l For referenced dimensions:
l Adjusting a data mart does not adjust any dimensions that are referencing that data
mart. The data mart containing the referencing dimensions needs to be adjusted
separately.
l Adjusting a dimension might also affect the referencing data mart facts.
The limitations and considerations are also applicable when the data mart is
automatically adjusted using the generate_project CLI.
You can mark the data mart for reload on the next run using the
markReloadDatamartOnNextRun CLI. You can either mark the entire data mart to be reloaded,
which might be useful if many columns have been added, or mark only specific facts or
dimensions to be reloaded on the next run.
You can also mark dimensions and facts from several data marts to be reloaded on the next run or
mark multiple data marts to be reloaded in their entirety (on the next run) using a CSV file.
Command syntax
ComposeCli.exe mark_reload_datamart_on_next_run --project project_name --datamart datamart_name
[--fact fact_name|--dimension dimension_name|--csv csvfile_name] [--obsoleteallrecords]
Parameters
Parameter Description
--datamart The name of the data mart containing the fact(s) or dimension(s) to
be reloaded.
--obsoleteallrecords Use this parameter to mark all of the existing dimension records as
obsolete. You might need to do this if, for example, a new column
was added to the dimension and you want to reload existing
records with the new column. In such a case, you might want to
preserve the old records as they were before the new column was
added and populated.
--csv A CSV file containing a list of facts or dimensions from one or more
data marts, or a list of data marts. Each row in the CSV file should
also contain "yes" or "no", indicating whether or not to mark the
records as obsolete.
data mart,fact|dimension,yes|no
MyDataMart1,orders,no
MyDataMart2,customers,yes
MyDataMart1,,no
MyDataMart2,,no
Example
ComposeCli.exe mark_reload_datamart_on_next_run --project inventory --datamart MyDataMart --fact orders
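Based on the same syntax, the following example marks a single dimension for reload while also
marking its existing records as obsolete. The project, data mart, and dimension names are illustrative.
ComposeCli.exe mark_reload_datamart_on_next_run --project inventory --datamart MyDataMart --dimension customers --obsoleteallrecords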
1. In the Manage Data Marts window, select a data mart and click Settings.
The Setting - Data Mart Name window opens. In the General tab, the following settings are
available:
l Log level: Select the log level granularity, which can be any of the following:
l INFO (default) - Logs informational messages that highlight the progress of the
ETL process at a coarse-grained level.
l VERBOSE - Logs fine-grained informational events that are most useful to
debug the ETL process.
l TRACE - Logs finer-grained informational events than the VERBOSE level.
The log levels VERBOSE and TRACE impact performance. Therefore, you should only
select them for troubleshooting if advised by Qlik Support.
l Load Type: Select Full rebuild to build the data mart from scratch each time or
Incremental loading (default) to only load changes.
l Create tables in database - By default, data mart tables are created in the database
specified in the data warehouse connection settings. Optionally, click the browse
button and select a different database.
l Create tables in schema - By default, data mart tables are created in the schema
specified in the data warehouse connection settings. Optionally, specify a different
schema, either by typing the schema name or by clicking the browse button and
selecting one of the existing schemas. If you specify the name of a non-existing
schema, Compose will create the schema automatically.
This option is only available for Microsoft SQL Server, Amazon Redshift,
Snowflake, and Microsoft Azure Synapse Analytics.
l Create views in schema - By default, data mart views are created in the schema
specified in the data warehouse connection settings. Optionally, specify a different
schema, either by typing the schema name or by clicking the browse button and
selecting one of the existing schemas. If you specify the name of a non-existing
schema, Compose will create the schema automatically, unless the database is
Oracle.
If the view schema is different from the data mart schema, you need to
grant the following permission (see the example after this list):
Grant SELECT on DM_SCHEMA to DM_VIEW_SCHEMA WITH GRANT OPTION
l Position in Default Workflow: Set the position you want the data mart to appear in
the default workflow. For more information on workflows, see Workflows (page 276).
l Optimize for initial load: This option is not applicable to the State Oriented and
Aggregated fact types. If the "Incremental Loading" option is enabled (the default),
clear the "Optimize for initial load" option after the initial load task completes and
regenerate the Data Mart task. If the "Full Rebuild" option is enabled, selecting
"Optimize for initial load" may accelerate the loading process.
l Write task statement duration to the TLOG_PROCLOG table in the data
warehouse: This option is useful for troubleshooting performance issues with task
statements as it records the duration of each task statement in a special table (named
TLOG_PROCLOG) in the data warehouse. You can then use this information to locate
task statements with abnormal duration times and modify them accordingly.
l Do not drop temporary tables: Select this option if you want to keep the temporary
tables created during the task. Only use for debugging.
l Enable table logging: This option is available for Oracle only. When enabled, the data
mart tables will be created with the Oracle LOGGING option enabled. Leaving this
option unselected (the default) should improve performance, but in some cases DML
operations will not be recorded in the redo log file. For more information on this option,
refer to the Oracle online documentation.
3. Click OK.
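As an example of the permission mentioned in the Create views in schema setting above, the
following is a minimal sketch for Microsoft SQL Server. The schema name dm and the principal
dm_views_owner are assumptions; use the schema that holds the data mart tables and the principal
under which the views are accessed, and adapt the syntax to your database platform.
GRANT SELECT ON SCHEMA::dm TO dm_views_owner WITH GRANT OPTION;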
The Obsolete indicator is used in data marts with Type 2 dimensions and State Oriented Fact tables
only. These types of tables store history, so the attribute OBSOLETE__INDICATION will always be
present in data mart tables that contain the From Date and To Date attributes.
The Obsolete indicator is used in data marts when retroactive changes are applied. When a row is
added to a table for an object in which the From Date is earlier than the date of the existing row, the
existing row will be earmarked as obsolete by setting the value for OBSOLETE__INDICATION to the
current run number of the data mart task.
If no retroactive changes are used in the data marts, the value for the OBSOLETE__INDICATION will
be 0.
Rows in a dimension that become obsolete will also enforce changes to the Fact table. The
references of the Fact table to the obsolete rows are updated automatically so that the current,
correct rows are referenced. This means that obsolete rows are isolated, in the sense that they can
be deleted without subverting the referential integrity of the data mart.
The reason Compose does not delete these rows is for auditing purposes. For instance, consider a
report that was generated in the past (using the data mart) and now contains incorrect information.
Inspecting the obsolete rows may account for the discrepancy (i.e. incorrect data in the database
as opposed to a deliberate attempt to manipulate the data).
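For example, to inspect the obsolete rows in a dimension table, a query along the following lines
can be used. The table name dim_customers is hypothetical; the OBSOLETE__INDICATION column is
the attribute described above.
SELECT *
FROM dim_customers
WHERE OBSOLETE__INDICATION <> 0;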
For security reasons, command tasks are blocked by default. To enable command tasks,
a Compose administrator needs to run the following commands in the Compose CLI:
In this section:
Before you define a command task, make sure that the executable or script file that you
want to run resides in the following directory on the Compose server machine:
PRODUCT_DIR\data\projects\YOUR_PROJECT\scripts
Editing a task
Select the task in the tasks list in the left of the Manage Command Tasks window and edit it as
described in Defining command tasks (page 265).
Deleting a task
Select the task in the tasks list in the left of the Manage Command Tasks window and then click
the Delete toolbar button. When prompted to confirm the deletion, click OK.
For information on defining workflows, controlling and monitoring tasks, and controlling and
monitoring workflows, see Controlling and monitoring tasks and workflows (page 267).
1. Open the Manage Command Tasks window and select the task you want to run.
2. Click the Run toolbar button.
3. The Manage Command Tasks window switches to Monitor view.
In Monitor view the following information is available:
l The task ID
l The current status
l When the task started and ended
l The overall task progress
In this section:
1. Open a data warehouse project and click the Monitor icon in the top right of the console.
A list of tasks is displayed for the current project. The left pane of the monitor allows you to
filter the task list by status as well as indicating the current number of running, failed and
completed tasks.
For fact tables, it is possible to view details about inserted rows, but not about
updated rows.
There are two ways you can view missing references in Compose. Either via the Monitor tab in the
Manage Data Warehouse Tasks window or by switching the console to Monitor view and selecting
the Missing References tab. The instructions below cover both of these methods.
To check for missing references in the Manage Data Warehouse Tasks window:
1. Click the Manage button in the lower left corner of the Data Warehouse panel.
2. Select the desired task in the left side of the Manage Data Warehouse Tasks window.
3. Switch to Monitor view by clicking the Monitor tab in the top right of the Manage Data
Warehouse Tasks window.
4. Click the View Missing References toolbar button. The Missing References - <task
Name> window opens.
The following information is displayed:
l General information: The run number of the task, when it started and ended, the total
number of inserts and updates, and the number of reported rows (if any).
l Missing references information:
l Missing Records from Entity - The name of the entity with missing reference
and the number of missing references.
To see the missing record keys for the entity, click the number in parentheses
to the right of the entity name.
The Missing Record Keys for Entity - <Entity Name> window opens showing
the list of missing keys and the number of times each key is referenced per
entity.
l Referenced from Entity - The entities that are referencing the entity with
missing references.
l Via Relationship - The name of the relationship in the Model.
5. To close the window, click Close.
Orders contains seven records pointing to Mr. Brown and one record pointing to Mr. Smith.
Disputes contains four records referencing Mr. Brown. Mr. Brown and Mr. Smith are "missing" from
Customers.
Clicking the number to the right of Customers (in the Missing Records from Entity column) would
open the following window:
See also: How Compose handles missing references in the data warehouse (page 195).
Controlling tasks
You can run and stop tasks/workflow manually or you can schedule them to run periodically.
In this section:
To abort a task:
Scheduling tasks
Scheduling tasks is a convenient way of continually updating the data warehouse and associated
data mart(s). For instance, you could schedule a data warehouse task to run at 4:00 pm and then
schedule a data mart task to run at 5:00 pm.
Note that as Compose does not provide a task-chaining option (i.e. run another task as soon as the
current task completes), it may be better to schedule tasks using an external tool that supports this
capability.
You can also use the command line interface (CLI) to run a task. For details, see Running tasks
using the CLI (page 273).
To schedule a task:
Use the generate_tasks command to generate tasks at project, task, data warehouse, and data mart
level.
Command syntax
ComposeCli.exe generate_tasks --project [project-name] --type [DW|DM] --task [task-name]
--timeout --skip_external_checks
Parameters
Parameter Description
--task The name of the task to generate. If omitted, all tasks will be
generated, unless the "project" and/or "type" parameters are
included in the command.
Example
ComposeCli.exe generate_tasks --project myproject --type DW --task mytask --timeout --skip_external_checks
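Per the parameter description above, omitting the --task parameter generates all tasks of the
specified type. For example, to generate all data mart tasks in a project (the project name is
illustrative):
ComposeCli.exe generate_tasks --project myproject --type DM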
The run_task command populates the data warehouse or data mart with data. The "ETL" operation
can also be performed using the Run toolbar button located in Monitor view as well as in the
Manage Data Warehouse Tasks and Manage Data Marts windows.
Command syntax
ComposeCli.exe run_task --project project_name --type DW|DM|WF --task task_name --wait timeout_in_sec
Parameters
Parameter Description
--task When:
Note that if wait is excluded from the command, the task may
appear to complete successfully even if it encountered an error.
Example
ComposeCli.exe run_task --project MyProject --type DW --task DWH1 --wait 1
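Similarly, based on the syntax above, a data mart task can be run with a longer wait timeout. The
project and task names below are illustrative.
ComposeCli.exe run_task --project MyProject --type DM --task MyDataMart --wait 3600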
Notifications
You can select events, on the occurrence of which, a notification will be sent to the specified
recipients.
${ERROR_CODE} The error code if an error was encountered during the task.
${EVENT_TYPE_DESCRIPTION} -
7. Click Next. In the Associate with screen, select whether to apply the rule to all tasks or to
selected tasks. If you chose Selected Tasks, select which tasks to apply the rule to.
8. Click Next to see a summary of the notification settings or Finish to save your settings and
exit the wizard.
9. If you clicked Next, review your settings and then click Finish to save the notification rule
and exit the wizard or Prev to edit your settings. You can also click the headings on the right
of the wizard to go directly to a specific window.
The notification will be added to the list of notifications in the Notification Rules window.
Delete a Rule: Select the rule and then click the Delete toolbar button. When prompted to confirm
the deletion, click Yes.
Edit a Rule: Either double-click the rule you want to edit or select the rule and click the Edit
toolbar button. Continue from Notifications (page 274).
Disable a Rule: Select the rule you want to disable and then either click the Disable toolbar button
or clear the check box in the Enabled column.
Enable a Rule: Select the rule you want to enable and then click the Enable toolbar button or
select the check box in the Enabled column.
If a notification is set for several events, the event ID will be 0 for each of the events.
Event IDs for task states within Workflows are not supported.
Workflows
Workflows enable you to run tasks both sequentially and in parallel. You can either schedule
workflows as described in Scheduling tasks (page 272) or run them manually using the Run toolbar
button.
You can create your own workflow and/or use the built-in workflow. The built-in workflow enables
you to run all of your data warehouse and data mart tasks as a single, end-to-end process. In a
project with a single data mart, all tasks run sequentially in the default workflow, starting with the
data warehouse tasks and ending with the data mart task. However, in projects with multiple data
marts, the data mart tasks run in parallel, following the completion of the data warehouse tasks.
When you create your own workflow, you decide which tasks to include in the workflow and the
order in which they will be run.
In this topic:
Adding a workflow
To add a workflow:
1. Switch to Monitor view by clicking the Monitor button in the top right of the Compose
console.
2. Click the New Workflow toolbar button.
The New Workflow window opens.
3. To create a workflow with all current tasks, select Create default workflow and then click
OK. Otherwise, continue from Step 4 below. Separate workflows will be created for Full Load
and Change Processing tasks. The default workflow cannot be edited and will appear as
Default Workflow in the list of monitored tasks.
To update the default workflow with newly added tasks, repeat steps 2-3 above.
Designing a workflow
The workflow window is divided into two panes. The pane on the left (hereafter referred to as the
Elements pane) contains the gateways and tasks that you can use in your workflow. The pane on
the right is where you design your workflow and contains two default elements: Start and End.
The following elements are available:
l Tasks - All existing Data Warehouse tasks, Data Mart tasks, and Command tasks.
l Gateways - There are two types of gateway: Parallel Split and Synchronize. Use the
Parallel Split gateway to create parallel paths. This is useful, for example, if you want two or
more tasks to run in parallel.
Use the Synchronize gateway to merge parallel paths. The workflow waits for all the Tasks
that precede the gateway to complete before continuing the flow.
To design a workflow:
1. Drag the desired workflow elements from the Elements pane to the pane on the right.
2. Arrange the elements in the order that you want them to run.
3. Connect the elements to each other by dragging the connector from the gray dot (that
appears on the right of an element when you hover the mouse cursor over it) to the target
element. When a blue outline appears around the target element, release the mouse button.
4. Optionally add error paths to the workflow. The workflow will follow the error path if a task
encounters an error. For example, if an error occurs with a Data Mart task, you may want to
run another Data Mart task in its place.
To add an error path, hover your mouse cursor over the task element. A red dot will appear
below the element. Drag the connector from the red dot to the target element, as shown
below.
Connecting two error paths to the same task should be avoided as the workflow will fail
if the task tries to run twice.
By default, a workflow will end with an error if one or more parallel tasks do not complete
successfully. However, in certain cases you may want the workflow to continue, even if one or more
of the parallel tasks failed.
To do this, you need to connect the error port of the relevant task(s) directly to the Synchronize
gateway. You can also design the workflow so that it follows the path leading from the Synchronize
error port, instead of continuing its normal flow.
In the example below, the error port of the MyCommandTask is connected to the Synchronize
gateway, meaning that even if the MyCommandTask task fails, the workflow will continue. However, if
the MyCommandTask task fails, the workflow will not proceed directly to the End element. Instead,
it will follow the Synchronize gateway’s error path to the Source task.
Validating workflows
It is strongly recommended to validate your workflow before running it. This will prevent errors from
occurring during runtime due to an invalid workflow.
l The execution order of elements must be sequential and not cyclic. For example, an element
cannot loop back to an element that precedes it in the execution order.
If the workflow is valid, a "<workflow_name> is valid." message will appear at the top of the
window. If the workflow is not valid, a message describing the problems will appear instead.
You can monitor the workflow either in the Monitor tab or in the Progress Status tab. During
runtime, the workflow elements fill with blue providing a graphic indication of progress. If a task
encounters an error, the task element will appear with red fill instead of blue.
The example below shows a workflow containing three data warehouse tasks and one data mart
task. In the workflow, the data mart task will be run only after the completion of the parallel data
warehouse tasks.
Managing workflows
The table below describes the options available for managing workflows.
Delete a Workflow: In Monitor view, select the workflow in the Task column and then click the
Delete Workflow toolbar button.
Edit a Workflow: In Monitor view, either double-click the workflow you want to edit or select the
workflow and click the Open toolbar button. Continue from Adding and designing workflows
(page 277).
Delete an element in a workflow: Either right-click the element and select Delete or select the
element and then click the Delete toolbar button.
Reset the workflow view: Click the reset button to the right of the slider at the top of the window.
Zoom in to/zoom out of the workflow: Move the slider at the top of the window to the left or right
as required.
Monitoring and controlling Replicate tasks from within Compose involves the following steps:
l Port: Optionally, change the default port (443). You should only change the default
port if you are certain that a different SSL port is being used.
l User Name and Password: Your credentials for logging in to the Qlik Replicate
machine.
When Replicate Server is installed on Linux, enter the user name and
password for the Windows machine on which the Replicate UI Server is
running.
l Get metadata timeout - The time to wait when discovering a task’s source database
or refreshing the metadata cache before returning a timeout error.
l Get task timeout - The time to wait when starting a Replicate task before returning a
timeout error.
4. Click Test Connection and then click OK if the connection is successfully verified.
The server is added to the Manage Replicate Servers window. Click Close to close the
window.
2. Click the browse button to the right of the Associate with Replicate task field.
The Select Replicate Task window opens.
3. Select a Replicate Server from the Server drop-down list.
The Replicate Tasks list is populated with all tasks defined on the selected server.
4. Select the task that is replicating the source tables to the landing zone and then click OK.
5. If you want to discover source database for model generation, select Source database
connection and then configure the settings as described in Defining Replicate data source
connections (page 149).
6. When you're done, click OK again to save the data source settings.
The Replicate task is immediately added to the Compose monitor.
If a task is stopped from within Replicate, the task status in Compose will be
"Completed" instead of "Aborted".
You can also define notifications for the task and add the task to a workflow. For more information,
see Notifications (page 274) and Workflows (page 276) respectively.
The monitor provides various information about the task. For details, see Viewing information in the
monitor (page 267).
In this section:
For information on which endpoints can be used in a Replicate task that lands data for Compose,
see Supported hive distributions for Data Lake projects (page 407).
Configuring multiple Replicate tasks with the same landing zone is not supported.
The steps below highlight the settings that are required when using Qlik Replicate with Compose.
For a full description of setting up tasks in Qlik Replicate, please refer to the Qlik Replicate Help.
Prerequisites
When defining the Replicate task, make sure the following prerequisites have been met.
l When Microsoft Azure HDInsight is defined as the Replicate target endpoint, you must set
the endpoint's Target storage format to Sequence.
l When Oracle is defined as the source endpoint in the Replicate task, full supplemental
logging should be defined for all source table columns that exist on the target and any source
columns referenced in expressions.
For information about turning off Speed partition mode, setting partitioning intervals,
and partition cleanup, see the Replicate Help.
1. Open Qlik Replicate and in the New Task dialog, do one of the following:
l To enable Full Load and CDC replication, enable the Full Load and Store Changes
options only (the Apply Changes option should not be enabled).
l To enable Full Load only replication (without CDC), enable the Full Load replication
option only.
2. Open the Manage Endpoint Connections window and define a source and target endpoint.
The target endpoint must be the Hive database where you want Compose to create the
Storage Zone tables. For more information on supported endpoints, see Supported hive
distributions for Data Lake projects (page 407).
3. Add the endpoints to the Replicate task and then select which source tables to replicate.
4. This step is not relevant for Full Load only tasks. To facilitate Schema evolution (page 339) in
Compose, select the DDL History Control Table in the Task Settings’ Metadata|Control
Tables tab. If you intend to scan all data sources (when performing schema evolution), then
you must do this for ALL Replicate tasks that move data to the Landing Zone, even those with
source endpoints that do not support schema evolution (e.g. Salesforce).
If you want the DDL History Control Table to be updated with any new source
tables that are added during the Replicate task, you must define Table Selection
Patterns in Replicate's Select Tables window.
5. This step is not relevant for Full Load only tasks. In the Task Settings' Store Change Setting
tab, make sure that Store Changes in is set to Change tables.
6. This step is not relevant for Full Load only tasks. In the Task Settings’ Change
Processing|Store Changes Settings tab, enable Change Data Partitioning.
7. This step is not relevant for Full Load only tasks. In the Task Settings’ Metadata|Control
Tables tab, select the Change Data Partitioning Control Table.
8. This step is not relevant for Full Load only tasks. If a Primary Key in a source table can be
updated, it is recommended to turn on the DELETE and INSERT when updating a primary
key column option in Replicate's task settings' Change Processing Tuning tab. When this
option is turned on, history of the old record will not be preserved in the new record. Note
that this option is supported from Replicate November 2022 only.
9. Run the task.
Wait for the Full Load replication to complete and then continue the workflow in Compose as
described in Adding and managing data warehouse projects (page 36).
Prerequisites
Before defining your Data Lake project, make sure the following prerequisites have been met.
Required clients
Depending on the Compute platform you select when you set up your project, you will need to
install one of the following drivers.
l As the driver name is the same for Cloudera Data Platform, Google Dataproc, and
Azure HDInsight, in order to prevent driver conflicts, only one project with any of
the aforementioned compute platforms can be created per Compose installation.
l As the driver name is the same for Cloudera Data Platform and Amazon EMR, in
order to prevent driver conflicts, only one project with any of the aforementioned
compute platforms can be created per Compose installation.
You need to register on the Simba and Cloudera websites before you can
download the Hortonworks or Hive JDBC Driver.
Databricks
1. Download the SimbaSparkJDBC42-<version>.zip or DatabricksJDBC42-<version>.zip
file from the Databricks website.
2. Copy the SparkJDBC42.jar file or the DatabricksJDBC42.jar file (according to the file you
downloaded) to <compose_product_dir>\java\jdbc.
3. Restart the Qlik Compose service.
Amazon EMR
1. Download the Amazon Hive JDBC Driver (HiveJDBC41.jar) from the Amazon website.
2. Copy the HiveJDBC41.jar file to <compose_product_dir>\java\jdbc.
3. Restart the Qlik Compose service.
You can create the required databases manually, or let Compose create them for you. If you want
Compose to create the databases, you need to grant the user defined in the Storage Zone settings
the CREATE DATABASE privilege for the following databases:
l Storage Zone database - The database specified in the Storage Zone settings. This
database can have any name.
l Landing Zone database - The database specified in the Landing Zone settings. This
database can have any name.
l Exposed views database - This database must have the same name as the Storage Zone
database and be appended with the suffix defined in the project settings’ Naming tab (_v by
default).
l Internal views database - Both ODS Live Views and the HDS Live Views reference this
database for updates. This database must have the same name as the Storage Zone
database and be appended with the suffix defined in the project settings’ Naming tab (_v_
internal by default).
For more information about Compose views, see Working with views (page 325).
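For example, if the Storage Zone database is named dl_storage and the default suffixes are kept (the database name here is illustrative), the exposed views database would be dl_storage_v and the internal views database would be dl_storage_v_internal.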
In addition, the user defined in the Storage Zone settings must be granted the following permissions:
l SELECT
l CREATE
l MODIFY
l READ_METADATA
Compose can work with either Option 1 or Option 2 without requiring any special configuration.
Simply specify the Hive server and database name in the storage connection settings.
Note that certain platforms such as Databricks automatically power the compute platform on and
off as needed. With these platforms, Option 2 may not offer any benefits over Option 1.
As the “Federated” architecture may be better suited to certain environments, it’s recommended to
compare the performance of both options in a test environment before deciding which one to go
with.
As a general rule, the shorter the Change Processing task interval, the greater the impact on
performance and the higher the computing costs. Therefore, it is recommended to limit the
frequency of Change Processing tasks only to what is absolutely necessary.
The scheduling frequency should be determined by the rate at which data updates are required by
downstream consuming applications.
l Data Lake - for ingesting data from multiple sources and moving it to a storage system for
analytics.
l Data Warehouse - for ingesting data from multiple sources and creating analytics-ready data
marts.
This topic guides you through the steps required to set up a Data Lake project. For instructions on
setting up a Data Warehouse project, see Adding data warehouse projects (page 36).
You can set up as many projects as you need, although the ability to actually run tasks is
determined by your Compose license.
The following names are reserved system names and cannot be used as project
names: CON, PRN, AUX, CLOCK$, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7,
COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9.
Your choice will determine which storage system options are available in the
Storage screen.
6. In the Storage tab, select a storage system. If you select File System, choose a file format.
Then click Next.
l Renaming a column in Parquet or Avro format will cause the loss of all data
in that column.
l Parquet and Avro formats do not allow spaces in Primary Key column
names. If your project is set up to ingest tables from Replicate, you can
define a global transformation in Replicate to remove spaces from Primary
Key column names.
7. In the Compute tab, select a compute platform and then click Finish to exit the wizard.
Before configuring connectivity, make sure to install the relevant driver for your
compute platform. See Prerequisites (page 286) for more information.
9. Add a Storage Zone (Data Lake) and at least one source database as described in Defining a
Storage Zone (page 316) and Defining Landing Zones (page 326) respectively.
10. Select the source tables as described in Selecting source tables and managing metadata
(page 328).
11. Create the storage tables as described in Creating and Managing Storage Zone Tasks (page
350).
Project management actions are performed in the main Compose window. To switch
from a specific project to the main window, click the downward arrow to the right of the
project name and then select All Projects from the drop-down menu.
See also: Project deployment (page 45) (Data Warehouse projects) and
Project deployment (page 299) (Data Lake projects).
Project settings
You can modify the project settings according to your needs.
1. Open your project as described in Managing and monitoring projects (page 292).
2. Click the downward arrow to the right of the project name and select Settings from the drop-
down menu.
The Settings window opens.
The project settings window is divided into the following tabs:
l General tab (page 293)
l Naming tab (page 295)
l Environment tab (page 296)
General tab
In this tab, the following settings are available:
Project details
Read-only information about the project deployment type, storage type, and compute type (which
were all set in the project setup wizard).
Miscellaneous
l Generate DDL scripts but do not run them: By default, Compose executes the CREATE,
ADJUST and DROP statements immediately upon user request. When you select this option,
Compose will only generate the scripts but not execute them. This allows you to review and
edit the scripts before they are executed.
For example, if you want to apply custom sorting or special formatting, you will need to edit
the CREATE statement accordingly.
Note that if you select this option, you will need to copy the scripts to your Storage Zone and
run them manually. You can view, copy and download the DDL scripts as described in
Viewing and downloading DDL scripts (page 309).
When this option is selected, you need to do the following to see the results:
l After running the scripts, clear the metadata cache as described in Clearing
the metadata cache (page 364).
l When this option is selected, you need to press [F5] (i.e. refresh the page)
in order for the web console to display the updated list of tables. This can
be done either before running the scripts (recommended) or after running
the scripts. Note that until you refresh the browser, the information in the
web console will only be partially updated.
l Ignore Mapping Data Type Validation: By default, Compose issues a validation error when a
landing table is mapped to a staging table with a different data type. You can select this
option to allow the mapping of different data types. Note that you should only select this
option if you need to map landing table data types to compatible (though not identical)
staging table data types.
l Do not display the default workflows in the monitor: Select this option if you want to
prevent the default workflows from being executed.
l Operational Data Store - This will create an ODS from the source data.
l Operational and Historical Data Store - This will create an ODS from the source data while
maintaining previous versions of updated records.
l Exclude the corresponding record from the ODS views - This is the default option as
records marked as deleted should not usually be included in ODS views.
l Include the corresponding record in the ODS views - Although not common, in some
cases, you might want to include records marked as deleted in the ODS views in order to
analyze the number of deleted records and investigate the reason for their deletion. Also,
regulatory compliance might require you to be able to retrieve the past record status (which
requires change history as well).
Dates
l Lowest Date - The value of the "From Date" column when no other value is available.
l Highest Date - The value of the "To Date" column when no other value is available.
For a description of the "From Date" and "To Date" columns, see Naming tab below.
Naming tab
In this tab, you can change the default column, prefix, and suffix names.
If you change the prefix or suffix of existing tables (e.g. data warehouse tables), you
need to drop and create the data warehouse and data mart tables.
Name Description
Suffix for exposed views database - The suffix used to identify the database used for exposed views.
Suffix for internal views database - The suffix used to identify the database used for internal views.
Suffix for Replicate Change Tables - The suffix used to identify Replicate Change Tables in the landing zone of the Storage Zone.
Prefix for storage tables - The prefix to add to table names in the Storage Zone. Changing this after the Storage Zone tables have already been created requires you to drop and recreate your Storage Zone tables.
Prefix for all storage view types - The prefix to add to view names in the Storage Zone. Changing this after the Storage Zone tables have already been created requires you to drop and recreate your Storage Zone tables.
For more information on storage view types, see Working with views (page 325).
"From Date" column name - The name of the "From Date" column. This column is added to tables that contain attributes (columns) with history. The column is used to delimit the range of dates for a given record version.
"To Date" column name - The name of the "To Date" column. This column is added to tables that contain attributes (columns) with history. The column is used to delimit the range of dates for a given record version.
Environment tab
In this tab, you can specify information about your environment that will be displayed as a banner at
the top of the window when you open the project.
Provide the following information and then click OK to save your settings.
l Environment type: Select one of the following types according to your environment type:
Development, Test, Acceptance, Production, Other. This information will not be displayed
in the banner.
l Environment title: Specify a title for your environment. The title will be displayed in the
banner at the top of the console.
l Project title: Specify a title for your project. The project title will be shown in the console
banner. If both an Environment Title and a Project Title are defined, the project title will be
displayed to the right of the environment title.
l The Project title option requires Compose August 2021 Patch Release 12 or
later.
l When a project is deployed to a new environment, the environment title and
environment type in the new environment will not be overridden.
The following image shows the banner with both an Environment title and a Project title:
The banner text is shown without the Environment title and Project title labels.
This provides greater flexibility as it allows you to add any banner text you like,
regardless of the actual label name. For example, specifying Project
owner: Mike Smith in the Project title field will display that text in the banner.
Task recovery
You can define the SQL state classes and error codes that, when encountered, will cause a task to be retried.
l Maximum retry count: The number of times to retry a task before exiting with failure.
Increasing the number of retries will impact system resources. Therefore, only increase the
default value if you expect tasks to recover after more than the default number of retries.
l Interval between retry attempts (sec): The time to wait between retry attempts. Increasing
the interval will consume more system resources. Therefore, only increase the default value
if it is critical that the task recover as soon as possible.
l Retry on these SQL state classes: The default is 08 (connection exceptions). You can add
additional classes as desired. Classes should be separated with a comma.
Example: 08,22,2F
l Retry also on these error codes: The default is 1205 (which occurs when a table is locked
by another process). You can add additional error codes as desired. Error codes should be
separated with a comma.
Example: 1205,2020,233
Resetting projects
You can reset projects as required. This can be useful during the project development stage as it
allows you to easily delete unwanted project elements.
To reset a project:
1. Open your project as described in Managing and monitoring projects (page 292).
2. Click the downward arrow to the right of the project name and select Reset Project from the
drop-down menu.
The Reset Project window opens.
3. Select which elements to reset according to your project type.
l Metadata and mapping definitions
For information on mappings, data storage tasks and metadata, see Selecting source
tables and managing metadata (page 328).
l Reusable transformations
For information on the reusable transformations, see Reusable transformations (page
348).
l Storage tables and files
For information on the Storage Zone, see Creating and Managing Storage Zone Tasks
(page 350).
l Command tasks
For more information, see Creating and managing command tasks (page 366).
l DDL scripts
For more information on DDL scripts, see Project settings (page 293) and Viewing and
downloading DDL scripts (page 309).
4. Click Reset Project and then click Yes when prompted to confirm your request.
Project deployment
Project deployment packages can be used to back up projects or migrate projects between
different environments (e.g. testing to production). As a deployment package is intended to be
deployed in a new environment, it contains the Storage Zone and data source definitions, but
without any passwords. The deployment package also does not contain any data from the Storage
Zone, only the metadata. The deployment package also contains the project metadata and mapping
information, which should be consistent with the Landing Zone tables in the new environment.
For a complete list of objects contained in the deployment package, see Exporting a project (page
301).
Deploying packages
This section explains how to deploy a project deployment package. You can only deploy packages
to an existing project. Therefore, before deploying a project, create a new project with the user
name and password required for connecting to the Storage Zone database and the Landing Zone
database (if defined) in the new environment. In addition, the Landing Zone databases in the target
project must have the same display name (defined in the Compose console) as the corresponding
databases in the source project. Note that as database settings are usually environment specific,
the database settings in the target project will not be overwritten by those of the source project.
When deploying, Compose does not override existing connection parameters, assuming they are
environment-local. This enables you to easily migrate projects from test to production, for example,
without the need to change user names, passwords or IP addresses.
If preferred, you can create an empty project and provide the required credentials after
the deployment completes. In this case, an error message prompting you for the missing
credentials will be displayed after the deployment completes.
To deploy a project:
1. Copy the ZIP file created when you created your deployment package to a location that is
accessible from the Compose machine.
2. Open Compose and choose one of the following methods:
l In the main Compose window, select the desired project. Then, click the Deployment
toolbar button and select Deploy from the drop-down menu.
l In the project window, select Deployment > Deploy from the project drop-down
menu.
The Deploy window opens.
3. Either drag and drop the file on the window.
-OR-
Click Select and browse to the location of the deployment package. In the Open window,
either double-click the deployment package ZIP file or select the file and click OK.
The package details will be displayed.
4. Click Deploy to deploy the package. When prompted to replace the existing project, confirm
the operation.
The project will be deployed.
When deploying a project defined with multiple Replicate Servers to any of the following:
Then the Landing Zone settings from the source project will be used, but the missing
databases will be created without a password and Replicate Server. These will need to
be configured manually.
Under normal circumstances, use the deployment options described in Project deployment (page
299) to export and import projects. For deployment automation or control by another tool, you can
use the command line interface (CLI) to perform the following tasks:
To export or import a project or project configuration including passwords, you first need
to change the default Master User Password.
For more information on changing the master user password, see Changing the master
user password (page 30).
See also: Migrating projects from the test environment to the production environment
(page 307) and Import/Export scenarios: When is a password required? (page 307)
Before running any command, you must run the Connecting to Qlik Compose server (page 78)
command.
To get help when using the command line, you can run the Help command. For example, for help
about exporting a project, issue the following command:
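The following is an assumed form of the Help command (the exact syntax may differ in your Compose version):
ComposeCli.exe help export_project_repository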
Exporting a project
You can use the export_project_repository CLI to export a project.
Existing Storage Zone tables and generated task statements are not exported.
Notifications and schedules are also not exported as they are considered to be
environment-specific.
Command syntax
ComposeCli.exe export_project_repository --project project_name --outfile output_file [--is_without_credentials] [--password password] [--master_user_password master_user_password]
Parameters
Parameter Description
--outfile The path to and name of the output file. This file is in JSON format
(e.g. C:\file.json).
--is_without_credentials Use this parameter to specify that you want to export the project
settings without the encrypted fields. When importing to a new
project, you will need to manually enter the project passwords (in
the Compose database connection settings) after the import
completes. In addition to eliminating the need to specify a
password when exporting or importing the project, the is_without_
credentials parameter also allows the project to be used in every
Compose installation, regardless of its master user password. It is
also useful in the event that you would like to keep the existing
passwords in the target environment (e.g. when exporting from a
testing environment to an existing project in the production
environment).
--master_user_password The master user password defined for the source machine. When
used, this parameter must be used together with the password
parameter. Use the master_user_password parameter if you want
to encrypt the credentials in the exported project, but do not want
the source master password to be used in a different environment.
In such a case, when you import the project to an environment that
has a different master password, you will only need to specify the
password qualifier.
Example
Export project without a password
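A representative invocation, based on the syntax above (the project name and output path are illustrative):
ComposeCli.exe export_project_repository --project MyProject --outfile C:\file.json --is_without_credentials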
Importing a project
You can use the import_project_repository CLI to import a project. If you import to an existing
project, all of the project settings, except the project configuration items will be overridden. For
information on the project configuration items, see Exporting the project configuration (page 304).
Command syntax
ComposeCli.exe import_project_repository --project project_name --infile input_file [--password password] [--is_without_credentials] [--override_configuration] [--dont_backup_existing_project]
Parameters
Parameter Description
--infile The full path to the input file, including the file name. This
file is in JSON format (e.g. C:\file.json).
--dont_backup_existing_project Use this parameter if you do not want the existing project to be backed up before the import. By default, the existing project is backed up to the following location:
<product_dir>\data\projects\<project_name>_backup_<timestamp>
Example
ComposeCli.exe import_project_repository --project MyProject --infile file.json --password
MyPassword --override_configuration --autogen
Existing Storage Zone tables and generated task statements are not imported. After the
import completes, you must perform step 3 below. You may also need to perform step 1
or 2, depending on whether you changed the Storage Zone connection settings (step 1)
or kept the existing connection settings (step 2).
1. If you changed the Storage Zone connection settings after importing the project,
then you need to create the tables in the new Storage Zone.
2. If you edited the Metadata in a testing environment and then imported the project
into a production environment, you need to validate and adjust the Storage Zone.
3. Generate the Data Storage task statements.
For information on validating the Storage Zone and generating the task statements, see Creating
and Managing Storage Zone Tasks (page 350).
For information about migrating projects, see Migrating projects from the test environment to the
production environment (page 307).
Exporting the project configuration
Command syntax:
ComposeCli.exe export_project_repository_config --project project_name --outfile output_file [--is_without_credentials] [--password password] [--master_user_password master_user_password]
Parameters
Parameter Description
--outfile The path to and name of the output file. This file is in JSON format
(e.g. C:\file.json).
--is_without_credentials Use this parameter to specify that you want to export the project
settings without the encrypted fields. When importing to a new
project, you will need to manually enter the Landing Zone(s) and
Storage Zone passwords (in the Connections panel) after the
import completes. In addition to eliminating the need to specify a
password when exporting or importing the project, the is_without_
credentials parameter also allows the project to be used in every
Compose installation, regardless of its master user password. It is
also useful in the event that you would like to keep the existing
passwords in the target environment (e.g. when exporting from a
testing environment to an existing project in the production
environment).
--master_user_password The master user password defined for the source machine. When
used, this parameter must be used together with the password
parameter. Use the master_user_password parameter if you want
to encrypt the credentials in the exported project, but do not want
the source master password to be used in a different environment.
In such a case, when you import the project to an environment that
has a different master password, you will only need to specify the
password qualifier.
Example
Export project configuration without a password
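A representative invocation, based on the syntax above (the project name and output path are illustrative):
ComposeCli.exe export_project_repository_config --project MyProject --outfile C:\file.json --is_without_credentials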
Importing the project configuration
Before you can import the project configuration, you must first run the import_
project_repository command described in Importing a project (page 303).
Command syntax:
ComposeCli.exe import_project_repository_config --project project_name --infile input_file [--password password] [--is_without_credentials]
Parameters
Parameter Description
--infile The full path to the input file, including the file name. This file is in
JSON format (e.g. C:\file.json).
--is_without_credentials Use this parameter to import the project settings without
the encrypted fields. In this case, you will need to manually enter
the project's Landing Zone and Storage Zone passwords in the
Data Zone Connection settings.
Example
ComposeCli.exe import_project_repository_config --project MyProject --infile file.json --
password MyPassword
Landing and Storage Connections (landing, storage and provisioning) will not be
overridden when moving to a production environment. This also includes the file format
set in the provisioning task.
The Landing Zone and Storage Zone display names must be identical in both the testing
and the production environments.
To perform the initial migration from the testing environment to the production environment:
1. Export the project from the test environment as described in Exporting a project (page 301).
2. Import the test project to the production environment as described in Importing a project
(page 303).
3. Edit the connection settings to point to the production Landing Zone and Storage Zone.
For more information, see Defining Landing Zones (page 326) and Defining a connection to
the Storage Zone (page 317), respectively.
4. Configure notifications and scheduling as needed.
For more information, see Scheduling tasks (page 371) and Notifications (page 373)
respectively.
To migrate subsequent changes from the testing environment to the production environment:
1. Export the project from the test environment as described in Exporting a project (page 301).
2. Import the test project to the production environment as described in Importing a project
(page 303).
In all scenarios, if you import a project to an existing project, the credentials of the
existing projects are preserved (as they are part of the project configuration).
Scenario 1: Moving a project or project configuration between two Compose machines without
retaining the project credentials. This is useful when importing to a new project that will have
different project credentials.
In such a scenario, simply add the is_without_credentials parameter to either the export or the
import command.
Scenario 2: Moving a project or project configuration between two Compose machines that
have the same Master User Password.
In such a scenario, neither the export command nor the import command need to include a
password. If you do not want the source and target projects to have the same credentials (for Data
Zone connectivity, etc.), then you also need to specify the is_without_credentials parameter
in either the export or the import command.
Scenario 3: Moving a project or project configuration between two Compose machines that
have a different Master User Password, but without revealing the Master User Password of
the source machine.
In such a scenario, the export command must include the password and master_user_password
parameters while the import command must include the password parameter. The same password
(specified with the password parameter) must be used for both export and import.
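As an illustration of this scenario, using the syntax documented above (the project name, file path, and password values are placeholders):
ComposeCli.exe export_project_repository --project MyProject --outfile C:\file.json --password MyPassword --master_user_password MyMasterUserPassword
ComposeCli.exe import_project_repository --project MyProject --infile C:\file.json --password MyPassword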
Scenario 4: Moving a project or project configuration between two Compose machines that
have a different Master User Password.
In such a scenario, the export command does not need to include a password, but the import
command should specify the Master User Password of the source machine (using the password
parameter).
Command syntax
ComposeCli.exe generate_project --project project_name [--database_already_adjusted]
Parameters
Parameter Description
Example
ComposeCli.exe generate_project --project MyProject --database_already_adjusted
If Compose encounters an error while generating a storage task, it will skip the
problematic task and continue with the remaining tasks.
For more information on the Generate DDL scripts but do not run them option, see Project settings (page 293).
1. Open your project as described in Managing and monitoring projects (page 292).
2. Click the downward arrow to the right of the project name and select Show DDL Scripts
from the drop-down menu.
The DDL Script Files window opens.
3. To view a script, select the desired script in the Script Files pane on the left. The script will
be displayed on the right.
4. To download a script, select the desired script in the Script Files pane on the left. Then click
the download button in the top right of the window.
5. To search for an element in the script, start to type in the search box. All strings that match
the search query will be highlighted in blue.
You can navigate between search query matches using the arrows to the right of the search
box. Use the right and left single arrows to navigate matches sequentially. Use the right and
left double arrows to jump to the last and first match respectively.
6. To reset the search, either delete the search query or click the "x" in the right of the search
box.
Project versioning
Compose provides built-in project version control using the Git engine. Version control enables
Compose developers to commit project revisions to both a local and a remote Git repository. If a
mistake is made, Compose developers can easily roll back to earlier versions of the project while
minimizing disruption to all team members.
Revisions only store metadata and mapping information. After you revert to a saved
revision, you will need to recreate the data warehouse and data mart tables.
To prevent conflicts, each Compose project should have its own Git repository.
1. From the project drop-down menu, select Version Control > Settings.
The Version Control Settings - Git window opens.
The Local Commits area shows the local root folder where project revisions are committed.
The first time a project revision is committed, Compose creates a JSON file with the current
project settings. The <project_name>.json file is archived to a ZIP file (<project_name>_
deployment.zip), which is located in a project-specific folder under the source-control folder.
2. To enable commits to a remote Git database, select Enable remote commits and then
provide the following information:
l URL - The address of the remote Git database.
l User name - Your user name for accessing the remote Git database.
l Password - Your Personal Access Token for accessing the remote Git database.
Committing projects
You can commit a project using the console or using the CLI:
1. From the project drop-down menu, select Version Control > Commit.
The Commit - <Project_Name> window opens.
2. Enter a message in the Message box and optionally select the Remote push check box.
Note that the Remote push check box will be disabled if the Enable remote commits option
described above is not selected.
Command syntax
ComposeCli.exe commit --project project_name [--message message] [--remote]
Parameters
Parameter Description
Example
ComposeCli.exe commit --project MyProject --remote
1. From the project drop-down menu, select Version Control > Revisions history.
The Revision History - <Project_Name> window opens.
By default, the last 10 revisions are shown. You can change this number by selecting one of
the available options from the Show drop-down list.
2. Optionally, use the Search box to find a specific revision.
3. Select the desired revision and then click the Deploy to Revision toolbar button.
4. When prompted to confirm the operation, click Yes.
The existing project will be replaced.
5. Click Close to close the Revision History - <Project_Name> window.
1. From the project drop-down menu, select Version Control > Revisions history.
The Revision History - <Project_Name> window opens.
By default, the last 10 revisions are shown. You can change this number by selecting one of
the available options from the Show drop-down list.
2. Optionally, use the Search box to find a specific revision.
3. Select the desired revision and then click the Download Revision as Package toolbar
button.
The package will be saved as a ZIP file in your browser's default download location.
As a prerequisite to creating a diagnostics package, the project must have at least one
database connection configured.
In this section:
High-level flow
A Qlik Compose workflow is typically set up as follows (simplified):
1. In Replicate, define a task that replicates the source tables to a specific target. The target
should be defined as the Landing Zone in your Qlik Compose project.
2. In Compose:
a. Configure access to your Storage Zone and your Landing Zone(s).
b. Use the "Discover" option to auto-generate the metadata from the source tables
located in the Landing Zone(s). You can even create the Metadata manually if you
prefer.
c. Optionally, create the Storage Zone tables and then generate the ETL statements that
will be executed when the task runs.
d. Run the separate Full Load and CDC tasks (in that order) that were automatically
created when the source tables were discovered.
Console elements
This section will familiarize you with the elements that comprise the Qlik Compose UI.
From the Windows Start menu, select All Programs > Qlik Compose > Qlik Compose Console.
Management View
In Management view, you can manage the following:
Designer View
When you add a new project or open an existing project, the console switches to Designer view.
You can switch back and forth between Designer view and Monitor view by clicking the Designer
and Monitor tabs in the top right of the console.
l Landing and Storage Connections - Configure access to your Landing Zone(s) and Storage
Zone.
For more information, see Defining Landing Zones (page 326) and Defining a connection to
the Storage Zone (page 317), respectively.
l Storage Zone - In the Storage Zone, you can:
l Discover and manage the source table metadata.
l Define data storage tasks that move the data from the Landing Zone(s) to the Storage
Zone.
For more information, see Selecting source tables and managing metadata (page 328) and
Creating and Managing Storage Zone Tasks (page 350).
In Designer view, each of the panels has a bar below the panel name. The bar can be empty, half-
filled or completely filled, according to the current configuration status of the panel properties, as
follows:
Monitor View
To switch to Monitor view, click the Monitor tab in the top right of the console.
In Monitor view, you can view the status of Qlik Compose tasks, schedule their execution (either
individually or as a workflow), view logs, and create notifications.
For more information, see Controlling and monitoring tasks and workflows (page 368).
In this section:
Required permissions
The following permissions are required:
As the server connection settings for the Landing Zone are derived from the Storage
Zone settings, you must define a Storage Zone before defining a Landing Zone.
For more information on adding data sources, see Defining Landing Zones (page 326).
1. Open your project and click the Manage button in the bottom left of the Databases panel.
The Manage Databases window opens.
2. Either, click the Add New Database link in the middle of the window.
-OR-
Click the New toolbar button.
The New Storage window opens. The settings will be relevant for the compute platform you
selected when you set up your project. The sections below detail the settings according to
each of the available compute platforms.
To use AVRO file format with Hive 3.x, you must set the following parameter:
metastore.storage.schema.reader.impl=org.apache.hadoop.hive.metastore.SerDeStorageSchemaReader
Authentication Type:
l User name - Select to connect to the Hadoop cluster with only a user name. Then, in the
User name field, specify the name of a user authorized to access the Hadoop cluster.
l User name and password - Select to connect to the Hadoop cluster with a user name and
password. Then, in the User name and Password fields, specify the name and password of a
user authorized to access the Hadoop cluster.
l Knox - Select this option if you need to access the Hortonworks Hadoop distribution through
a Knox Gateway. Then, provide the following information:
l Host - The FQDN (Fully Qualified Domain Name) of the Knox Gateway host.
l Knox port - The port number to use to access the host. The default is "8443".
l Knox Gateway path - The context path for the gateway. The default is "gateway".
The port and path values are set in the gateway-site.xml file. If you are
unsure whether the default values have been changed, contact your IT
department.
l Cluster name - The cluster name as configured in Knox. The default is "Default".
l User name - Enter your user name for accessing the Knox gateway.
l Password - Enter your password for accessing the Knox gateway.
l Kerberos - Select to authenticate against the Hadoop cluster using Kerberos. Then, provide
the following information:
l Realm: The name of the realm in which your Hadoop cluster resides.
For example, if the full principal name is [email protected], then EXAMPLE.COM
is the realm.
l Principal: The user name to use for authentication. The principal must be a member of
the realm entered above. For example, if the full principal name is
[email protected], then john.doe is the principal.
l Keytab file: The full path of the Keytab file. The Keytab file should contain the key of
the Principal specified above.
l Host: The FQDN that will be used to locate the correct Principal in Kerberos. This is
only required if the IP address of the Hive machine is not known to Kerberos.
l Service name: The default is "hive". You should only change this if you are sure that
the service name is different.
If you are unsure about any of the above, consult your IT administrator.
Hive Access
l Use ZooKeeper - Select if your Hive machines are managed by Apache ZooKeeper.
l ZooKeeper hosts - The machines that make up the ZooKeeper ensemble (cluster). These
should be specified in the following format:
host1:port1,host2:port2,host3:port3
l ZooKeeper namespace - The namespace on ZooKeeper under which the HiveServer2
znodes are located.
l Host - If you are not using ZooKeeper, specify the IP address of the Hive machine. This
should be the same as the host name or IP address specified in the Cloudera Data Platform
Private Cloud or Hortonworks Data Platform target endpoint settings in the Replicate task.
l Port - If you are not using ZooKeeper, optionally change the default port.
l Database name - Specify the name of the Hive target database. This must be different from
the database specified in the Landing Zone settings.
If the database does not exist, Compose will try to create it. This requires the
Compose user to be granted the necessary permission to create the database.
l JDBC parameters - Additional parameters to add to the default Simba JDBC connection
string. These should be specified as key=value pairs separated by semicolons.
Example:
KEY=VALUE;KEY1=VALUE1
l You can set Hive parameters in the JDBC parameters. For example:
l mapred.job.queue.name=<queuename>
l hive.execution.engine=<enginename>
l To distinguish Compose Hive sessions from other Hive Sessions, if Tez is
being used, you can define a JDBC parameter to change the query
description, as follows:
hive.query.name=my_description
Authentication type:
l User name - Select to connect to the Hadoop cluster with only a user name. Then, in the
User name field, specify the name of a user authorized to access the Hadoop cluster.
l User name and password - Select to connect to the Hadoop cluster with a user name and
password. Then, in the User name and Password fields, specify the name and password of a
user authorized to access the Hadoop cluster.
If you are unsure about any of the above, consult your IT administrator.
Hive Access
l Host - Specify the IP address of the Hive machine. This should be the same as the host name
or IP address specified in the Amazon EMR target endpoint settings in the Replicate task.
l Port - Optionally change the default port.
l Database name - Specify the name of the Hive target database. This must be different from
the database specified in the Landing Zone settings.
If the database does not exist, Compose will try to create it. This requires the
Compose user to be granted the necessary permission to create the database.
l JDBC parameters - Additional parameters to add to the default Simba JDBC connection
string. These should be specified as key=value pairs separated by semicolons.
Example:
KEY=VALUE;KEY1=VALUE1
l You can set Hive parameters in the JDBC parameters. For example:
l mapred.job.queue.name=<queuename>
l hive.execution.engine=<enginename>
l To distinguish Compose Hive sessions from other Hive Sessions, if Tez is
being used, you can define a JDBC parameter to change the query
description, as follows:
hive.query.name=my_description
l Hive metadata storage type - Select one of the following storage mediums for your Hive
metadata:
l Hive Metastore - This is the default metadata storage type.
l AWS Glue Data Catalog - You can choose to store Hive metadata using the AWS Glue
Data Catalog. AWS Glue allows you to store and share metadata in the AWS Cloud in
the same way as in a Hive metastore.
When using AWS Glue Data Catalog for metadata storage, Compose
control tables will be created with the data type STRING instead of
VARCHAR (LENGTH).
l User name: The default user name is "token", which requires you to specify
your personal access token in the Password field. Although it's strongly
recommended to use a token, you can also access your Databricks workspace
using a standard user name and password.
l Password: If you are using a personal access token, this will be the token value.
Otherwise, specify the password for accessing your Databricks workspace.
l OAuth: Provide the following information for accessing your Databricks workspace:
l Client ID: The client ID of your application.
l Client Secret: The client secret of your application.
If you are unsure about any of the above, consult your IT administrator.
Hive Access
l Host - Specify the IP address of the Hive machine. This should be the same as the host name
or IP address specified in the Databricks target endpoint settings in the Replicate task.
l Port - Optionally change the default port.
l Catalog - If you want the storage tables to be created in Unity Catalog, specify the catalog
name.
If the Replicate task is Full Load and Store Changes, the storage catalog name can
be whatever you choose. However, both the catalog name defined in the
Replicate Databricks (Cloud Storage) target endpoint and the catalog name
defined in the landing settings must be hive_metastore.
l Database name - Select the Hive target database. This must be different from the database
specified in the Landing Zone settings. If you specified a catalog (above), only databases in
the catalog will be available for selection.
l If the database does not exist, Compose will try to create it. This requires the Compose user to be granted the necessary permission to create the database.
l To prevent table name conflicts when using Databricks, the Landing Zone
and Storage Zone databases should be different.
l JDBC parameters - Additional parameters to add to the default Simba JDBC connection
string.
The following parameters are set by default and should not be changed:
l UseNativeQuery=1
l spark.sql.crossJoin.enabled=true
This parameter controls Spark SQL’s behavior when encountering queries that could
potentially result in a Cartesian product (cross join).
l You can set Hive parameters in the JDBC parameters. For example:
l mapred.job.queue.name=<queuename>
l hive.execution.engine=<enginename>
l To distinguish Compose Hive sessions from other Hive Sessions, if Tez is
being used, you can define a JDBC parameter to change the query
description, as follows:
hive.query.name=my_description
l User name - Specify the name of a user authorized to access the Hadoop cluster.
l Password - Specify the password of the user specified in the User name field.
Hive Access
l Host - Specify the IP address of the Hive machine. This should be the same as the host name
or IP address specified in the Microsoft Azure HDInsight target endpoint settings in the
Replicate task.
l Port - Optionally change the default port.
l Database name - Specify the name of the Hive target database. This must be different from
the database specified in the Landing Zone settings.
If the database does not exist, Compose will try to create it. This requires the
Compose user to be granted the necessary permission to create the database.
l JDBC parameters - Additional parameters to add to the default Simba JDBC connection
string. These should be specified as key=value pairs separated by semicolons.
Example:
KEY=VALUE;KEY1=VALUE1
l You can set Hive parameters in the JDBC parameters. For example:
l mapred.job.queue.name=<queuename>
l hive.execution.engine=<enginename>
l To distinguish Compose Hive sessions from other Hive Sessions, if Tez is
being used, you can define a JDBC parameter to change the query
description, as follows:
hive.query.name=my_description
Authentication type:
l User name - Select to connect to the Hadoop cluster with only a user name. Then, in the
User name field, specify the name of a user authorized to access the Hadoop cluster.
l User name and password - Select to connect to the Hadoop cluster with a user name and
password. Then, in the User name and Password fields, specify the name and password of a
user authorized to access the Hadoop cluster.
If you are unsure about any of the above, consult your IT administrator.
Hive Access
l Host - Specify the IP address of the Hive machine. This should be the same as the host name
or IP address specified in the Google Dataproc target endpoint settings in the Replicate task.
l Port - Optionally change the default port.
l Database name - Specify the name of the Hive target database. This must be different from
the database specified in the Landing Zone settings.
If the database does not exist, Compose will try to create it. This requires the
Compose user to be granted the necessary permission to create the database.
l JDBC parameters - Additional parameters to add to the default Simba JDBC connection
string. These should be specified as key=value pairs separated by semicolons.
Example:
KEY=VALUE;KEY1=VALUE1
l You can set Hive parameters in the JDBC parameters. For example:
l mapred.job.queue.name=<queuename>
l hive.execution.engine=<enginename>
l To distinguish Compose Hive sessions from other Hive Sessions, if Tez is
being used, you can define a JDBC parameter to change the query
description, as follows:
hive.query.name=my_description
1. Click Test Connection to verify that Compose is able to establish a connection with the
specified database.
2. Click OK to save your settings. The database is added to the list on the left side of the
Manage Databases window.
There are four types of view, depending on the project-level or entity-level data store type:
l ODS standard views – Created when the data store type is ODS only. These views are only
updated when the storage task is run.
l ODS Live Views – Created when the data store type is ODS only. As opposed to standard
views, these views always reflect any changes to the Replicate Change Tables and to the
Storage tables.
l HDS standard views – Created when the data store type is ODS + HDS. These views contain
both current records and historical records and will only be updated if you run storage tasks.
l HDS Live Views – Created when the data store type is ODS + HDS. These views contain both
current records and historical records. As opposed to standard views, these views always
reflect any changes to the Replicate Change Tables and to the Storage tables.
For information about turning off Speed partition mode, setting partitioning intervals,
and partition cleanup, see the Replicate Help.
Tables that were reloaded in Replicate will be automatically reloaded in Compose the
next time the task runs. To prevent data inconsistency, Live Views should not be read
while the tables are being reloaded.
Standard views contain data that was already applied to the storage tables, with mid to low-level
latency. As consuming data from standard views requires less computing resources, this should be
the first choice for downstream users. However, if latency is too high for some applications, Live
views can be used instead. Although using live views significantly reduces latency, doing so
requires greater computational resources. There is also the possibility that the data in live views
might be less consistent than the data in standard views as updates may not have been applied to
all the storage tables at the same time.
Although the views are in a separate database, you can use the suffixes (specified in the project
settings’ Naming tab) to help identify them.
In this section:
l Read metadata
l Select from tables
For information on configuring the Landing Zone, see Defining Landing Zones connections (page
326).
The Landing Zone connection settings tell Compose where the source tables from the Replicate
task are located. Since the Landing Zone is always located on the Storage Zone Server and the
Storage Zone connection details have already been defined, you do not need to provide them
again.
Before you can define a Landing Zone connection in a Data Lake project, you first need
to define a Storage Zone connection.
For more information on defining a Storage Zone connection, see Defining a connection to the
Storage Zone (page 317).
3. In the Name field, specify a display name for your Landing Zone.
4. From the Content type drop-down list, choose whether the content in the landing zone is
Full Load Only, Change Processing or Full Load and Change Processing (according to the
Qlik Replicate task definition).
5. Specify the name of the Unity Catalog in the Catalog field, according to the following
guidelines:
l If the Replicate task is Full Load and Store Changes, both the catalog name defined in
the Replicate Databricks (Cloud Storage) target endpoint and the catalog name
defined in landing settings must be hive_metastore.
l If the Replicate task is Full Load only, the name must be the same as the catalog name
defined in the Replicate Databricks (Cloud Storage) target endpoint.
6. In the Database name field, select the database that was defined in the Replicate target
endpoint settings. If you specified a catalog (above), only databases in the catalog will be
available for selection.
For more information, see Defining a Qlik Replicate task (page 34).
7. Associate with Replicate Task - This is required. Select this to associate your Data Lake
project with the related Replicate task. Replicate tasks replicate the relevant tables from the
source database to the Landing Zone. Specifying the Replicate task name will enable you to
monitor and control that task from within Compose.
Before you can choose a Replicate task however, you first need to define the
connection settings to the Qlik Replicate Server machine. To do this, click the
Replicate Server Settings link below the Task field and then configure the
settings as described in Replicate Server settings (page 385).
Once you have configured connectivity to at least one Replicate Server, you can then
proceed to select a Replicate task.
To select a Replicate task:
1. Click the browse button to the right of the Associate with Replicate task field.
The Select Replicate Task window opens.
2. Select a Replicate Server from the Server drop-down list.
The Replicate Tasks list is populated with all tasks defined on the selected server.
3. Select the task that is replicating the source tables to the landing zone and then click
OK.
The name of the selected task is shown as read-only in the Associate with Replicate task
field.
8. Click Test Connection and then, if the connection is successful, click OK to save your
settings.
Edit a Data Zone connection - In the left side of the Manage Landing and Storage Connections window, select the desired Data Zone (Landing Zone or Storage Zone) and then click the Edit toolbar button.
Delete a Data Zone connection - In the left side of the Manage Landing and Storage Connections window, select the desired Data Zone (Landing Zone or Storage Zone) and then click the Delete toolbar button.
In this section:
l FD - This column is added to tables that contain attributes (columns) with a History Type 2.
The column is used to delimit the range of dates for a given record version. The column name
can be changed in the project settings.
If you change the "From Date" name in the project settings, the new name will
become a reserved word.
l TD - This column is added to tables that contain attributes (columns) with a History Type 2.
The column is used to delimit the range of dates for a given record version. The column name
can be changed in the project settings.
If you change the "To Date" name in the project settings, the new name will
become a reserved word.
l FKNR - Foreign key number column used in logging tables to report missing references
captured via the data warehouse ETL
In this section:
See also Managing entities (page 335) for information on adding entities manually.
If you want the metadata to be created with Primary Keys, you need to associate a
Replicate task with the Landing Zone. For instructions on how to do this, see Defining
Landing Zones connections (page 326).
3. Select the desired source Landing Zone and then click OK.
The Source Tables/Views Selection - Name window opens.
You can select multiple tables/views by holding down the [Shift] (sequential
selection) or [Ctrl] (non-sequential selection) button.
10. To add specific tables/views, select the desired tables and/or views and then click the >
(Add) button.
If you add a table that already exists in the Metadata with the same name, then
the new table is added with the name: source_table_name_DISCOVERED (or
source_table_name_DISCOVERED_02 if the name source_table_name_
DISCOVERED already exists, and so on).
If the table contains attribute domains that differ from existing ones but have the
same name, they will also be appended with the _01 suffix.
1. Open the Manage Metadata window as described in Managing the metadata (page 334).
2. In the Entities toolbar, click the Import from Project button.
The Import from Project wizard opens.
3. In the Entities tab:
l Select a project from the Import from Project drop-down list.
l Optionally, search for specific entities.
l Select which entities to import or select Select All to import all entities.
4. Click Next to select which mappings to import.
To create new entities and mappings if the selected entities and mappings
already exist, clear the Replace existing entities and mappings check box.
The new entities/mappings will be named <existing_name>_IMPORTED (or
<existing_name>_IMPORTED_<n+> if the entity/mapping is imported more than
once).
If you do not wish to import any mappings, clear the Mappings check box before
clicking Finish.
If you are aware of external changes to the metadata or if you notice any data synchronization
anomalies, Compose enables you to clear the metadata cache, either using the web console or
using the CLI.
1. Click the Manage button at the bottom left of the Storage Zone panel.
2. Click the Clear Landing Cache button in the Manage Storage Tasks window.
See also the section describing how to clear the cache before discover.
1. In the Storage Zone panel, select the Clear Metadata Cache item from the menu in the top
right corner.
2. Click Yes to clear the storage zone metadata.
3. When the storage zone metadata cache has been successfully cleared, click Close.
Command syntax:
ComposeCli.exe clear_cache --project project_name [--type landing|storage] [--landing_zone source_name]
Parameters
Parameter Description
--type The type of cache to clear. Possible values:
l landing
l storage
Example
ComposeCli.exe clear_cache --project MyProject --type landing --landing_zone MySource1
Validating the metadata does not recalculate expressions for historical data that has
changed.
1. Click the Validate button in the bottom right of the Metadata panel.
Compose will run validation checks and identify any entities which are not valid.
If the metadata is valid, the following message will be displayed:
Validation tests completed successfully. No issues were detected.
If the metadata is not valid, the Validating Storage Zone window opens. This window is
divided into the following columns:
l Severity: Warning or Error.
l Message: A message indicating why the entity is invalid.
l Names: The names of the affected entities.
l Resolve: To open the Manage Metadata window and manually resolve the issue, click
the Edit Entities button.
2. Resolve the issue (for example, by adding a Business Key) and then click Close.
The Validate Metadata window will open.
3. Click the Refresh button in the top left corner.
A message will confirm the metadata’s validity.
4. Click Close to exit the window.
1. Click the Validate button in the bottom right of the Storage Zone panel, or select Validate
from the drop-down menu in the top right of the Storage Zone panel.
2. Compose will run a series of validation checks and the Validating the Storage window
opens.
If the Storage Zone metadata is not valid, the following message will be displayed:
The metadata is not valid.
If the Storage Zone is valid, the following message will be displayed:
The Storage Zone is valid.
If the metadata in the Manage Metadata window is not the same as the Storage Zone
metadata, the following message will be displayed:
The Storage Zone is different from the metadata.
3. This step is only applicable if the Storage Zone metadata differs from the metadata in
Compose. Review the report in the Metadata and Storage Comparison Report window and
then do one of the following:
l If all the changes can be adjusted automatically, do one of the following according to
your configuration:
l Click Adjust Automatically. The Adjust Storage Zone progress window opens.
When the "The Storage Zone was adjusted successfully." message is displayed,
close the window.
l If the Generate DDL scripts but do not run them option is set, click Generate
Adjust Script.
l Click Drop and Recreate Tables. You will be prompted to confirm this action.
Click Yes. The Dropping Storage Tables window opens. When the "The
Storage Zone tables were dropped and recreated successfully." message is
shown, close the window.
l If the Generate DDL scripts but do not run them option is set, click Generate
DDL Script.
When you click Generate Adjust Script or Generate DDL Script, the Generate DDL Scripts
window opens showing the progress of the script generation.
The generated scripts will be saved to:
<product_dir>\data\projects\<project_name>\ddl-scripts
Once the script(s) have been generated, close the Generate DDL Scripts window.
When working with a Hive-based compute platform, after you close the Generate DDL
Scripts window, the DDL Script Files window opens automatically displaying the generated
scripts. The DDL Script Files window provides a read-only view that allows you to review the
scripts and download them.
The scripts need to be executed directly in your Storage Zone. Make sure that any
modifications that you make to the scripts are done prior to executing them.
When you run the adjust scripts, backup tables are created from the existing
tables. The backup table names are appended with an "_old" suffix and must be
deleted manually after the script completes.
Search for "TODO" in the script to locate the part of the script that needs
modifying.
You can open the Manage Metadata window in any of the following ways:
l Click the Manage button at the bottom left of the Metadata panel.
l Click the Entities number in the Metadata panel.
l Select Manage from the drop-down menu in the top right of the Metadata panel.
The Manage Metadata window is split into two tabs: The Logical Metadata tab and the Physical
Metadata tab. The Logical Metadata tab shows the entities and attributes as they appear in the
Metadata whereas the Physical Metadata tab provides a preview of the actual tables (and
columns) that will be created in the Storage Zone.
In the Logical Metadata tab, you can perform various management tasks such as adding and/or
editing entities and attributes.
In this section:
Managing entities
You can add, edit and remove entities as described in the table below.
Reducing the window size also shortens the toolbar. If the toolbar is too short to contain
all the buttons, the toolbar options will be displayed in the drop-down menu instead. The
shorter the toolbar, the more options will appear in the drop-down menu.
To Do This
Add an entity:
1. Click the New Entity button in the Entities toolbar.
2. Provide a name and description (optional) for the entity and then click OK.
Edit an entity:
1. Select the entity you want to edit and then select Edit from the drop-down menu in the Entities toolbar.
2. Edit the entity’s name and description (optional) and then click OK.
Remove an entity:
1. Select the entity (or multiple entities) that you want to remove, and then select Delete from the drop-down menu in the Entities toolbar.
2. When prompted to confirm the deletion, click Yes.
Duplicate an entity:
1. Select the entity you want to duplicate and then select Duplicate from the drop-down menu in the Entities toolbar.
2. Edit the entity’s name and description (optional) and then click OK.
The duplicated entity is added to the Entities list.
Import entities from another project:
See Importing entities and mappings from another project (page 330).
Include historical records:
Select the check box in the Save History column to the right of the desired entity. Note that if you chose ODS + HDS as your data store, all of the Save History check boxes will be selected by default.
Managing attributes
You can add, edit and remove attributes as required. All attributes in the Metadata belong to the
Attributes Domain. When adding a new attribute, you can either select an existing attribute from the
Attributes Domain or create a new Attributes Domain. Both of these options are described below.
The name cannot contain any of the following forbidden (by Hive) characters:
: ; . , ' "
You can create multiple instances of a single Attribute Domain. This is especially useful if you
want to use the same Attribute Domain across multiple tables, with each "instance" having its
own unique name. This also allows you to edit the properties of each attribute without
affecting the other attributes, even though all of the Attribute Domain instances share the
same parent Attribute Domain. For example, if the Attribute Domain name is "ID", you could
create one instance for it in the "Categories" entity named "CategoryID" and another
instance in the "Employees" entity named "EmployeeID". If, however, you edit the parent
Attribute Domain attribute, all instances of that attribute will be updated as well.
5. Data type: The data type of the Attribute Domain. This can only be edited by editing the
Attribute Domain.
6. To add a prefix to the attribute name, enter the desired prefix in the Prefix field.
Adding a prefix to an attribute name allows you to add multiple instances of the same
attribute domain. For example, the attribute "Employee" could become two different
attributes: "ReportsTo_Employee" and "HiredBy_Employee".
7. To create an expression for the attribute, click the fx button located after the Expression
field and then continue from Creating transformations (page 342).
8. Click OK to save your settings.
The name cannot contain any of the following forbidden (by Hive)
characters:
: ; . , ' "
b. From the Type drop-down list, select one of the available data types.
c. If the selected data type requires further configuration, additional fields will be
displayed. For example, when Decimal is selected, the Length and Scale fields will be
displayed. Set the values as desired.
d. Optionally, specify a Description.
e. Click OK to add the newly created attribute domain to the Attribute domain field and
close the New Attribute Domain window.
3. Continue from Step 4 in Add an existing attribute domain above.
You can also add new attribute domains via the Manage Attribute Domains
window. For more information, see Managing the Attributes Domain (page 338).
To edit an attribute:
Method 1:
1. Select the attribute you want to edit and then click the Edit button in the Attributes toolbar.
The Edit Attribute Name window opens.
2. Edit the values as required and then click OK.
Method 2:
To remove an attribute:
The Storage Zone needs to be "adjusted" when deleting an attribute from the metadata
and then adding the same attribute back to the metadata. However, the "Adjust"
operation will also delete the data from the corresponding Storage Zone column.
l Select the attribute you want to move and use the Move Up/Move to Top and Move Down/Move to Bottom toolbar buttons to move the attribute.
l See Add an attribute from the attributes domain or Edit an attribute above.
l Select an entity from the Entities list on the left of the Manage Metadata window and then
select Export to TSV from the drop-down menu in the Attributes toolbar.
Depending on your browser settings, you will either be prompted to download the
<entityname>_Attributes.tsv file or it will be downloaded to your default Downloads
location.
1. From the drop-down menu in the top right of the Storage Zone panel, select Manage
Attributes Domain.
2. Add, delete and edit attributes as described in the table below.
To Do This
Add an attributes domain:
The name cannot contain any of the following forbidden (by Hive) characters:
: ; . , ' "
3. From the Type drop-down list, select one of the available data types.
4. If the selected data type requires further configuration, additional fields will be displayed. For example, when Decimal is selected, the Length and Scale fields will be displayed. Set the values as desired.
5. Optionally specify a Description.
6. Click OK to add the attribute and close the New Attribute Domain window.
Edit an attribute domain:
1. Select the desired attribute and then click the Edit toolbar button.
The Edit: Name window opens.
2. Edit the attribute as described in steps 2-6 of Add an attributes domain above.
Note that the Edit: Name window also contains a Used in Entities list. Knowing which entities the attribute is used in may affect the type of changes you make, as the planned changes may not be appropriate for all entities.
Remove an attribute:
1. Select the attribute you want to delete and then click the Delete toolbar button.
2. When prompted to confirm the deletion, click Yes.
Schema evolution
Schema evolution allows users to easily detect structural changes to multiple data sources and
then control how those changes will be applied to your project. Schema evolution can be used to
detect all DDL changes that were made to the source database, although not all changes can be
applied automatically (see "Supported data changes" below). Schema evolution can be performed
using the web console or using the CLI as described below.
Schema evolution requires certain options to be turned on in the Replicate task(s). For information
about which options need to be enabled in Replicate for schema evolution, see Defining a Qlik
Replicate task (page 284).
Data warehouse adjustment and storage task generation will only occur if the
appropriate apply option (see Step 5 below) was selected.
If unsupported changes were detected, one of the following messages will be displayed:
DDL type <type> is not supported and will be skipped
If, for whatever reason, Compose fails to add the changes to the metadata or the
mappings, the next time you perform a scan, the changes will be detected again.
However, if Compose succeeds in applying the changes but fails to adjust the data
warehouse or generate the tasks, the changes will not be detected again. In such
a case, you will need to manually adjust the data warehouse and/or generate the
tasks after resolving the issue that prevented these operations from being
performed automatically.
Command syntax:
ComposeCli.exe schema_evolution --project project_name [--data_sources data_source] --action
apply_to_model_and_mappings|apply_to_model_mappings_adjust_storage|apply_to_model_mappings_
adjust_storage_generate_tasks|ignore_changes
Parameters
l --action: How (or whether) to apply the changes. Possible values are:
  l apply_to_model_and_mappings
  l apply_to_model_mappings_adjust_storage
  l apply_to_model_mappings_adjust_storage_generate_tasks
  l ignore_changes
Example
ComposeCli.exe schema_evolution --project MyProject --data_sources mysource1,mysource2 --
action apply_to_model_mappings_adjust_storage
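For example, to apply detected changes to the metadata and mappings only, without adjusting the storage or generating tasks, a command along these lines could be used (the project name is illustrative):
ComposeCli.exe schema_evolution --project MyProject --action apply_to_model_and_mappings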
Creating transformations
Compose allows you to transform data using an expression either in Replicate or Compose,
according to your needs. The table below provides further information about creating
transformations.
Replicate:
l Typical uses:
  l Filtering large amounts of data that is not needed for the Storage Zone (in the present or the future)
  l Obfuscation due to regulatory reasons or internal policies
  l Data type conversion (e.g. converting a source data type that is not supported on the Storage Zone platform)
l Where the transformation is applied: Before the data reaches the landing zone.
Metadata:
l Typical uses:
  l The default location if you are not sure where to put it
  l General business logic
  l Needed for several sources or several data marts
l Where the transformation is applied: Between the Landing Zone and the Storage Zone.
In this section:
The Expression Builder enables you to create a transformation without needing to type anything
manually.
The Expression Builder can be opened in several places, depending on your needs. For more
information about where to create a transformation, see Creating transformations (page 342).
l Tabs on the left of the Expression Builder: These tabs contain elements that you can add
to an expression. Select elements and add them to the Build Expression pane to create an
expression. For more information, see Building Expressions (page 345).
The following tabs are available:
l Parameters - Only displayed when opening the Expression Builder from within the
Reusable Transformations > Edit Transformation window.
For information on reusable transformations, see Reusable transformations (page
348) below.
l Input Columns/Input Attributes - Columns/attributes that can be used to build your
expression.
l Transformations - Contains a list of reusable transformations. The tab is not
displayed if no reusable transformations have been defined.
For information on reusable transformations, see Reusable transformations (page
348) below.
l Operators - Operators that can be used to build your expression.
l Functions - Functions that can be used to build your expression.
The Operators and Functions displayed in the Expression Builder use SQL
format. As SQL support and implementation is different for each database
type and version, the database being used in your Compose project will
determine which Operators and Functions will be available.
l Build Expression Pane: The Build Expression pane is where you build your expression. You
can add elements, such as columns or operators, to the pane, as well as type all or part of the
expression. For more information, see Building Expressions (page 345).
l Parse Expression Pane: This pane displays the parameters for the expression. After you
build the expression, click Parse Parameters to list the expression parameters. You can then
edit the parameters, enter a value for each of the parameters and associate attributes with
them. For more information, see Parsing expressions (page 346).
l Test Expression Pane: This panel displays the results of a test that you can run after you
provide values to each of the parameters in your expression. For more information, see Testing
expressions (page 346).
Building Expressions
The first step in using the Expression Builder is to build an expression in the Build Expression pane.
To build an expression:
1. Hover the mouse cursor over the element that you want to add to your expression
(expressions usually start with an Input Column) and click the arrow that appears to its right.
2. Add Operators, additional Input Columns, and Functions as required.
To add operators to your expression, you can use the Operator tab on the left or the
Operator buttons located above the Build Expression pane or any combination of
these.
Example:
To create an expression that combines the FirstName and LastName columns, do the
following:
1. Add the FirstName Input Column to the Build Expression pane.
2. Click the concatenate (+) operator.
3. Then add a space between single quote characters and click the concatenate (+) operator
again.
4. Add the LastName Input Column to the Build Expression pane.
The expression would look like this:
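FirstName + ' ' + LastName
(This is an illustrative reconstruction of the concatenation; the exact column notation shown in the Expression Builder may vary slightly depending on the database platform.)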
Parsing expressions
When you add operators to the expression, the expression’s parameters are usually added
automatically to the Parse Expression pane. However, when you complete your expression or edit
it, you may need to parse the expression to see all of the parameters.
l Click the Parse Expression button below the Build Expression pane.
If the expression is not valid, a red error message will appear at the bottom of the Expression
Builder window.
If the expression is valid, the expression parameters and attributes (Input Columns) will be
displayed in the Parse Expression pane. See the figure Test Expression (page 347).
Testing expressions
You test your expression to check that results are as expected. The following figure is an example
of an expression that has been evaluated and tested.
Testing an expression that contains an analytic function will validate the syntax without
actually executing the function. Additionally, the test will only be performed on a single
record.
Compose does not check the data types of columns used in an expression for
compatibility. For example, if a column of type integer is used in an expression for a
column of type varchar, the expression will not be executed successfully.
Test Expression
To test an expression:
For more information on the Edit Mappings window, see Column mappings (page 356) in
Creating and Managing Storage Zone Tasks (page 350).
Reusable transformations
In a single Compose project there may be several processes that require similar data
transformations. For example, a reusable transformation can be defined that concatenates first and
last names. This transformation could then be used both in the Customers mapping and in the
Employees mapping.
1. From the drop-down menu in the top right of the Storage Zone panel, select Reusable
Transformations.
The Reusable Transformations window opens.
The window is split into the following panes:
l Upper pane - Lists the reusable transformations that have been defined.
l Lower pane - Provides additional information about transformation instances such as
where they are in use (e.g. mappings, metadata, etc.) and the expression that was
created using the transformation.
Select a transformation to see the additional information.
2. Click the New Transformation toolbar button.
The New Transformation window opens.
1. In the Name field, specify a name for the transformation.
2. In the Category field, specify a category name. If the category name already exists it
will be displayed below the field when you start to type the name. To group the new
transformation in the same category, simply select the existing name (unless of course
you wish to create a new category with a similar name).
In the Expression Builder, transformations are grouped according to their category
name, making it easier to find the transformation you want to use. Therefore, when
specifying a category name, it is recommended to choose a name that reflects the
purpose of the transformation. For example, if you create several transformations that
concatenate data, it would make sense to group those transformations under a
category called "Join".
3. To add a parameter to the transformation, click the New button to the right of the
Parameters heading.
A new row is added to the Parameters list.
4. Specify a name for the parameter, select an appropriate data type, and optionally
provide a description.
Once a transformation has been defined, it will be available for selection as needed in the
Expression Builder’s Transformations tab.
To Do This
Delete a transformation:
Select the transformation and then click the Delete toolbar button. When prompted to confirm the action, click OK.
Edit a transformation:
Double-click the transformation or select the transformation and then click the Edit toolbar button. Continue as described in Reusable transformations (page 348).
In this section:
If table(s) were reloaded in Replicate, the table(s) will also be (automatically) reloaded in
Compose the next time the task runs. During reload, Live Views should not be read.
1. Click the Create button in the bottom right of the Storage Zone panel.
The Creating Storage Zone window opens.
A progress bar indicates the current progress. For each stage of the Storage Zone
generation process, a corresponding message appears in the Messages list.
2. When the process completes, click Close.
l Changing a Primary Key in the source record will cause a new record to be
inserted in the storage table.
l Regenerating the task statements after performing a non-supported change in
the metadata will appear to succeed without errors or warnings, but the task will
fail if run later.
l Defining a single task that ingests data from several Landing Zones is not
supported. As a workaround, you can create a separate task for each Landing
Zone.
1. Click the Data Storage Tasks button in the bottom left of the Storage Zone panel.
The Manage Data Storage Tasks window opens.
2. If there are multiple tasks, in the left pane, select the desired task.
3. Click the Generate toolbar button, and select one of the following options:
l With validation - This is the default option for generating storage tasks.
You can also generate storage tasks with validation (default option) by
clicking the Generate toolbar button.
l Without Validation - Select this option if you are sure that the storage tables are
adjusted properly and the mapping is valid. The generation of storage tasks is much
faster. Note that you could have errors later when running the task if something is not
valid.
The Generating Statements for Task: Name progress window opens. When the "Generate
task finished successfully" message is displayed, close the window.
Only mappings associated with the task in the Manage Tasks window will be
generated.
Running a task
Storage Zone tasks can be run manually, scheduled to run periodically or run as part of a workflow.
The section below describes how to run a Storage Zone task manually. For information on
scheduling Storage Zone tasks or including them in a workflow, see Controlling and monitoring
tasks and workflows (page 368).
l Was not originally selected in the Replicate Full Load and Apply Changes task (i.e.
was added later).
-OR-
l Was selected in a Replicate Full Load and Apply Changes task, but was not
selected in the mappings of the Compose Full Load and Change Processing data
storage tasks, and the tasks have already been run.
In any of the above scenarios, in order to get the data that was added later, you need to:
1. Duplicate the Compose Full Load and Change Processing tasks associated with
that table.
2. Run the duplicated Full Load task.
3. Run the duplicated Change Processing task.
Note that after running these tasks, duplicate records may exist in the Storage Zone, but
they will be removed when reading from the Storage Zone views.
1. Click the Manage button in the bottom left of the Storage Zone panel.
The Manage Storage Tasks window opens.
2. If multiple tasks have been defined, in the left pane, select the task that you want to run.
3. Click the Run toolbar button. The window switches to Monitor view and the following status
bars are displayed:
l Completed - Shows the tables that have already been loaded into Hive
l Loading - Shows the tables currently being loaded into Hive
l Queued - Shows the tables waiting to be loaded into Hive
l Error - Shows the tables that could not be loaded into Hive due to error. Click the
Show Details link below the bar to see more information about the statement(s) that
resulted in the error.
l Canceled - The number of canceled tables (tables that were not processed due to the
task being aborted) does not appear as a separate status bar. To view the number of
canceled tables, click the Select All link above the status bars.
To see more information about tables in a particular status, click the relevant status bar. A list
of tables in the selected status will be shown.
When the task status icon indicates that the task has completed, close the Manage Storage Tasks window.
You can stop the task at any time by clicking the Abort toolbar button. This may be
necessary if you need to urgently edit the task settings due to some unforeseen
development. After editing the task settings, simply click the Run button again to restart the
task.
You can also access the task log files by clicking the View Log button.
Aborting a task may leave the Storage Zone tables in an inconsistent state.
Consistency will be restored the next time the task is run.
l For each Compose task, all of the mapping tables should be populated by
data from one Replicate task.
l You must regenerate the task statements and then run a Storage Zone task
whenever the mappings are modified. Populating the Storage Zone can
either be done manually as described in Controlling data storage tasks
(page 352) or automatically as described in Scheduling tasks (page 371).
If you have already run the data mart tasks, then you also need to
regenerate the data mart ETLs and run the tasks again as described in
Managing task definitions (page 354).
For more information on global mappings, see Managing global mappings (page 163).
l Full Load - Loads the selected tables from the Landing Zone into the Storage Zone.
l Change Processing - Updates the Storage Zone tables with the Landing Zone changes.
l Full Load and Change Processing - Loads the selected tables into the Storage Zone and
then updates them with the Landing Zone changes.
To Do This
Add a new Task:
1. Click the Manage button at the bottom left of the Storage Zone panel.
The Manage Tasks window opens.
2. Click the New Task toolbar button.
The Add New Task window opens.
3. Specify a name for the task.
Duplicate a Task:
1. Click the Manage button at the bottom left of the Storage Zone panel.
The Manage Tasks window opens.
2. Select the task you want to duplicate and then click the Duplicate toolbar button.
The Duplicate window opens.
3. Specify a Name for the new Task.
4. Select a Landing Zone.
5. Optionally change the default Schema.
6. Select the Task type according to your Replicate task type.
7. Click OK.
8. Select the task name in the left pane and continue from Column mappings (page 356).
Column mappings
For improved metadata performance during discovery and mapping, Compose caches
the metadata of the Landing Zones after reading them. However, synchronization issues
may arise if the Landing Zone is modified outside of Compose. In such cases, you should
click Clear Landing Cache in the Mappings tab of the Storage Zone panel in order to
refresh the cache on the next reading of the metadata.
For details on recreating the Storage Zone cache, see Clearing the metadata cache
(page 364).
The mappings show the current mapping between the Landing Zone tables and the Storage Zone
tables. By default, the column names and data in the Landing Zone tables and the Storage Zone
tables will be identical. However, you can manually change the mappings according to your needs,
either by simply mapping a Landing Zone column to a different Storage Zone column and/or by
using an expression.
To Do This
Map a column in a Landing Zone table to a column in a Storage Zone table:
The mapping procedure differs depending on whether you are in Standard View or Compact View. For information on changing the view, see Change the view (page 359).
In Standard View:
1. Hover the mouse cursor over the Landing Zone column name as shown in the image below.
A gray dot appears to the right of the column name.
2. Drag the mouse cursor from the gray dot to the desired column in the Storage Zone table.
3. When the dotted line turns green (as shown below), release your mouse button.
The mapping operation is completed.
Note that if the dotted line turns red (instead of green), you will not be able to map the Landing Zone column with the desired Storage Zone column. A red dotted line indicates that the Landing Zone and Storage Zone column data types are incompatible with each other.
In Compact View:
1. Switch to Compact View as described in Change the view.
2. Drag the Landing Zone column to the cell located to the left of the target Storage Zone column.
Change the view:
Changing to a more compact view is recommended for Landing Zone tables that have numerous columns. In compact view, the table columns are organized in rows (instead of a single list), making it easier to locate Landing Zone columns and map them to the desired Storage Zone columns.
To change the view, click the Change View toolbar button. For information on creating mappings in Compact view, see Map a column in a Landing Zone table to a column in a Data Lake table.
Select a different source database:
Select a database from the Landing Zone Database drop-down list on the left of the window.
Select a different source schema:
Select a schema from the Schema drop-down list on the left of the window.
Select a different table:
Select a table from the Table drop-down list on the left of the window.
Create a column-level transformation:
1. Hover the mouse cursor over the Storage Zone column for which you want to create a transformation and then click the fx button that appears to its right.
The Expression Builder opens.
2. Continue from Creating transformations (page 342).
To Do This
Add a new mapping:
1. In the Data Lake Tables column, select the table that you want to map.
2. Click the New Mapping button above the Delivery Tables column.
The New Mapping window opens.
3. Optionally change the default mapping name.
4. From the Entity drop-down list, select the entity in the Storage Zone to which you want to map.
5. Click OK to save the mapping.
6. Enable the mapping.
Delete a mapping:
1. In the Mappings column, hover the mouse cursor over the mapping you want to delete.
2. Click the Delete (x) button that appears to its right.
3. Click OK when prompted to confirm the deletion.
Rename a mapping:
1. In the Mappings column, hover the mouse cursor over the mapping you want to rename.
2. Click the Rename (A) button that appears to its right.
The Rename window opens.
3. Specify a new name for the mapping and then click OK.
1. Click the link to the desired task in the Storage Zone panel.
The Manage Tasks window opens.
2. In the Mappings column, click the mapping for the Storage Zone table containing the result
column (with the data that you want to replace).
The Edit Mapping - Name window opens.
3. Hover the mouse cursor over the relevant Storage Zone column and then click the Lookup
button that appears to the right of the column name.
The Select Lookup Table window opens.
1. From the Database drop-down list, select the database containing the lookup table.
2. From the Schema drop-down list, select the schema containing your source lookup
tables.
3. Select either Table or View according to the lookup table type.
4. From the Table drop-down list, select the lookup table.
The right side of the Select Lookup Table window displays the lookup table columns
and their data types. To view the data in the lookup table, click the Show Lookup Data
button.
5. After you have selected the lookup table, click OK.
The Lookup Transformations - Table Name.Column Name window opens.
The window is divided into the following panes:
Upper pane: The upper part of the right pane (Condition) displays the condition expression,
which stipulates the condition(s) for performing the lookup.
Lower pane: The lower part of the right pane (Result Column) displays the column result
expression, which stipulates what data to replace in the target column.
4. To change the lookup table, click the Change Lookup Table button above the lookup table
columns and then perform steps a. to d. above.
5. To view the lookup table or landing table data, click the Show Lookup Data or Show Landing
Data buttons respectively.
6. To specify the condition(s) for performing the lookup, click the Create Expression button
(which changes to Edit Expression after an expression has been created) above the
Condition expression.
The Condition Expression - Column Name window opens.
7. Create an expression using the landing and lookup table columns on the left.
For an example, see Using lookup tables (page 360).
For information on creating expressions, see Creating transformations (page 342).
8. To specify what data to replace or add if the lookup conditions are met, click the Create
Expression button (which changes to Edit Expression after an expression has been created)
above the Result Column expression.
The Result Expression - Column Name window opens.
9. Create an expression using the landing and lookup table columns on the left.
For an example, see Using lookup tables (page 360).
For information on creating expressions, see Creating transformations (page 342).
10. To preview the results, click the Preview Results button.
11. Click OK to save your settings and close the Lookup Transformations - Table
Name.Column Name window.
Lookup example
The following example shows how a lookup table is used to concatenate a Dutch translation of the
category name (located in the lookup table) to the original category name located in the landing
table.
1. Condition expression:
${Lookup.CategoryID}=${Landing.CategoryID}
Meaning: Perform the lookup only if the Category ID in the landing table and the lookup table
are the same.
2. Result column expression:
${Lookup.CategoryName} + ' is ' + ${Landing.CategoryName}
Meaning: Add the data in the CategoryName column in the lookup table to the data in the
CategoryName column in the landing table (separated by the word "is").
Assuming the result column name is "Split Name", clicking the Preview Results button would
display the following table:
When changing certain project settings (e.g. table prefixes), dropping and recreating the tables is required. If you
change the Metadata after the Storage Zone tables and/or files were already created and loaded
with data, you should adjust the Storage Zone to reflect the modified Metadata (as described in
Validating the metadata and storage (page 332)). Some changes however cannot be resolved by
adjusting the Storage Zone. In such cases, you can either revert the Metadata to its pre-modified
state or drop and (optionally) recreate the Storage Zone tables.
Note that dropping and recreating tables will delete all of the data in the tables and should only be
performed in the absence of a better option.
l In some scenarios, you need to edit the CREATE table statements before they are
run. This can be done using the Generate DDL scripts but do not run them option
in Project settings (page 38). For example, if you want to override the default
sorting of your Storage Zone tables or add specific formatting annotations, you
will need to edit the script to accomplish this.
l The Change Processing context (i.e. the point in time when changes were last
captured) is deleted when dropping all tables but preserved when dropping
selected tables. Therefore after deleting selected tables, in order for Compose to
continue processing changes from when the tables were dropped, you need to
perform the following additional steps:
1. In the Storage Zone panel, select the Drop and Recreate|Tables item from the menu in the
top right corner.
2. The Drop and Recreate Tables window opens.
3. Select Recreate to drop and recreate the storage tables or Drop to drop them only.
4. Click OK to perform the drop and/or recreate operation.
The tables will be dropped and/or recreated, unless the Generate DDL scripts but do not run
them option is enabled.
If Compose detects a mismatch between the Logical Metadata (defined via the Metadata panel)
and the Storage Zone metadata, the view recreation operation will fail and you will need to validate
and adjust the storage before retrying the operation.
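The views can also be recreated using the Compose CLI. Based on the example below, the basic form of the command is as follows (any additional optional parameters are not shown here):
Command syntax:
ComposeCli.exe recreate_views --project project_name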
Example
ComposeCli.exe recreate_views --project MyProject
See also: Validating the metadata and storage (page 332)
If you are aware of external changes to the metadata, or if you notice any data synchronization
anomalies, Compose enables you to clear the metadata cache, either using the web console or
using the CLI.
1. Click the Manage button at the bottom left of the Storage Zone panel.
2. Click the Clear Landing Cache button in the Manage Storage Tasks window.
See also the section describing how to clear the cache before performing a discovery operation.
1. In the Storage Zone panel, select the Clear Metadata Cache item from the menu in the top
right corner.
2. Click Yes to clear the storage zone metadata.
3. When the storage zone metadata cache has been successfully cleared, click Close.
Command syntax:
ComposeCli.exe clear_cache --project project_name [--type landing|storage] [--landing_zone
source_name]
Parameters
l --type: Possible values:
  l landing
  l storage
Example
ComposeCli.exe clear_cache --project MyProject --type landing --landing_zone MySource1
-OR-
Click the Item View button and navigate through the statements using the navigation buttons at
the bottom of the Task Statements - <Source_Landing_Zone_Name> window.
To jump to a specific statement, type the statement's number in the Go To field at the
bottom of the window and then press [Enter].
In List View, click the Export to TSV File button located to the left of the search field.
To hide non-SQL steps from the display, select the Filter non-SQL steps check box.
1. In the Manage Storage Tasks window, select a task in the left pane and then click Settings.
The Setting - <Task_Name> window opens.
2. In the General tab, you can change the logging granularity. In the Log level drop-down list,
the following options are available:
l INFO (default) - Logs informational messages that highlight the progress of the task at
a coarse-grained level.
l DEBUG - Logs fine-grained informational events that can be used to debug the task.
l TRACE - Logs finer-grained informational events than the DEBUG level.
Note that the log levels DEBUG and TRACE impact performance. You should only
select them for troubleshooting if advised by Qlik Support.
3. In the Advanced tab, the following settings are available:
l Sequential Processing: Select this option if you want all the Storage Zone processes
to run sequentially, even if they can be run in parallel. This may be useful for
debugging or profiling, but it may also affect performance.
l Maximum number of database connections: Enter the maximum number of
database connections that Compose is allowed to open for the task. The default
number is 10.
l JVM memory: Edit the memory for the Java Virtual Machine (JVM) if you experience
performance issues. Xms is the minimum memory; Xmx is the maximum memory. The
JVM starts running with the Xms value and can use up to the Xmx value.
l Position in default workflow: Select where you want the Storage Zone tasks to
appear in the default workflow. For more information on workflows, see Workflows
(page 375).
4. To save your changes, click OK.
For security reasons, command tasks are blocked by default. To enable command tasks,
a Compose administrator needs to run the following commands in the Compose CLI:
In this section:
For security reasons, before you define a command task, make sure that the executable
or script file that you want to run resides in the following directory on the Compose
server machine:
PRODUCT_DIR\data\projects\YOUR_PROJECT\scripts
5. In the Parameters field, specify any parameters required by the command. Parameters
should be separated by a space.
6. The user context is the user account under which the Task will run. To change the current
user context, provide the User, Password and Domain of the account under which you want
the Task to run.
7. Click Save to save your changes or Discard to discard any unsaved changes.
The task will be added to the list of tasks in the left of the window.
To Do This
Edit a task:
Select the task in the tasks list in the left of the Manage Command Tasks window and edit it as described in Defining Command tasks (page 367).
Delete a task:
Select the task in the tasks list in the left of the Manage Command Tasks window and then click the Delete toolbar button. When prompted to confirm the deletion, click OK.
Search for a task:
Enter part of the task name in the search box above the task list. The list of tasks will be filtered to show only tasks that include the search term in their name.
For information on defining workflows, controlling and monitoring tasks, and controlling and
monitoring workflows, see Controlling and monitoring tasks and workflows (page 368).
1. Open the Manage Command Tasks window and select the task you want to run.
2. Click the Run toolbar button.
3. The Manage Command Tasks window switches to Monitor view.
In Monitor view the following information is available:
l The task ID
l The current status
l When the task started and ended
l The overall task progress
In this section:
1. Open a Compose project and click the Monitor icon in the top right of the console.
A list of tasks is displayed for the current project. The left pane of the monitor allows you to
filter the task list by status. It also indicates the current number of running, failed, and
completed tasks.
l Workflow - Executes several tasks in succession. See also Adding and designing
workflows (page 375).
l Command - For information about Command Tasks, see Creating and managing
command tasks (page 366).
l Replicate - The Qlik Replicate task that moves the data from the source database to
the Landing Zone.
l Started and Ended - The date and time the task started and completed (according to
the server time). If the task is running, the Ended column will display the current
progress. In the case of a Replicate task performing Change Processing, Running CDC
will be displayed.
l Next Instance - The next time the task is due to run (if the task is scheduled).
l Elapsed Time - The time it took for the task to complete or - if the task is still running -
how long the task has been running.
l Updated Tables - The number of tables updated in the Storage Zone.
l Scheduled - Whether the task has been scheduled. "N/A" indicates that the task has
never been scheduled whereas a check box indicates that the task has been
scheduled. Clear the check box to disable the scheduling.
2. To view additional information about a task, select the task. The information is displayed in
the following tabs in the lower pane of the monitor:
l Details - The Details tab shows the following status bars:
l Completed - Shows the tables that have already been loaded into Hive.
l Loading - Shows the tables currently being loaded into Hive.
l Queued - Shows the tables waiting to be loaded into Hive.
l Error - Shows the tables that could not be loaded into Hive due to error. Click
the Show Details link below the bar to see more information about the
statement(s) that resulted in the error.
To see more information about tables in a particular status, click the status bar. A list
of tables in the selected status will be shown.
You can also click the Task Commands button for more information about the
operations performed during the task.
l Progress Status - The Progress Status tab shows the task’s current progress as well
as the sub-status (Waiting/Standby, Running, Failed, etc.) of operations within the
task. To see details about a specific operation, click the number to the right of the
operation status.
For example, to view more information about an operation with an error status, click
the number to the right of the Failed bar.
l History - The History tab provides a list of previous task instances.
To view a task instance’s log file, select the task and click the View Log button.
To view more details about a task instance, either double-click the instance or select
the instance and then click the View Instance Details button. The Details tab is
shown.
3. To run a task immediately, select the task and then click the Run toolbar button.
4. To view and manage a task’s settings, select the task and then click the Open toolbar button.
For more information about the settings, see the relevant topic in this guide.
To abort a task:
Scheduling tasks
Scheduling tasks is a convenient way of continually updating the Storage Zone and associated data
mart(s). For instance, you could schedule a data warehouse task to run at 4:00 pm and then
schedule a data mart task to run at 5:00 pm.
Note that as Compose does not provide a task-chaining option (i.e. run another task as soon as the
current task completes), it may be better to schedule tasks using an external tool that supports this
capability.
You can also use the command line interface (CLI) to run a task. For details, see Running tasks using
the CLI (page 372).
To schedule a task:
The date and time the next instance is scheduled to run will appear in the Next Instance
column.
5. To disable a scheduled job, select the task and click the Edit Scheduling toolbar button.
Then, select the Disable check box in the <Name> Scheduler window.
6. To cancel a scheduled job for a task, select the task and click the Edit Scheduling toolbar
button. Then, in the <Name> Scheduler window, click Delete.
The run_task command populates the Storage Zone with data. The task can also be run using the
Run toolbar button located in Monitor view as well as in the Manage Task window.
Command syntax
ComposeCli.exe run_task --project project_name --type storage|workflow --task task_name --wait
timeout_in_sec
Parameters
l --type: The type of task that you want to run. Possible values: storage | workflow.
l --wait: The timeout, in seconds. Note that if --wait is excluded from the command, the task
may appear to complete successfully even if it encountered an error.
Example
ComposeCli.exe run_task --project MyProject --type workflow --task DL1 --wait 1
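Similarly, a storage task could be run with a longer timeout along the following lines (the task name and timeout value are illustrative):
ComposeCli.exe run_task --project MyProject --type storage --task MyStorageTask --wait 600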
Notifications
You can select events that, when they occur, will trigger a notification to the specified
recipients.
Notifications will not be sent unless the mail server settings are correctly defined.
Setting notifications
To set a notification rule:
Variable Description
${EVENT_TYPE_DESCRIPTION}
7. Click Next. In the Apply to screen, select whether to apply the rule to all tasks or to selected
tasks. If you chose Selected Tasks, select which tasks to apply the rule to.
8. Click Next to see a summary of the notification settings or Finish to save your settings and
exit the wizard.
9. If you clicked Next, review your settings and then click Finish to save the notification rule
and exit the wizard or Prev to edit your settings. You can also click the headings on the right
of the wizard to go directly to a specific window.
The notification will be added to the list of notifications in the Notification Rules window.
Delete a Rule:
Select the rule and then click the Delete toolbar button. When prompted to confirm the deletion, click Yes.
Edit a Rule:
Either double-click the rule you want to edit or select the rule and click the Edit toolbar button. Continue from Notifications (page 373).
Disable a Rule:
Select the rule you want to disable and then either click the Disable toolbar button or clear the check box in the Enabled column.
Enable a Rule:
Select the rule you want to enable and then click the Enable toolbar button or select the check box in the Enabled column.
If a notification is set for several events, the event ID will be 0 for each of the events.
Workflows
Workflows enable you to run tasks both sequentially and in parallel. You can either schedule
workflows as described in Scheduling tasks (page 371) or run them manually using the Run toolbar
button or Compose CLI.
You can create your own workflow and/or use the built-in workflow. The built-in workflow enables
you to run all of your tasks as a single, end-to-end process. The built-in workflow appears in the
Type column as "Default Workflow".
When you create your own workflow, you decide which tasks to include in the workflow and the
order in which they will be run.
In this section:
Adding a workflow
To add a workflow:
1. Switch to Monitor view by clicking the Monitor button in the top right of the Compose
console.
2. Click the New Workflow toolbar button.
The New Workflow window opens.
3. To create a workflow with all current tasks, select Create default workflow and then click
OK. Otherwise, continue from Step 4 below. Separate workflows will be created for Full Load
and Change Processing tasks. The default workflow cannot be edited and will appear as
Default Workflow in the list of monitored tasks.
Any tasks you create after adding the default workflow will not be automatically
included in the default workflow. If you want to create a default workflow that
includes newly added tasks, simply delete the existing default workflow and
create another one in its place.
Designing a workflow
The workflow window is divided into two panes. The pane on the left is where you design your
workflow and contains two default elements: Start and End.
The Elements pane contains gateways and tasks that you can use in your workflow. The following
elements are available:
l Tasks - All existing Data Warehouse tasks, Data Mart tasks, and Command tasks.
l Gateways - There are two types of gateway: Parallel Split and Synchronize. Use the
Parallel Split gateway to create parallel paths. This is useful, for example, if you want two or
more tasks to run in parallel.
Use the Synchronize gateway to merge parallel paths. The workflow waits for all the Tasks
that precede the gateway to complete before continuing the flow.
To design a workflow:
1. Drag the desired workflow elements from the Elements pane to the pane on the left.
2. Arrange the elements in the order that you want them to run.
3. Connect the elements to each other by dragging the connector from the gray dot (that
appears on the right of an element when you hover the mouse cursor over it) to the target
element. When a blue outline appears around the target element, release the mouse button.
4. Optionally add error paths to the workflow. The workflow will follow the error path if a task
encounters an error. For example, if an error occurs with one task, you may want to run
another task in its place.
To add an error path, hover your mouse cursor over the task element. A red dot will appear
below the element. Drag the connector from the red dot to the target element, as shown
below.
Connecting two error paths to the same task should be avoided as the workflow will fail
if the task tries to run twice.
By default, a workflow will end with an error if one or more parallel tasks do not complete
successfully. However, in certain cases you may want the workflow to continue, even if one or more
of the parallel tasks failed.
To do this, you need to connect the error port of the relevant task(s) directly to the Synchronize
gateway. You can also design the workflow so that it follows the path leading from the Synchronize
error port, instead of continuing its normal flow.
In the example below, the error port of the MyCommandTask is connected to the Synchronize
gateway, meaning that even if MyCommandTask task fails, the workflow will continue. However, if
the MyCommandTask task fails, the workflow will not proceed directly to the End element. Instead,
it will follow the Synchronize gateway’s error path to the Source task.
Validating workflows
It is strongly recommended to validate your workflow before running it. This will prevent errors from
occurring during runtime due to an invalid workflow.
If the workflow is valid, a "<workflow_name> is valid." message will appear at the top of the
window. If the workflow is not valid, a message describing the problems will appear instead.
Managing workflows
The table below describes the options available for managing workflows.
To Do This
Delete a Workflow:
In Monitor view, select the workflow in the Task column and then click the Delete Workflow toolbar button.
Edit a Workflow:
In Monitor view, either double-click the workflow you want to edit or select the workflow and click the Open toolbar button. Continue from Adding and designing workflows (page 375).
Delete an element in a workflow:
Either right-click the element and select Delete or select the element and then click the Delete toolbar button.
Reset the workflow view:
Click the reset button to the right of the slider at the top of the window.
Zoom in to/zoom out of the workflow:
Move the slider at the top of the window to the left or right as required.
You can monitor the workflow either in the Monitor tab or in the Progress Status tab. During
runtime, the workflow elements fill with blue providing a graphic indication of progress. If a task
encounters an error, the task element will appear with red fill instead of blue.
Monitoring and controlling Replicate tasks from within Compose involves the following steps:
l Step 1: Configure Qlik Compose to connect to the Qlik Replicate machine(s) as described in
Replicate Server settings (page 385).
l Step 2: Add the Replicate task name to the source Landing Zone settings as described in
Defining Landing Zones connections (page 326).
l Step 3: Monitor and control the Replicate task as described below.
The image Replicate Task in the Compose Monitor (page 380) shows how the Replicate task
appears in the Compose Monitor. You can stop and start the Replicate task using the Abort and
Run toolbar buttons.
If a task is stopped from within Replicate, the task status in Compose for Data Lakes will
be "Completed" instead of "Aborted".
You can also define notifications for the task and add the task to a workflow. For more information,
see Notifications (page 373) and Workflows (page 375) respectively.
The monitor provides various information about the task. For details, see Viewing information in the
monitor (page 369).
7 Managing Compose
Qlik Compose management options can be accessed from the Management menu located at the
top of the Compose main page.
In this section:
License enforcement
The license is enforced only when trying to generate, run, or schedule a task (via the web console
or API). Other operations such as Test Connection may also fail if you do not have an appropriate
license.
Registering a license
This section describes how to register your Compose license. You can register the license either
using the console or using a command line.
1. Copy the license file to the computer on which Compose is installed or to any computer in
your network that can be accessed from the Compose computer.
2. Click Load and browse to find and select the license file. The license text is displayed in the
window. Check to be sure that the details are correct.
3. Click Register License to register the license. A message indicating the license was
registered successfully is displayed.
Command syntax
ComposeCli.exe register_license --infile|--license_text
Parameters
Parameter Description
--license_text A string in JSON format. When specifying a JSON string, any quote
symbols should be escaped using a backslash (\).
Example
Register a license with --infile:
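A minimal sketch of the command, assuming the license was saved to a local file (the file name and path are purely illustrative):
ComposeCli.exe register_license --infile c:\Temp\license.json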
The following logging levels are available (ordered from the lowest level to the highest level):
1. Errors
2. Warnings
3. Info
4. Trace
5. Verbose
The higher levels always include the messages from the lower levels. Therefore, if you select Error,
only error messages are written to the log files. However, if you select Info, informational
messages, warnings, and error messages will be included. Selecting Verbose writes all possible
messages to the log.
1. From the Management menu, select Logs|Log Management. The Log Management
window opens displaying the Server Log tab.
2. To set a global logging level, move the top slider to the desired logging level. All of the sliders
for the individual modules move to the same level that you set in the main slider.
3. To set a logging level for individual Compose components, select a module and then move its
slider to the desired logging level.
4. Click OK to save your changes and close the Log Management window.
1. From the Management menu, select Logs|Log Management. The Log Management
window opens displaying the Server Log tab.
2. Select the Qlik Compose Agent log tab, and then move the slider to the desired logging
level.
3. Click OK to save your changes and close the Log Management window.
Changes to the logging level take place immediately. There is no need to restart the Qlik
Compose service.
1. From the Management menu, select Logs|Log Management. The Log Management
window opens.
2. Select the Log Settings tab.
3. The following options are available:
l Enable automatic roll over: Select this check box to determine the maximum size a
log file can reach before it is rolled over. The current log file is called Compose.log and
saved (older) log files are called Compose_xxxxxxxxxxxx.log where xxxxxxxxxxxx
represents a 12-digit timestamp.
l Roll over the log if the log file is larger than (MB): Use the counter or type in
the maximum amount of megabytes for a specific log file. When the log file
reaches the specified size, the old log is saved with a timestamp appended to
its name and a new log file is started. The default value is 10 megabytes.
The scheduled job that checks the log size runs every five minutes.
Consequently, the actual size of the log when rolled over might be larger
than specified.
l Enable automatic cleanup: Select this check box to define the maximum number of
log files to keep.
l Maximum number of log files to keep: Use the counter or type in the
maximum number of log files to keep. When the number of log files reaches the
specified maximum, Compose will delete the oldest log file whenever a new log
file is created, thereby ensuring the number of log files never exceeds the set
limit. The default is 45.
4. Click OK to save your settings and close the Log Management window.
1. From the Management menu, select Logs|View Logs. The Log File Viewer opens.
2. Select the log file you want to view from the list in the Log Files pane.
The contents of the log file will be displayed in the right pane. When you select a row in the
log file, a tooltip will display the full message of the selected row.
3. Browse through the log file using the scroll bar on the right and the navigation buttons at the
top of the window.
4. To search for a specific string in the log file, enter the search string in the search box at the
top of the window. Any terms that match the specified string will be highlighted blue.
1. From the Management menu, select Logs|View Logs. The Log File Viewer opens.
2. From the list in the Log Files pane, select the log file you want to download.
3. Click the Download Log File button in the top right of the window. The log file is
downloaded.
1. From the Management menu, select Mail Server Settings. The Mail Settings window
opens.
2. Configure the settings as follows:
l Mail server: Specify the outgoing mail server that will be used to send Qlik Compose
notifications, for example, smtp.example.com.
l Port: Enter the mail server port number. The default value is 25.
l Use secure email (SMTPS): Select this to connect to the mail server using TLS.
l Anonymous login: Enable this to allow Qlik Compose to access the mail server
without having to provide any user credentials.
l User name: Specify the user name for the account that will be used to send
notifications.
l Password: Specify the password for the account that will be used to send
notifications.
l Sender email address: Enter the email address that sends the email notifications. This
is the address that appears in the From field of the email notification.
l Send Test Mail: Use this option to validate your mail server settings. Click Send Test
Mail to open the Send Test Email window. In the Email address for test email field, enter
the email address to which you want the test email to be sent, and then click Send.
3. Click OK to save your settings and close the Mail Settings window.
1. From the Management menu in the projects view, select Compose Agent Settings.
2. In the Compose Agent Settings window, select Remote server and provide the required
connection details.
3. Click OK to save your settings.
If you want to monitor the Replicate tasks, you need to provide the information that Compose needs
in order to establish a connection to the Replicate Server on which the tasks are running. After
providing this information, you will then be able to associate a source Landing Zone with a specific
Replicate task.
1. Open the Manage Replicate Servers window using any of the following methods:
l From the Management drop-down menu in the main toolbar, select Manage
Replicate Servers.
l In the New Data Source window, click the Replicate Server Settings link below the
Associate with Replicate task field.
The Manage Replicate Servers window opens.
2. Click Add Replicate Server.
The Add Server window opens.
3. Enter the following information:
l Name: A display name for the server.
l Description: (Optional) A description for the server.
l Host: The IP address or host name of the Qlik Replicate machine.
l Port: Optionally, change the default port (443). You should only change the default
port if you are certain that a different SSL port is being used.
l User Name and Password: Your credentials for logging in to the Qlik Replicate
machine.
When Replicate Server is installed on Linux, enter the user name and
password for the Windows machine on which the Replicate UI Server is
running.
l Get metadata timeout: The time to wait when discovering a task’s source database
or refreshing the metadata cache before returning a timeout error.
l Get task timeout: The time to wait when starting a Replicate task before returning a
timeout error.
4. Click Test Connection and then click OK if the connection is successfully verified.
The server is added to the Manage Replicate Servers window. Click Close to close the
window.
You can associate a user with a security role by adding the user to the appropriate Active Directory
group or by assigning a role directly to the user. By default, the user under whose account you
install Qlik Compose is an Admin. You can also fine-tune access control per user or group. For more
information, see Granular access control (page 389).
As a user with the relevant permissions, you can view and change the permissions for existing users
or groups, or add users or groups that do not yet exist in Qlik Compose.
The advantage of adding groups over users is that you can assign a security role to a group as a
whole, instead of to individual users, and any new user that is added to an existing group
automatically acquires the security role granted to that group.
To set user permissions using Active Directory groups, you can either create Active Directory
groups with the names listed in the table below, or you can create Active Directory groups with
different names. Then, add users to the groups according to the role you want them to perform.
If you create your own Active Directory groups, you need to add them to the User Permissions tab
in the Settings window and set their permissions as described in Managing user permissions (page
392).
Role           Active Directory group
Administrator  QlikComposeAdmins
Designer       QlikComposeDesigners
Operator       QlikComposeOperators
Viewer         QlikComposeViewers
l The Project view is available to all roles. However, Designers only have read access to user
permissions; Operators cannot add projects and can only view the various settings, not edit
them; and Viewers cannot edit settings, add, edit, or delete a project, or register a license.
l The Model view for the Data Warehouse is available to all roles, but only Designers can
create and manage the model, import entities and mappings from other projects (including
models created in ERwin), manage global mappings, validate, define reusable
transformations, add Date and Time entities for the model, and so on.
The following table lists the permissions granted to each of the predefined security roles:
Permission                                                       Admin  Designer  Operator  Viewer
Projects: View projects and logs, generate documentation         Yes    Yes       Yes       Yes
Data Warehouse: View data and logs, tasks, commands and mapping  Yes    Yes       Yes       Yes
Data marts: View details and logs                                Yes    Yes       Yes       Yes
User permissions can be assigned to individual Data Warehouse projects as well as across all
projects. The following hierarchy is in place:
l Compose User Permissions are applied globally. Changes to Compose permissions will
affect any level that inherits those permissions. At Compose root level, users must have at
least Viewer permissions.
Only Admin users at the Compose level can perform logging actions, such as
changing the logging level and rolling over logs.
l All Projects User Permissions apply to all projects. When inheritance is enabled (the default),
permissions will be inherited from the “Compose” root level.
A user that is assigned All Projects User Permissions but not Compose User
Permissions is not authorized to log in to Compose.
l Project User Permissions apply to a specific project. When inheritance is enabled (the
default), permissions will be inherited from the “All Projects” level.
l Model User Permissions apply to the model unless overridden at any of the lower levels.
When inheritance is enabled (the default), permissions will be inherited from the “Project”
level.
Effective permissions are the permissions that take effect when a user is part of more than one
group, or when there is a conflict between the user's permission and the group's permission, or
between permissions at different levels of the hierarchy.
By default, the permission of a user or group object is inherited from the access control list (ACL) of
the object's parent. However, a lower or higher permission may override this permission. In this
case, the overriding higher permission is the effective permission for the object, stopping
inheritance from the parent. As a result, any changes to the parent no longer affect this user or
group.
In the User Permissions window, inheritance is indicated by a check mark in the Inherited column.
By default, inheritance is enabled for all users and groups on any level. Changing permissions by
using the slider automatically stops inheritance for the selected user or group. Qlik Compose also
lets you disable inheritance by disconnecting the entire authorization level from the parent level.
For information on how to do this, see Managing user permissions (page 392).
Syntax
composecli set_user_or_group_role --scope global|allprojects|project [--
project_name project-name] --role admin|designer|operator|viewer|none --user_
name netbios\user|--group_name netbios\group
Parameters
Parameter     Description
role          Required. The role that you want to assign the user or group: none, viewer,
              operator, designer, or admin.
project_name  The name of the project to assign the role on. Only required if --scope is
              project.
user_name     The name of the user to add or update. Required if no group is specified.
              NetBIOS-name\user
              Example: qa\mike
group_name    The name of the group to add or update. Required if no user is specified.
              NetBIOS-name\group
              Example: qa\admins
Example
composecli set_user_or_group_role --scope project --project_name myproject --
role admin --group_name qa\admins
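As a further illustration (a sketch using a hypothetical user name), the following command grants
the Viewer role across all projects to the Windows user qa\dana. Because the scope is allprojects,
no --project_name parameter is needed:
composecli set_user_or_group_role --scope allprojects --role viewer --user_name qa\dana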
Revoking roles
You can revoke a user or group's role from a particular project, from all projects, or from Compose.
Syntax
composecli remove_user_or_group_role --scope global|allprojects|project [--
project_name project-name] --user_name netbios\user|--group_name
netbios\group
Parameters
Parameter     Description
scope         The scope of the user or group to remove: global, allprojects, or project.
project_name  The name of the project to remove the user or group from. Only required if
              --scope is project.
user_name     The name of the user to remove. Required if no group is specified.
              NetBIOS-name\user
              Example: qa\mike
group_name    The name of the group to remove. Required if no user is specified.
              NetBIOS-name\group
              Example: qa\admins
Example
composecli remove_user_or_group_role --scope project --project_name myproject
--user_name qa\mike
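As a further illustration (using the qa\admins group from the example above), the following
command revokes the group's role globally. Because the scope is global, no --project_name
parameter is needed:
composecli remove_user_or_group_role --scope global --group_name qa\admins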
By default, inheritance is enabled for all objects (users and groups). This means that permissions
are automatically carried over from the parent object. You can turn inheritance on or off for all
objects at the current level. Effective permissions are the permissions that are in effect for a user at
any particular level.
For more information on the underlying concepts, see Granular access control (page 389) and
Inheritance and overrides (page 390).
By default, the User Permissions window opens at Console root level, displaying the currently
assigned user role permissions for each defined user/group. These permissions apply globally
unless they are overridden at any of the lower levels.
Changes to Compose permissions will affect any level inheriting those permissions.
The All Projects User Permissions window displays the currently assigned user role permissions
for each defined user/group. These permissions apply to all projects unless they are overridden at
any of the lower levels.
When inheritance is enabled, permissions will be inherited from the Compose root level.
The Project User Permissions window shows the user role permissions that apply to the specific
project '{project name}' for each defined user/group. These permissions apply to the specific
project unless overridden at any of the lower levels.
When inheritance is enabled, permissions will be inherited from the All Projects level.
When inheritance is enabled, these permissions will be inherited from the Project
level.
2. Click Save or OK to accept the changes, or Discard Changes or Cancel to undo them.
To disable inheritance:
1. In the User Permissions window, click Disable Inheritance.
This option disconnects the entire authorization level from the parent level.
To enable inheritance:
1. In the User Permissions window, click Enable Inheritance.
This option enables inheritance for all users and groups on this level.
2. Select one of the following options:
l Inherit all permissions from parent and override any definition manually made at
this level: This option reinstates inherited permissions for all users and groups that
are already defined, and new users and groups will inherit their permissions from the
parent level.
l Inherit all permissions from parent but keep definitions manually made at this
level: This option preserves the permissions already defined for the existing users and
groups, and adds all permissions from the parent level. New users and groups will
inherit permissions from the parent level.
3. Click Enable.
4. Click Save or OK to accept the changes, or Discard Changes or Cancel to undo them.
To restore inherited permissions for a single user or group if they were overridden:
1. In the User Permissions window, select the user or group.
2. Click Restore Inheritance . The check mark returns to the Inherited column to indicate
that permissions for this user or group are inherited from the parent.
To view the effective permissions of a user:
1. In the User Permissions window, enter the name of the user whose permissions you want to
view.
2. Click Get Effective Permissions. The effective permissions for the user you entered appear
below the button.
For operations performed by users with Operator privileges or above, the Compose Audit Trail shows
which user performed the operation, when it was performed, and on which objects.
By default, Compose retains audit files for one week or until they reach a total size of 100 MB (10
files). You can change these settings through the command line interface (CLI) as described in
Exporting Audit Trail files (page 396) below.
Audit Trail files are stored in the following location:
<Installation_Directory>\data\AuditTrail\audit_service
You can also export an audit trail file for a specific time range, as described in Exporting Audit Trail
files (page 396).
l Timestamp - The time when the row was inserted into the Audit Trail.
l User - The user that performed the operation.
l Node - The IP of the server on which the operation was performed.
l Requested Action - The API method/function that was called.
l Required Permission - The minimum role of the user that can perform the operation.
l Effective Permission - The actual role of the user that performed the operation.
l Security Result - Whether the user is allowed to perform the operation.
l Action Result - The completion status of the operation (success or failure).
l Error Message - The error message if the operation failed.
l Task - The name of the task where relevant.
l Notification - The notification defined for the operation (if defined).
l Payload - A URL. To view payload information, simply copy the link from the Payload column
and paste it into your browser's address bar.
Payloads for some operations (e.g. RegisterLicense) contain sensitive information and need
to be decoded. For information on decoding payloads, see Decoding an encoded payload
(page 398).
l Project Name - The name of the Compose project.
You can also export audit trails using the ExportAuditTrail API method. For further
information, see the Qlik Enterprise Manager Help and API Guide.
To do this:
1. From the Management drop-down menu, select Audit Trail. The Audit Trail window opens.
2. From the Time Range drop-down list, select the desired time range. If you select Custom,
specify the start and end of the time range manually.
3. Export the audit trail file.
Depending on your browser settings, you will either be prompted for a download location or the file
will be downloaded automatically to your preferred location.
Command syntax
ComposeCli.exe generate_audit_trail --start_timestamp timestamp [--end_timestamp timestamp] --
outfile full_path
Parameters
Parameter Description
--start_timestamp The date and time from which you want the audit trail to start, in
UTC format.
--end_timestamp The date and time on which you want the audit trail to end, in UTC
format. When not specified, the file will end at the latest audit trail
record.
--outfile The full path to the output file. If the path contains spaces, it
should be enclosed in quotation marks.
Example
ComposeCli.exe generate_audit_trail --start_timestamp 2020-06-30T16:15:00Z --end_timestamp
2020-07-14T16:15:00Z --outfile "C:\compose audit trails\audit.json"
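If you omit --end_timestamp, the exported file will simply end at the latest audit trail record. For
example (a sketch, reusing the same hypothetical output path):
ComposeCli.exe generate_audit_trail --start_timestamp 2020-06-30T16:15:00Z --outfile "C:\compose audit trails\audit.json"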
Command syntax
ComposeCtl.exe audit_trail_control --age weeks --size megabytes
Parameters
Parameter Description
--age The number of weeks to retain the audit trail file (default 1 week).
--size The maximum size of the audit file to retain (default 100 MB).
Example
ComposeCtl.exe audit_trail_control --age 4 --size 1000
1. In the Audit Trail window, copy the link from the Payload column.
2. Paste the URL into your browser's address bar and press [Enter]. A byte array will be
displayed.
3. Copy the byte array into a Base64 decoder and decode it.
When building failover cluster solutions with Compose using Windows Server Failover
Cluster (WSFC) or a Linux failover cluster software, Qlik recommends using a block
device (physical, virtual or iSCSI-based) for the shared Compose DATA folder. Using
NFS or SMB-based storage is not supported due to the associated latency which could
greatly degrade the data transfer performance, as well as due to reduced reliability and
compatibility issues. When building a cloud-based high availability solution that needs to
span different availability zones, it is recommended to use a Storage-as-a-Service
solution that can handle the block-level replication of the storage and that is integrated
with the chosen failover clustering software.
In this section:
Preparation
Allocate two shared folders for Compose: one for the Compose server and the other for the
Compose agent.
The setup instructions below assume that the Compose data folder is F:\Compose-server-data and
the Compose Agent data folder is F:\Compose-agent-data.
1. In the left pane of the Failover Cluster Manager, select Roles. The available roles will be listed
in the right pane of the console. Right-click the role you are working with and point to Add a
resource. Then select Generic Service.
2. In the Select Service screen of the New Resource wizard, select Qlik Compose from the list.
3. Click Next and follow the instructions in the wizard to create the resource. For information on
how to use this wizard, see the Microsoft online help.
Compose must be installed on the computer where you defined the service in order for
the service to be available in the list.
1. In the left pane of the Failover Cluster Manager console, select Roles.
2. From the list of available roles in the right pane of the console, select the role you are working
with.
3. In the bottom right pane, select the Resource tab. From the list of the available roles, select
Compose.
4. Right-click the Compose role and select Properties.
5. In the Compose Properties window, select the Dependencies tab.
6. Click Insert. A new line is added to the Resource list.
7. In the Resource column, click the arrow and select the Compose Data storage resource from
the list.
8. Click Insert and add the Network Name resource (it should have the same name as the
cluster).
9. Start the Service using the Failover Cluster Manager and access the console using the
Network name.
10. Register the license. The license should contain all host names of the cluster.
Example:
https://cluster_name_ip/qlikcompose/
In a cluster environment, this is not good practice because the URL will change each time the
cluster is rolled over. To resolve this issue, you need to set the cluster name as the Compose URL.
The cluster name must be registered in DNS before you can set it.
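For example, if the cluster name registered in DNS were compose_cluster (a hypothetical name),
the console would then be accessed at:
https://compose_cluster/qlikcompose/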
Make sure the Compose role is offline, as the upgrade should bring the services
online.
4. As the upgrade process overwrites the acjs.bat file, when the upgrade completes, add the
following row to the <PRODUCT_DIR>\java\bin\acjs.bat file:
SET AT_DATA=-d <agent data path>
If the above string already exists in acjs.bat, you can skip this step.
5. Bring the Compose role back online and make sure there are no connection errors.
6. Upgrade the projects by running the following command on the primary node only:
ComposeCtl.exe -d <server data path> setup postupdate
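For example, using the data folders assumed in the Preparation section above (a sketch; substitute
your own paths), the row added to acjs.bat would be:
SET AT_DATA=-d F:\Compose-agent-data
and the post-upgrade command run on the primary node would be:
ComposeCtl.exe -d F:\Compose-server-data setup postupdate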
l DST On - Occurs approximately when Summer starts (actual date is country specific). Its
impact on local time is that local time is moved one hour forward (so, for example, 01:00AM
becomes 02:00AM). This DST change does not impact Qlik Compose as it does not result in
time overlap.
l DST Off - Occurs approximately when Winter starts (actual date is country specific). Its
impact on local time is that local time is moved back one hour (so, for example, 02:00AM
becomes 01:00AM). This DST change results in time overlap where local time travels over the
same hour twice in a row.
The comments below assume that the customer has not changed the time but rather the timezone
or the DST setting. Changing the actual time (other than minor time adjustments) is a sensitive
operation and is best done when Qlik Compose is stopped.
l Timestamps in logs and audit messages are in local time. As a result, when Winter time starts,
the logs will show the time going back an hour; conversely, when Summer time starts, the
logs may appear to be missing one hour.
l Statistics shown on the console are also sensitive to local time, and thus may also show
confusing/inaccurate data in the overlap period (going into Winter time) or for the skipped
period (going into Summer time).
In general, it is recommended to avoid non-critical task design changes during the first overlap
period (going into Winter time) so as to prevent confusion about when the changes took place.
The Scheduler does not adjust for daylight saving time (DST). For example, a daily job
that was scheduled to run at 11 PM will need to be rescheduled to run at 11 PM after
DST comes into effect.
Given the complexity of the topic and the involvement of many independent components and
settings, Qlik generally recommends that customers first verify the impact of DST changes in their
test environment.
B Support matrix
In addition to listing the platforms on which Qlik Compose can be installed, this topic also specifies
which source and target database versions can be used in a Qlik Compose task.
Qlik Compose December 2024 is compatible with the following Replicate and Enterprise Manager
versions:
l Qlik Replicate: November 2024, November 2023, May 2023, and May 2024
l Enterprise Manager: November 2024
For more information on discovering data sources, see Discovering the Source Database or Landing
Zone (page 158).
l Microsoft Azure SQL Database (via the Microsoft SQL Server database connection settings):
Same as Microsoft SQL Server.
l Microsoft Azure SQL Managed Instance (via the Microsoft SQL Server database connection
settings): Same as Microsoft SQL Server.
l Snowflake: N/A
l For all hive distributions, fully binary compatible versions are also supported.
l All major versions and selected minor versions are certified for use with Compose.
l For information about supported drivers, see Prerequisites (page 286).
l Cloudera: 7.x
l Databricks (Cloud Storage): 10.4 LTS, 12.2 (LTS), 14.3 LTS, 15.4 LTS, and SQL warehouse
cluster. Databricks implementations supported via the Databricks (Cloud Storage) endpoint:
Databricks on AWS, Databricks on Google Cloud Platform, and Microsoft Azure Databricks.
In this appendix:
Field    Allowed values  Allowed special characters
Seconds  0-59            , - * /
Minutes  0-59            , - * /
Hours    0-23            , - * /
l * ("all values") Used to select all values within a field. For example, "*" in the minute field
means "every minute".
l ? ("no specific value") Useful when you need to specify something in one of the two fields in
which the character is allowed, but not the other. For example, if I want my task to run on a
particular day of the month (say, the 10th), but don't care what day of the week that happens
to be, I would put "10" in the day-of-month field, and "?" in the day-of-week field. See the
examples below for clarification.
l - Used to specify ranges. For example, "10-12" in the hour field means "the hours 10, 11 and
12".
l , Used to specify additional values. For example, "MON,WED,FRI" in the day-of-week field
means "the days Monday, Wednesday, and Friday".
l / Used to specify increments. For example, "0/15" in the seconds field means "the seconds 0,
15, 30, and 45", and "5/15" in the seconds field means "the seconds 5, 20, 35, and 50". You
can also specify '/' after the '*' character - in this case '*' is equivalent to having '0' before the
'/'. "1/3" in the day-of-month field means "run every 3 days starting on the first day of the
month".
l L ("last") Has a different meaning in each of the two fields in which it is allowed. For example,
the value "L" in the day-of-month field means "the last day of the month" - day 31 for
January, day 28 for February on non-leap years. If used in the day-of-week field by itself, it
simply means "7" or "SAT". But if used in the day-of-week field after another value, it means
"the last xxx day of the month" - for example "6L" means "the last friday of the month". You
can also specify an offset from the last day of the month, such as "L-3" which would mean
the third-to-last day of the calendar month. When using the 'L' option, it is important not to
specify lists, or ranges of values, as you'll get confusing/unexpected results.
l W ("weekday") Used to specify the weekday (Monday-Friday) nearest the given day. As an
example, if you were to specify "15W" as the value for the day-of-month field, the meaning is:
"the nearest weekday to the 15th of the month". So if the 15th is a Saturday, the trigger will
run on Friday the 14th. If the 15th is a Sunday, the trigger will run on Monday the 16th. If the
15th is a Tuesday, then it will run on Tuesday the 15th. However if you specify "1W" as the
value for day-of-month, and the 1st is a Saturday, the trigger will run on Monday the 3rd, as it
will not 'jump' over the boundary of a month's days. The 'W' character can only be specified
when the day-of-month is a single day, not a range or list of days. The 'L' and 'W'
characters can also be combined in the day-of-month field to yield 'LW', which translates to
"last weekday of the month".
l # Used to specify "the nth" XXX day of the month. For example, the value of "6#3" in the day-
of-week field means "the third Friday of the month" (day 6 = Friday and "#3" = the 3rd one in
the month). Other examples: "2#1" = the first Monday of the month and "4#5" = the fifth
Wednesday of the month. Note that if you specify "#5" and there are not 5 occurrences of the
given day-of-week in the month, then no firing will occur that month. The legal characters and
the names of months and days of the week are not case-sensitive; MON is the same as mon.
Cron expression example    Trigger frequency
0 0/5 14 * * ?             Every 5 minutes starting at 2pm and ending at 2:55pm, every day
0 0/5 14,18 * * ?          Every 5 minutes starting at 2pm and ending at 2:55pm, AND every 5
                           minutes starting at 6pm and ending at 6:55pm, every day
0 0-5 14 * * ?             Every minute starting at 2pm and ending at 2:05pm, every day
0 15 10 ? * 6L 2002-2005   10:15am on the last Friday of every month, during the years 2002-2005
0 0 12 1/5 * ?             12pm (noon) every 5 days every month, starting on the first day of the
                           month
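As an additional illustration (a hypothetical expression not taken from the table above), the
expression 0 0 23 ? * MON-FRI would trigger a task at 11:00pm every weekday: seconds and
minutes are 0, the hour is 23, "?" leaves the day-of-month unspecified, "*" selects every month,
and the range MON-FRI in the day-of-week field restricts the trigger to weekdays.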
D Supported characters
To prevent character validation errors, Compose best practice is to only use alphanumeric
characters, underscores and hyphens in table and column names. This is because object naming
rules are always determined by the database type, of which there may be several in a single
Compose project.
E Glossary
Attribute
In the Compose model, an attribute is a logical representation of a
physical column in a source database (or Landing Zone) table.
Attributes Domain
A list of all the attributes available in the Compose model. You can add,
edit and delete attributes according to your data warehousing needs. The
Attributes Domain also shows you which entities each attribute is used in,
as a single attribute may be used in several entities.
Change Tables
Change Tables are created in the Landing Zone when the Replicate task is
defined as Full Load and Store Changes or Store Changes only. When the
Store Changes replication option is enabled in the Replicate task, any
changes to the source tables will be replicated to the Change Tables in
the Landing Zone. The Change Table name format comprises the original
table name appended with "__ct".
Entity
In the Compose model, an entity is a logical representation of a physical
source database/Landing Zone table or view.
ETL Task
In a project, the following ETL tasks can be run: - An ETL task that
extracts data from the Landing Zone, performs user-defined
transformations on the data, and loads it into the data warehouse tables. -
An ETL task that extracts data from the data warehouse, performs user-
defined transformations on the data, and loads it into the data mart
tables. Depending on the ETL task type and specific settings within
Compose, either only changes to the existing data are populated, or all of
the data is populated (regardless of whether any changes were made to
the source).
Full Load
A Full Load replication task is a Replicate task that replicates all of the
selected source tables to the Landing Zone and populates them with data
from the source database. When you duplicate an existing data
warehouse ETL, you can set the ETL type to Full Load and Change Tables
(i.e. initially extract all the data from the Landing Zone tables and then
only the changes), Full Load Only (i.e. extract all the data from the Landing
Zone tables) or Change Tables Only (i.e. extract only the changes to the
Landing Zone tables).
History
Model attributes (and their corresponding data warehouse columns) can
either be defined as history Type 1 or history Type 2. When an attribute is
defined as history type 1, no history of the data is kept since old data will
always be overwritten with new data. When an attribute is defined as
history Type 2, a new record is added each time the record is updated.
This is especially useful for Slowly Changing Dimensions (SCDs). For
example, defining the Address attribute in the Customers table as Type 2
would enable you to retrieve data based on the customer’s location during
a certain time period. Attributes defined as history Type 1 will always exist
in hub tables whereas attributes defined as history Type 2 will always
exist in satellite tables.
Hub
A table in the data warehouse containing history Type 1 columns. When a
column is defined as history type 1, no history of the data is kept since old
data is overwritten with new data.
Incremental loading
The activity of loading only new or updated records from the data
warehouse into the data mart(s), using Full Load formatted data. The
input data may include all of the records (full) or only added and updated
records (partial). As opposed to CDC, incremental loading does not
indicate whether the change is an UPDATE, INSERT, or DELETE.
Landing Zone
The area in the data warehouse to which the source tables are replicated.
This is also the target endpoint in a Replicate task.
Lineage
A visual representation of the data flow of a particular table or column
from its source to its current location. Before editing an entity or attribute,
you may want to see which other entities/attributes or tables/columns will
be impacted by the change. For example, removing the "Discount"
attribute from a table will affect the "Total Price" attribute. Additionally, a single
attribute may have multiple names depending on its location.
Model
The business information model of an enterprise. Usually an ERD (Entity-
Relationship Diagram), the model should contain all of the information
needed to create the data warehouse. Models can be imported from
ERwin or generated automatically by discovering (otherwise known as
reverse engineering) the source database or Landing Zone.
Relationship
Similar to a foreign key, a relationship "attribute" is a special type of
attribute that points to another entity in the same model.
Satellite
A table in the data warehouse containing history Type 2 columns. When a
column is defined as history Type 2, a new record is added whenever a
record is updated (instead of the existing record being overwritten). Satellite
tables also contain two additional columns: FD (From Date) and TD (To
Date). For old records, these columns show the dates between which a
particular record was current (i.e. before a new record rendered it
obsolete). The TD column will only contain a date if the record has been
succeeded by a newer record. In Compose, you can set a satellite number
(1 and above) for attributes in the model. This is a good way of ensuring
that similar attributes (or columns in the data warehouse) appear in the
same satellite table. For example, setting the same satellite number for
the "Total" and "Discount" attributes ensures that both attributes will be
included in the same satellite table.