Informatica in Detail (V-9.5.1)
(Informatica PowerCenter)
What is Workflow?
Workflow is a group of instructions/commands to the integration service in Informatica. The integration service is the entity which reads workflow information from the repository, fetches data from sources and, after performing transformations, loads it into the target.
Workflow - It defines how to run tasks like session task, command task,
email task, etc.
How to open Workflow Manager
Step 1 – From your system, click on the Start button (Windows button on the menu bar) and select the desired client. If you select 'D', the PowerCenter Designer window will appear.
In the Informatica Designer, click on the repositories for QA.
Step 2 – This will open the Workflow Manager window. Then, in the Workflow Manager:
1. We are going to connect to the repository "Repo(QA) /SIT", so double click on it to connect.
2. Enter the user name and password, then select the "Connect" button.
Step 3 – In the Workflow Manager
1. Right click on the folder
2. In the pop up menu, select open option
This will open up the workspace of Workflow manager.
Note – Use the Ctrl+S shortcut to save the changes in the repository.
Naming Convention - Workflow names are prefixed with 'wkf_'. If you have a session named 's_m_employee_detail', then the workflow for the same can be named 'wkf_s_m_employee_detail'.
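Putting the conventions together for the objects built later in this tutorial, the names chain from mapping to session to workflow:
Mapping : m_emp_emp_target
Session : s_m_emp_emp_target
Workflow : wkf_s_m_emp_emp_target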
When you create a workflow, it does not contain any tasks. So, to execute any task in a workflow, you have to add the task to it. Once a command task has been added, you are ready with a workflow having a command task to be executed.
How to execute workflow
Step 1 – To execute the workflow
1. Select workflows option from the menu
2. Select start workflow option
This will open the Workflow Monitor window and execute the workflow.
Once the workflow is executed, it will execute the command task to create a folder (guru99 folder) in the defined directory.
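Workflows can also be started from the command line with the pmcmd utility, which is useful for scripting and scheduling. A minimal sketch, assuming hypothetical integration service, domain and folder names:

pmcmd startworkflow -sv int_service_dev -d Domain_dev -u Administrator -p password -f Repo_QA wkf_run_command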
Session Task
A session task in Informatica is required to run a mapping.
Without a session task, you cannot execute or run a mapping. A session task executes a single mapping, so there is a one-to-one relationship between a mapping and a session. A session task is an object with the help of which Informatica gets to know how, where and when to execute a mapping. Sessions cannot be executed independently; a session must be added to a workflow. In the session object, cache properties and advanced performance optimization settings can be configured.
How to create a session task
In this exercise, you will create a session task for the mapping "m_emp_emp_target" which you created in the previous article.
Step 1 – Open the Workflow Manager
How to execute the workflow
Step 1 – In the Workflow Manager, open the workflow "wkf_run_command"
Step 5 – Start the workflow and monitor it in the Workflow Monitor.
Implementing the scenario
We had a workflow "wkf_run_command" with tasks added in serial mode. Now we will add a condition to the link between the command task and the session task, so that the session task will be executed only after the command task succeeds.
Step 1 - Open the workflow "wkf_run_command"
Step 2 - Double click on the link between session and command task
An Expression window will appear
Step 3 – Double click the status variable under the "cmd_create_folder" menu. A variable "$cmd_create_folder.status" will appear in the editor window on the right side.
Step 4 – Now we will set the condition on the variable "$cmd_create_folder.status" to the SUCCEEDED status, which means the next session task will execute only when the previous task has executed successfully.
1. Change the condition to "$cmd_create_folder.status=SUCCEEDED"
2. Click the OK button
The workflow will look like this
When you execute this workflow, the command task executes first, and only when it succeeds will the session task be executed.
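Link conditions are not limited to the status variable. Each task exposes predefined variables such as Status, ErrorCode, StartTime and EndTime, and sessions additionally expose row counts such as TgtSuccessRows. As a hedged sketch, a link leaving the session task could require both success and at least one loaded row:

$s_m_emp_emp_target.Status = SUCCEEDED AND $s_m_emp_emp_target.TgtSuccessRows > 0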
Workflow Monitor
The Workflow Monitor consists of the following windows –
Navigator window – shows the monitored repositories, folders & integration services
Output window – displays the messages from the integration service and repository
Properties window – displays the details/properties about tasks and workflows
Time window – displays the progress of the running tasks & workflows with timing details.
Now, let us see what we can do in the Workflow Monitor.
How to open Workflow Monitor
Step 1 – In Informatica Designer or Workflow manager toolbox, click on the
workflow monitor icon
Step 2 – This will open workflow monitor window
In the Workflow Monitor tool, you will see the repositories and associated integration services on the left side. Under the status column, you will see whether you are connected to or disconnected from the integration service. If you are in disconnected mode, you won't see any running workflows. There is a time bar which helps us determine how long a task took to execute.
Step 3 – The Workflow Monitor is in disconnected mode by default. To connect to the integration service:
1. Right click on the integration service
2. Select connect option
After connecting, the monitor will show the status as connected.
Views in Workflow Monitor
There are two types of views available in Informatica workflow monitor
Task view
Gantt View
Task View
Task view displays the workflow runs in report format, and it is organized by
workflow runs. It provides a convenient approach to compare workflow runs
and filter details of workflow runs.
Task view shows the following details
Workflow run list – Shows the list of workflow runs. It contains folder,
workflow, worklet, and task names. It displays workflow runs in
chronological order with the most recent run at the top. It displays
folders and Integration Services alphabetically.
Status message - Message from the Integration Service regarding the
status of the task or workflow.
Node - Node of the Integration Service that executed the task.
Start time - The time at which task or workflow started.
Completion time – The time at which task or workflow completed the
execution.
Status - Shows status of the task or workflow, whether the workflow
started, succeeded, failed or aborted.
Gantt Chart View
In Gantt chart view, you can view chronological view of the workflow runs.
Gantt chart displays the following information.
Task name – Name of the task in the workflow
Duration – The time taken to execute the task
Status – The most recent status of the task or workflow
To switch between Gantt chart and task views
To switch from the Gantt chart view to the Task view or vice versa, click on the respective button to change the mode.
Example- How to monitor and view details
In previous examples, we have created a
Mapping "m_emp_emp_target": A mapping is a set of instructions
on how to modify the data and processing of transformations that
affects the record set.
Session "s_ m_emp_emp_target" : A session is a higher level object
to a mapping which specifies the properties of execution. For example
performance tuning options, connection details of sources/targets, etc.
Workflow "wkf_s_m_emp_emp_target": A workflow is a container
for the session and other objects, and it defines the timing of the
execution of tasks and the dependency or flow of execution.
Now, we will analyze the details of execution in this topic.
Step 1 – Start the workflow again, as described in the previous topic
Step 2 – Go to the Workflow Monitor, and in the monitor window you will see details such as the repository, workflow run details, node details, workflow run start time, workflow run completion time and status.
Step 3 – Here you can view the currently running workflow, which has the status "Running".
Step 4 – Once the workflow execution completes, its status will change to succeeded/failed along with the start and end time details.
Step 5 – To view the task details
1. Right click on task name
2. In the pop-up window select "get run properties"
3. A properties window would appear with the task details
Step 6 – Click on each menu of the properties window to view the specific details.
Here we chose "Task Details" to view. It will display details like Instance Name, Task Type, Start Time, Integration Service Name, etc.
Source and Target Statistics
Source and target statistics give the details of the source and target, for example, how many rows were fetched from the source, how many rows were populated in the target, the current throughput, etc.
In the example run, 14 records were fetched from the source, and all 14 were populated in the target table.
Applied rows signify how many records Informatica tried to update or insert into the target.
Affected rows signify how many of the applied rows actually succeeded. Here all 14 rows were successfully loaded into the target, so the count is the same for both.
Rejected rows signify how many rows were dropped due to target constraints or other issues.
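A quick worked reading of these counters: in the run above, Applied = 14, Affected = 14 and Rejected = 0. If, hypothetically, 2 of the 14 applied rows had violated a target constraint, the statistics would instead read Applied = 14, Affected = 12 and Rejected = 2.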
In this section, you have learned how to open and monitor workflows and tasks using the Workflow Monitor.
How to set tracing level in a transformation
Step 1 – Open the mapping in Informatica designer, for which you want to
set the tracing level
Step 2 – Double click on the transformation (Source Qualifier transformation
"SQ_EMP")
It will open edit transformation window.
Step 3 – In the edit transformation window
1. Click the properties tab
2. Select the Tracing level option
3. From the drop down select Verbose data
4. Select OK button
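For reference (a hedged summary of standard PowerCenter behavior), four tracing levels are available in this drop down: Terse logs initialization information, error messages and notification of rejected data; Normal, the default, summarizes session results; Verbose Initialization adds the names of index and data files and detailed transformation statistics; Verbose Data additionally logs each row that passes into the mapping, which is why it is chosen here for debugging.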
Step 4 – Save the mapping and execute it (use the Ctrl+S keyboard shortcut to save the mapping)
Step 5 – To view the log
1. Open workflow monitor and select the session which was executed in
the last step
2. Click on "session log" option to open the session log for the mapping
This will open session log window for the mapping
The session log provides the details about how your session was executed. It provides the timing details of when execution started and stopped. It gives a basic idea about the performance. It mentions which database connection and which parameter file you are using. It also summarizes the sources and targets by mentioning how many source records were fetched, how many records were loaded into the target, etc.
Step 6 – Scroll down in the log, and you can see additional log details
captured including the data records.
In this way, you can set the tracing level in mappings to capture the
additional details for debugging.
In this tutorial, you have learned how to set, configure and execute the debugger, and how to set the tracing level in mappings. These options give you the ability to debug mappings.
Treat Source Rows As Property
This property allows you to define how the source data affects the target
table. For example, you can define that the source record should be inserted
or deleted from the target.
This property has four options –
Insert
Update
Delete
Data Driven
When this property is set to Insert, the source data will be marked to be inserted; the data will only be inserted.
When the property is set to Update, the target data will be updated by the source data. For updating data, a primary key needs to be defined in the target table.
When the property is set to Delete, source data which is already present in the target will be deleted from the target table. For this property to execute and apply the changes, a primary key should be defined in the target table.
With the property set to Data Driven, Informatica checks how the source records are marked. If in a mapping the source records are marked as insert, then the records will be inserted into the target. If records are marked as update in the mapping, then the records will be updated in the target. So the operation performed at the target depends on how records are handled inside the mapping.
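In the Data Driven case, rows are typically flagged inside the mapping by an Update Strategy transformation using the constants DD_INSERT, DD_UPDATE, DD_DELETE and DD_REJECT. A minimal sketch of such a strategy expression, where flag_new is a hypothetical input port marking new records:

IIF(flag_new = 1, DD_INSERT, DD_UPDATE)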
How to use Treat Source Rows As – Delete
Step 1 -
1. In the property tab of the session task, select "Delete" option in "Treat
Source Rows as"
2. Select OK Button
Step 2 – To define primary key in target table, open Informatica designer
1. Open target designer
2. Double click on the "emp_target" table
This will open an "Edit Table" for our target table.
Step 3 – In the edit window of the target table
1. For the EmpNo column, select the key type as "primary key" from the drop-down menu, and
2. Select the OK button.
Step 4 – Save the changes in Informatica and execute the workflow for this
mapping.
When you execute this mapping, the source records which are already
present in the target will get deleted.
INFORMATICA Transformations
Tutorial & Filter Transformation
What is Transformation?
Transformations in Informatica are objects which create, modify or pass data to the defined target structures (tables, files or any other target).
The purpose of a transformation in Informatica is to modify the source data as per the requirements of the target system. It also ensures the quality of the data being loaded into the target.
Informatica provides various transformations to perform specific
functionalities.
For example, performing tax calculation based upon source data, data
cleansing operation, etc. In transformations, we connect the ports to pass
data to it, and transformation returns the output through output ports.
In this tutorial- you will learn
Classification of Transformation
Filter Transformation
Classification of Transformation
Transformations are classified into two categories: one based on connectivity, and the other based on the change in the number of rows. First we will look at the transformations based on connectivity.
Types of transformation based on connectivity
Connected Transformations
Unconnected Transformations
In Informatica, the transformations within a mapping which are connected to other transformations are called connected transformations.
For example, the Source Qualifier transformation of source table EMP is connected to a Filter transformation to filter employees of a dept.
Those transformations that are not connected to any other transformations
are called unconnected transformations.
Their functionality is used by calling them inside other transformations like
Expression transformation. These transformations are not part of the
pipeline.
Connected transformations are preferred when the transformation is called for every input row or is expected to return a value for every row. For example, a transformation returning the city name for the zip code in every row.
Unconnected transformations are useful when their functionality is required only periodically or based upon certain conditions. For example, calculating the tax details only if the tax value is not available.
Types of transformations based on the change in no of rows
Active Transformations
Passive Transformations
Active transformations are those which modify the data rows and the number of input rows passed to them. For example, if a transformation receives ten rows as input and returns fifteen rows as output, then it is an active transformation. The data in the rows may also be modified in an active transformation.
Passive transformations are those which do not change the number of input rows. In passive transformations the number of input and output rows remains the same; data is modified only at the row level.
In a passive transformation, no new rows are created and no existing rows are dropped.
Following is the List of Transformations in
Informatica
Aggregator Transformation
Router Transformation
Joiner transformation
Filter Transformation
Sequence Generator Transformation
Sorter Transformation
Lookup transformation
Expression Transformation
What is Filter Transformation?
Filter transformation is an active transformation, as it changes the number of records.
Using the filter transformation, we can filter records based on a filter condition.
For example, to load only the employee records having deptno equal to 10, we can put a filter transformation in the mapping with the filter condition deptno=10. So only those records which have deptno=10 will be passed by the filter transformation; the rest will be dropped.
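The filter condition is an ordinary expression that must evaluate to TRUE for a row to pass, so it can combine several ports. A couple of hedged sketches, assuming the SAL and COMM ports of the standard EMP source:

deptno = 10 AND sal > 2000
NOT ISNULL(comm)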
How to use filter transformation-
Step 1 – Create a mapping having source "EMP" and target "EMP_TARGET"
Step 2 – Then in the mapping
1. Select Transformation menu
2. Select create option
Step 3 - Then in the create transformation window
1. Select Filter Transformation from the list
2. Enter Transformation name "fltr_deptno_10"
3. Select create option
Step 4 – The filter transformation will be created. Select the "Done" button in the create transformation window.
Step 5 – In the mapping
1. Drag and drop all the Source qualifier columns to the filter
transformation
2. Link the columns from filter transformation to the target table
Step 6 – Double click on the filter transformation to open its properties, and
then
1. Select the properties menu
2. Click on the Filter condition editor
Step 7 – Then in the filter condition expression editor
1. Enter filter condition – deptno=10
2. Select OK button
Step 8 – Now again in the edit transformation window in Properties tab you
will see the filter condition, select OK button
Now save the mapping and execute it after creating session and workflow. In
the target table, the records having deptno=10 only will be loaded.
Another Example: Router Transformation
Now, again click on the properties tab in the Edit Transformations window, and you will see only the data that you have selected.
When you click on the "OK" button, it will open the SQL Editor window, and
1. It will confirm that the data you have selected is correct and ready for loading into the target table
2. Click on the OK button to proceed further
Step 5 – Drag and drop all the columns from Source qualifier to router
transformation
Step 6 – Double click on the router transformation, then in its transformation properties
1. Select the Groups tab
2. Enter the group name "deptno_20"
3. Click on the group filter condition
Step 7 – In the expression editor, enter filter condition deptno=20 and
select OK button.
Step 8 – Select OK button in the group window
Step 9 – Connect the ports from the group deptno_20 of router
transformation to target table ports
Now, when you execute this mapping, the filtered records will get loaded into
the target table.
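A router can hold several user-defined groups, each with its own filter condition; rows that satisfy no group condition fall into the built-in DEFAULT group. A sketch with hypothetical group names:

Group name : Group filter condition
deptno_10 : deptno = 10
deptno_20 : deptno = 20
DEFAULT : (all remaining rows)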
Step 3 – From the transformation menu, select create option.
1. Select joiner transformation
2. Enter transformation name "jnr_emp_dept"
3. Select create option
Step 4 – Drag and drop all the columns from both the source qualifiers to
the joiner transformation
Step 5 - Double click on the joiner transformation, then in the edit
transformation window
1. Select condition tab
2. Click on add new condition icon
3. Select deptno in master and detail columns list
Step 6 - Then in the same window
1. Select properties tab
2. Select normal Join as join type
3. Select OK Button
For performance optimization, we assign the master role to the source pipeline having fewer records. To perform this task –
Step 7 – Double click on the joiner transformation to open the edit properties window, and then
1. Select ports tab
2. Select any column of a particular source which you want to make a
master
3. Select OK
Step 8 – Link the relevant columns from the joiner transformation to the target table
Now save the mapping and execute it after creating a session and workflow for it. The join will be created using the Informatica joiner, and the relevant details will be fetched from both tables.
Step 4 – Create a new transformation in the mapping
1. Select sequence transformation as the type
2. Enter transformation name "seq_emp"
3. Select Create option
Step 5 – The sequence generator transformation will be created. Select the Done option.
Step 6 - Link the NEXTVAL column of sequence generator to SNO column in
target
Step 7 – Link the other columns from the source qualifier transformation to the target table
Step 8 – Double click on the sequence generator to open property window,
and then
1. Select the properties tab
2. Enter Start Value = 1, and leave the rest of the properties as default
3. Select OK button
Now save the mapping and execute it after creating the session and
workflow.
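The sequence numbering is controlled by a handful of properties on the properties tab. A hedged sketch of typical settings for this example:

Start Value = 1
Increment By = 1
Cycle = disabled (stop at End Value rather than wrap around)

With these settings, NEXTVAL feeds the SNO column with 1, 2, 3 and so on.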
Step 2 – Create a new transformation using transformation menu then
1. Select lookup transformation as the transformation
2. Enter transformation name "lkp_dept"
3. Select create option
Step 3 – This will open lookup table window, in this window
1. Select source button
2. Select DEPT table
3. Select Ok Button
Step 4 – The lookup transformation will be created with the columns of the DEPT table. Now select the Done button.
Step 5 – Drag and drop the DEPTNO column from the source qualifier to the lookup transformation; this will create a new column DEPTNO1 in the lookup transformation. Then link the DNAME column from the lookup transformation to the target table.
The lookup transformation will look up and return the department name based upon the DEPTNO1 value.
Step 6 – Double click on the lookup transformation. Then in the edit
transformation window
1. Select condition tab
2. Set the condition column to DEPTNO = DEPTNO1
3. Select Ok Button
Step 7 – Link the rest of the columns from the source qualifier to the target table
Now, save the mapping and execute it after creating the session and
workflow. This mapping will fetch the department names using lookup
transformation.
The lookup transformation is set to look up the DEPT table, and the join condition is set based on the department number.
Top 50 Informatica Interview
Questions & Answers
1. What do you mean by Enterprise Data Warehousing?
When the organization's data is created at a single point of access, it is called enterprise data warehousing. Data can be provided with a global view to the server via a single source store. One can do periodic analysis on that same source. It gives better results; however, the time required is high.
2. What is the difference between a database, a data warehouse and a data mart?
A database includes a set of sensibly affiliated data which is normally small in size as compared to a data warehouse, while in a data warehouse there are assortments of all sorts of data, and data is taken out only according to the customer's needs. A data mart, on the other hand, is also a set of data which is designed to cater to the needs of different domains. For instance, an organization may have a different chunk of data for each of its departments, i.e. sales, finance, marketing, etc.
3. What is meant by a domain?
When all related relationships and nodes are covered by a sole organizational point, it is called a domain. Through this, data management can be improved.
4. What is the difference between a repository server and a
powerhouse?
The repository server controls the complete repository, which includes tables, charts, various procedures, etc. Its main function is to assure repository integrity and consistency. A powerhouse server, on the other hand, governs the implementation of various processes among the factors of the server's database repository.
5. How many repositories can be created in informatica?
There can be any number of repositories in Informatica, but eventually it depends on the number of ports.
6. What is the benefit of partitioning a session?
Partitioning a session means solo implementation sequences within the session. Its main purpose is to improve the server's operation and efficiency. Extractions, transformations and other outputs of single partitions are carried out in parallel.
7. How are indexes created after completing the load process?
For the purpose of creating indexes after the load process, command tasks at the session level can be used. Index-creating scripts can be brought in line with the session's workflow or the post-session implementation sequence. Moreover, this type of index creation cannot be controlled after the load process at the transformation level.
8. Explain sessions. Explain how batches are used to combine
executions?
A set of instructions that needs to be implemented to convert data from a source to a target is called a session. A session can be carried out using the session's manager or the pmcmd command. Batch execution can be used to combine session executions, either in a serial or in a parallel manner. Batches can have different sessions carried forward in a parallel or serial manner.
9. How many number of sessions can one group in batches?
One can group any number of sessions, but it would be easier for migration if the number of sessions in a batch is lesser.
10. Explain the difference between mapping parameter and mapping
variable?
Values that change during the session's execution are called mapping variables. Upon completion, the Informatica server stores the end value of a variable, and it is reused when the session restarts. Moreover, values that do not change during the session's execution are called mapping parameters. The mapping procedure explains mapping parameters and their usage. Values are allocated to these parameters before starting the session.
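Mapping parameters are referenced as $$Name inside expressions, and their values are usually supplied through a parameter file attached to the session or workflow. A minimal sketch of a parameter file section, assuming a hypothetical folder name:

[MyFolder.WF:wkf_s_m_emp_emp_target.ST:s_m_emp_emp_target]
$$LoadDate=2015-12-31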
11. What is complex mapping?
Following are the features of a complex mapping:
Difficult requirements
A large number of transformations
Complex business logic
12. How can one identify whether a mapping is correct or not without connecting the session?
One can find out whether the mapping is correct or not without connecting the session with the help of the debugging option.
13. Can one use mapping parameters or variables created in one mapping in any other reusable transformation?
Yes, one can, because a reusable transformation does not contain any mapplet or mapping.
14. Explain the use of aggregator cache file?
Aggregator transformations are handled in chunks of instructions during each run. The transformation stores transitional values, which are kept in the local buffer memory. The aggregator provides extra cache files for storing the transformation values if extra memory is required.
15. Briefly describe lookup transformation?
Lookup transformations are those transformations which have admission rights to RDBMS-based data sets. The server makes the access faster by using the lookup tables to look at explicit table data or the database. The concluding data is achieved by matching the lookup condition for all lookup ports delivered during transformations.
16. What does role playing dimension mean?
The dimensions that are utilized for playing diversified roles while remaining
in the same database domain are called role playing dimensions.
17. How can repository reports be accessed without SQL or other
transformations?
Repository reports are established by the metadata reporter. There is no need for SQL or other transformations, since it is a web app.
18. What are the types of metadata that stores in repository?
The types of metadata include source definitions, target definitions, mappings, mapplets and transformations.
19. Explain the code page compatibility?
When data moves from one code page to another, provided that both code pages have the same character sets, data loss cannot occur. All the characteristics of the source page must be available in the target page. Moreover, if all the characters of the source page are not present in the target page, then it would be a subset, and data loss will definitely occur during transformation, due to the fact that the two code pages are not compatible.
20. How can you validate all mappings in the repository
simultaneously?
All the mappings cannot be validated simultaneously, because only one mapping can be validated at a time.
21. Briefly explain the Aggregator transformation?
It allows one to do aggregate calculations such as sums, averages, etc. Unlike the expression transformation, one can do calculations in groups.
22. Describe Expression transformation?
In this form of transformation, values can be calculated in a single row before writing to the target. It can be used to perform non-aggregate calculations. Conditional statements can also be tested before the output results go to the target tables.
23. What do you mean by filter transformation?
It is a medium of filtering rows in a mapping. Data needs to be passed through the filter transformation, where the filter condition is applied. The filter transformation contains all ports of input/output, and only the rows which meet the condition can pass through the filter.
24. What is Joiner transformation?
The joiner transformation combines two affiliated heterogeneous sources residing in different locations, while a source qualifier transformation can only combine data emerging from a common source.
25. What is Lookup transformation?
It is used for looking up data in a relational table through mapping. A lookup definition from any relational database is imported from a source which has the tendency of connecting the client and the server. One can use multiple lookup transformations in a mapping.
26. How Union Transformation is used?
It is a diverse input group transformation which can be used to combine data from different sources. It works like the UNION ALL statement in SQL, which is used to combine the result sets of two SELECT statements.
27. What do you mean by Incremental Aggregation?
The option for incremental aggregation is enabled whenever a session is created for a mapping aggregate. PowerCenter performs incremental aggregation through the mapping and historical cache data to perform new aggregation calculations incrementally.
28. What is the difference between a connected look up and
unconnected look up?
When the inputs are taken directly from other transformations in the pipeline, it is called a connected lookup. An unconnected lookup, on the other hand, doesn't take inputs directly from other transformations; it can be used in any transformation and can be invoked as a function using the LKP expression. So it can be said that an unconnected lookup can be called multiple times in a mapping.
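Concretely, an unconnected lookup is invoked from an expression with the :LKP prefix and returns its designated return port. A hedged sketch using the lkp_dept lookup from the earlier exercise, where DNAME_IN is a hypothetical input port:

IIF(ISNULL(DNAME_IN), :LKP.lkp_dept(DEPTNO), DNAME_IN)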
29. What is a mapplet?
A reusable object that is created using the mapplet designer is called a mapplet. It permits one to reuse the transformation logic in a multitude of mappings; moreover, it also contains a set of transformations.
30. Briefly define reusable transformation?
A reusable transformation is used numerous times in mappings. It is different from other mappings which use the transformation, since it is stored as metadata. The transformations will be invalidated in the mappings whenever any change to the reusable transformation is made.
31. What does update strategy mean, and what are the different
option of it?
Row-by-row processing is done by Informatica. By default, every row is marked to be inserted into the target table. The update strategy is used whenever a row has to be updated or inserted based on some sequence. Moreover, the condition must be specified in the update strategy for the processed row to be marked as update or insert.
32. What is the scenario which compels informatica server to reject
files?
This happens when it faces DD_Reject in the update strategy transformation. It also happens when the database constraint defined on the rows is violated.
33. What is surrogate key?
A surrogate key is a replacement for the natural primary key. It is a unique identifier for each row in the table. It is very beneficial because the natural primary key can change, which eventually makes updates more difficult. Surrogate keys are always in the form of a digit or integer.
34. What are the prerequisite tasks to achieve the session partition?
In order to perform session partition, one needs to configure the session to partition the source data and then install the Informatica server machine with multifold CPUs.
35. Which files are created during session runs by the Informatica server?
During session runs, the files created are namely the errors log, bad file, workflow log and session log.
36. Briefly define a session task?
It is a chunk of instructions that guides the PowerCenter server about how and when to transfer data from sources to targets.
37. What does command task mean?
This specific task permits one or more shell commands in UNIX, or DOS commands in Windows, to run during the workflow.
38. What is standalone command task?
This task can be used anywhere in the workflow to run the shell commands.
39. What is meant by pre and post session shell command?
A command task can be called as the pre- or post-session shell command for a session task. One can run it as a pre-session command, a post-session success command or a post-session failure command.
40. What is a predefined event?
It is a file-watch event. It waits for a specific file to arrive at a specific location.
41. How can you define a user defined event?
A user defined event can be described as a flow of tasks in the workflow. Events can be created and then raised as the need arises.
42. What is a workflow?
A workflow is a bunch of instructions that communicates to the server how to implement tasks.
43. What are the different tools in workflow manager?
Following are the different tools in workflow manager namely
Task Designer
Worklet Designer
Workflow Designer
44. Tell me any other tools for scheduling purposes other than workflow manager and pmcmd?
The tools for scheduling purposes other than workflow manager and pmcmd can be third-party tools like 'CONTROL-M' and 'Tidal'.
45. What is OLAP (On-Line Analytical Processing)?
A method by which multi-dimensional analysis occurs.
46. What are the different types of OLAP? Give an example?
ROLAP, e.g. BO; MOLAP, e.g. Cognos; HOLAP; DOLAP
47. What do you mean by worklet?
When the workflow tasks are grouped in a set, it is called a worklet. Workflow tasks include timer, decision, command, event wait, mail, session, link, assignment, control, etc.
48. What is the use of target designer?
Target Definition is created with the help of target designer.
49. Where can we find the throughput option in informatica?
The throughput option can be found in the Workflow Monitor. In the Workflow Monitor, right click on the session, then click on "Get Run Properties", and under Source/Target Statistics we can find the throughput option.
50. What is target load order?
The target load order is specified on the basis of the source qualifiers in a mapping. If there are multifold source qualifiers linked to different targets, then one can entitle the order in which the Informatica server loads data into the targets.
Session Log File Name & Session Log File Directory
Configure this property to modify
the default session log file name, and
the path of the log file.
$PMSessionLogDir\ is an Informatica variable; in Windows, it points to the following default location: "C:\Informatica\9.6.1\server\infa_shared\SessLogs".