IMS Induction Manual V1.2
PITNEY BOWES
Prepared By
Wipro Technologies
Confidentiality
This document is being submitted to Pitney Bowes by Wipro Technologies, with the explicit
understanding that the contents will not be divulged to any third party without prior written
consent from Wipro Technologies.
APPROVAL SIGN-OFF
The above referenced deliverable has been reviewed and accepted by:
Name(s) Date
Wipro Technologies
TABLE OF CONTENTS
Preface
Chapter 1 - Introduction
1.1 The Project
1.2 The Scope
Chapter 2 - Project Details
2.1 Project Phases
2.2 Technical Environment
2.3 PB Development Environment
2.4 Application Summary
Chapter 3 - Current Maintenance Process
3.1 Change Request (CR)
3.1.1 CR Impact & Estimation
3.1.2 CR Scheduling
3.1.3 CR Implementation
3.2 Ticket
3.3 Application Coding
3.4 Pitney Bowes IBM Coding Standards
3.5 Testing
3.5.1 Application Testing
3.5.2 Component Testing
3.5.3 System Testing
3.6 Tools
3.6.1 XPEDITOR
3.6.2 File-AID
3.6.3 File-AID IMS (FAIMS)
3.7 Configuration Management
3.7.1 Remedy
3.7.2 Endevor
3.7.2.1 Features of ENDEVOR
3.7.2.2 Benefits of the ENDEVOR System
3.7.2.3 Invoking ENDEVOR
3.7.2.4 ENDEVOR Main Menu
3.7.2.5 ENDEVOR Display Options Panel
3.7.2.6 ENDEVOR Foreground Panel
3.7.2.7 ENDEVOR Background Panel
3.8 Application Development Libraries
3.8.1 PANVALET Source Objects
3.8.2 PANVALET Load Module Objects
3.9 JCL Coding Standards
3.10 Reference Link
Preface
Purpose
This manual is intended as a reference document that gives an overview of the Order Entry to
Cash systems of Pitney Bowes Inc. It also gives a brief introduction to the outsourcing project
being carried out by Wipro.
This is an evolving document, initiated and completed during the transition phase. Since it is
intended for reference, it will also be updated during the service phase.
Audience
Those who would like to get an overview of the functional aspects of the Order to Cash
systems
Any new team member who is inducted into the team
Those who start on the project at Wipro's offshore facilities
Others involved with the outsourcing project can also use this document as a reference.
Chapter 1 - Introduction
Pitney Bowes is a $4.1 billion global provider of integrated mail and document management
solutions. The company serves over 2 million businesses of all sizes in more than 130 countries
through dealer and direct operations. It has four business units: Document Messaging
Technologies, Global Mailing Systems, Information Based Systems, and Pitney Bowes
Management Services. Its products and services span Mail and Parcels, Production Mail,
Financial Services, Outsourcing Services, and Professional Services.
Pitney Bowes aims to leverage the business knowledge within its Order to Cash system by
redeploying the staff who hold it towards development of strategic projects. In pursuit of this aim,
Pitney Bowes has decided to outsource the maintenance of some of its existing applications and
ongoing projects to Wipro. Order to Cash is one such application selected for transition to Wipro.
Order to Cash (OTC) covers the processes that affect customers from the time an order is taken
until receipt of payment. The system processes orders for Pitney Bowes Global Mailing Systems.
OTC includes 30 different applications. The major applications are Order Entry, Billing, Invoice
Generator, and Accounts Receivable (A/R).
For the Order to Cash system, the following maintenance and enhancement activities are carried
out:
Carry out maintenance/enhancements based on the specs from the Pitney Bowes team
Carry out production support activities by providing round-the-clock on-call support
Change or add documents that support the system
This chapter gives details of the phases for this outsourcing project, its technical environment, and
application summary.
The project is to transfer responsibility for maintenance support of the Order to Cash system from
Pitney Bowes to Wipro. This will be carried out in two phases, namely:
Transition Phase
Service Phase
During the transition phase, the Wipro team will take over the responsibilities from Pitney Bowes
in various stages as per the plans detailed in Appendix A - Schedule of Activities for Transition, of
this document.
During the Service phase, Wipro will take full ownership of the systems and provide
maintenance support services from onsite as well as offshore.
[Screenshot: Pitney Bowes Support System ISPF panel, with the standard PF-key line. The panel
lists the development environment options summarized below.]
7 - TRNS (Transform): the tool used to create online screens and generate the source code
for the online programs.
8 - DB: utilities used to access various IMS DB tools.
10 - Endevor: version control; used only for JCLs and documentation.
12 - FTEXP: file transfer utility.
13 - FA (File-AID): file access.
14 - ZEKE: job scheduler; used to access the scheduling information.
15 - Xpeditor: the debugging tool.
23 - DASD: to estimate the DASD that has to be allocated for datasets.
30 & 35 - CSS (Compiler Subsystem, CBLMVS): used to compile the source code.
31 - GDG: create generation datasets.
34 - CJRST (Class J restore): to restore the archived job logs.
Option – 1 PANVALET
( PAC )
1 = PAN > PANVALET
2 = PANDIR > PANVALET DIRECTORY LISTING UTILITY
3 = PANDELT > PANVALET MEMBER DELETE UTILITY
4 = PANXFER > PANVALET MEMBER TRANSFER UTILITY
5 = PANREST > PANVALET LIBRARY RESTORE UTILITY
6 = PCEZ > PANVALET COMPARE UTILITY for EASYTRIEVE
7 = PC > PANVALET COMPARE UTILITY for COBOL
8 = PANLSU > PANVALET LIBRARY SEARCH UTILITY
9 = PANMSU > PANVALET Member string search utility
10 = PANDXFM > PANVALET transform transaction delete
11 = PANAINA > PANVALET Inactivate PRODUCTION programs
Use Option 1 from this menu to browse, edit, or copy source code.
Option 3 to delete a source code member from a directory.
Option 4 to transfer members across directories.
Option 5 to restore any PANVALET library from the backups.
Options 6 & 7 to compare source code written in EASYTRIEVE and COBOL
across directories.
Option 8 to search for a specific source code member across directories.
Option 9 to search for a string in source code present in a directory.
Option – 3 SPITAB
Use this option to access the SPITAB tables. It can be used to move a SPITAB table definition
between libraries and to list the contents of a table. Alternatively, the data in the SPITAB tables
can be listed by logging into SPITAB on IMSTEST.
Programs can be compiled using the compiler subsystem which can be accessed by Option 5 from
ISPF main menu or Option 30 or 35 from the Pitney Bowes Development Environment panel.
The compilation panel looks as shown below:
[Screenshot: compilation panel - job card parameters. Only the JOB CLASS parameter applies
for TRANSFORM.]
User ID     ==> CX04347    ACCOUNT  ==> DMIS
JOB CLASS   ==> C          MSGCLASS ==> X
ROUTE PRINT ==> R3         PROG ID  ==> ORUU0109  (program name)
After you fill in all the appropriate fields and select the option, press Enter to get to the next
panel, shown below. After entering all the fields on that panel, press Enter to submit the
compilation job.
From the ISPF main menu, option 3.8 takes you to the SDSF main menu.
You can also set filters from the pull-down menu available on this panel to list only the jobs of
interest.
Note: You will have to use the same SDSF utility to look at production job spools too. Production
job logs are archived on a daily basis. To look at the archived job logs, use option 34 from the
Pitney Bowes Development Environment panel.
From ISPF, execute the command ‘RR’ to bring up the tool to submit a run request.
The panel looks as shown below:
[Screenshot: Run Request panel. Callouts on the panel: the flag should be a 'Y' if the job is run
for a production issue; a field records the number of tapes used.]
Fill in all the required information in the above panel, put a 'Y' in the
"SECTION COMPLETE" option, and hit Enter.
In the next two panels, fill in the required information and the special instructions and hit Enter
to submit the run request. A background job is submitted, and you will be notified of a Run
Request number once the submission is processed. Note down the RR number and get it approved
by your supervisor for the Run Request to be processed by Operations.
Run Requests are continually being misused. They are not intended to bypass the normal
application turnover process. The following standards are effective immediately.
1. Information Management is the central repository for all Run Request submissions. Canadian
personnel must sign onto Information Management directly and enter the appropriate
information. Mailing Systems users will continue to use the front end ISPF panel for entering
Run Requests.
2. The deadline for submission of Run Requests is 2:00 PM. Any Run Request received after
2:00 PM will not be processed until the next day unless it is in response to a production
incident. If in response to an incident, an Information Management problem record must be
entered in the Run Request record. A special field has been set up for this purpose. Data will
be verified by the Danbury Data Center before being submitted. If the submission of the Run
Request will occur after 2:00 PM, the Data Center should be alerted prior to 2:00 PM to be on
the lookout.
3. All Run Requests will be printed by the Data Center at 2:00 PM and processed overnight. No
Run Requests will be processed during the day unless authorized by Data Center Management.
4. Run Requests will be processed at the end of the production schedule. The data center will
attempt to fix any Run Request using the same guidelines as production. Those Run Requests
where no call is requested will be left for the submitter to fix in the morning. Under no
circumstances will unsupported Run Requests from the night before be processed during the
following day. The submitter must fix the error and resubmit the Run Request to be scheduled
the next evening.
5. Project Managers are accountable for using the Information Management system by 2:00 PM
to approve each Run Request for their area of responsibility. This can be done by entering
the free-form text area of the record and putting in a short comment indicating approval.
This information is time- and userid-stamped so the Data Center can verify that the appropriate
approvals have been received.
6. Any Run Request which has a pre-requisite or dependency of a production job must be
supported. This includes all test schedules.
7. The job name for all Run Requests must take the following format both internally and
externally: RRXXBBBZ, where XX equates to the application being processed, BBB equates
to the submitter's initials, and Z equates to the run letter (see the example after this list). The
only exception to this rule is when InfoPac (Canadian Report Management System) needs to be
updated. In this case, the internal job name must be production in order for the report to go out
to InfoPac.
8. Any Run Request using over 100 cylinders of DASD space for any given dataset must receive
prior approval from the Storage Administration group; this includes SYSDA. Storage
Administration contact personnel include: Ross Cook - 421-3407, Andy Switz - 421-3777 or
Jim Forrest - 421-3887.
9. Run Requests will not be processed during the freeze unless they are in response to a
production incident. The freeze schedule for the year can be found in dataset
DDPCN.USER.NEWS or in the Information Management database under the change
component of FREEZE.
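As an illustration of the naming rule in item 7 (the application code, initials, and run letter here
are hypothetical), a Run Request for application XX = OE, submitted by someone with initials
BBB = ABC as run letter Z = A, would be named RROEABCA.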
From ISPF, submit the command 'PSBGEN'. The panel looks as shown below.
Enter the PSB name you want to generate and hit Enter to submit the request. A background job
will be submitted to process your request.
Note: Please make sure that the PSB source is without any errors before you submit it for the gen.
Once the code has been written and tested, follow the procedure below to move the source code
and load modules to the Stage and Production libraries.
Execute command MIGU;1 to prepare the move for the source components to
DDPCN.TRNSFRMA.PANLIB. The panel looks as shown below:
[Screenshot: MIGU;1 panel. Callouts on the panel: put a 'Y' to process the data; the listing
option can be an 'N' to avoid too much processing time.]
After submission, a source transfer request number is displayed on the screen. A JCL is generated
for a batch job submission in DDPCN.MIGRSYS.JCL.
To submit the batch job to migrate the components, go to option MIGU;2. The panel will display
the pending requests that need to be processed. Select the request number and hit enter to process
the source transfer request.
When source is migrated, it is also recompiled before the source and load are moved to Staging
libraries.
The Order to Cash (OTC) application is a group of different applications. It includes applications
like Order Entry, Billing, Invoice Generator, Accounts Receivable (A/R), General Ledger, Sales
Reporting, Meter License, etc. There are around 30 different applications in all. The major
applications (modules) are Order Entry, Billing, Invoice Generator, and Accounts Receivable
(A/R).
Customer orders are entered into the system using the Order Entry system. Orders get into the
system in three ways: 1) manually, through the Order Entry screens, 2) through the Telephone
Ordering System, and 3) through the Small Office Division system.
The Billing module (Periodic Billing or PEB) has several sub-modules: Periodic Equipment
Billing, Renewal Billing, One-Time (Initial) Billing, EMA Billing, and Renewal EMA Billing.
Applications are changed based upon the creation of Change Requests (CRs) and their scheduling
in a release, or upon the issue of a Ticket. The application change is designed and program
specifications are written. Each program specification involves changing source code and creating
a production deliverable. The programmer changes the source code, creates the deliverable, and
tests the changes. At the end of each phase (specification, coding and testing), walkthroughs are
conducted.
All work performed by any Pitney Bowes Order To Cash Team Member originates from one of
these four type-of-work areas.
Processing Customer Changes through Change Request (CRs)
Fixing Production Problems through Tickets.
Developing and Maintaining Project Documentation through Document Change Request
(DCRs)
Project Management
The changes required for the Order To Cash (OTC) system are initiated using the Change Requests
(CRs).
There are three major steps involved in the CR process.
The first step in the CR process is to select the CR for impact analysis and estimation. If the
requirement is not clear, a meeting will be arranged with the Project Manager. The Project
Manager will clarify the questions, or will discuss them with the client and forward the answers
to the group that raised them. The Project Manager will also update the definition document of
the CR if there is any change after discussing with the client, and notify the groups through the
persons who attend the impact meeting. The groups will then estimate based on the new version
of the CR.
3.1.2 CR Scheduling
The estimated CRs are picked up in order of priority for discussion while preparing the
schedules. The purpose of these discussions is to arrive at an appropriate date by which the CR
can be implemented. The scheduling of CRs depends on the availability of resources and the CRs'
priorities.
3.1.3 CR Implementation
For the selected CRs, a high-level design and a low-level design will be prepared. After coding
and unit testing, the program is turned over to system testing. Reviews are conducted after each
of these activities. After system testing is successfully completed, the programs are turned over
to production on the release date.
3.2 Ticket
A Ticket can be issued by anyone (including application programmers) when a production
program fails to run or produces incorrect output.
Tickets are raised using REMEDY and normally do not have associated documentation. If a
ticket is scheduled for a release, it must have tasks just like a CR.
Create a maintenance log in each csect (source module) being updated, with the project
name, the name of the person who made the change, the change date, the release date, and a
detailed description of the change. Place the maintenance log after the existing ones so that the
most current maintenance log is last.
Insert the date and the name of the person when new lines are added to or deleted from the
source code.
3.5 Testing
The application programmer must test the source changes made to the module to ensure that the
new code performs as per the requirements. This is component testing.
The application programmer must also test the system to check that the changes flow through the
system without any problems, give the desired results, and meet the user expectations.
3.6 Tools
3.6.1 XPEDITOR
XPEDITOR is the testing/debugging component for COBOL, PL/I, and Assembler programs in
TSO, MVS (batch), CICS and IMS environments. The supported features include program analysis
commands, execution control, intelligent breakpoints, pseudo-code with COBOL source update,
batch connect, and changing data on the fly. The supported databases include VSAM, IMS, DB2
and IDMS.
3.6.2 File-AID
File-AID is a general-purpose file and data manipulation tool for use in application development,
maintenance, and production support activities. It runs under the ISPF dialog manager and
operates similarly to ISPF.
Features
Edit and browse data files in one of three modes:
- Formatted: uses COBOL and PL/I record layouts as templates over the data.
- Vertical: uses COBOL and PL/I record layouts.
- Character: displays the data as it would be seen in ISPF.
Quickly and easily populate test datasets on-line.
Use existing COBOL and PL/I record layouts for data definition and manipulation.
Use extensive file conversion and selection capabilities to eliminate the need to create “one-
time” file manipulation programs.
Perform global “find and change” operations on all of the members of a partitioned dataset,
or just on those members you select.
Allocate, delete and inquire about PDS and sequential files, as well as VSAM files,
alternate indices and paths on-line.
Compare two datasets and create a report of the differences at the record or field level.
Perform extensive data and file manipulation in batch mode.
Also eliminates the ISPF restrictions on record length, dataset organization and file size.
File-AID for IMS is an interactive, full-screen system designed for the application development
and maintenance environment that enables you to edit, browse, extract, and load IMS databases.
File-AID for IMS significantly reduces the time required to create and maintain IMS test
databases, to view database information, and to perform production-troubleshooting activities.
The File-AID for IMS system consists of three products: File-AID for IMS/ISPF, File-AID for
IMS/DC, and File-AID for IMS/CICS.
File-AID for IMS/ISPF runs as a dialog under TSO/ISPF and can access both on-line and
off-line IMS databases.
File-AID for IMS/DC runs as a non-conversational MPP under IMS/DC.
File-AID for IMS/CICS runs as a pseudo-conversational transaction under CICS and can
access IMS databases that are allocated on-line to CICS.
Uses existing COBOL and PL/I segment layouts as templates to display data in the Browse,
Edit, and Selection functions. Layouts can also be used to enter data in Edit and to print data
in Browse or Edit.
Uses existing DBDs to define the database structure.
Supports retrieval of database dataset names defined to the RESLIB dataset through
Dynamic Allocation.
Provides support for HDAM, HIDAM, HISAM, SHISAM, HSAM, Fast Path databases, and
secondary indexes.
Supports extended functions such as reformatting databases with the companion product
File-AID/MVS.
Provides an audit trail of all changes made during an edit session.
Supports editing and browsing in secondary index sequence.
Provides support for graphic characters through DBCS (Double-Byte Character Set)
support.
Allows formatted printing of one or more database segments to SYSOUT or to a file.
Provides a graphic hierarchy display that shows segment relationships, current position,
and segment information.
For more information refer to File-AID for IMS 4.0 user manual.
3.7.1 Remedy
Remedy is the standard tool that is being used by Pitney Bowes for the purpose of tracking and
managing the Tickets. It is the software where Tickets can be issued against a software/hardware
product like an application, or against an organization such as the service center. Remedy can be
accessed through https://pbremedy.pb.com/arsys.
3.7.2 Endevor
Overview
In order to use ENDEVOR, specific ISPF libraries must be allocated on NODETEST. The
ENDEVOR panels are not available on the production systems.
Application development libraries
Application libraries for all ENDEVOR source code, object libraries and load libraries are defined
on the NODETEST machine. Production only needs to process against the load module and
DBRM libraries. Production and the application programmers do use the JCL and copylibs for
system setup and debugging.
Control Features
ENDEVOR establishes, maintains, and protects a control library of source programs, JCL, and
card-image data files.
Monitoring Features
ENDEVOR provides automatic monitoring of all source program development and maintenance
activity within its libraries.
Security Features
ENDEVOR has security features allowing users to restrict access to individual programs, and
allowing managers or administrators to restrict the use of any particular ENDEVOR facility.
ENDEVOR is a reliable and powerful programmer tool for creating, storing, and maintaining
source program code.
The ENDEVOR system solves the problems inherent in self-programmed systems. It does this by
providing direct communication between ISPF Edit and the ENDEVOR library. The benefits of
ENDEVOR direct communication include:
Ease of use.
Use of ENDEVOR and ISPF without ENDEVOR/TSO as intermediary.
Reduced overhead by bypassing the extra reading, writing, and DASD space required for
temporary data sets.
Use of the full range of ENDEVOR and ISPF facilities.
Display of standard ISPF error messages.
Maintenance and display of ENDEVOR level stamps.
Display of ENDEVOR user-comment records.
Protection of production status or LOCKed ENDEVOR members.
ENDEVOR can be invoked from TSO command line by typing in ENDE and pressing <Enter>.
Display an element
Add or update an element into stage 1
Retrieve or copy an element
Execute the Generate Processor for this element
Move an element to the next inventory location
Delete an element
Print elements, changes and detail change history
Explicitly sign-in an element
The application development libraries hold source objects, load module objects and production
deliverables.
There are two types of libraries: the application test library, which is used by the application
programmer to change source code and then test the changes, and the PANVALET-controlled
development/system test/production libraries, which hold the development, system test, and
production source and deliverables.
Application specification and coding involves creating source code, copylibs, dbrms, DB2 DDL
and other objects needed for an application.
Application programmers change source using Panvalet libraries that have the high level node as
‘DDPCN’.
Once the source code is changed, it may be tested from the DDPCN.TRNSFRMA.PANLIB library.
If source is being changed, the source is usually compiled into application test libraries. The
application test load library is: PDPCN.USER.LOADLIB
JOB STATEMENT
A job name is composed of an application identifier, a job identifier, and a run-frequency letter:
AA = Application identifier
(new application identifiers are defined by DCA)
NNN = Three-digit job identifier defined by applications / Data Center Administration
The final letter indicates the run frequency:
D: Daily (specify run days, i.e., Monday - Friday, Monday - Saturday, etc.)
W: Weekly (once a week)
B: Bi-monthly (twice a month)
M: Monthly (once a month)
Q: Quarterly (every three months)
A: Annually (once a year)
The programmer name field should specify a brief description of the job. A maximum length of 20
characters is allowed.
CLASS=
REGION = 5M (megabytes)
PDPCN.USER.LOADLIB - when executing a non-IMS batch program other than a system utility,
i.e., IEBGENER, IDCAMS, IEFBR14, etc.
All remaining JOB STATEMENT parameters must be omitted, as they will default to Pitney
Bowes standards. Exceptions must be communicated to Data Center Administration.
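A minimal sketch of a JOB statement following the conventions above; the application identifier
(OR), job identifier (123), accounting field, class value, and description are illustrative
assumptions, not values taken from this manual:

//OR123D   JOB (DMIS),'ORDER ENTRY DAILY',
//             CLASS=C,REGION=5M
//* OR = application identifier, 123 = job identifier, D = daily run
//* The programmer name field carries a brief job description (max 20 chars)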
EXEC STATEMENT:
The step name must contain the first five characters of the job name, followed by three digits to
identify the step number, preferably in increments of 10.
The first two characters of the program name must match the application identifier.
If executing a PROC, use the available standard PROCS, i.e., SAS, BMCUPROC,
IMSPBMP, etc.
The COND parameter, preferably COND=(0,NE), should be coded on all steps except the first
step and the ZEKE STEPCHK, but may be tailored to the application's specific needs. The COND
parameter MUST NEVER be coded on the ZEKE STEPCHK unless authorized by Data Center
Administration.
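A hedged sketch of EXEC statements for the hypothetical job OR123D above, showing the
step-naming and COND conventions (the program names are illustrative):

//* Step name = first five characters of the job name + a step number
//OR123010 EXEC PGM=ORPB0100
//* Later steps increment by 10 and code COND=(0,NE)
//OR123020 EXEC PGM=ORPB0200,COND=(0,NE)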
ZEKE STEPCHK must be coded as the last step of the job in order to include it in our automated
scheduling package. Example:
//STEPCHK EXEC PGM=ZEKESET
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
SET ABEND IF HIGHCOND GT 0
//
In the above example, the job will abend if any step returns a condition code higher than 0.
Questions on ZEKE STEPCHK coding should be directed to the ZEKE schedulers in Operations.
DD STATEMENT (DASD):
DSN=
With the implementation of DFSMS (Data Facility Storage Management Subsystem), an additional
character has been added to the PDPCN high-level qualifier to indicate the frequency with which
the dataset is created. This change enables the data center to perform DASD management more
efficiently by allowing migration of DASD datasets at the dataset level rather than the volume
level.
Examples:
1. PDPCNT - Temporary files that go to SYSDA and are deleted at either end of job or at end of
step.
2. PDPCND - Files that are catalogued on PROD volumes on a daily basis and are used either
from job to job or from cycle to cycle.
The second node of the dataset must match the step name except for Database utilities. The
following apply for Database utilities:
X = D for daily
= M for monthly
= W for weekly
= A for annual
The third node of the data set is up to the user, but should be meaningful, such as the dataset's
ddname. For Database utilities, the third node must equal the job name while the fourth node must
equal the Database name. See the example below:
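A hedged example of the DASD naming standard, reusing the hypothetical job and step names
from above (the dataset itself is illustrative): the HLQ carries the frequency character (PDPCND
for daily), the second node matches the step name, and the third node is the ddname:

//DAILYOUT DD DSN=PDPCND.OR123010.DAILYOUT,
//             DISP=(NEW,CATLG,DELETE),UNIT=PROD,
//             SPACE=(TRK,(150,15),RLSE)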
UNIT=SYSDA for datasets which are used only for the duration of the job. These datasets should
be deleted at the end of the job. Datasets larger than 100 cylinders must have an IDCAMS delete
statement immediately following the dataset's last reference.
UNIT= PROD for data sets required by job(s) other than the creating job. Any data sets greater
than 500 cylinders must be approved by the Storage Administrator in Storage Administration.
DCB= MDLDSCB must be the first parameter if creating a GDG dataset. Additional DCB info
may be coded as required.
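A minimal sketch of a GDG creation DD with MDLDSCB coded first in the DCB parameter
(the dataset name and attributes are illustrative assumptions):

//ORDERS   DD DSN=PDPCND.OR123010.ORDERS(+1),
//             DISP=(NEW,CATLG,DELETE),UNIT=PROD,
//             DCB=(MDLDSCB,RECFM=FB,LRECL=120),
//             SPACE=(TRK,(150,15),RLSE)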
Omit DCB information. You must specify BLOCK CONTAINS 0 RECORDS for the file. This
allows the system to determine the optimal block size for the dataset resulting in: 1.) Less DASD
space used, and 2.) Faster processing time.
EZTRIEVE PROGRAMS
EZTVFM datasets should be allocated in blocks using the following syntax: (see Appendix C for
additional details)
SPACE=(4096,(100,100))
Datasets up to 10 cylinders should be allocated in tracks. Example:
SPACE=(TRK,(150,10),RLSE)
The RLSE sub parameter must be coded unless a GSAM data set is being pre-allocated for IMS.
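A short sketch combining the two allocation rules above; the second DD's dataset name is
hypothetical (PDPCNT is the temporary-dataset HLQ described later in this section):

//EZTVFM   DD UNIT=SYSDA,SPACE=(4096,(100,100))
//SMALLOUT DD DSN=PDPCNT.OR123010.SMALLOUT,
//             DISP=(NEW,PASS),UNIT=SYSDA,
//             SPACE=(TRK,(150,10),RLSE)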
SYSOUT= When SYSOUT is required by a program, code SYSOUT=* under most situations.
NEVER code SYSOUT=X.
SYSOUT= (class, form name) where class is defined as follows for the Stamford Data Center
(SDC).
1 - Simplex
2 - Duplex
3 - Accounts Payable checks
4 - Quadruplex
6 - Special Forms
K - Payroll Checks
R - Payroll reports
SYSOUT=(class, form name) where class is defined as follows for the Mailroom.
SYSOUT=(class, form name) where class is defined as follows for RMDS reports.
DD STATEMENT (TAPE):
The second node of the dataset must match the step name, except for Database utilities. The
following apply for Database utilities:
Fastscan (HSSR) unloads must have HSSR as the second node. Image copies must have
VAULT3X as the second node, where
X = D for daily
  = M for monthly
  = W for weekly
  = A for annual
The third node of the data set is up to the user, but should be meaningful, such as the dataset's
ddname. For Database utilities, the third node must equal the job name while the fourth node must
equal the Database name. See the stacked-tape example further below.
UNIT=TAPE36 for datasets larger than 500 cylinders (unless DASD is authorized by the Storage
Administrator in Data Center Administration) and for tapes leaving the Data Center that are
absolutely guaranteed to be returned.
UNIT=TAPE for tapes which must be sent offsite, i.e., Credit Union, Microfiche, etc. UNIT=TAPE
should only be used if vendors cannot accept cartridges.
DCB= MDLDSCB must be the first parameter if creating a GDG dataset. Additional DCB info
may be coded as required.
LABEL= EXPDT=99000 must be coded when creating a GDG dataset. Multiple GDG files must
be stacked onto one tape and must be coded as follows:
LABEL=(n, SL, EXPDT=99000) where n is equal to the file number on the tape.
RETPD=60 days or less for non-GDG datasets. Exceptions must be approved and must be coded
as follows:
Only Standard label tapes (SL) are supported. Exceptions must be approved by the TLMS
specialist.
LABEL= (n, SL, RETPD=ddd) where n is equal to the file number on the tape and ddd is equal to
the number of days the dataset is to be retained.
Code VOL= (, RETAIN) for data sets that are used later in the same job.
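A hedged sketch of two GDG files stacked on one tape, combining the MDLDSCB, LABEL, and
VOL rules above (dataset names and record attributes are illustrative):

//ORDERS   DD DSN=PDPCND.OR123010.ORDERS(+1),
//             DISP=(NEW,CATLG,DELETE),UNIT=TAPE36,
//             DCB=(MDLDSCB,RECFM=FB,LRECL=120),
//             LABEL=(1,SL,EXPDT=99000),VOL=(,RETAIN)
//DETAIL   DD DSN=PDPCND.OR123010.DETAIL(+1),
//             DISP=(NEW,CATLG,DELETE),
//             DCB=(MDLDSCB,RECFM=FB,LRECL=250),
//             LABEL=(2,SL,EXPDT=99000),VOL=REF=*.ORDERS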
Under no circumstances should a USER code EXPDT=99365 without prior approval from Data
Center Management. This parameter puts a permanent retention on the tape file.
An OFFSITE SHIPMENT FORM must be completed and sent to the Operations TLMS Specialist
in the Danbury Data Center to send a tape offsite.
SYSTEM UTILITIES
IEBGENER: - IEBGENER requires only the following DD statements. Other DD statements will
not be allowed to be put into production.
Do not specify DCB information on output data set(s) unless the output is being reformatted.
When creating a print file destined for RMDS, eliminate the IEBGENER step by writing the file
directly to RMDS via the user program.
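A minimal IEBGENER copy step, assuming the required statements are IEBGENER's standard
DDs (SYSPRINT, SYSUT1, SYSUT2, SYSIN); the input dataset name is hypothetical:

//OR123030 EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=PDPCND.OR123010.DAILYOUT,DISP=SHR
//SYSUT2   DD SYSOUT=*
//SYSIN    DD DUMMY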
IDCAMS: - If IDCAMS is used for the sole purpose of deleting datasets, replace IDCAMS with
PBSDR15. PBSDR15 will identify the amount of space used by a dataset before deletion takes
place. The data center and I/S can use this information to accurately identify the amount of DASD
required.
IDCAMS (PBSDR15) requires the following DD statements but others could be used
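A hedged sketch of a PBSDR15 delete step. The assumption here, not confirmed by this manual,
is that PBSDR15 accepts the same SYSPRINT/SYSIN statements and DELETE control card as
IDCAMS; the dataset name is illustrative:

//OR123040 EXEC PGM=PBSDR15
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE PDPCND.OR123010.DAILYOUT NONVSAM
/*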
Specify LRECL and RECFM on the output dataset to allow the system to determine the optimal
block size.
Specify NOSCRATCH on the delete statement when deleting a tape dataset.
Always initialize a VSAM file with low values (HEX 0's). A high values record will cause
increased I/O's by inserting rather than adding, significantly adding to the job's runtime.
Omit CISIZE for batch files; VSAM will default to an optimal size.
Do not code SORTWK data sets; SORT will dynamically allocate them for you.
Do not specify DCB for sort-out data set unless the file is being reformatted.
For input datasets on TAPE, specify the estimated file size. (The estimated file size parameter
does not work for DASD datasets; SORT looks at the actual input to get the file size.) Example:
//SYSIN DD *
SORT FIELDS=(100,27,A)
OPTION FILSZ=E1500000
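A fuller sketch of a sort step around these control cards, following the rules above (no SORTWK
DDs, no DCB on the sort-out dataset); dataset names are hypothetical:

//OR123050 EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=PDPCND.OR123010.DAILYOUT,DISP=SHR
//SORTOUT  DD DSN=PDPCND.OR123050.SORTOUT,
//             DISP=(NEW,CATLG,DELETE),UNIT=PROD,
//             SPACE=(TRK,(150,15),RLSE)
//SYSIN    DD *
  SORT FIELDS=(100,27,A)
  OPTION FILSZ=E1500000
/*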
ICETOOL:-
This utility is used to combine multiple invocations of SORT or IEBGENER in one step.
See DFSORT Programming Application Guide release 11.1 and up for specific information.
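A minimal ICETOOL sketch that combines a copy and a sort in one step; TOOLMSG, DFSMSG,
and TOOLIN are ICETOOL's standard DD statements, while the dataset name is hypothetical:

//OR123060 EXEC PGM=ICETOOL
//TOOLMSG  DD SYSOUT=*
//DFSMSG   DD SYSOUT=*
//IN       DD DSN=PDPCND.OR123010.DAILYOUT,DISP=SHR
//OUT1     DD SYSOUT=*
//OUT2     DD SYSOUT=*
//TOOLIN   DD *
  COPY FROM(IN) TO(OUT1)
  SORT FROM(IN) TO(OUT2) USING(CTL1)
//CTL1CNTL DD *
  SORT FIELDS=(100,27,A)
/*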
The program name and the PSB name must be the same.
If a BMP does updates, it must take checkpoints and have checkpoint restartability. The
checkpoint frequency must be control card driven, NOT program driven.
IMS BMP's and DB2 batch cannot co-exist in the same job. IMS BMP's run
on System A while DB2 batch runs on System C.
If a BMP is an inquiry with a PROCOPT=G it must take checkpoints. This is required because a
PROCOPT=G specifies read with integrity. Therefore, locks are held which tie up the PI
Enqueue/Dequeue pool. The checkpoint frequency must be control card driven not program
driven.
All trace statements such as exhibits and displays must be removed from application programs
before they are turned over to production.
When executing the IMSPBMP proc, the only DD statements allowed outside of the proc are as
follows:
Other application input or output data sets
The checkpoint frequency DD statement
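A heavily hedged sketch of a BMP execution. The IMSPBMP proc's symbolic parameters (MBR,
PSB) and the checkpoint-frequency DD name (CHKPFREQ) are assumptions for illustration, not
values taken from this manual:

//OR123070 EXEC IMSPBMP,MBR=ORPB0100,PSB=ORPB0100
//* The program name and the PSB name are the same, per the standard above
//ORDERIN  DD DSN=PDPCND.OR123010.DAILYOUT,DISP=SHR
//CHKPFREQ DD *
  500
/*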
Use IEFBR14 to pre-allocate GSAM files. The DCB parameter must specify the LRECL, RECFM
and DSORG parameters. Omit the RLSE parameter.
Example: DCB=(LRECL=80,RECFM=FB,DSORG=PS)
DSORG=PS indicates the file is sequential and the system will optimize the block size
automatically.
Code the space parameter with RLSE in the step in which the GSAM file is written to.
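A short sketch of pre-allocating a GSAM file with IEFBR14 per the rules above (the dataset name
is hypothetical; PIMSNT is the temporary GSAM HLQ described later). Note RLSE is omitted
here and coded instead in the step that writes the file:

//OR123080 EXEC PGM=IEFBR14
//GSAMOUT  DD DSN=PIMSNT.OR123080.GSAMOUT,
//             DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
//             SPACE=(TRK,(150,10)),
//             DCB=(LRECL=80,RECFM=FB,DSORG=PS)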
Optimal block sizes must be specified for EZTRIEVE programs. Block size must be specified and
cannot exceed 12288 when destined for a laser printer.
APPLICATION PROGRAMS
COBOL: - Do not specify DCB information on non-IMS output data set(s).
Code BLOCK CONTAINS 0 RECORDS, allowing the system to determine the optimal block size.
Use the = DASD Fast Path Access Code to determine the optimal block size.
DDNAME EZTVFM should be allocated SPACE= (4096, (100,100)) unless the file is written to.
I/O Counts - To determine the amount of space and I/O's processed by a dataset check the
following:
Look at the $SYSMSG dataset of your SDSF (3.8) output. Message IEF237I informs the User
what volume each DD is allocated to. Message ACTRT002 informs the User the number of I/O's
that went to each file.
An I/O, as shown in message ACTRT002 is the writing of 1 block of data. If you let SDB (System
Determined Block size) do the work, 2 I/O's equal 1 track of data because SDB uses half track
blocking as its optimal size.
For example: If the record size is 80 bytes, the optimal block size for a 3390 DASD device is
27,920. Since 1 track of a 3390 can hold more than 56,000 bytes, 2 blocks of 27,920 (27,920 X 2 =
55,840) can fit on one track. Consequently, 2 I/O's equal 1 track of data. Less I/O's means less
CPU time (see message IEF374I above) which means that your job runs faster. This equation is
true for most user written programs. Sort uses its own method of I/O called EXCP but gives you
the number of records and block size in its message dataset.
All datasets conforming to the standards will no longer require a UNIT parameter to be coded:
Userid.TSO....
DCOPuid.DCOP.... (copier only)
DDPCN.TSO....
DDPCN.DCOP.... (copier only)
DDPCND.... \
DDPCNW.... \ when getting ready to turn things over
DDPCNM.... \ to production, you can test using the
DDPCNQ.... \__production standards.
DDPCNA.... /
DDPCNB.... /
DDPCNS.... /
DDPCNO.... /
If standards are not followed and no UNIT parameter is coded, the datasets will be directed to
SYSDA and deleted the same night (at 19:45).
Any sequential datasets under SMS management that are used for backup purposes only will be
automatically directed to DFHSM ML2 for 425 days.
LAST HLQ
Until further notice, TSO datasets being created with UNIT=TAPExx specified will go to tape
regardless of the dataset name.
Production temporary datasets (those going to Unit=SYSDA) must have a High Level Qualifier of:
PDPCNT or PIMSNT (Temporary GSAM)
Datasets with the above HLQ get deleted after 3 days of not being referenced.
Production permanent DASD datasets (those going to UNIT=PROD) must have an HLQ of (the
last character follows the job name standards for frequency). A UNIT parameter is not needed
when the dataset is properly named.
XXXX - Signifies the name of the region: i.e., TEST, PROD, etc.