Tuesday, April 27, 2010

Intra partition TDQ and Extra partition TDQ

INTRA PARTITION TD QUEUEs :- An intrapartition TDQ is a group of sequential records produced by the same or different transactions within a CICS region. All such queues are stored in a single physical VSAM file in the CICS region, which is set up by the system programmer. Once a record is read from a queue, it is logically removed from the queue; that is, the record cannot be read again.

"Intrapartition" refers to data on direct-access storage devices for use with one or more programs running as separate tasks. Data directed to or from these internal queues is referred to as intrapartition data; it must consist of variable-length records. All intrapartition transient data destinations are held as queues in the same VSAM data set, which is managed by CICS. An intrapartition destination requires a resource definition containing information that locates the queue in the intrapartition data set. Intrapartition queues can be associated with either a terminal or an output data set. When data is written to the queue by a user task, the queue can be used subsequently as input data by other tasks within the CICS region. All access is sequential, governed by read and write pointers. Once a record has been read, it cannot be read subsequently by another task. Intrapartition data may ultimately be transmitted upon request to the terminal or retrieved sequentially from the output data set.
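As a concrete illustration of the sequential read/write behaviour described above, here is a minimal CICS COBOL sketch. The queue name MSGQ and the record layout are illustrative assumptions, not taken from any real system.

```cobol
      * Hypothetical sketch of intrapartition TD queue usage.
      * The queue name MSGQ (TD queue names are 1-4 characters)
      * and the record layout are illustrative assumptions.
       WORKING-STORAGE SECTION.
       01  WS-TD-REC           PIC X(80).
       01  WS-TD-LEN           PIC S9(4) COMP VALUE 80.
       PROCEDURE DIVISION.
      * Producer task: append a record to the queue.
           EXEC CICS WRITEQ TD QUEUE('MSGQ')
                FROM(WS-TD-REC) LENGTH(WS-TD-LEN)
           END-EXEC.
      * Consumer task: destructive sequential read - once a
      * record has been read, no other task can read it again.
           EXEC CICS READQ TD QUEUE('MSGQ')
                INTO(WS-TD-REC) LENGTH(WS-TD-LEN)
           END-EXEC.
```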

Typical uses of intrapartition data include:

* Message switching
* Broadcasting
* Database access
* Routing of output to several terminals (for example, for order distribution)
* Queuing of data (for example, for assignment of order numbers or priority by arrival)
* Data collection (for example, for batched input from 2780 Data Transmission Terminals)


EXTRA PARTITION TD QUEUEs :- An extrapartition TDQ is a group of sequential records that serves as an interface between transactions of the CICS region and systems outside the CICS region. Each of these TDQs is a separate physical file, which may reside on disk, tape, a printer, or a plotter.


Extrapartition queues (data sets) reside on any sequential device (DASD, tape, printer, and so on) that is accessible by programs outside (or within) the CICS region. In general, sequential extrapartition queues are used for storing and retrieving data outside the CICS region. For example, one task may read data from a remote terminal, edit the data, and write the results to a data set for subsequent processing in another region. Logging data, statistics, and transaction error messages are examples of data that can be written to extrapartition queues. In general, extrapartition data created by CICS is intended for subsequent batched input to non-CICS programs. Data can also be routed to an output device such as a printer.

Data directed to or from an external destination is referred to as extrapartition data and consists of sequential records that are fixed-length or variable-length, blocked or unblocked. The record format for an extrapartition destination must be defined in a TDQUEUE resource definition by the system programmer.
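As a sketch of such a resource definition, a CEDA definition for an extrapartition output queue might look like the following. The queue name LOGQ, group name MYGRP, and DD name LOGOUT are illustrative assumptions.

```
CEDA DEFINE TDQUEUE(LOGQ) GROUP(MYGRP)
     TYPE(EXTRA) DDNAME(LOGOUT)
     RECORDFORMAT(VARIABLE) RECORDSIZE(132)
     TYPEFILE(OUTPUT)
```

The matching LOGOUT DD statement in the CICS startup JCL then points the queue at the actual sequential data set or SYSOUT destination.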


Note: If you create a data set definition for the extrapartition queue using JCL, the DD statement for the data set should not include the FREE=CLOSE operand. If FREE=CLOSE is specified, attempts to read the queue after the queue has been closed and then re-opened can receive an IOERR condition.

LIKE & REFDD Parameter

With SMS, use the LIKE or REFDD parameter to copy data set attributes from a model data set:

The LIKE parameter copies the attributes of an existing cataloged data set to the new data set that you are defining on a DD statement.
Use the REFDD parameter to specify attributes for a new data set by copying attributes of a data set defined on an earlier DD statement in the same job.
The following attributes are copied to the new data set from (1) the attributes specified on the referenced DD statement, and (2) for attributes not specified on the referenced DD statement, from the data class of the data set specified by the referenced DD statement:


Data set organization
Record organization (RECORG) or
Record format (RECFM)
Record length (LRECL)
Key length (KEYLEN)
Key offset (KEYOFF)
Type, PDS or PDSE (DSNTYPE)
Space allocation (AVGREC and SPACE)


//STEP01 EXEC PGM=IEFBR14
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//IN12 DD DSN=TEST.SYAM.TEST.FILE0,
// DISP=(,CATLG,DELETE),LRECL=80,RECFM=FB,
// UNIT=SYSDA,SPACE=(CYL,(1,5),RLSE)
//OUT2 DD DSN=TEST.SYAM.TEST.FILE2,
// REFDD=*.IN12,DISP=(,CATLG,DELETE),
// SPACE=(CYL,(1,5),RLSE)
//OUT1 DD DSN=TEST.SYAM.TEST.FILE,
// LIKE=TEST.SYAM.SPUFI.IN,
// DISP=(,CATLG,DELETE)


Note: DO NOT use LIKE= to model the characteristics of a PDS member coded in JCL, a GDG, or a temporary data set. Also note that the EXPDT and RETPD dates are NOT copied by LIKE.
LIKE should not be coded on the same DD statement as the SYSOUT, DYNAM, or REFDD parameters. VSAM files can also be used as model files.

HSM Dataset Level Commands....


To recover a dataset issue the following command in ISPF option 6 to check for any backup listings

HLIST DSNAME ('YOUR.DATA.SET.NAME.HERE') BOTH

This will produce a listing like the following:

ARC0138I NO MCDS INFORMATION FOUND FOR DATASET,
ARC0138I (CONT.) YOUR.DATA.SET.NAME.HERE

DSN=YOUR.DATA.SET.NAME.HERE BACK FREQ=*** MAX VERS=***

BDSN=HSM.BACK.T532800.YOUR.DATA.SET.NAME.HERE.J9315 BACKVOL=MR1190 FRVOL=TSO108
BACKDATE=09/01/08 BACKTIME=19:01:37 CAT=YES GEN=000 VER=001 UNS/RET=NO
RACF IND=NO BACK PROF=NO

ARC0140I LIST COMPLETED, 4 LINE(S) OF DATA OUTPUT
***


If a BDSN dataset is available, you can recover the dataset with the following command. If a BDSN dataset is not available, you can manually back up the dataset using the HBACKDS command.

HRECOVER ('YOUR.DATA.SET.NAME.HERE') GENERATION(0) REPLACE

where the REPLACE option overwrites the original dataset.

GENERATION specifies that you want to recover a particular backup version of a specific data set. For gennum, substitute the relative generation number of the backup version of the data set that you want to recover. Zero is the latest created backup version, one is the next to the latest created version, and so forth, up to the maximum number of versions existing for the data set.

HMIGRATE - migrate a data set
HRECALL - recall a migrated data set
HBACKDS - create a backup version of a data set
HBDELETE - delete backup version(s) of a data set
HRECOVER - recover a backup version of a data set
HDELETE - deletes a migrated data set
HLIST - lists HSM migration and backup control data set records
HQUERY - displays outstanding HSM requests



MCDS – Migration Control Data Set
BCDS – Backup Control Data Set
OCDS – Offline Control Data Set


Batch processing

Batch process for RECALL

//DEFEP10 EXEC PGM=IKJEFT01
//SYSOUT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
HRECALL 'YOUR.MIGRATED.DATA.SET.NAME.HERE' NOWAIT
HRECALL 'YOUR.MIGRATED.DATA.SET.NAME.HERE' NOWAIT
HDELETE 'YOUR.MIGRATED.DATA.SET.NAME.HERE' PURGE
/*
//


Batch process for DELETE (for a migrated data set, the DELETE is converted to HDELETE here)

//DEFEP10 EXEC PGM=IDCAMS
//SYSOUT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DELETE 'YOUR.MIGRATED.DATA.SET.NAME.HERE' PURGE
/*
//



WAIT specifies that you want to wait for the HRECALL command to complete. If you are recalling data sets from tape, we recommend that you specify the NOWAIT parameter because the operator must mount the tape before the recall can complete.

NOWAIT specifies that you do not want to wait for the HRECALL command to complete.

PURGE is an optional parameter you use if you want to delete a migrated data set while it is within its retention period.






LIST command


Use this command to find out information about backups or migrated files. It is easiest to use as a line command in ISPF option 3.4, as shown below.

HLIST DSNAME(/) BOTH
HLIST DSNAME(/) MCDS
HLIST DSNAME(/) BCDS

HBACKDS Command
You use this command to manually back up a dataset. You may want to do this before you make changes. The 3.4 line command can simply be

HBACK /

HRECOVER Command
You use this command to restore a data set from the backup. From ISPF option 3.4, this can simply be HRECOVER / REPLACE

HBDELETE Command
If you want to delete a lot of backups, you will want to batch them up and run them as a job. When you do this, DFHSM will issue all the commands at once and queue them up. If the queue is too large, DFHSM will abend! You can avoid this by using the WAIT parameter as shown below. HSM will then process each delete one at a time.

HSEND WAIT BDELETE (filename1)
HSEND WAIT BDELETE (filename2)
HSEND WAIT BDELETE (filename3)
HSEND WAIT BDELETE (filename4)

HMIGRATE Command
You use this command to manually migrate a data set. You may want to do this as a 'quick fix' to resolve space problems. At its simplest, the 3.4 line command is

HMIG /
or
HMIG / ML2

HDELETE Command
If you use the ISPF line command 'D' or 'DEL' to delete migrated files, DFHSM will recall the file first, which is a waste of time and resources. If you use HDELETE, DFHSM deletes a migrated data set without recalling the data. The ISPF 3.4 line command is simply HDEL /

HRECALL Command
This command will bring a data set back to primary disk. You do not need to recall a file manually; DFHSM will recall it automatically if you try to use it. However, it can be a pain waiting for a lot of files that are archived to tape, so you may want to recall them by command. You will also need to use the command if autorecall is having problems. DFHSM recall is a file-by-file operation; you cannot batch up requests and recall a lot of files at the same time as you can with FDRABR. The ISPF 3.4 line command is simply HRECALL /

Undoing Changes with the UNDO Command


The UNDO command can be entered on the command line to undo changes that you have made while editing a dataset. If you make a change and then press ENTER, you can undo the change by positioning the cursor on the COMMAND line, typing UNDO, and pressing ENTER.

The changes you have made since the last time you pressed ENTER will be undone. You can use the UNDO command over and over again to undo changes made during the edit session.

You cannot use the UNDO command unless RECOVERY is set to ON.

To set recovery on, type the following commands on the command line:
REC ON
then
PROF LOCK

Note: Locking your profile saves the changes to the profile dataset; otherwise you will have to set them again every time you log in.
Typing PROFILE ZDEFAULT on the command line resets your profile to the default settings.

Xpeditor Commands.....

AFTER Breakpoint after execution of line

BEFORE Breakpoint before execution of line

BOTTOM Scrolls to bottom of currently displayed data

COUNT Sets execution counters to gather test coverage statistics

DELETE Removes all XPEDITOR commands (e.g. breakpoints)

DLEFT Scroll data in Keep/Peek window to left- can specify amount

DRIGHT As above to the right

END Terminates current function and returns to previous screen

EXCLUDE Excludes data lines from displaying in the source

EXIT Terminates the current test session

FIND Searches for character strings, data names and COBOL structures.

GO 1 Walks through code (equivalent PF9)

GO Goes to next breakpoint (equivalent to PF12)

GOBACK Changes the program logic and returns to the higher-level module

GOTO Repositions the current execution pointer

HELP Displays info about error message or gives tutorial info.

IF Establish a conditional expression in a block of inserted lines

INSERT Temporarily insert XPEDITOR/TSO commands in the program

INCLUDE Include command executes a predefined test script member

KEEP Displays the values in a chosen field in Keep window

LEFT Scrolls source listing to left by specified amount

LOCATE Scrolls to particular line number.

MEMORY Displays memory for a specified location

MONITOR Records program execution in a buffer.

MOVE Changes the contents of program variables

PAUSE Sets a pause breakpoint within inserted lines or commands

PEEK Displays values of program variables.

RESET Restores excluded lines in source screen

RETEST Begins a new test of the same program

REVERSE Reviews the execution path that led to the current breakpoint.

RIGHT Scrolls the source to the right by a specified amount

SET Overrides XPEDITOR/TSO defaults.

SHOW Displays breakpoints and diagnostic info

SKIP Temporarily bypasses the execution of a statement

SOURCE Changes the module shown on the source display during Interactive debugging

TOP Goes to the top of the data

UP Scrolls to the top of data

WHEN Indicates when a specified condition is true or when program variable changes value.

WS Displays Working storage

ACCEPT Accepts data from an input file or from instream data



Line command B => sets a breakpoint.
KEEP => monitors variable values.
F12 => executes until the next breakpoint is encountered.
MON or MONITOR => monitors the flow of execution (use it if you want to go in reverse flow later).
REV or REVERSE => after typing this command, pressing F9 takes the flow in reverse (MONITOR must be active first, otherwise Xpeditor will not remember the execution path). If you want to go forward again, type REV once more. To delete the monitor, the command is "delete mon".
WHEN command => used to set a breakpoint that fires when a variable reaches a value. For example, if you want the program to execute until "A" = xyz, the command is WHEN A = 'xyz'; press ENTER.
After this, press F12. Execution will stop as soon as A gets the value 'xyz'.
After this, do not forget to give the "delete when" command.


Some interesting facts about GDG


  • You are only allowed to have one version (V00-V99) per generation (G0001 - G9999) for a GDG
  • When a new VERSION of an existing generation is created, the older version is deleted rather than kept.
  • To catalog a version number other than V00, you must use an absolute generation and version number.
  • The maximum number of generations that you can associate with a base is 255
  • For a GDG base, the maximum length is 35. For all other dsnames, the maximum length is 44.

For an existing GDG with version 00, e.g. TST5.AAA.TEST.GDG.G0001V00, if a new version is created, i.e. TST5.AAA.TEST.GDG.G0001V01, the older version, i.e. V00, will be deleted automatically.

NOEMPTY means that when the generation limit (30 in this example) is reached, the oldest generation will be deleted, but all the others will be retained.

EMPTY means that when the limit is reached, all 30 current generations will be deleted, and the process will start again with the new file.

SCRATCH means uncatalog and delete files when they exceed the generation limit. You could instead code NOSCRATCH, which means just uncatalog old generations but do not delete them.
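Putting these options together, a GDG base for the example name used above could be defined with IDCAMS like this. The job is a sketch; the limit of 30 matches the example in the text, and NOEMPTY/SCRATCH are one possible combination.

```jcl
//DEFGDG  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE GDG (NAME(TST5.AAA.TEST.GDG) -
              LIMIT(30)               -
              NOEMPTY                 -
              SCRATCH)
/*
```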


One last thing.

How do you delete a GDG base?
If you just use 'D' against a 3.4 file listing, the delete will fail with the error 'GDG base or VSAM file'. If you use the IEFBR14 program with your file name and DISP=(OLD,DELETE) coded, you will get a 'dataset not found' JCL error.


There are two ways to do it

Use 'DEL' against a 3.4 file listing
Use IDCAMS with DELETE file.name PURGE in the control statements
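A batch sketch of the IDCAMS approach, reusing the example base name from earlier. Note that if generations still exist under the base, the FORCE keyword is what allows the base itself to be removed.

```jcl
//DELGDG  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE TST5.AAA.TEST.GDG GDG FORCE
/*
```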

How to retrieve the current date in COBOL

ACCEPT WS-DATE FROM DATE fetches the date in YYMMDD format, e.g. 080218.
ACCEPT WS-CURR-DATE FROM DATE YYYYMMDD fetches the date in YYYYMMDD format, e.g. 20080218.
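A minimal COBOL sketch of both forms; the program name, field names, and pictures are the obvious ones implied by the statements above, not from any particular source.

```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. GETDATE.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-DATE        PIC 9(06).
       01  WS-CURR-DATE   PIC 9(08).
       PROCEDURE DIVISION.
      *    YYMMDD form, e.g. 080218
           ACCEPT WS-DATE      FROM DATE.
      *    YYYYMMDD form, e.g. 20080218
           ACCEPT WS-CURR-DATE FROM DATE YYYYMMDD.
           DISPLAY WS-DATE ' ' WS-CURR-DATE.
           GOBACK.
```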



EXCP CPU SRB CLOCK SERV PG PAGE SWAP VIO SWAPS STEPNO
43 .00 .00 .00 1930 0 0 0 0 0 1

I did some simple tests with DB2 calls and plain vanilla COBOL. The figures above are for the COBOL ACCEPT; the figures below are for the DB2 call.

SELECT CURRENT_DATE FROM SYSIBM.SYSDUMMY1 is another option, but it is not recommended because it causes an additional DB2 call and affects the performance of the program.


EXCP CPU SRB CLOCK SERV PG PAGE SWAP VIO SWAPS STEPNO
281 .00 .00 .00 9802 0 0 0 0 0 1


EXCP: The number of EXCPs (Execute Channel Program) performed during the measurement interval.
SRB: The number of CPU seconds consumed in SRB mode during the measurement interval. This does not include any SRB time consumed by the Application Performance Analyzer measurement task.
SERV: CPU SERV + I/O SERV + OTH SERV


Things to try out:
Why do we select the current date from SYSIBM.SYSDUMMY1, and what will happen if we select the date from any other table?