Mainframes
Mainframe basics
    Storage management
    Job management
    TSO/ISPF
    Job Control Language (JCL)
        Procedures
    Datasets
    Utilities
        DFSORT
        SuperC
CICS
    CICS maps
    CICS application programming
    CICS administration
        CICS system transactions
        CICS startup/shutdown
COBOL
    COBOL basics
    Datatypes and variables
    Statements
    File handling
    Subprograms
    Pointers
VSAM
    IDCAMS
Assembler
    Instructions
        Machine instructions
        Assembler instructions
    Macros
    Assembler programming
DB2
    The DB2 Catalog and Directory
    DB2 Datatypes
    SQL
        Statements
        Functions
    DB2 Application programming
        Locking
        Isolation levels
        Using Explain
        Program preparation
    DB2 utilities and tools
IMS DB
    Database organization
        Hierarchical sequential databases
        Hierarchical direct databases
        Fastpath databases
    Database description (DBD)
    Data language/I (DL/I)
IDMS DB
    The network model
    Data definition
    COBOL commands


Terminology
OS - MVS, Z/OS, TSO, ISPF
Storage - DASD, VTS, Spool, SDSF, CA-1 Tape Mgmt
Memory - VIO, Virtual storage, Addressing
Job Control - JES, JCL, CA-7, Initiators
Systems Programming - Rexx, CLIST
Application Programming - Assembler, COBOL, PL/I
OLTP - CICS, Tandem
Database - DB2, IMS DB, IDMS, VSAM
Tools - FileAid, InSync, DFSORT, SyncSort, IDCAMS, CA-ShareOption/5
Debuggers - Xpediter, CA-Intertest, TraceMaster
Source Control - CA-Endevor, ChangeMan, Panvalet, CA-Librarian


MAINFRAME BASICS

Operating systems

The MVS (Multiple Virtual Storage) OS was first released by IBM in 1974 and, as the hardware evolved, grew into the extended-architecture versions MVS/XA and MVS/ESA. Eventually it became part of OS/390 and then z/OS.

LPARs and Nodes

Mainframes and midrange systems can be divided into smaller logical system images. LPARs are logical partitions of the mainframe hardware. Each LPAR runs a separate copy of the operating system. Large RS/6000 systems can be physically subdivided into nodes running separate copies of the OS.

An LPAR can be defined to access any I/O device connected to the mainframe hardware. An RS/6000 node can only access I/O devices physically connected to that node.

Initial Program Load (IPL)

The hardware IPL process is the first phase of the overall OS initialization process.

The main processes performed during system initialization are the creation of the system component address spaces, the initialization of subsystems, and the loading of components which tailor the system. The SYS1.PARMLIB dataset and the SYSn.IPLPARM datasets are read by the NIP (Nucleus Initialization Program) during the IPL. These datasets are the main components of system initialization, defining the parameters of a particular system.

Some system software changes require that an IPL be performed in order to install them; such planned IPLs are referred to as scheduled IPLs.

Unscheduled IPLs are performed when a system failure occurs to reset the system software to its initial status before the failure occurred. The commands that are entered by the console operator are the same as are entered for a scheduled IPL. However, when an unscheduled IPL is required because of a system failure, the system programmer will usually request a stand-alone dump to help determine the cause of the failure. During the stand-alone dump, the entire contents of real and virtual storage are dumped.

There are two types of consoles used to support the MVS environment.
  • The 3090 console (also called system console) is used to initiate the IPL. The 3090 console has a series of panels called control frames that are used to perform system functions at the hardware level.
  • The other console is the MVS console which is where most console activity is performed. Operators use the MVS console to issue MVS and JES commands and to receive messages from the system relating to system activity.

    Once JES is started, the IPL is considered completed but the system initialization continues by starting other systems that are required for the processing environment (e.g., VTAM, TSO, security).

    24-bit and 31-bit addressing

    Initial versions of MVS used 4-byte (32-bit) words to store virtual memory addresses, but only the last 3 bytes (24 bits) actually represented an address; the remaining 8 bits were essentially available for the programmer to use as desired, and many programs used them for passing flags. The maximum addressable memory was 16 MB.

    Later the 24-bit scheme was extended to use almost the entire word for addressing, giving access to a 2 GB address space. The reason 31 bits were used instead of 32 is the flags often stored in the first 8 bits. To save customers from rewriting huge volumes of 24-bit code, IBM ensured that MVS could run it without modification. This was achieved by treating the top bit as the 24/31-bit mode bit: if the bit was set, the following 31 bits were treated as the address; if it was zero, the next 7 bits were ignored and only the last 24 bits represented the address.

    The Dispatcher Process

    The MVS dispatcher is a routine within the Supervisor component of the OS which determines which unit of work will be allowed to execute next - i.e. given control of the processor until the next interrupt occurs. It maintains queues of dispatchable UOWs, each with an associated priority (dispatching priorities are independent of swap priorities), and whenever the dispatcher is given control it selects the highest-priority ready UOW to dispatch.

    Dispatchable units of work are represented by control blocks of two types - Task control blocks (TCBs) and Service request blocks (SRBs). TCBs represent tasks executing within an address space, such as user programs - but there are several TCBs associated with each address space, so more than one task could be running in any one address space at any one time. SRBs represent 'requests to execute a service routine' - they are usually initiated by system code executing from one address space to perform an action affecting another address space.


    STORAGE MANAGEMENT

    Direct-Access Storage Devices (DASD)

    The term 'direct access' implies that data can be accessed directly, rather than by progressing sequentially through the data.

    DASD types
    	Unit  Bytes/Track    Tracks/Cyl.    Cyls./Vol.  Megabytes/Vol.
    
    	3380A    47,476         15            885            630
    	3380E    47,476         15          1,770          1,260
    	3380K    47,476         15          2,655          1,890
    	3390     56,664         15          2,226          1,892
    Attributes of DASD volumes

    Volumes have two types of attributes, both represented by various bits in the UCB (Unit Control Block) that describe the volume.

    The mount attribute describes if and under what circumstances the volume may be dismounted from the unit on which it resides. Modern disk volumes cannot be dismounted, but tape volumes can. The mount attribute indicates if the volume is permanently resident, reserved or removable. Normally, all disk volumes are mounted as permanently resident.

    Volumes also have a use attribute that determines the types of datasets that go on the volume. The use attribute, also represented by a bit in the UCB, has a value of either PUBLIC, STORAGE, or PRIVATE. Normally, SMS-managed volumes are mounted PRIVATE. PUBLIC volumes are normally used only to hold temporary datasets, while volumes with a use attribute of STORAGE may hold either temporary or permanent datasets, usually short-lived work datasets.

    Accessing data

    Start Input/Output - a machine-level instruction in a mainframe that starts a channel program and initiates I/O.

    EXCP (Execute Channel Program) - this routine initiates data transfer to or from DASD. It is higher level than start-I/O, though not as high-level as the access methods QSAM and VSAM.

    An access method defines the technique that is used to store and retrieve data. Access methods have their own data set structures to organize data, system-provided programs (or macros) to define data sets and utility programs to process data sets.

    BSAM (Basic Sequential Access Method) - access method for storing or retrieving data blocks in a continuous sequence, using either a sequential access or a direct access device.

    QSAM (Queued Sequential Access Method) - an extended version of the BSAM, wherein a queue is formed of input data blocks that are awaiting processing or of output data blocks that have been processed and are awaiting transfer to auxiliary storage or to an output device.

    System-managed storage (SMS)

    A storage environment without SMS is analogous to an airport without air traffic controllers. Allocations and deletions occur with little or no control, on whichever volume the person performing the allocation happens to choose. Some volumes may be highly utilized in terms of both space and performance while others are sitting idle. In a storage environment, a collision can be said to occur when a data set allocation fails because there is no space on the volume on which the allocation was attempted.

    SMS addresses this problem by placing device selection under the control of the system. The system does this using a policy established by the storage administrator who defines a volume pooling structure made up of storage groups. The storage administrator also writes straightforward automatic class selection (ACS) routines that define which data sets can be allocated in which storage groups. Using these ACS routines, the storage administrator can allow the system to control as much or as little allocation of storage groups as desired.

    When a new allocation occurs, the system uses the ACS routines to determine a set of storage groups in which the data set is eligible to be allocated. The system then considers criteria such as space and performance to select the specific volume or volumes on which to perform the allocation. This can help:
  • Reduce the number of out of space abends
  • Reduce device fragmentation
  • Balance allocations across a pool of devices
  • Improve storage utilization
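
    With SMS active, a new allocation can omit device and volume details and let the ACS routines select the storage group and volume. A sketch of such a DD statement, where the dataset name and the STORCLAS value are hypothetical installation-specific names:

```jcl
//* SMS-managed allocation: no UNIT or VOL needed; the ACS routines
//* map the storage class to an eligible storage group and volume.
//NEWDS    DD DSN=USERID.TEST.DATA,DISP=(NEW,CATLG,DELETE),
//            STORCLAS=STANDARD,SPACE=(TRK,(10,5)),
//            RECFM=FB,LRECL=80
```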

    Virtual Tape solution (VTS)

    Virtual tape is a separate storage device that manages less-frequently needed data so that it appears to be stored entirely on tape cartridges, while many parts of it may actually be located in faster storage, such as a hard disk.

    Virtual Tape Library (VTL) solutions are storage subsystems that emulate a tape library and use RAID-protected hard drives to store the data. Typically a VTL solution has its own file system, emulates multiple tape libraries, and provides additional performance through load balancing and large block sizes.

    The programming for a virtual tape system is sometimes called a virtual tape server (VTS). VTS can be used with a hierarchical storage management (HSM) system in which data is moved as it falls through various usage thresholds to slower but less costly forms of storage media. VTS may also be used as part of a storage area network (SAN) where less-frequently used or archived data can be managed by a single VTS server for a number of networked computers.

    VTS offloads from the main computer the processing involved in deciding whether data should be available in the faster disk cache or written onto a tape cartridge. VTS also can manage data so that more of the space on a tape cartridge is actually used.


    Virtual Input/Output (VIO)

    UNIT=VIO may be specified for any temporary DASD allocation for batch jobs. No actual DASD tracks are allocated, rather, the system reserves space in Expanded Memory for all the virtual tracks allocated to VIO. All other DD statement parms are the same as a normal DASD allocation. VIO should only be used for temporary data, as the data will not be kept after the job ends.

    VIO is significantly faster than DASD, since no I/O is actually performed (paging I/O will take place as the batch job's address space is swapped in and out), however, use of VIO must be limited to smaller allocations, since large allocations will tax the paging subsystem.
    	// JOB
    	//ASM EXEC PGM=IEV90,PARM='OBJ,NODECK',REGION=5120K
    	//SYSLIB DD DSN=SYS1.MACLIB,DISP=SHR
    	//       DD DSN=TTU.MACLIB,DISP=SHR
    	//SYSUT1 DD DSN=&&SYSUT1,UNIT=VIO,SPACE=(1700,(600,100))
    	//SYSUT2 DD DSN=&&SYSUT2,UNIT=VIO,SPACE=(1700,(300,50))
    	//SYSUT3 DD DSN=&&SYSUT3,UNIT=VIO,SPACE=(1700,(300,50))

    JOB MANAGEMENT

    JES (Job Entry Subsystem)

    JES is a program that receives jobs into the system and processes all output data that is produced by the jobs.

    Job Entry Subsystem 2 (JES2) statements supply the necessary information to increase the efficiency of reading, scheduling, and printing jobs.

    JES2 statements are optional and immediately follow the JOB statement. JES2 statements have /* in the identifier field instead of //.

    Common JES2 statements
    /*MESSAGE - used to convey information to the system operator(s).
    /*JOBPARM - contains parameters that influence how the job is processed.
    /*OUTPUT - Used to manipulate an output dataset.
    /*ROUTE - specifies the destination of the printed output.
    /*XMIT - indicates a job or data stream to be transmitted to another JES2 node or eligible non-JES2 node.
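
    A job combining several of these statements might look like the sketch below; the job name, line limit, room number and remote destination are hypothetical values:

```jcl
//MYJOB    JOB CLASS=A,MSGCLASS=X
/*MESSAGE  NIGHTLY BILLING JOB SUBMITTED
/*JOBPARM  LINES=9999,ROOM=123
/*ROUTE    PRINT RMT5
//STEP1    EXEC PGM=IEFBR14
```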

    Submitting JES2 commands from a JOB
    	//STEP1 EXEC PGM=IEBGENER 
    	//SYSPRINT DD SYSOUT=*
    	//SYSIN DD DUMMY
    	//SYSUT2 DD SYSOUT=(*,INTRDR)
    	//SYSUT1 DD DATA,DLM='$$'
    	/*$jes2 command #1
    	/*$jes2 command #2
    	$$

    CA-7 Scheduler

    CA-7 is a workload management system that assists data center management in planning and improving the overall performance of the production environment. It maintains a database containing job and data set information, execution requirements, documentation, schedules and processing dates and times.

    CA-7 automatically submits jobs, enforces predecessor conditions, and tracks the progress of each job on MVS.

    CA-7 Commands

    	LACT                     - list all active jobs
    	LJCL,JOB=XXX             - list the JCL of a job
    	LJES,Q=PRN,JOB=XXX       - list prior run information
    	LJOB,JOB=XXX,LIST=NODD   - list basic information for job XXX
    	LJOB,LIST=RQMT,JOB=XXX   - list the requirements left for job XXX
    	LJOB,LIST=TRIG,JOB=XXX   - list jobs that trigger/get triggered by job XXX
    	LPRNN,LIST=ALL,JOB=XXX   - list the last date and time job XXX was executed
    	LQ                       - list the late queue
    	LQ,JOB=XXX               - display a job with requirements
    	LQ,SEQ=JOB               - list all active jobs, plus all jobs waiting to execute, in alphabetical order



    Job Initiators

    The job of the initiator is to process batch jobs. Batch jobs originate from workstation submissions and are placed on an input queue, based on job class and priority, where they remain until the scheduler determines that an initiator is available to process the job. The scheduler then directs the initiator to begin executing the job. Each initiator can process only one job at a time. Once a job has been selected, the initiator executes it, step by step, until the job completes, keeping the scheduler advised of its progress. Any output from the job is placed on the output queue, and the initiator becomes available to run another job.


    MVS system codes
    	001  I/O error - An I/O error condition occurred while processing BSAM, QSAM etc.
    	002  I/O Invalid Record encountered during a QSAM/BSAM GET/PUT/WRITE operation
    	004  OPEN ERROR
    	008  I/O SYNAD ERROR - The error occurred during processing of a SYNAD routine
    	013  OPEN ERROR
    	028  PAGING I/O ERROR
    	0CX  PROGRAM CHECK EXCEPTIONS: A program interruption occurred with no routine specified to handle this type of interruption
    		0C1 - Operation
    		0C4 - Protection/Addressing
    		0C5 - Addressing 
    		0C6 - Specification
    		0C7 - Data
    		0CB - Decimal Divide (Usually Divide by zero) exception
    	106  Error occurred during a link, load, attach or xctl macro instruction. Bad address in load module
    	122  Job cancelled by operator with a dump
    	213  Error reading DSCB or dataset not in the volume. Volume contained more than 16 extents of the dataset
    	214  Error during execution of close macro instruction on a tape dataset. Error reading user label on a tape dataset
    	222  Job cancelled by operator or authorised TSO user without requesting a dump
    	322  The job exceeded the time limit provided
    	522  TSO session was automatically cancelled due to inactivity
    	613  I/O error in positioning of tape. Invalid label read/write from tape
    	622  TSO session cancelled by the operator
    	637  Error occurred at an end-of-volume for data set on tape or an end-of-volume during	concatenation. Concatenation of data sets with unlike DCB attributes
    	706  Non-executable Program
    	737  Error occurred at end-of-volume or during allocation of secondary quantity for the dataset. For concatenated PDS datasets, a specified member was not found. Missing member name was detected by BLDL
    	804  Insufficient Virtual Storage. Error occurred during execution of EU, LU, or VU form of GETMAIN macro instruction
    	806  Unable to load, link program. Program not found. I/O error when BLDL routine attempted to search library
    	80A  Insufficient Virtual Storage. Error occurred during execution of an R-form GETMAIN macro instruction
    		Solution: increase the amount of memory available to the program by entering a '/*JOBPARM R=nnn' statement (where 'nnn' is the region (memory) estimate in KB (1024 bytes) units)
    	813  Error during execution of open macro for tape dataset
    	878  Insufficient Virtual Storage
    	913  Security violation on protected dataset
    	A14  I/O error
    	B14  Error during close operation on a PDS opened for output to a member. Duplicate name in PDS directory. No space left for PDS.
    	B37  Insufficient DASD Space. Error was detected by the end-of-volume routine. Dataset used up all the 16 extents
    	D37  Insufficient DASD space. Dataset used up all the primary space and no secondary space was requested
    	E37  Insufficient DASD space. No volumes were available

    TSO/ISPF

    TSO (Time Sharing Option) is a general-purpose service used to perform tasks in an interactive environment; i.e., users use a terminal, or a microcomputer functioning as a terminal, to connect to the TSO service. While connected, or logged on, users can issue commands and the computer responds to them.

    The Interactive System Productivity Facility/Program Development Facility (ISPF/PDF) is a component of TSO that facilitates interaction with the TSO service.

    TSO commands

    Language processing commands
  • asm - invoke assembler prompter and assembler
  • cobol - invoke cobol prompter and cobol compiler
  • pli - invoke pli optimizing compiler
  • plic - invoke pli checkout compiler

    Program control commands
  • call - load and execute the specified load module
  • link - invoke link prompter and linkage editor
  • loadgo - load and execute program
  • run - compile, load, and execute program
  • test - test user program

    Data management commands
  • allocate - allocate a data set
  • copy - copy a data set
  • delete - delete a data set
  • edit - create, edit, and/or execute a data set
  • format - format and print a text data set
  • free - release a data set
  • list - display a data set
  • listalc - display active data sets
  • listbc - display messages from operator/user
  • listcat - display user catalogued data sets
  • listds - display data set attributes
  • merge - combine data sets
  • protect - password protect data sets
  • rename - rename a data set

    System control commands
  • account - modify/add/delete user attributes
  • ispvcall - loads the call trace program. To end, type the same command again
  • operator - place terminal in operator mode

    Session Control
  • exec - invoke command procedure
  • help - invoke help processor
  • logoff/logon - end/start terminal session
  • profile - define user characteristics
  • send - send message to operator/user
  • terminal - define terminal characteristics
  • time - log session usage time
  • when - conditionally execute next command

    Foreground initiated background commands
  • cancel - cancel background job
  • output - direct output medium for background job
  • status - list status of background job
  • submit - submit background job

    Access Method Service Commands
  • alter - alter attributes in catalog entries
  • define - define user catalogs, data spaces, clusters, page spaces, nonvsam datasets, alias names and GDGs
  • export/import - move a cluster or user catalog entry from/into the system in which the command is executed
  • print - list all or part of an indexed sequential, sequential or VSAM dataset
  • repro - copy VSAM clusters, catalogs and non-vsam datasets
  • verify - verify end of file

    TSO/ISPF editor commands
    	delete all x	- deletes all lines excluded from view
    	delete all nx	- deletes all lines not excluded from view
    	exclude all or x all	- excludes all source lines from view
    	exclude all 'string' or x all 'string'	- excludes all lines containing the string from view
    	flip	- reverses the screen view by excluding all lines not excluded from view and bringing forward the excluded lines
    	recovery on
    	num
    	unnum
    	renum
    	reset
    	save
    	end
    	cancel
    	undo
    	find
    	change
    	copy (member)
    	create (member)
    	replace(member)
    	edit(member)
    	hex on/off
    	sort
    Line commands

    Line commands used in the ISPF editor include: a, b, bnds, c, cc, col, d, dd, i, lc, lcc, m, mm, o, oo, r, rr, uc, ucc, ), )), (, ((




    Job Control Language (JCL)

    JCL provides the means of communicating between an application program and the OS and computer hardware. JCL consists of control statements that introduce a computer job to the OS, request hardware devices, direct the OS in terms of running applications and scheduling resources.

    JCL statements syntax

  • Must begin with // (except for the /* statement) in columns 1 and 2
  • Must be coded in upper case (lower-case is simply not permitted)
  • name field - is optional and must begin in column 3 if used. Must code one or more blanks if omitted. It identifies the statement so that other statements or the system can refer to it. It can range from 1 to 8 characters in length, and can contain any alphanumeric or national (@ $ #) characters.
  • operation field - specifies the type of statement: JOB, EXEC, DD, or an operand command. Stands alone and must begin on or before column 16
  • operand field - contains parameters separated by commas and must end before column 72. Parameters are composites of prescribed words (keywords) and variables for which information must be substituted
  • comments field - optional. Comments can be extended through column 80, and can only be coded if there is an operand field
  • All fields, except for the operands, must be separated by one blank
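
    The field layout above can be illustrated with a small sketch; the step name, region size and comments are hypothetical, and IEFBR14 is the standard null program:

```jcl
//* name    operation operands                       comments
//PAYSTEP  EXEC PGM=IEFBR14,REGION=1024K             NULL STEP
//OUTDD    DD   SYSOUT=A                             PRINTED OUTPUT
```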


    JCL statements

    JOB - specifies the name of the job and its parameters
    EXEC - executes a program or a procedure within the job
    DD - specifies an input or output dataset

    Other statements include
  • //command - enters an MVS system operator command through the input stream; primarily used by the operator
  • //COMMAND - specifies an MVS or JES command that the system issues when the JCL is converted
  • //CNTL and //ENDCNTL - marks the beginning/end of one or more program control statements
  • IF/THEN/ELSE/ENDIF - specifies conditional execution of job steps within a job
  • //INCLUDE - identifies a PDS or a PS that contains JCL statements to include within the job stream
  • //JCLLIB - specifies the libraries the system will search for INCLUDE groups and procedures
  • //OUTPUT - processing options that JES is to use for printing a sysout dataset
  • //PROC and //PEND - beginning/end of an instream or catalogued procedure
  • //SET - defines and assigns initial values to symbolic parameters used when processing JCL statements
  • //XMIT - transmits input stream records from one node to another

    Operands in the JOB statement

  • USER - identifies the user executing the job to the system
  • TIME - Total machine minutes allowed for the job to execute
  • MSGCLASS - output class for the job log or JES (Job Entry Subsystem) messages. MSGCLASS=J is the default (8.5"-11" hole paper)
  • REGION - indicates the amount of storage to be allocated to the job. A value of 0K or 0M allows the program to request the largest available region size.
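
    Putting these operands together, a JOB statement might look like the following sketch; the accounting information, programmer name and user ID are hypothetical:

```jcl
//USERIDA  JOB (ACCT123),'J SMITH',USER=USERID,
//         TIME=5,MSGCLASS=X,REGION=0M
```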

    Operands in an EXEC statement

  • REGION - indicates the storage to be allocated to the step. A REGION= value on the JOB statement will override any REGION= specified on an EXEC statement in the job. So it is a better practice to specify the REGION parameter on the EXEC statements than on the JOB statement
  • COND - The COND parameter specifies whether or not a job step is to be executed, based on return codes (or ABENDs) from previous steps.
    	COND=(code,operator)
    	or
    	COND=abend-test
    code - a number from 0 through 4095. This number is compared with the return codes issued in all previous steps.
    operator - any one of GT,GE,EQ,LT,LE,NE. The operator is used to compare the code to the return code from each previous step. If the comparison is true, then the current step (with the COND parameter) is bypassed.
    abend-test - either EVEN or ONLY. EVEN specifies that the job step is to be executed even if a previous job step has ABENDed. ONLY specifies that the step is to be executed only if a previous job step has ABENDed.

  • PARM - Can be used to pass up to 100 characters of data to the program being executed.
  • TIME - Specifies the maximum amount of time that a job step may use the CPU. The total time for all steps may not exceed the time specified in the JOB statement
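
    A sketch of COND in use; the program names are hypothetical. Remember that the step carrying COND is bypassed when the test is true:

```jcl
//STEP1    EXEC PGM=PROG1
//* Bypass STEP2 if 4 is less than the return code of any previous
//* step - i.e. STEP2 runs only when all prior RCs are 4 or lower
//STEP2    EXEC PGM=PROG2,COND=(4,LT)
//* Run STEP3 only if a previous step has ABENDed
//STEP3    EXEC PGM=CLEANUP,COND=ONLY
```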


    The DD (Data Definition) statement

    	//ddname DD UNIT=unittype,DSN=userid.name,
    	//       DISP=(beginning,normal-end,abnormal-end),
    	//       SPACE=(TRK,(primary,secondary,directory)),
    	//       RECFM=xx,LRECL=yy,MGMTCLAS=retainx
    	
  • ddname - data definition name; a 1-8 character word of user's choice, must begin with a letter or $, @, #
  • UNIT = unittype - type of I/O device - disk, tape, etc. UNIT=SYSDA refers to the next available disk storage device
  • DSN=userid.name - Dataset name, can contain up to 44 characters including periods
  • MGMTCLAS - specifies the name of the Management Class which is a set of specifications for the way the storage occupied by the data set should be treated by SMS

    The DISP parameter

    The DISP parameter describes the current status of the data set (old, new, or to be modified) and directs the system on the disposition of the dataset (pass, keep, catalog, uncatalog or delete) either at the end of the step or if the step abnormally terminates. DISP is always required unless the data set is created and deleted in the same step.
    	//	DISP = (beginning, normal-termination, abnormal-termination)
    	
    	possible values:
    	beginning - NEW, OLD, SHR, MOD
    	normal-termination - CATLG, KEEP, PASS, DELETE, UNCATLG
    	abnormal-termination - DELETE, KEEP, CATLG, UNCATLG
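
    Three common DISP combinations, sketched with hypothetical dataset names:

```jcl
//* Create and catalog on success; delete if the step abends
//NEWFILE  DD DSN=USERID.MASTER.FILE,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(10,5))
//* Read an existing cataloged dataset, allowing shared access
//OLDFILE  DD DSN=USERID.INPUT.FILE,DISP=SHR
//* Append to an existing dataset, keeping it in all cases
//ADDFILE  DD DSN=USERID.LOG.FILE,DISP=(MOD,KEEP,KEEP)
```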

    The DCB parameter

    The following subparameters are part of the DCB parameter
    RECFM=xx specifies the record format and can be one or more of the following characters
    	F - fixed-length
    	V - variable-length
    	U - undefined-length
    	FB - fixed and blocked
    	FBA - fixed, blocked, with ANSI carriage control characters
    	VB - variable and blocked
    	VBA - variable, blocked, with ANSI carriage control characters
    LRECL=yy specifies the length of records
  • equal to the record length for fixed-length records
  • equal to the size of the largest record plus the 4 bytes describing the record's size for variable-length records
  • omit the LRECL for undefined records
  • LRECL can range from 1 to 32760 bytes

    BLKSIZE=zz specifies the blocksize if it is wished to block records
  • must be a multiple of LRECL for fixed-length records
  • must be equal to or greater than LRECL for variable-length records
  • must be as large as the longest block for undefined-length records
  • BLKSIZE can range from 1 to 32760 bytes

    BUFNO=n specifies the number of buffers to be assigned to the DCB

    DSORG=org indicates the dataset organization
  • PS - Physical sequential
  • DA - Direct
  • PO - Partitioned data set (PDS)
  • VS (for VSAM data sets) is not needed and should not be given

    OPTCD=code specifies optional services to be performed by the control program. Many codes are possible; four are of particular interest
  • B - disregard EOF (end of file) labels on tape; treats multi-volumes as a single data set
  • J - indicates the output contains Table Reference Characters (TRCs) to print with more than one character set
  • Q - translation to or from ASCII is required
  • Z - requests reduced error recovery for magnetic tape input; good for problem tapes
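
    The DCB subparameters can be combined on one DD statement; a sketch for a fixed-blocked print file (dataset name hypothetical; note BLKSIZE is a multiple of LRECL, as required for fixed-length records):

```jcl
//OUTFILE  DD DSN=USERID.REPORT.FILE,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(5,2)),
//            DCB=(RECFM=FBA,LRECL=133,BLKSIZE=13300,DSORG=PS)
```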


    Parameters for tape datasets

    The LABEL parameter
    LABEL=(seqno,labeltype,expdt or rtndt,IN/OUT)
    
    	seqno - the data set sequence number, specifies the relative position of the data set on a tape volume
    	SL - indicates that the tape has IBM standard labels
    	AL - ANSI labels
    	NL - no labels
    	IN - specifies that a data set is to be used for input only
    	OUT - specifies that a data set is to be used for output only
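
    As a sketch, a DD statement reading the second dataset on a standard-labeled tape, for input only (dataset name and volume serial are hypothetical):

```jcl
//TAPEIN   DD DSN=USERID.ARCHIVE.FILE,DISP=OLD,
//            UNIT=TAPE,VOL=SER=T12345,
//            LABEL=(2,SL,,IN)
```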
    Including input data as part of the JCL job stream

    By using either DD * or DD DATA. If the input data contains records with // in columns 1 and 2, then DD DATA (with a DLM delimiter) must be used, as below.
    	//USERIDA  JOB CLASS=A,MSGCLASS=X,MSGLEVEL=(1,1)
    	//STEP1   EXEC PGM=REVLMOD
    	//SYSUT2   DD  DSN=SYS2.LINKLIB,DISP=OLD
    	//SYSIN    DD  DATA,DLM=@$
    	  .............data ................
    	@$
    	//*
    Procedures

    Catalogued and instream procedures

    Procedures are combinations of JCL steps that are stored in a separate library so that they can be reused across multiple JCLs. Procs can be also coded instream within a JCL. Procs are called from the JCL using the EXEC statement.

    A procedure contains one or more steps; each step consists of an EXEC statement that identifies the program to be executed and DD statements that define the data sets used. The program requested on the EXEC statement must exist in the system or private library defined by a STEPLIB DD statement.

    A cataloged procedure must not contain JOB statements, delimiter statements, null statements, JOBLIB DD statements, or DD statements with * or DATA coded in the parameter field.

    Procedures are usually placed in a PDS which is referenced in a JCL by the JCLLIB statement that just follows the JOB statement. JCLLIB ORDER=(proclib1,proclib2.. ).

    	//UPDATE   PROC  CORE=,INPUT=,VOLI=,OUTPUT=,VOLO=
    	//UPDATE1  EXEC  PGM=WEEKLY,REGION=&CORE
    	//INPUT    DD  DSN=&INPUT,UNIT=TAPE,VOL=SER=&VOLI,
    	//         DISP=OLD
    	//OUTPUT   DD  DSN=&OUTPUT,UNIT=TAPE,VOL=SER=&VOLO,
    	//         DISP=(,KEEP)
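    The UPDATE procedure above could then be invoked from a job, supplying values for its symbolic parameters; the dataset names and volume serials below are hypothetical:

```jcl
//RUNUPD   EXEC UPDATE,CORE=512K,
//         INPUT='USERID.WEEKLY.IN',VOLI=T00001,
//         OUTPUT='USERID.WEEKLY.OUT',VOLO=T00002
```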
    Instream procedures are mostly used for testing catalogued procedures and in setting up a set of JCL for repeated use during a single job.

    An instream procedure is subject to the same restrictions as a regular procedure - it must not contain JOB statements, delimiter statements, null statements, JOBLIB DD statements, or DD statements with * or DATA coded in the operand field. Instream procs are ended by a PEND statement.
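
    A sketch of an instream procedure, defined once and executed twice with different symbolic values (job and dataset names hypothetical; IEBGENER is the standard copy utility):

```jcl
//MYJOB    JOB CLASS=A,MSGCLASS=X
//COPYPRC  PROC IN=
//COPY     EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=&IN,DISP=SHR
//SYSUT2   DD SYSOUT=*
//         PEND
//STEP1    EXEC COPYPRC,IN=USERID.FILE1
//STEP2    EXEC COPYPRC,IN=USERID.FILE2
```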

    Back


    Datasets

    Partitioned and Sequential Datasets

  • Partitioned Datasets (PDS) consist of multiple files (members) within a single data structure and are created by specifying directory blocks in the SPACE parameter of the DD statement, i.e. SPACE=(TRK,(primary,secondary,directory-blocks)). Usually 1 directory block can hold 5 members.
  • Sequential Datasets (PS) are created by specifying no directory blocks in the SPACE parameter, i.e. SPACE=(TRK,(primary,secondary))

    Generation Data Group (GDG)

    Consists of a set of related datasets (generations) plus a catalog structure (base) to keep track of the datasets. Generations of a GDG can have like or unlike DCB attributes and dataset organizations. They can be on disk or on tape, or mixed. They must all be cataloged. Other than the way they are named and tracked, generation data sets are like any other datasets.

    	//  EXEC PGM=IDCAMS
    	//SYSIN  DD *
    	   DEFINE GENERATIONDATAGROUP	-
    		(NAME  (MAB.GDGTEST)  LIMIT (20)  SCRATCH NOEMPTY)
    	/*
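    In JCL, generations are then referenced relatively: MAB.GDGTEST(0) is the current generation, MAB.GDGTEST(-1) the previous one, and MAB.GDGTEST(+1) allocates a new one; the system resolves these to absolute GxxxxV00 names. A minimal Python sketch of that resolution (illustrative only; the function name is hypothetical):

```python
# Sketch of GDG relative-generation resolution. Generation data sets get
# absolute names like MAB.GDGTEST.G0005V00; JCL refers to them relatively:
# (0) = current generation, (-1) = previous, (+1) = a new generation.
def resolve_generation(base: str, current: int, relative: int) -> str:
    """Map a relative generation number to an absolute dataset name."""
    absolute = current + relative
    return f"{base}.G{absolute:04d}V00"

assert resolve_generation("MAB.GDGTEST", 5, 0) == "MAB.GDGTEST.G0005V00"
assert resolve_generation("MAB.GDGTEST", 5, -1) == "MAB.GDGTEST.G0004V00"
assert resolve_generation("MAB.GDGTEST", 5, +1) == "MAB.GDGTEST.G0006V00"
```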

    Temporary datasets

    Temporary data sets are used for storage needed only for the duration of the job. If the DISP parameter doesn't delete the data set by the end of the job, the system will delete it. Deleting a tape data set dismounts the tape, whereas deleting a dataset on a DASD volume releases the storage.

    A data set is marked temporary by omitting the DSN parameter or by coding DSN=&&dsname. The system assigns a unique name to the data set when the DSN parameter is omitted, and any subsequent steps using the dataset refer back to the DD statement.

    Back


    Utilities

    Icegener

    Used to copy PS datasets or PDS members and to produce an edited sequential dataset or PDS. ICEGENER is DFSORT's faster equivalent of the IEBGENER utility and accepts the same JCL.

    Submit another job from within a JCL

    ICEGENER output is directed to the internal reader, which takes the input and sends it to JES2 (or JES3) so it can be processed. (In ISPF, when a JCL is submitted it is likewise picked up by the internal reader and sent to JES.)
    	//STEP050 EXEC PGM=ICEGENER
    	//SYSPRINT DD SYSOUT=*
    	//SYSUT1 DD DSN=SYSPDA.JCLLIB(JCL2),DISP=SHR
    	//SYSUT2 DD SYSOUT=(A,INTRDR)
    	//SYSIN DD DUMMY
    The above step reads and submits the contents of SYSPDA.JCLLIB(JCL2). This technique can be used to simulate a job scheduler: when the first piece of JCL has finished processing, an IEBGENER (or ICEGENER) step could be added to submit the next job in the sequence.

    Another reason to use this technique might be to add the contents of a dataset to a job as instream data.

    	//STEP070 EXEC PGM=IEBGENER
    	//SYSPRINT DD SYSOUT=*
    	//SYSIN DD DUMMY
    	//SYSUT2 DD SYSOUT=(A,INTRDR)
    	//*
    	//SYSUT1 DD *,DLM=##
    	//SYSPDAJ2 JOB (XYX000),CLASS=A,MSGCLASS=B,MSGLEVEL=(1,1),
    	// NOTIFY=SYSPDA
    	//*
    	//STEP010 EXEC PGM=IEBGENER
    	//SYSPRINT DD SYSOUT=*
    	//SYSIN DD DUMMY
    	//SYSUT2 DD SYSOUT=P
    	//SYSUT1 DD *
    	##
    	// DD DSN=PROD.CLIENT.DATA,DISP=SHR
    	//*
    In the above example, the instream JCL (from the //SYSPDAJ2 JOB statement through the //SYSUT1 DD * statement) is read, and the contents of the dataset PROD.CLIENT.DATA are appended to it before submission.

    When the JCL is included as instream data, as above, it is necessary to use the DLM (Delimiter) to indicate that what follows is data and where the data ends.


    DFSORT

    DFSORT is a program used to sort, merge or copy information.

    Datasets used in sort

  • SORTIN - Input dataset for a sort application
  • SORTIN01-SORTIN99 - Input datasets for a merge
  • SORTWK01-SORTWK32 - Intermediate storage datasets for a sort
  • SORTOUT - Output dataset for a sort/merge
  • SORTOFxx - Used for multiple output datasets created using INCLUDE and/or OMIT statements, xx = valid alphanumeric characters
  • SYSIN - Input file for sort/merge control statements
  • $ORTPARM - Used to pass SYNCSORT parameters in invoked sorts or in IEBGENER (BTRGENER) applications

    The SORT statement format

    	SORT FIELDS=(s,l,t,w),<SKIPREC=n>,
    			<EQUALS>,
    			<NOEQUALS>
    s is the starting position on the file.
    l is the length of the field.
    t is the type of field, CH-character, ZD-zoned decimal, PD-packed decimal, BI-binary
    w is set to either A or D to signify whether the sort is in Ascending order or Descending Order.
    Multiple sort fields can be specified on the same statement by specifying s,l,t,w multiple times, separating each with commas.
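    As an illustration of the semantics (this is not DFSORT itself; the function name is hypothetical), a Python sketch of a multi-key sort over fixed-width character records, applying each s,l,w key in order of significance:

```python
# Illustrative sketch of SORT FIELDS=(s,l,t,w) semantics for character
# keys: sort fixed-width records on multiple (start, length, order) keys.
# Positions are 1-based, as in DFSORT control statements.
def sort_records(records, keys):
    """keys: list of (start, length, 'A' or 'D'), most significant first."""
    result = list(records)
    # Apply keys from least to most significant; Python's sort is stable,
    # so earlier (more significant) keys dominate after the final pass.
    for start, length, order in reversed(keys):
        result.sort(key=lambda r: r[start - 1:start - 1 + length],
                    reverse=(order == 'D'))
    return result

recs = ["BBB2", "AAA1", "AAA3", "BBB1"]
# Like SORT FIELDS=(1,3,CH,A,4,1,CH,D): cols 1-3 ascending, col 4 descending
out = sort_records(recs, [(1, 3, 'A'), (4, 1, 'D')])
assert out == ["AAA3", "AAA1", "BBB2", "BBB1"]
```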


    Field types
  • ZD (Zoned Decimal) - A method of representing decimal numbers by using one byte to represent each digit. The last byte represents both the last digit and the number's sign. Each byte consists of a decimal digit on the right and the zone code 1111 (F hex) on the left, except for the rightmost byte where the sign code replaces the zone code.
  • PD (Packed Decimal) - This representation stores decimal digits in each nibble of a byte. On IBM mainframes, the sign is indicated by the last nibble. A, C, E, and F indicate positive values, and B and D indicate negative values.
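    The two encodings can be sketched in Python (a hedged illustration, assuming EBCDIC digit bytes X'F0'-X'F9' and the conventional C/D sign nibbles; the function names are hypothetical):

```python
def to_zoned(n: int) -> bytes:
    """Encode an int as EBCDIC zoned decimal (sign in the last byte's zone)."""
    digits = [int(d) for d in str(abs(n))]
    body = bytes(0xF0 | d for d in digits[:-1])   # F-zone on all but last digit
    sign = 0xC0 if n >= 0 else 0xD0               # sign replaces the last zone
    return body + bytes([sign | digits[-1]])

def to_packed(n: int) -> bytes:
    """Encode an int as packed decimal (two digits per byte, sign nibble last)."""
    s = str(abs(n))
    if len(s) % 2 == 0:                           # pad so digits+sign fill bytes
        s = "0" + s
    nibbles = [int(d) for d in s] + [0xC if n >= 0 else 0xD]
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

assert to_zoned(123) == b"\xF1\xF2\xC3"    # +123 zoned
assert to_zoned(-123) == b"\xF1\xF2\xD3"   # -123 zoned
assert to_packed(123) == b"\x12\x3C"       # +123 packed
assert to_packed(-123) == b"\x12\x3D"      # -123 packed
```

    Note how packed decimal stores the same number in fewer bytes, which is why it is the usual choice for arithmetic fields.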

    Examples of SORT
    	SORT FIELDS=(1,10,CH,A)
    	SORT FIELDS=(5,5,CH,A,40,4,PD,A)
    	SORT FIELDS=(50,5,BI,A,15,4,PD,A,12,2,ZD,D)
    	SORT FIELDS=COPY	- the file will be copied without being sorted
    	SORT FIELDS=(1,5,A,50,10,A),FORMAT=CH	- used when all sort fields are of same type
    The MERGE Statement

    Merge is generally used to add records from one dataset to another. Merge allows up to 16 files to be merged; the files must have the same record format and must already be sorted. The MERGE statement fields are specified in exactly the same way as on the SORT statement.
    	MERGE FIELDS=(s,l,t,w)
    FORMAT and FIELDS=COPY can also be specified like SORT.
    	MERGE FIELDS=(5,10,CH,A)
    	MERGE FIELDS=(10,5,ZD,D,34,4,PD,A)
    	MERGE FIELDS=(3,5,A,17,3,A),FORMAT=CH
    SUM FIELDS=(s,l,t) - Summarizes (consolidates) records that have equal control fields (sort keys). SUM FIELDS=NONE removes duplicates without summarizing.

    	SUM FIELDS=(21,5,PD,58,3,ZD)
    
    	SORT FIELDS=(1,18,A),FORMAT=CH
    	SUM FIELDS=NONE       ----> all duplicates in columns 1 to 18 will be removed
    
    	INREC FIELDS=(328,1,125,11)
    	SORT FIELDS=(1,1,CH,A)
    	SUM FIELDS=(2,11,ZD)  ---- summarize on a ZD column
    	OUTREC FIELDS=(1,1,2:2,11,ZD,EDIT=(STTTTTTTTTTT),SIGNS=(,-))
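    The effect of SUM FIELDS=NONE can be sketched in Python (illustrative only, not DFSORT; the function name is hypothetical): after sorting, only the first record of each key value survives.

```python
# Sketch of SUM FIELDS=NONE: records with equal sort keys are consolidated,
# and only the first record of each key value is kept.
def sum_none(records, key_start, key_len):
    """Sort on the key, then keep the first record for each key value."""
    key = lambda r: r[key_start - 1:key_start - 1 + key_len]
    seen, out = set(), []
    for rec in sorted(records, key=key):          # sorted() is stable
        if key(rec) not in seen:
            seen.add(key(rec))
            out.append(rec)
    return out

recs = ["A1x", "B2y", "A1z", "C3w"]
assert sum_none(recs, 1, 2) == ["A1x", "B2y", "C3w"]   # A1z dropped as duplicate
```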
    Control statements/Options

  • SKIPREC

    SKIPREC=n instructs the sort to skip 'n' records before sorting the input file.
    	SORT FIELDS=(18,9,CH,A),SKIPREC=5
  • EQUALS/NOEQUALS

    When specified, EQUALS preserves the original order of records that contain equal control fields. For example, using the EQUALS option when an alphabetical listing of names is sorted by zip code, the output keeps alphabetical order intact within each zip code. The EQUALS option decreases sort efficiency slightly and should therefore be used only when necessary.
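    The zip-code example above can be modeled directly in Python, because sorted() is a stable sort and therefore behaves like a sort with EQUALS in effect (illustrative data only):

```python
# EQUALS preserves input order among records with equal keys. Python's
# sorted() is stable, so it models the EQUALS behavior directly.
names = [("Smith", "32611"), ("Adams", "32611"), ("Baker", "10001")]
by_zip = sorted(names, key=lambda rec: rec[1])  # sort on zip code only
# Within zip 32611, Smith still precedes Adams, as with EQUALS.
assert by_zip == [("Baker", "10001"), ("Smith", "32611"), ("Adams", "32611")]
```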

  • OUTFIL

    OUTFIL splits the sort output into multiple output files. Both methods below use the OUTFIL keyword and are identical except in how the DDNAME is specified.

    Method 1 allows DDNAMEs to be specified. The sort file will be split into 2 files, CUST1 and CUST2, depending on whether the first character is A or B. CUST1 and CUST2 must be defined as DDNAMES in the JCL.
    	SORT FIELDS=(1,10,CH,A)
    	OUTFIL FNAMES=CUST1,INCLUDE=(1,1,CH,EQ,C'A')
    	OUTFIL FNAMES=CUST2,INCLUDE=(1,1,CH,EQ,C'B')
    Method 2 uses DDNAMEs defined by DFSORT in the form SORTOFxx. The sort file is split into 2 files, SORTOF1 and SORTOF2 which must be defined as DDNAMEs in the JCL.
    	 SORT FIELDS=(1,10,CH,A)
    	 OUTFIL FILES=1,INCLUDE=(1,1,CH,EQ,C'A')
    	 OUTFIL FILES=2,INCLUDE=(1,1,CH,EQ,C'B')
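    What both methods do can be sketched in Python (illustrative only; the function name is hypothetical): each record is routed to every output whose INCLUDE condition it satisfies.

```python
# Sketch of OUTFIL INCLUDE splitting: route each record to an output
# "file" based on a predicate, as the two OUTFIL methods above do.
def outfil_split(records, rules):
    """rules: {output_name: predicate}; a record may land in several outputs."""
    outputs = {name: [] for name in rules}
    for rec in records:
        for name, keep in rules.items():
            if keep(rec):
                outputs[name].append(rec)
    return outputs

recs = ["A-cust1", "B-cust2", "A-cust3", "C-cust4"]
split = outfil_split(recs, {
    "CUST1": lambda r: r[0] == "A",   # like INCLUDE=(1,1,CH,EQ,C'A')
    "CUST2": lambda r: r[0] == "B",   # like INCLUDE=(1,1,CH,EQ,C'B')
})
assert split["CUST1"] == ["A-cust1", "A-cust3"]
assert split["CUST2"] == ["B-cust2"]     # the C record matches neither output
```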
  • INREC/OUTREC

    The layout of a record can be changed both before and after it is sorted using the INREC and OUTREC keywords by specifying the fields to be included in the new record. INREC reformats the record layout before it is passed through the sort, while OUTREC will reformat the record layout after the record has been sorted.
    	SORT FIELDS=(1,10,CH,A)
    	INREC FIELDS=(1,10,50,4,40,4)
    
    	SORT FIELDS=(1,10,CH,A)
    	OUTREC FIELDS=(1,10,50,4,40,4)
    
    	SORT FIELDS=(1,10,CH,A)
    	OUTREC FIELDS=(1:1,10,11:50,4,15:40,4)
    All three examples reformat the record so that it consists of the first 10 bytes of the input record, followed by the 4 bytes starting at position 50, followed by the 4 bytes starting at position 40. The third example uses the c: notation to state the output column of each field explicitly. (Note that with INREC the reformatting happens before the sort, so the SORT fields refer to positions in the reformatted record.)
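    The reformatting itself is just a concatenation of (position, length) slices of the input record, which can be sketched in Python (illustrative only; the function name is hypothetical):

```python
# Sketch of OUTREC-style reformatting: build a new record from
# (position, length) pieces of the input record (positions are 1-based).
def reformat(record, fields):
    return "".join(record[p - 1:p - 1 + l] for p, l in fields)

rec = "".join(chr(ord('A') + i % 26) for i in range(60))  # a 60-byte record
new = reformat(rec, [(1, 10), (50, 4), (40, 4)])
assert new == rec[0:10] + rec[49:53] + rec[39:43]
assert len(new) == 18   # 10 + 4 + 4 bytes in the reformatted record
```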

  • STOPAFT
    	SORT FIELDS=COPY,STOPAFT=100
    	OUTREC FIELDS=(1,80)	-- copies the first 100 records (80 cols) of a file to another file
  • INCLUDE/OMIT COND
        SORT FIELDS=COPY
        INCLUDE COND=(1,6,CH,EQ,C'JACKIE')
    
        SORT FIELDS=COPY
        OMIT COND=(41,3,CH,EQ,X'000000')
    IceTool Control statements

    * Put duplicates in DUPS and non-duplicates in NODUPS
    SELECT FROM(DATA) TO(DUPS) ON(5,8,CH) ALLDUPS DISCARD(NODUPS)
    * Put records with 5 occurrences (of the key) in EQ5
    SELECT FROM(DATA) TO(EQ5) ON(5,8,CH) EQUAL(5)
    * Put records with more than 3 occurrences (of the key) in GT3, and records with 3 or less occurrences in LE3.
    SELECT FROM(DATA) TO(GT3) ON(5,8,CH) HIGHER(3) DISCARD(LE3)
    * Put records with 9 or more occurrences in OUT2.
    SELECT FROM(DATA) ON(5,8,CH) LOWER(9) DISCARD(OUT2)
    * Put last of each set of duplicates in DUP1
    SELECT FROM(DATA) TO(DUP1) ON(5,8,CH) LASTDUP
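    The first SELECT above (ALLDUPS with DISCARD) can be sketched in Python (illustrative only; the function name is hypothetical): records whose key occurs more than once go to one output, the rest to another.

```python
# Sketch of ICETOOL SELECT ... ALLDUPS DISCARD(...): split records into
# those whose key occurs more than once (DUPS) and the rest (NODUPS).
from collections import Counter

def select_alldups(records, key_start, key_len):
    key = lambda r: r[key_start - 1:key_start - 1 + key_len]
    counts = Counter(key(r) for r in records)
    dups = [r for r in records if counts[key(r)] > 1]
    nodups = [r for r in records if counts[key(r)] == 1]
    return dups, nodups

recs = ["K1-a", "K2-b", "K1-c", "K3-d"]
dups, nodups = select_alldups(recs, 1, 2)
assert dups == ["K1-a", "K1-c"]
assert nodups == ["K2-b", "K3-d"]
```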


    SuperC

    SuperC is a fast and versatile program that can compare
  • two sequential data sets
  • two complete partitioned data sets
  • members of two partitioned data sets
  • concatenated data sets
    	//COMPARE  EXEC PGM=ISRSUPC,PARM=('LINECMP,CHNGL,UPDCNTL')
    	//STEPLIB  DD   DSN=ISPF.LOAD,DISP=SHR
    	//NEWDD    DD   DSN=SAMPLE.PDS(TEST1),DISP=SHR
    	//OLDDD    DD   DSN=SAMPLE.PDS(TEST2),DISP=SHR
    	//OUTDD    DD   SYSOUT=A
    	//DELDD    DD   DSN=SAMPLE.UCTL1,DISP=OLD
    	//SYSIN    DD   *
    		CMPCOLM 2:72
    	/*
    Process statements in SuperC

    Process statements are made up of a keyword followed by an operand or operands and are passed to SuperC in an input file.
  • CMPCOLM - compares only the specified columns (e.g. CMPCOLM 1:70 72 74 or CMPCOLM 1:70,72,74). The variations CMPCOLMN and CMPCOLMO can be used to specify columns separately for the new and old files respectively
  • CMPLINE - compares lines within the specified limits only. e.g CMPLINE TOP 55 BTM 99
  • DPLINE - causes a line containing a specific set of characters to be dropped from the comparison, e.g. DPLINE '$' excludes all lines containing the $ character
  • NCHGT/OCHGT - change a set of characters in the new/old datasets prior to the comparison to temporarily mask a compare difference

    Examples:
    	NCHGT 'ABC2', 'ABC1'         - Replace all 'ABC2' strings with 'ABC1'
    	OCHGT 'ABCD', 'PRQS', 1:50   - Replace within columns 1 to 50 only
    	NCHGT X'7B01',':1',6         - Change hexadecimal characters '7B01' to ':1'. Must start in column 6
    	NCHGT 'PREF???', 'NPREF'     - Change strings with prefix 'PREF' followed by any three characters to 'NPREF'.
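    The masking idea behind NCHGT/OCHGT can be sketched in Python (illustrative only, not SuperC; the function name is hypothetical): substitutions are applied to each side before the lines are compared, so known differences do not show up as mismatches.

```python
# Sketch of NCHGT/OCHGT masking: apply string substitutions to the new
# and old lines before comparing them, temporarily masking differences.
def masked_equal(new_line, old_line, nchgt=(), ochgt=()):
    for old, new in nchgt:                 # changes applied to the new file
        new_line = new_line.replace(old, new)
    for old, new in ochgt:                 # changes applied to the old file
        old_line = old_line.replace(old, new)
    return new_line == old_line

# NCHGT 'ABC2','ABC1' masks the version suffix before the compare
assert masked_equal("CALL ABC2", "CALL ABC1", nchgt=[("ABC2", "ABC1")])
assert not masked_equal("CALL XYZ9", "CALL ABC1", nchgt=[("ABC2", "ABC1")])
```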

    Back



    CICS

    CICS (Customer Information Control System) is a teleprocessing monitor from IBM that was originally developed to provide transaction processing for IBM mainframes. It controls the interaction between applications and users and lets programmers develop screen displays without detailed knowledge of the terminals being used.

    CICS is also available on non-mainframe platforms including the RS/6000, AS/400 and OS/2-based PCs.

    Translation

    Because the compilers (and assemblers) cannot process CICS commands directly, an additional step is needed to convert the program into executable code. This step is called translation, and consists of converting CICS commands into the language in which the rest of the program is coded.

    CICS provides a translator program for each language, to handle both EXEC CICS and EXEC DLI statements. There are three steps: translation, compilation (assembly) and link-edit.

    Note: For EXEC SQL, additional steps are needed to translate the SQL statements and bind.

    Commarea

    The Commarea (Communications area) is used to pass user data between programs, and DFHCOMMAREA is its required symbolic name. If DFHCOMMAREA is not specified in the Linkage section, a one-byte DFHCOMMAREA is inserted at translation time. (The translator also inserts USING DFHCOMMAREA after the Procedure Division statement.)

    The maximum size of DFHCOMMAREA is 32K and it is recommended by IBM that it not exceed 24K.

    The length of the Commarea is returned to the program in the variable EIBCALEN.
    	LINKAGE SECTION.
    	01  DFHCOMMAREA.
    		05  DFH-COMM-AREA	PIC X(2000).
    Conversational and pseudo-conversational Programs

    In conversational mode, the program sends a message to the terminal, then waits for the user to respond, sitting idle and holding its resources until the response is received from the terminal.

    A pseudo-conversational program attempts a conversation with a terminal user and terminates the task after sending a message with a linkage for the next task. When the user completes the response the next task is automatically initiated. Pseudo-conversational programs use the CICS resources such as control tables efficiently.

    Reentrant and Quasi-reentrant Programs

    A reentrant program is one that does not modify itself, so that it can be re-entered and continue processing after an interruption by the OS; during the interruption the OS may execute other tasks, including other tasks running the same program.

    A quasi-reentrant program is the CICS equivalent: a CICS program that does not modify itself and so can be re-entered and continue processing after an interruption by CICS. Under CICS, program interruption and re-entry occur not at an SVC but at a CICS command, which may involve many SVCs or none. Quasi-reentrant is thus a CICS/multithreading term, distinguished from reentrant, which is an OS/multithreading term.

    In order to maintain quasi-reentrancy, a CICS program must do the following
  • Only Constants are defined in the ordinary data area (WS section). These constants are never modified or shared by tasks.
  • A unique storage area - Dynamic Working Storage (DWS) - is acquired by the program dynamically for each task by issuing the CICS macro equivalent GETMAIN. All variables will be placed in this DWS for each task. All counters are initialized after the DWS has been acquired.
  • The program must not alter the program itself. If it alters itself after a CICS macro or command, it must restore the alteration before the subsequent CICS macro or command.

    EXEC Interface stubs

    Each CICS application program must contain an interface to CICS. This takes the form of an EXEC interface stub, which is a function-dependent piece of code used by the CICS high-level programming interface. The stub, normally provided in the SDFHLOAD library, must be link-edited with the application program to provide communication between the code and the CICS EXEC interface program, DFHEIP. These stubs are invoked during execution of EXEC CICS and EXEC DLI commands.

    Execute Interface Block (EIB)

    The EIB lets the program communicate with the execute interface program, which processes CICS commands. It contains system information like the terminal id, time of day and response codes that can be used but not changed by the program. The EIB is copied into the program when it is compiled. The variables are populated by the system when the program runs.

    Variables in EIB

  • EIBAID - holds the AID key last pressed
  • EIBCALEN - contains the length of the COMMAREA; EIBCALEN = 0 denotes no COMMAREA
  • EIBCPOSN - gives the position of the cursor on the screen (binary halfword)
  • EIBDATE - gives the task date (00YYDDD)
  • EIBTIME - gives the task time (0HHMMSS)
  • EIBTRNID - transaction ID of the task
  • EIBTRMID - terminal ID of the task


    TSQ and TDQ

    These are special areas of storage that can be used to keep information and pass it between programs. Temporary Storage Queues (TS Queues) can be accessed either sequentially or randomly, while Transient Data Queues (TD Queues) can only be accessed sequentially. In addition, TDQs must be declared to CICS before use.


    Main tables used by CICS

  • System Initialization Table (SIT) - contains information necessary for CICS system initialization
  • Terminal Control Table (TCT) - identifies devices defined to CICS
  • File Control Table (FCT) - registers the control information of all files used under CICS: the name, type and valid operations of each file, and whether its records can be read sequentially or randomly, deleted or modified
  • Journal Control Table (JCT) - describes the system and user logs in use
  • Program Control Table (PCT) - lists the transactions to be executed by CICS
  • Processing Program Table (PPT) - all CICS application programs and BMS mapsets are registered here
  • Resource Control Table (RCT) - Controls the CICS-DB2 interface


    CICS MAPS

    Basic Mapping Support (BMS)

    The objective of BMS is to free the application program from device-dependent code and formatting.

    A screen defined through BMS is called a Map. A group of maps that are link-edited together is called a Mapset.

    There are two types of maps - Physical and Symbolic.

    Physical and Symbolic Maps

    A Physical map is an assembly language program, created and placed in a load (program) library. It controls screen alignment and the sending and receiving of constants and data to and from the terminal, and it carries the terminal information.

    Sample Physical Map:
    	*	NESA MAP.  USED IN PROGRAM DCNESAMP.
    	DCNESAS DFHMSD TYPE=MAP,MODE=INOUT,LANG=COBOL,			+
    				TIOAPFX=YES,STORAGE=AUTO
    	DCNESAM DFHMDI SIZE=(24,80),CTRL=(FREEKB,PRINT)
    	END     DFHMDF POS=(1,1),ATTRB=(UNPROT,NORM,IC),LENGTH=3
    			DFHMDF POS=(1,5),ATTRB=(ASKIP,DRK),LENGTH=1
    			DFHMDF POS=(1,25),ATTRB=(ASKIP,NORM),LENGTH=22,	+
    				   INITIAL='NERDC NESA Transaction'
    			DFHMDF POS=(5,7),ATTRB=(ASKIP,NORM),LENGTH=5,	+
    				   INITIAL='Date:'
    	DATE    DFHMDF POS=(5,20),ATTRB=(ASKIP,NORM),LENGTH=8
    	DATEJ   DFHMDF POS=(6,20),ATTRB=(ASKIP,NORM),LENGTH=5
    			DFHMDF POS=(7,7),ATTRB=(ASKIP,NORM),LENGTH=5,	+
    				   INITIAL='Time:'
    	TIME    DFHMDF POS=(7,20),ATTRB=(ASKIP,NORM),LENGTH=8
    			DFHMDF POS=(8,7),ATTRB=(ASKIP,NORM),LENGTH=7,	+
    				   INITIAL='Termid:'
    	TERMID  DFHMDF POS=(8,20),ATTRB=(ASKIP,NORM),LENGTH=4
    			DFHMDF POS=(9,7),ATTRB=(ASKIP,NORM),LENGTH=9,	+
    				   INITIAL='Last Key:'
    	KEY     DFHMDF POS=(9,20),ATTRB=(ASKIP,NORM),LENGTH=5
    			DFHMDF POS=(22,1),ATTRB=(ASKIP,NORM),LENGTH=22,	+
    				   INITIAL='Enter END to Terminate'
    			DFHMSD TYPE=FINAL
    			END
    A Symbolic Map defines the map fields used to store the variable data referenced in a COBOL program. Symbolic maps may be placed by BMS into a copy library and added to the COBOL program at compile time.

    Sample Symbolic map:
    	01  SX420MI.
    		02  FILLER PIC X(12).
    		02  DATEL    COMP  PIC S9(4).
    		02  DATEF    PICTURE X.
    		02  FILLER REDEFINES DATEF.
    			03 DATEA    PICTURE X.
    		02  DATEI  PIC X(8).
    		02  TERMIDL    COMP  PIC S9(4).
    		02  TERMIDF    PICTURE X.
    		02  FILLER REDEFINES TERMIDF.
    			03 TERMIDA    PICTURE X.
    		02  TERMIDI  PIC X(4).
    		02  TIMEL    COMP  PIC S9(4).
    		02  TIMEF    PICTURE X.
    		02  FILLER REDEFINES TIMEF.
    			03 TIMEA    PICTURE X.
    		02  TIMEI  PIC X(8).
    		02  STATEL    COMP  PIC S9(4).
    		02  STATEF    PICTURE X.
    		02  FILLER REDEFINES STATEF.
    			03 STATEA    PICTURE X.
    		02  STATEI  PIC X(2).
    BMS Macros used to generate Maps

    DFHMSD - the mapset definition macro, used to define a mapset.
    DFHMDI - the map definition macro, used to define a map within a mapset.
    DFHMDF - the field definition macro, used to define a field in the map.

    The DFHMSD Macro

    	MAPSN DFHMSD TYPE=DSECT,		X
    		 CTRL=FREEKB,DATA=FIELD,LANG=COBOL,	X
    		 MODE=INOUT,TERM=3270,TIOAPFX=YES
    
    	MAPSN - the name of the mapset to be created
    	TYPE= - whether a copybook member is to be generated (TYPE=DSECT) or an object library member is to be created (TYPE=MAP)
    	CTRL= - the characteristics of the 3270 terminal
    	DATA=FIELD - specifies that data is passed as contiguous fields
    	LANG=COBOL - the source language for generating the copy library member
    	MODE=INOUT - the mapset is to be used for both input and output
    	TERM= - the terminal type associated with the mapset
    	TIOAPFX=YES - fillers should be included in the generated copy library member
    The DFHMDI Macro

    	MAPNM DFHMDI COLUMN=1,DATA=FIELD,		X
    		     JUSTIFY=(LEFT,FIRST),LINE=1,	X
    		     SIZE=(24,80)
    
    	MAPMN - the name of the map
    	COLUMN=1,LINE=1 and JUSTIFY=(LEFT,FIRST) - establish the position of the map on the page
    	DATA=FIELD - data is passed as a contiguous stream
    The DFHMDF Macro

    	FNAME  DFHMDF POS=(1,5),LENGTH=10,		X
    		  ATTRB=(UNPROT,BRT,FSET),		X
    		  INITIAL='XXXXXXXXXX',PICIN='X(10)',	X
    			  PICOUT='X(10)',COLOR=RED
    	*
    	DOB    DFHMDF POS=(2,5),LENGTH=8,		X
    		  ATTRB=(UNPROT,NORM,NUM,ASKIP),	X
    		  INITIAL='00000000',PICOUT='9(8)'
    	 
    	First in the definition is the field name ("FNAME" and "DOB") followed by the macro DFHMDF
    	POS=(x,y) the line/column where field is to be placed
    	LENGTH= length of the field to be generated
    	ATTRB= list of attributes for the field. UNPROT means data can be typed in the field, BRT means the field intensity is
    			BRighT, and NUM means the field is numeric only
    	INITIAL= initial value for the field
    	PICIN= and PICOUT= specify a picture clause for the field. This allows editing characters such as Z to suppress leading zeros
    	COLOR= colour of the field. MAPATTS=COLOR must be specified on the DFHMDI macro to use the COLOR option
    Once all the fields to be included on the map have been specified (the maximum number of fields is 1023), a final DFHMSD macro with the operand TYPE=FINAL is specified to indicate the end of the mapset.
    	DFHMSD TYPE=FINAL
    MDT
    The MDT or modified data tag is the last bit in the attribute byte for each screen field. It indicates whether the corresponding field has been changed.

    Back


    CICS APPLICATION PROGRAMMING

    CICS COBOL considerations

  • The Environment division is empty in a CICS program. No select statements are allowed.
  • The Data division does not have a file section.

    File Handling

    CICS file control offers access to data sets that are managed by
  • Virtual storage access method (VSAM)
  • Basic direct access method (BDAM)

    CICS file control allows programmers to read, update, add, and browse data in VSAM and BDAM data sets and delete data from VSAM data sets. CICS data tables can also be accessed using file control.

    A CICS application program reads and writes its data in the form of individual records. Each read or write request is made by a CICS command.

    To access a record, the application program must identify both the record and the data set that holds it. It must also specify the storage area into which the record is to be read or from which it is to be written.

    Error conditions

  • DUPKEY - record is retrieved via an alternate index in which the key used is not unique
  • DUPREC - attempt to add a record to a data set in which the key already exists
  • LENGERR - the length specified for an output operation exceeds the maximum record size (the record is truncated), or an incorrect length is specified
  • MAPFAIL - raised by the RECEIVE MAP command when the user pressed an AID key (or the CLEAR or PA keys) without entering any data. The simplest way to prevent the MAPFAIL condition is to check EIBAID and not issue a RECEIVE MAP if a PA key or CLEAR was hit
  • NOTFND - Record not found
  • PGMIDERR - Occurs if a program or map cannot be loaded into memory
  • QIDERR - Occurs when the symbolic name identifying the QUEUE to be used with TS or TD requests cannot be found

    Transaction abend codes

    When abnormal conditions occur, CICS can send a message to the CSMT transient data destination containing the transaction ID, the program name and the abend code.

  • AICA - program timed out. Probably loop which does not involve any EXEC CICS call
  • ABM0 - the map name in the COBOL program does not match the map name on the DFHMDI assembler macro; the two names must match. This error can occur when executing SEND/RECEIVE MAP commands
  • AEI0 - unknown program. Programs are automatically installed the first time they are requested; if no PPT entry was created, the program is not in the loadlib, is not executable (as a result of compile or link errors), or something else prevented the automatic install
  • AEY9 - CICS/DB2 interface probably down. If DB2 is up then try DSNC STRT and look for possible error messages on CICS screen and using PCOMMAND
  • AICG, AKC3, ATND, ATNI - task has been purged or terminated
  • APCT - the mapset name in the COBOL program does not match the mapset name in front of the DFHMSD assembler macro. This error can occur when executing SEND/RECEIVE MAP commands
  • ASRA - the task has terminated abnormally because of a program check (e.g. a data exception, similar to an S0C7 abend in batch programming)
  • AEIx/AEYx abends - indicate that an exception has occurred and RESP (or NOHANDLE) is not in use
    	AEI0 - PGMIDERR
    	AEI9 - MAPFAIL condition
    	AEIO - DUPKEY condition
    	AEIN - duplicate record (DUPREC)
    	AEID - EOF condition
    	AEIS - file not open (NOTOPEN)
    	AEIP - invalid request (INVREQ)
    	AEY7 - user is not authorised to use a resource (NOTAUTH)
  • DSNC or -922 - Problem accessing DB2. If RC=-922 with PLAN ACCESS then
    - check that DB2 RDO setup is OK
    - Check for any security problems, especially when -922 occurs without abend DSNC
    - Check if the authid in question has been granted necessary privileges to access the plan and that the plan has been bound as necessary

    Back


    CICS ADMINISTRATION

    CICS system transactions

  • CBRC - database recovery control
  • CEBR - temporary storage browse
  • CECI/CECS - command level interpreter/syntax checker. CECI, which invokes the CICS command-level interpreter, can be used to enter an EXEC CICS command, check its syntax and modify it if necessary. CECS, the command-level syntax checker, also invokes the command-level interpreter but only checks the syntax of an EXEC CICS command without processing it
  • CEDA/CEDB/CEDC - resource definition online (RDO)
  • CEDF - execution diagnostic facility. CEDF is packaged with CICS and is one of the basic debugging aids available for testing. It intercepts a transaction at initiation and termination of each program, before and after execution of any EXEC CICS or EXEC SQL command, and at task termination. The programmer can then view the parameters being passed back and forth as well as the responses returned after each command. Storage areas, such as working storage, can be browsed and parameters changed to simulate specific conditions. For EDF on mirror transactions, use CEDX (e.g. CEDX CVMI)
  • CEMT - master terminal transaction. The Master Terminal operator controls system components using the master terminal txn CEMT and can dynamically change system control parameters. Four modes can be used - perform, inquire, set and discard.
    	cemt i fil(FIL1)/db2tran(TRN1)/conn/db2conn
    	cemt set prog(PGM1) newc	- newcopy of a changed program
    	cemt perform shutdown		- shuts down the CICS region
    	set task(tasknum) purge/forcepurge	- terminate a task
    	set terminal(term) outservice purge	- set terminal out of service
  • CESN/CSSN - sign on
  • CSSF - sign off


    CICS startup/shutdown

    When CICS is started up, a CICS System Initialization process is started. It involves many activities like
  • obtaining the required storage for CICS execution from the private area in the CICS address space, above and below the 16 MB line
  • setting up CICS system parameters for the run, as specified by the system initialization parameters
  • loading and initializing the CICS domains
  • loading the CICS nucleus with the required CICS modules

    Startup types

  • Initial - starts with no reference to any system activity recorded in the CICS global catalog and system log from a previous run of CICS.
  • Cold - starts with limited reference to any system activity
  • Warm - after a normal shutdown
  • Emergency - after an abnormal shutdown, restoring recoverable resources

    CICS can be started in either of the two ways:
  • by the MVS START command to start CICS as a started task.
  • by submitting a CICS batch job to the MVS internal reader

    Shutting down CICS

    Normal shutdown - done by the 'cemt perform shutdown' transaction or the 'exec cics perform shutdown' command
    Immediate shutdown - caused by
  • the 'cemt perform shutdown immediate' transaction
  • the 'exec cics perform shutdown immediate' command
  • cics system abend
  • program check
    Uncontrolled shutdown - caused by
  • power failure
  • machine check
  • O/S failure

    Back



    COBOL

    COBOL (COmmon Business Oriented Language) is a high-level programming language first developed by the CODASYL committee (Conference on Data Systems Languages) in 1960; responsibility for developing new COBOL standards was later assumed by the American National Standards Institute (ANSI). Three ANSI standards for COBOL have been produced: in 1968, 1974 and 1985.

    COBOL BASICS

    A COBOL program is organized into four divisions.

    The Identification division contains the following paragraphs - PROGRAM-ID (compulsory), AUTHOR, INSTALLATION, DATE-WRITTEN, DATE-COMPILED, SECURITY.

    The Environment division has two sections

  • Configuration Section - deals with the characteristics of the source and object computers and has three paragraphs

    SOURCE-COMPUTER - the computer configuration on which the intermediate code is produced
    OBJECT-COMPUTER - the computer configuration on which the object (intermediate code) program is to be run
    SPECIAL-NAMES - relates the implementation-names used by the COBOL system to the mnemonic-names used in the source program

  • Input-Output Section - deals with the information needed to control the transmission and handling of data between external media and the object program. This section is divided into two paragraphs

    FILE-CONTROL - names and associates the files with external media
    I-O-CONTROL - defines special control techniques to be used in the object program

    	ENVIRONMENT DIVISION.
    	CONFIGURATION SECTION.
    	SOURCE-COMPUTER. S390.
    	OBJECT-COMPUTER. S390.
    	INPUT-OUTPUT SECTION.
    	FILE-CONTROL.	SELECT File-In ASSIGN TO INFILE.
    			SELECT File-Out  ASSIGN TO OUTFILE.

    The Data Division describes the data that the object program is to accept as input to manipulate or produce as output and is subdivided into the following sections
  • File Section - defines the structure of data files. Each file is defined by a file description (FD) entry and one or more record descriptions written immediately after the FD entry
  • Working-Storage Section - describes records and noncontiguous data items which are not part of external data files but are developed and processed internally. It also describes data items whose values are assigned in the source program and do not change during the execution of the object program
  • Linkage Section - appears in the called program
  • Communication Section - describes the data item in the source program that will serve as the interface between the Message Control System (MCS) and the program
  • Report Section - contains one or more report description (RD) entries, each of which forms the complete description of a report

    The Procedure division is composed of a paragraph, a group of successive paragraphs, a section, or a group of successive sections. A procedure-name is a word used to refer to a paragraph or section in the source program in which it occurs. It consists of a paragraph-name (which can be qualified) or a section-name.

    The end of the Procedure Division and the physical end of the program is that physical position in a COBOL source program after which no further procedures appear.

    A section consists of a section header followed by zero, one, or more successive paragraphs.

    A paragraph consists of a paragraph-name followed by a period and a space, and by zero, one, or more successive sentences.

    Column Positions

    	Position            Contents
    	--------            -------- 
    	1-6		Sequence or Line Number (leave blank)
    	7		Asterisk (*) for a comment line
    			Hyphen (-) for a line with the continuation of a non-numeric literal
    			Blank for all other lines
    	8-72		COBOL program elements
    			Area A - Column Positions 8 to 11 (both inclusive)
    			Area B - Column Positions 12 to 72 (both inclusive)
    	73-80		Program Identification (leave blank)

    DATATYPES & VARIABLES

    PICture clauses

    PIC (short for PICture) clauses describe the size and type of data for each field.
    	PIC Clause			Usage						
    	----------			-----
    	X, X(n)				Alphanumeric data of 1/n characters
    	9, 9(n)				Numeric data
    	A, A(n)				Alphabetic data
    	V				marks an implied decimal point, as in PIC 9(4)V99; does not take any storage
    	S				Sign as in S9(3)V99
    
    	Insertion Characters
    	--------------------
    	. (point)			Inserts a decimal point as in PIC 999.99, takes a byte of storage
    	, (comma)			Inserts a comma. Takes a byte
    	B				Inserts a blank
    	/				Inserts a slash
    	0				Inserts a zero
    
    	Zero suppression characters
    	---------------------------
    	Z				Suppresses leading zeroes, e.g Z(9), ZZZ99.99 etc.
    	$				Right-most leading zero will print a $. e.g $$$,$$9.99
    	*				Replaces leading zeroes with asterisks (check protection
    					of vital data), e.g $**,**9.99
    
    	Sign Control
    	------------
    	+				For positive numbers. e.g +999.99, 999+
    	-				For negative numbers  e.g -9(6), 999.99-
    
    	Accounting characters
    	---------------------
    	DB				Indicates Debit. e.g $$$99.99DB
    	CR				Indicates Credit e.g $(5)9V99CR
    			Both CR and DB will print only if negative

    Level numbers

    Level numbers are used to group fields. Valid level numbers are 01-49, 66, 77 and 88. Level numbers 01 and 77 begin in Area A while level numbers 02-49, 66 and 88 begin in area B.

    66 - The level number 66 is reserved for variables which rename (overlap) other variables using the RENAMES clause.

    77 - A working storage field can be declared with a level number of 77. The 77 must be in column 8, the field cannot be a group-level field and the field cannot be part of a group-level field.

    88 - A field declared with a level number of 88 is known as a "condition name". This name can be used anywhere a condition can be used and is generally more readable. Condition names are declared immediately after the field they are associated with and use no storage.
    	01  WS-ACCT-TYPE	PIC 999.
    		88  CHECKING-ACCT	VALUE 100 110 210 300.
    		88  SAVINGS-ACCT	VALUE 150 175.
    	
    	In the Procedure division:
    		IF CHECKING-ACCT
    			statement(s)
    		ELSE
    			IF SAVINGS-ACCT
    				statement(s)
    			END-IF
    		END-IF

    Storing numeric datatypes

  • COMP or COMPUTATIONAL
    The storage length of a binary COMP field is
    	PIC 9 to PIC 9(4)	2 bytes
    	PIC 9(5) to PIC 9(9)	4 bytes
    	PIC 9(10) to PIC 9(18)	8 bytes
    	Sign is stored in the most significant bit. Bit is on if -ve, off if +ve
  • COMP-1
    Single precision floating point value, stored in 4 bytes.
  • COMP-2
    Double precision floating point value, stored in 8 bytes.
  • COMP-3
    Comp-3 stores two digits per byte, in BCD form - with the sign after the least significant digit (C,E or F for +ve, B or D for -ve). The length of a Comp-3 field is calculated as (# of digits + 1) /2 rounded up.
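    The length rule and the packed layout can be sketched in Python (an illustrative model of the encoding, not IBM's actual runtime):

```python
import math

def comp3_length(digits):
    """Storage bytes for a COMP-3 field: (# of digits + 1) / 2, rounded up."""
    return math.ceil((digits + 1) / 2)

def pack_comp3(value, digits):
    """Pack a signed integer as packed decimal (BCD), two digits per byte,
    sign nibble last (C for positive, D for negative - the preferred signs)."""
    sign = 0xC if value >= 0 else 0xD
    nibbles = [int(d) for d in str(abs(value)).zfill(digits)]
    if len(nibbles) % 2 == 0:          # pad so digits + sign fill whole bytes
        nibbles = [0] + nibbles
    nibbles.append(sign)
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))
```

    For example, a PIC S9(5) COMP-3 field occupies (5 + 1) / 2 = 3 bytes, and -123 packs as x'00123D'.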

    Reference Modification - allows referencing a portion of a field
    	IF WS-FIELD (startpos:length) = 'ABC'
    The startpos value is 1-based; the length may be omitted (as in WS-FIELD (startpos:)), in which case the reference extends to the end of the field. COBOL treats all such references as alphanumeric.
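    In Python terms, reference modification behaves like 1-based slicing. A hypothetical helper (the name refmod is not COBOL's) makes the mapping explicit:

```python
def refmod(field, startpos, length=None):
    """Analog of COBOL reference modification WS-FIELD (startpos:length).
    startpos is 1-based; omitting length takes the rest of the field."""
    start = startpos - 1
    return field[start:] if length is None else field[start:start + length]
```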

    The REDEFINES clause- allows having multiple field definitions for the same piece of storage. The same data then can be referenced in multiple ways.
    	01  WS-NUMBER-X	PIC X(8).
    	01  WS-NUMBER	REDEFINES WS-NUMBER-X PIC 9(6)V99.
    WS-NUMBER-X and WS-NUMBER refer to the same 8 bytes of storage.

    A redefinition must have the same level number as the field it is redefining and must immediately follow the field (i.e. the redefinition of a 05-level field must be the next 05-level field)

    The RENAMES clause - permits alternative, possibly overlapping, groupings of elementary items.
    	66  dataname1 RENAMES dataname2 THROUGH dataname3.
    Rules:
  • A record can have any number of RENAMES clauses.
  • All RENAMES entries referring to data items within a given logical record must immediately follow the last data description entry of that record.

    HIGH-VALUES and LOW-VALUES

    HIGH-VALUES and LOW-VALUES are figurative alphanumeric constants and may have different values in different programs. They represent one or more occurrences of the character with the highest or lowest position, respectively, in the program collating sequence.

    The program collating sequence may be specified by the PROGRAM COLLATING SEQUENCE clause in the OBJECT-COMPUTER paragraph using an alphabet-name defined in the SPECIAL-NAMES paragraph. For the default native collating sequence, LOW-VALUES has the value X'00' and HIGH-VALUES has the value X'FF'.

    LOW-VALUES and HIGH-VALUES are useful when sorting alphanumeric data according to the program collating sequence.


    STATEMENTS

    Arithmetic statements

    ADD,SUBTRACT,MULTIPLY,DIVIDE,COMPUTE

    The ROUNDED clause - valid with any of the math verbs.
    	ADD IN-AMOUNT TO WS-BALANCE ROUNDED.
    	MULTIPLY IN-HOURS BY IN-RATE GIVING WS-PAY ROUNDED.
    	COMPUTE WS-CUBE ROUNDED = WS-NBR ** WS-POWER.
    The ON SIZE ERROR clause, when used with one of the math verbs, allows COBOL to detect truncation and divide-by-zero situations and execute a specified statement instead of doing the calculation. The statement can be a PERFORM. A statement can also be specified for 'NOT ON SIZE ERROR'.
    	DIVIDE WS-TOTAL BY WS-COUNT
    		GIVING WS-PCT ROUNDED
    		ON SIZE ERROR MOVE 0 TO WS-PCT
    	END-DIVIDE.

    Compiler directing statements

  • CONTROL
  • COPY
  • EJECT
  • REPLACE
  • SKIP
  • TITLE
  • USE

    Conditional statements

    EVALUATE
    	EVALUATE PCB-STAT
    		WHEN SPACES
    			DISPLAY 'EMPLOYEE SEGMENT = ' EMP-SEG-IO-AREA
    			PERFORM C050-INQUIRY-EDUCATION-SEG
    		WHEN 'GE'
    			DISPLAY 'EMPLOYEE NOT FOUND FOR ' 400-EMPLOYEE-NUM
    		WHEN OTHER
    			PERFORM C075-INQUIRY-ERROR
     	END-EVALUATE.
    	
    	Another format is
    	
    	EVALUATE TRUE
    		WHEN cond-1
    			statements
    		WHEN cond-2
    			statements
    		....
    		WHEN OTHER
    	END-EVALUATE.

    Flow control statements

    CONTINUE & NEXT SENTENCE

    CONTINUE has no effect upon the execution of the program and is used within a conditional phrase of another statement when no action is desired when the condition occurs.

    NEXT SENTENCE, on the other hand, transfers control to the statement following the next period.

    Perform statement

    PERFORM executes the specified paragraph and then control will return to the statement following the perform. There are no restrictions as to the physical placement of a paragraph compared to the perform statement that executes it.
    	PERFORM OPEN-IN-FILE.
    	PERFORM READ-EMP-FILE
    		UNTIL FILE-STATUS NOT = 0.
    	PERFORM READ-INPUT
    		UNTIL END-SWITCH = 1.
    	PERFORM PARA-CALC-ANNUAL-TAX
    		VARYING MONTHFIELD FROM 1 BY 1 UNTIL MONTHFIELD = 13.
    	PERFORM PARA-100-LOOP  7 TIMES.
    	PERFORM PARA-START-PROCESS THRU PARA-END-PROCESS.
    There is also an in-line perform where a block of code appears between a PERFORM and END-PERFORM. No paragraph name is specified. For e.g
    	PERFORM UNTIL WS-END-OF-FILE-SW = 'Y'
    		statement(s)
    	READ IN-FILE
    		AT END MOVE 'Y' TO WS-END-OF-FILE-SW
    	END-READ
    	END-PERFORM

    Search statement

    SEARCH (serial search) examines each table entry starting at the beginning, whereas SEARCH ALL (binary search) starts looking at the mid-point of the table and works its way toward the argument, depending on whether the current entry is too high or too low.

    SEARCH can be used for unsorted tables, while SEARCH ALL is only useful if the table is sorted.
    	SEARCH table-name 
    		[AT END statements-1]
    		WHEN condition
    		...........
    	[END-SEARCH]
    Example for SEARCH
    	01 SALES-TAX-TABLE.
    		05 WS-TABLE-ENTRIES OCCURS 1000 TIMES INDEXED BY TABLECOUNT.
    			10 WS-ZIPCODE	PIC 9(5).
    			10 WS-TAX-RATE	PIC V9(3).
    
    	SET TABLECOUNT TO 1
    	SEARCH WS-TABLE-ENTRIES
    		AT END MOVE 0 TO WS-SALES-TAX
    		WHEN ZIP-IN = WS-ZIPCODE (TABLECOUNT)
    			COMPUTE WS-SALES-TAX = WS-TAX-RATE (TABLECOUNT) * UNIT-PRICE-IN * QTY-IN
    	END-SEARCH
    Example for SEARCH ALL
    	01 SALES-TAX-TABLE.
    		05 WS-TABLE-ENTRIES OCCURS 1000 TIMES
    				ASCENDING KEY IS WS-ZIPCODE INDEXED BY TABLECOUNT.
    			10 WS-ZIPCODE	PIC 9(5).
    			10 WS-TAX-RATE	PIC V9(3).
    
    	SEARCH ALL WS-TABLE-ENTRIES
    		AT END MOVE 0 TO WS-SALES-TAX
    		WHEN WS-ZIPCODE (TABLECOUNT) = ZIP-IN
    			COMPUTE WS-SALES-TAX = WS-TAX-RATE (TABLECOUNT) * UNIT-PRICE-IN * QTY-IN
    	END-SEARCH
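    The difference between the two lookups can be sketched in Python (the zip codes and rates here are made-up sample data):

```python
from bisect import bisect_left

# table of (zipcode, tax_rate) pairs, in ascending zipcode order -
# the equivalent of ASCENDING KEY IS WS-ZIPCODE for SEARCH ALL
TAX_TABLE = [("10001", 0.088), ("30301", 0.070), ("60601", 0.0925)]

def serial_search(zip_in):
    """SEARCH: walk the entries from the start, one at a time."""
    for zipcode, rate in TAX_TABLE:
        if zipcode == zip_in:
            return rate
    return 0.0                         # AT END

def binary_search(zip_in):
    """SEARCH ALL: repeatedly halve the (sorted) table."""
    keys = [z for z, _ in TAX_TABLE]
    i = bisect_left(keys, zip_in)
    if i < len(keys) and keys[i] == zip_in:
        return TAX_TABLE[i][1]
    return 0.0                         # AT END
```

    As with SEARCH ALL, the binary lookup is only correct if the table really is in ascending key order.
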
    Sort statement

    SORT - Used to sort a file. It requires a work file area that is defined in the FILE SECTION, just like any other file, except it is an SD instead of an FD.
    	SORT sd-file-name ON ASCENDING KEY sd-field-name
    		USING fd-input-file-name
    		GIVING fd-output-file-name
    Multiple fields can be used in the sort, listed in the desired order. DESCENDING KEY can be specified instead of ASCENDING KEY and both can also be combined.

    The SORT statement will open and close both the input and output files automatically. The field(s) to be sorted on must be defined in the SD of the sort file.

    An INPUT PROCEDURE can be specified instead of an input file, allowing the flexibility of selecting specific records to be sorted or to do other types of processing before the sort. Likewise, an OUTPUT PROCEDURE can be used instead of an output file. An INPUT PROCEDURE requires a RELEASE statement and an OUTPUT PROCEDURE requires a RETURN statement.
    	SORT sd-file-name ON ASCENDING KEY sd-field-name
    		INPUT PROCEDURE IS paragraph-1
    		OUTPUT PROCEDURE IS paragraph-2
    This statement will execute paragraph-1, perform the sort and then execute paragraph-2. An INPUT PROCEDURE can be used with GIVING and an OUTPUT PROCEDURE can be used with USING. Each of these options allows the THRU option (i.e. paragraph-a THRU paragraph-b).

    The clause 'WITH DUPLICATES IN ORDER' can be included in the statement (after the last ASCENDING/DESCENDING KEY). This will cause any records with the same value(s) for the sort field(s) to be kept in their original order. Not specifying this will not necessarily change their original order, but there is no guarantee.
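    The key behaviour maps directly onto a stable sort, as this Python sketch (with made-up records) shows:

```python
records = [
    {"dept": "B", "name": "JONES"},
    {"dept": "A", "name": "SMITH"},
    {"dept": "B", "name": "ADAMS"},
]

# Python's sort is stable: records with equal keys keep their original
# relative order - the behaviour WITH DUPLICATES IN ORDER guarantees
by_dept = sorted(records, key=lambda r: r["dept"])

# multiple ASCENDING KEY fields become a key tuple, major key first
by_dept_then_name = sorted(records, key=lambda r: (r["dept"], r["name"]))
```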

    String Manipulation statements

    STRING, UNSTRING, INSPECT

    The STRING statement is mainly used to concatenate and format two or more strings. A typical use of STRING is to concatenate elementary alphanumeric data items from a file record to a single data item for printing.
    	STRING CITY DELIMITED BY SPACE  ", "  STATE DELIMITED BY SIZE  " "  ZIP DELIMITED BY SIZE
    		INTO ADDRESS-LINE2-OUT
    	END-STRING.
    UNSTRING performs the opposite function of STRING, it breaks a text string into two or more data items (often called tokens), based on a specified delimiter.
    	05  ADDRESS-LINE2-IN     PIC X(30).
    	...
    	UNSTRING ADDRESS-LINE2-IN
    		DELIMITED BY ", " OR " "
    			INTO CITY STATE ZIP
    			ON OVERFLOW DISPLAY "FIELD(S) TOO SMALL"
    	END-UNSTRING.
    The INSPECT statement can locate, count and manipulate any character in a string.

    Counting Characters

    The INSPECT ... TALLYING statement can be used to count the occurrences of a character in a string, or to determine its position.
        INSPECT dataItem-1 TALLYING dataItem-2 FOR dataItem-3.
    This complicated syntax allows a variety of counting options: all occurrences of a specific character (ALL), only those at the beginning of a string (LEADING), or occurrences of any character (the CHARACTER option). The BEFORE or AFTER INITIAL clause allows starting counting at a particular position in the string. In all cases, dataItem-1 is the inspected string, and the (numeric) dataItem-2 will hold the resulting tally.
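    The counting options can be sketched in Python (a hypothetical helper covering the ALL form with the BEFORE/AFTER INITIAL phrases):

```python
def tally_all(s, ch, after_initial="", before_initial=""):
    """Rough analog of INSPECT s TALLYING count FOR ALL ch
    [AFTER INITIAL ...] [BEFORE INITIAL ...]."""
    if after_initial:                        # start after the first occurrence
        pos = s.find(after_initial)
        s = s[pos + len(after_initial):] if pos >= 0 else ""
    if before_initial:                       # stop at the first occurrence
        pos = s.find(before_initial)
        if pos >= 0:
            s = s[:pos]
    return s.count(ch)

# counting the commas in an edited amount
tally_all("1,234,567.89", ",")               # 2
```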

    Replacing Characters

    The INSPECT ... REPLACING statement is used to replace a character in a string with another.
        INSPECT dataItem-1 REPLACING dataItem-2 BY dataItem-3.
    Converting Characters

    In COBOL-85, character replacement is made simpler by the INSPECT ... CONVERTING statement:
    	INSPECT dataItem-1 CONVERTING { dataItem-2 | lit-1 } TO { dataItem-3 | lit-2 }
    This form allows multiple character replacements to be specified much more compactly than the INSPECT ... REPLACING statement. Each occurrence of a character specified for conversion is changed to the corresponding character by position.
    	*** convert all alphabetic characters in a string to uppercase
    	INSPECT STATE-NAME CONVERTING "abcdefghijklmnopqrstuvwxyz" TO "ABCDEFGHIJKLMNOPQRSTUVWXYZ".
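    The positional mapping is the same idea as Python's str.maketrans/str.translate, which this sketch uses to mirror the uppercase conversion above:

```python
import string

# CONVERTING maps each character in the first string to the character
# at the same position in the second string
upcase = str.maketrans(string.ascii_lowercase, string.ascii_uppercase)

def to_upper(state_name):
    """Mirror of INSPECT STATE-NAME CONVERTING 'abc...' TO 'ABC...'."""
    return state_name.translate(upcase)
```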
    INITIALIZE sets all of the values for fields in the same group-level field to either 0 or spaces, depending on the field's definition. INITIALIZE will not change the value of any FILLER items. If a non-FILLER item has a VALUE clause it will still get ZEROS or SPACES and the VALUE is ignored.


    ARRAYS/TABLES

    The OCCURs clause

    The OCCURS clause is used for
  • defining a series of input or output fields, each with the same format
  • defining a table in working-storage to be accessed by each input record
    	01 TEMP-REC.
    		05 TEMPERATURE OCCURS 24 TIMES PIC 9(3).
    A subscript is used in the procedure division to indicate the specific item within the array.
    	DISPLAY TEMPERATURE (23).
    An OCCURS clause may be used on levels 02 to 49 only. It is not valid on the 01 level, since OCCURS defines fields, not records.

    Variable length tables

    Variable length tables are defined using the OCCURS....DEPENDING ON clause.
    	01 WS-ZIP-TBL.
    		05 WS-ZIP-ENTRY OCCURS 100 TO 999 TIMES DEPENDING ON WS-COUNT
    				INDEXED BY WS-IDX.
    			10 WS-ZIP	PIC X(5).
    			10 WS-TOWN	PIC X(20).
    	77 WS-COUNT	PIC S9(3) COMP-3 VALUE 101.

    Sorted Tables

    A sorted table can be defined using the ASCENDING KEY/DESCENDING KEY clauses in the table definition.
    	01 WS-ZIP-TBL.
    		05 WS-ZIP-ENTRY OCCURS 100 TO 999 TIMES DEPENDING ON WS-COUNT
    				ASCENDING KEY IS WS-ZIP INDEXED BY WS-IDX.
    			10 WS-ZIP	PIC X(5).
    			10 WS-TOWN	PIC X(20).
    	77 WS-COUNT	PIC S9(3) COMP-3 VALUE 101.

    INTRINSIC FUNCTIONS

    The CURRENT-DATE function returns a 21-character alphanumeric field laid out as follows
    	01  WS-CURRENT-DATE-FIELDS.
    		05  WS-CURRENT-DATE.
    			10  WS-YEAR	PIC 9(4).
    			10  WS-MONTH	PIC 9(2).
    			10  WS-DAY	PIC 9(2).
    		05  WS-CURRENT-TIME.
    			10  WS-HOUR	PIC 9(2).
    			10  WS-MINUTE	PIC 9(2).
    			10  WS-SECOND	PIC 9(2).
    			10  WS-MS	PIC 9(2).
    		05  WS-DIFF-FROM-GMT.
    			10  WS-GMT-SIGN	PIC X.
    			10  WS-GMT-HH	PIC 9(2).
    			10  WS-GMT-MM	PIC 9(2).
    Reference modification is normally used to only grab the part required.
    	MOVE FUNCTION CURRENT-DATE TO WS-CURRENT-DATE-FIELDS.
    	MOVE FUNCTION CURRENT-DATE (1:8) TO WS-TODAY.
    	MOVE FUNCTION CURRENT-DATE (9:6) TO WS-TIME.

    Intrinsic numeric functions

    	COMPUTE WS-RESULT = FUNCTION SQRT (WS-NUMBER).
    	COMPUTE WS-RESULT = FUNCTION MOD (WS-INTEGER-1, WS-INTEGER-2).
    	COMPUTE WS-RESULT = FUNCTION REM (WS-NUMBER-1, WS-NUMBER-2).
    Geometric functions - SIN, COS, TAN
    Logarithmic functions - LOG, LOG10
    Math functions - FACTORIAL
    Random number generation - RANDOM


    COMPILER DIRECTIVES

    COBOL programs can use compiler directives to specify special options and to override default compiler options. Compiler options can be included in the COBOL source code by using the CBL or PROCESS keywords at the top of the program or be passed in the PARM parameter in the EXEC statement of the compilation JCL.

    ADV/NOADV - ADV (default) indicates that records described in files that use the WRITE...ADVANCING option have not reserved the first byte for the control character, and the compiler is to add one byte on its own
    APOST - The apostrophe(') is used to delineate (enclose) character literals (default QUOTE or Q)
    BATCH (BAT) / NOBATCH (NOBAT) - When using the BATCH option, several programs and/or subprograms may be compiled in one invocation of the compiler (Default is NOBATCH)
    DBCS/NODBCS - DBCS Support
    DYNAM (DYN) - dynamically load subprograms invoked through the CALL 'literal' statement. NODYNAM (NODYN) instructs the compiler to make the subprograms available to the Linkage Editor. DYNAM implies RESIDENT.
    LIST/NOLIST - Object code listing
    LOAD(LOA)/NOLOAD(NOLOA) - LOAD (default) specifies that the object module is to be written to SYSLIN for processing by the Linkage Editor or Loader
    MAP/NOMAP - storage map listing
    OPTIMIZE(OPT)/NOOPTIMIZE(NOOPT) - OPTIMIZE causes the compiler to optimize the object code and reduce the storage requirements of the object program (Default is NOOPTIMIZE)
    RENT/NORENT - Generate as a (non) reentrant object program
    SEQ/NOSEQ - SEQ (default) causes checking of sequence numbers in the source program and if missing or out of order, a warning message is issued
    SOURCE(SOU)/NOSOURCE(NOSOU) - NOSOURCE suppresses the listing of the COBOL source program. (Default is SOURCE)
    SSRANGE/NOSSRANGE - SSRANGE generates code that checks if subscripts (including ALL subscripts) or indexes try to reference an area outside the region of the table
    TRUNC(TRU)/NOTRUNC(NOTRU) - Truncation on movement to binary (COMPUTATIONAL) fields. TRUNC truncates according to the number of digits specified in the PICTURE clause of the receiving field. NOTRUNC allows truncation according to the storage capacity of the field (halfword or fullword). For e.g PIC S9 COMP requires a halfword of storage: TRUNC would truncate values greater than one digit, while NOTRUNC truncates only values which require more than a halfword of storage (values greater than 32767)
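    The halfword example can be sketched in Python (an illustrative model of the two truncation rules, not the compiler itself):

```python
def move_trunc(value, pic_digits):
    """TRUNC: keep only the number of digits in the PICTURE clause."""
    return value % (10 ** pic_digits)

def move_notrunc(value):
    """NOTRUNC: keep whatever fits in the halfword
    (16-bit two's complement)."""
    v = value & 0xFFFF
    return v - 0x10000 if v & 0x8000 else v

# moving 12345 to PIC S9 COMP: TRUNC keeps 5, NOTRUNC keeps 12345
```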


    AMODE (Addressing Mode)

  • AMODE(xx) - indicates the number of addressing bits (24 or 31) used by the program
  • AMODE=ANY - program may use any of the addressing techniques available

    RMODE (Residency Mode)

  • RMODE(24) - program must be loaded into memory below the line
  • RMODE(31) - can be loaded either below or above the line.
  • RMODE=ANY - can be run in either 24 bit (below)or 31 bit memory (above)

    Sample JCL for compiling
    	//IGYCRCTL EXEC PGM=IGYCRCTL,PARM=(NOTERM,LIB,TEST,'')
    	//*
    	//SYSLIB   DD DISP=SHR,DSN=SYS1.MACLIB
    	//SYSIN    DD DISP=SHR,
    	//		DSN=&SRCDSN(&PGMNME)
    	//SYSPRINT DD SYSOUT=*
    	//SYSOUT   DD SYSOUT=*
    	//SYSPUNCH DD DUMMY
    	//SYSUT1   DD UNIT=SYSDA,SPACE=(CYL,(2,2))
    	//SYSUT2   DD UNIT=SYSDA,SPACE=(CYL,(2,2))
    	//SYSUT3   DD UNIT=SYSDA,SPACE=(CYL,(2,2))
    	//SYSLIN   DD DSN=&&CBLOBJ,
    	//	DISP=(NEW,PASS,DELETE),UNIT=SYSDA,
    	//	SPACE=(CYL,(2,2),RLSE),
    	//	DCB=(RECFM=FB,LRECL=80,BLKSIZE=0)
    	//IEWL     EXEC PGM=IEWL,
    	//	PARM='(XREF,LIST,LET,MAP)'
    	//SYSLIB DD DISP=SHR,DSN=SYS1.COB2LIB
    	//	 DD DISP=SHR,DSN=CEE.SCEELKED
    	//*
    	//SYSUT1   DD SPACE=(CYL,(5,2)),UNIT=SYSDA
    	//SYSPRINT DD SYSOUT=*
    	//SYSOUT   DD SYSOUT=*
    	//SYSLIN  DD DSN=&&CBLOBJ,DISP=(OLD,DELETE,DELETE)
    	//*
    	//SYSLMOD  DD  DSN=&LDDSN(&PGMNME),DISP=SHR
    	//*

    FILE HANDLING

    File Types

    Sequential - rows unsorted, accessed sequentially
    Indexed - access is through a key field
    Relative - access is through a relative record number (RRN)
    Line sequential - similar to sequential files, available only in some machines


    Reading files

    	2000-WALK-SEQ-FILE.
    		PERFORM 2100-READ-RECORD THRU 2100-EXIT
    			UNTIL WS-EOF-SW = 'Y'.
    	2000-EXIT. EXIT.
    
    	2100-READ-RECORD.
    		READ SEQ-FILE INTO WS-RECORD
    			AT END  MOVE 'Y' TO WS-EOF-SW
    			GO TO 2100-EXIT.
    	2100-EXIT. EXIT.
    The INVALID KEY clause

    The INVALID KEY clause is used with any type of non-sequential I-O statement and specifies a statement to be executed if the command fails. Any statement such as READ or WRITE with any relative or indexed file should include this clause.
           MOVE WS-KEY TO FILE-KEY.
           READ INDEX-FILE
              INVALID KEY PERFORM 300-RECORD-NOT-FOUND
           END-READ.
    Variable length files

    Records in a sequential file can be either fixed or variable in length. Variable-length records save disk space and are beneficial in many applications like those that generate many small records, with occasional large ones.
    	fd var-length-file
    	recording mode is v.
    
    	01 var-length-record.
    	    05 var-field1	pic x(30).
    	    05 var-length	pic S9(4) comp-3.
    	    05 var-field2	pic x(1) occurs 100 to 400 times
    					depending on var-length.
    File open modes

  • INPUT - A file opened in INPUT mode may be accessed only via the READ verb (plus the START verb, if the file is INDEXED or RELATIVE)

  • OUTPUT - May be accessed only via the WRITE verb. If the file exists, its contents are emptied (so that the file contains only those records written during execution of the program)

  • EXTEND - Applies only to SEQUENTIAL files, may be accessed only via the WRITE verb. The file must exist prior to being opened (unless the word OPTIONAL appeared in the SELECT statement for that file) and any records written are placed after the ones already there

  • I-O - Both reading and writing of records may be carried out, via the READ and REWRITE verbs. (The WRITE and START verbs may be applied too, if the file is INDEXED or RELATIVE)

    File Access modes

  • Sequential - Applicable for Sequential, Indexed and Relative files
  • Random - Applicable for Indexed and Relative files only
  • Dynamic - Combination of both Sequential and Random access. Applicable for Indexed and Relative files only

    File operations

    Operations that can be performed on sequentially accessed files
    	          +--------------------------------------+
    		  |               M o d e                |
    	Operation |                                      |
    		  |  INPUT     OUTPUT    EXTEND     I-O  |
    		  +---------+---------+----------+-------+
    	READ      |    x    |         |          |   x   |
    		  +---------+---------+----------+-------+
    	WRITE     |         |    x    |    x     |       |
    		  +---------+---------+----------+-------+
    	REWRITE   |         |         |          |   x   |
    		  +---------+---------+----------+-------+
    Operations that can be performed on indexed files
    	           +-------------------------------------+
    	File       |         |     O p e n   M o d e     |
    	Access     |         |                           |
    	Mode       |   Verb  |  INPUT     OUTPUT    I-O  |
    		   +---------+---------+---------+-------+
    	SEQUENTIAL |    READ |    x    |         |   x   |   (sequential form only)
    		   |   WRITE |         |    x    |       |   (sequential form only)
    		   | REWRITE |         |         |   x   |   (sequential form only)
    		   |  DELETE |         |         |   x   |
    		   |   START |    x    |         |   x   |
    		   +---------+---------+---------+-------+
    	RANDOM     |    READ |    x    |         |   x   |   (random form only)
    		   |   WRITE |         |    x    |   x   |   (random form only)
    		   | REWRITE |         |         |   x   |   (random form only)
    		   |  DELETE |         |         |   x   |
    		   |   START |         |         |       |
    		   +---------+---------+---------+-------+
    	DYNAMIC    |    READ |    x    |         |   x   |   (either form)
    		   |   WRITE |         |    x    |   x   |   (either form)
    		   | REWRITE |         |         |   x   |   (either form)
    		   |  DELETE |         |         |   x   |
    		   |   START |    x    |         |   x   |
    		   +---------+---------+---------+-------+

    File status codes

    File status codes are made of two digits, the first indicates one of 5 classes
    0 - I/O operation successful
    1 - File "at end" condition
    2 - Invalid key
    3 - Permanent I/O error
    4 - Logic error

    	00	I/O operation successful
    	02	Duplicate record key found (READ ok)
    	04	Length of record too large (READ ok)
    	10	File AT END
    	14	The valid digits of a read RRN are greater than the size of the relative key item of the file
    	16	Program tried to read file already AT END
    	22	Program attempted to write a record with a key that already exists
    	23	Record not found
    	24	Attempted to write record to a disk that is full (relative/indexed file)
    	30	I/O operation unsuccessful, no further information available
    	34	Attempted to write record to a disk that is full (sequential file)
    	35	Tried to open non-existent file for INPUT, I-O or EXTEND
    	37	Tried to open line sequential file in I-O mode
    	41	Tried to open file that is already open
    	42	Tried to close file that is not open
    	43	Tried to delete/rewrite a record that has not been read
    	44	Tried to write/rewrite a record of incorrect length
    	46	Tried to read a record where the previous read or START has failed or the AT END condition has occurred
    	47	Tried to read a record from a file opened in incorrect mode
    	48	Tried to write a record to a file opened in incorrect mode
    	49	Tried to delete or rewrite a record in a file opened in incorrect mode
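    The two-digit structure lends itself to a simple class lookup, sketched here in Python:

```python
STATUS_CLASS = {
    "0": "I/O operation successful",
    "1": 'File "at end" condition',
    "2": "Invalid key",
    "3": "Permanent I/O error",
    "4": "Logic error",
}

def status_class(file_status):
    """Classify a two-character file status code by its first digit."""
    return STATUS_CLASS.get(file_status[0], "Unknown class")
```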


    Dynamic allocation of files

    Using environment variables to allocate files

    At the time of opening a file, the COBOL system checks if the name refers to a DDNAME in the JCL. If not found in the JCL, then it looks for an environment variable of the same name. If not found, the file status is set to 35, and the open fails. If the variable is located, but the allocation info is incorrect (i.e a syntax error or a misspelt dataset name), then the file status is set to 98 and again the open fails. If the variable is set up with a valid allocate statement, then the system will successfully allocate and open the file.

    Environment variables can be defined and made available not only to the program that set them, but to any program that is subsequently called.

    Using the putenv C routine to dynamically allocate files
    	CBL NODYNAM
    	IDENTIFICATION DIVISION.
    	*
    	SELECT OUTPUT-FILE ASSIGN TO DYNAMDD.
    	*
    	FD OUTPUT-FILE.
    	01 OUTPUT-RECORD PIC X(80).
    	*
    	WORKING-STORAGE SECTION.
    	01 RC		PIC S9(8) COMP.
    	01 RC-DISPLAY	PIC -(8)9.
    	01 ADDR-POINTER USAGE IS POINTER.
    	01 DYNALLOC.
    		05 FILLER PIC X(12) VALUE 'DYNAMDD=DSN('.
    		05 DYN-DSN PIC X(17).
    		05 LINE1 PIC X(22) VALUE ' NEW TRACKS SPACE(1,1)'.
    		05 LINE2 PIC X(27) VALUE ' CATALOG STORCLAS(STANDARD)'.
    		05 LINE3 PIC X(15) VALUE ' MGMTCLAS(WORK)'.
    	*	the string passed to the C putenv routine must be null-terminated
    		05 FILLER PIC X VALUE X'00'.
    
    	PROCEDURE DIVISION.
    	*
    	MOVE 'VRKPROD.TEST.PS1)' TO DYN-DSN.
    	SET ADDR-POINTER TO ADDRESS OF DYNALLOC.
    	CALL 'putenv' USING BY VALUE ADDR-POINTER RETURNING RC.
    	MOVE RC TO RC-DISPLAY
    	IF RC NOT = ZERO
    		DISPLAY 'PUTENV FAILED. RC = ' RC-DISPLAY
    	ELSE
    		DISPLAY 'PUTENV SUCCESSFUL. RC = ' RC-DISPLAY
    	END-IF.
    
    	OPEN OUTPUT OUTPUT-FILE.
    	MOVE 'THIS IS A SAMPLE RECORD' TO OUTPUT-RECORD.
    	WRITE OUTPUT-RECORD.
    	CLOSE OUTPUT-FILE.


    CALLING SUBPROGRAMS

    The CALL statement is used to call another program. Any fields passed are in the calling program's WORKING-STORAGE SECTION and in the called program's LINKAGE SECTION.

    The USING clause on the CALL specifies the fields to pass. The called program lists these fields in the USING clause of the PROCEDURE DIVISION. The fields don't need to have the same name but the definitions must match.
    	CALL 'PGM2.OBJ'.
    	CALL 'PGM6F.OBJ' USING BY REFERENCE/CONTENT/VALUE WS-FLD-1 WS-FLD-2 WS-FLD-3
    	END-CALL.
    BY REFERENCE - the corresponding data item in the calling program occupies the same storage area as the data item in the called program.
    BY CONTENT - a copy of the data item is passed; the called program can modify its copy, but the changes are not visible to the calling program.
    BY VALUE - the value of the argument is passed, not a reference to the sending data item.

    The GOBACK and EXIT PROGRAM statements are used for passing control to a calling program from a subprogram. (STOP RUN will terminate all currently running programs)

    Static and dynamic calls

    Static call -
    Linked into the calling program at compile time. This ensures that the program can ALWAYS be executed. However
    1. The code is duplicated for every program that includes this static call.
    2. All programs that include this static call need to be relinked when a change is made in the called static program.

    Static call is identified by the quotes around the called module name in the CALL statement.
    	call 'STATIC_1' Using PARM1, PARM2, PARMLAST.
    Dynamic call -
    Resolved at execution time (run time).
    1. A dynamically called program must be loaded at first reference (when it is called for the first time)
    2. There is always only one copy of a dynamically loaded program, regardless of the number of programs that are (dynamically) calling it
    3. The calling program will abend when the program cannot be loaded unless the ON EXCEPTION clause is used
    	CALL ws-the-2nd-Dynamic-Program	USING Parm1, Parm2
    	ON EXCEPTION PERFORM takeActionWhenCallAboveFails
    	END-CALL
    Comparison -
  • As a statically called program is link-edited into the same load module as the calling program, a static call is faster than a dynamic call
  • Regardless of whether it is called, a statically called program is loaded into storage; a dynamically called program is loaded only when it is called
  • A dynamically called program can be deleted using a CANCEL statement after it is no longer needed in the application (and not after each call to it). Statically called programs cannot be deleted using CANCEL, so static calls might take more main storage

    Passing parameters to a COBOL program from JCL

  • via PARM - This technique uses a PARM=parameter keyword on the EXEC statement in JCL. The COBOL program requires a LINKAGE SECTION.
    	//CBLPARS2 EXEC PGM=CBLPARC1,PARM='datastring'
    When the data string is passed from JCL to COBOL it is preceded by a two-byte binary value that specifies its length, e.g a two-byte value of x'000A' followed by a ten character data string. If the COBOL program is executed from JCL without a parameter, the two bytes hold x'0000'.
    	LINKAGE SECTION.
    	01  PARM-BUFFER.
    		05  PARM-LENGTH		pic S9(4) comp.
    		05  PARM-DATA		pic X(256).
    	PROCEDURE DIVISION using PARM-BUFFER.
  • via SYSIN - The COBOL program requires an "ACCEPT parameter from SYSIN" to be coded. If the SYSIN statement is missing in the JCL the ACCEPT will ABEND with a "File not found" message. To avoid this, use a //SYSIN DD DUMMY statement in the JCL when a parameter is not being passed.
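    The two-byte binary length prefix on the PARM data corresponds to a big-endian halfword. This Python sketch models the buffer the COBOL program sees (character encoding aside - the real buffer is EBCDIC):

```python
import struct

def build_parm(data):
    """Model of the buffer mapped by PARM-BUFFER: a two-byte big-endian
    binary length (PIC S9(4) COMP) followed by the parameter characters."""
    return struct.pack(">H", len(data)) + data.encode("ascii")

# PARM='datastring' arrives as x'000A' followed by the ten characters;
# no PARM at all arrives as x'0000'
```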


    POINTERS

    Pointers are useful for allocating memory dynamically. Pointers are declared and used as follows
    	05 P1 USAGE IS POINTER.
    
    	LINKAGE SECTION.
    	01 PARM1.
    		05 ACCTNUM  PIC 9(7).
    		05 ACCTTYPE PIC XXX.
    
        PROCEDURE DIVISION.
        SET ADDRESS OF PARM1 TO P1.	

    REPORT GENERATION

    Printer Control statements

    The BEFORE/AFTER ADVANCING phrases cause one or more carriage return and line feed characters to be sent to the printer after (BEFORE ADVANCING) or before (AFTER ADVANCING) the line is printed. A carriage return is a command sent to a printer that causes the print head to return to the left margin of the paper. A line feed causes the print head to move down one line.
    	WRITE a-record BEFORE ADVANCING a-number.
    The advancing phrase can also be used to force the printer to eject the current page and start a new one. The command below sends a form feed to the printer after printing the line.
    	WRITE a-record BEFORE ADVANCING PAGE.
    Printer carriage control characters

    A data set can be printed using either machine or ASA carriage control. Machine carriage control uses unprintable hexadecimal characters in column 1 of each record of the data set to control vertical spacing.

    ASA carriage control uses printable characters in column 1 of each record of the data set to control vertical spacing; the ASA carriage control characters are translated into machine carriage control by the JES2 component of the OS before the output is printed.

    ASA Line-Oriented Carriage Control Characters
    	ASA CODE (EBCDIC)  	 ACTION BEFORE WRITING PRINT LINE
    
    	blank			 Advance one line
    	0			 Advance 2 lines
    	-			 Advance 3 lines
    	+			 Do not advance
    ASA Channel Oriented Carriage Control Characters
    	ASA CODE (EBCDIC)        ACTION BEFORE WRITING PRINT LINE
    		  
    	1			 Skip to Channel 1
    	2			 Skip to Channel 2
    	...				...
    	9			 Skip to Channel 9
    	A			 Skip to Channel 10
    	B			 Skip to Channel 11
    	C			 Skip to Channel 12




    VSAM

    VSAM (Virtual Storage Access Method) is a file system used in mainframes featuring:
  • a format for storing data independently of the type of DASD on which it is stored
  • routines for sequential or direct access and for access by key, relative address or relative record number (RRN)
  • options for optimizing performance
  • a multifunction service program (Access Method Services - IDCAMS) for setting up catalog records and maintaining data sets

    VSAM types

    Key Sequential DataSet (KSDS)

  • Records may be processed sequentially or randomly based on a key value (indexed file)
  • Records are in collating sequence by key field
  • Direct access by key or RBA; a record's RBA can change (unlike in an ESDS)
  • Alternate indexes allowed
  • Space given up by a deleted or shortened record becomes free space. Free space is used for inserting and lengthening records
  • Spanned records, Extended format and Compression are allowed

    SELECT clause for a KSDS file
    	SELECT ksds-file
    	ASSIGN TO system-name
    	ORGANIZATION IS INDEXED
    	ACCESS MODE IS {SEQUENTIAL / RANDOM / DYNAMIC}
    	RECORD KEY IS data-name-1
    	[ ALTERNATE RECORD KEY IS data-name-3  [WITH DUPLICATES ]] ...
    	[ FILE STATUS IS data-name-2 [vsam-code]]

    Entry Sequential DataSet (ESDS)

  • Records are processed one at a time in the order in which they were loaded, like a standard sequential data set
  • Records are in order as they are entered
  • Direct access by RBA
  • A record's RBA cannot change
  • A record cannot be deleted, but its space can be reused for a record of the same length
    	SELECT esds-file
    	ASSIGN TO system-name
    	ORGANIZATION IS SEQUENTIAL
    	[ACCESS MODE IS SEQUENTIAL]
    	[ FILE STATUS IS data-name-2 [vsam-code]]

    Relative Record DataSet (RRDS)

  • Records can be accessed based on their relative positions in the file, like a non-VSAM relative file
  • Records are in RRN order
  • Direct access by RRN
  • No alternate indexes allowed
  • A record's RRN cannot change
    	SELECT rrds-file
    	ASSIGN TO system-name
    	ORGANIZATION IS RELATIVE
    	[ ACCESS MODE IS SEQUENTIAL / RANDOM / DYNAMIC]
    	[RELATIVE KEY IS data-name-1]
    	[ FILE STATUS IS data-name-2 [vsam-code]]
    Linear DataSet (LDS)

  • Contains data but no control information and can be accessed as a byte-addressable string in virtual storage
  • No processing at record level
  • Access with Data-In-Virtual (DIV)


    Operations on a VSAM file

    Possible operations
    KSDS - open (input/output/i-o), start, read, write, rewrite, delete, close
    ESDS - open (input/output/i-o/extend), read, write, rewrite, close
    RRDS - open (input/output/i-o), start, read, write, rewrite, delete, close

    	OPEN { INPUT / OUTPUT / I-O /EXTEND} File-name ...
    
    	START file-name
    		[KEY IS { = / > / >= / EQUAL TO / GREATER THAN / NOT LESS THAN...} data-name]
    		[ INVALID KEY imperative-statement ]
    
    	READ file-name [ NEXT ] RECORD
    		[ INTO identifier ]
    		[ AT END imperative-statement ] [NOT AT END imperative-statement]                
    	END-READ
    
    	WRITE record-name
    		[ FROM identifier ]
    		[INVALID KEY imperative-statement ] [NOT INVALID KEY imperative-statement]
    	END-WRITE
    
    	REWRITE record-name
    		[ FROM identifier ]
    		[ INVALID KEY imperative-statement ]
    	END-REWRITE
    
    	DELETE file-name RECORD
    		[ INVALID KEY imperative-statement ]
    	END-DELETE
    
    	CLOSE file-name ...
    The START command

    Positions the file pointer on a particular record. Relational operators can be used.
    	MOVE WS-KEY-VALUE TO VSAMTEST-KEY. 
    	START VSAMTEST-FILE KEY IS GREATER THAN VSAMTEST-KEY.



    IDCAMS

    Access Method Services (AMS) is a general-purpose utility that provides a variety of services for VSAM files. AMS is also called IDCAMS because IDC is IBM's prefix for VSAM.

    AMS Modal commands
    	IF LASTCC(or MAXCC) >(or <,= etc..) value -
    	THEN -
    	DO -
    	command set (such as DELETE, DEFINE etc..)
    	END
    	ELSE -
    	DO -
    	command set
    	END
    LASTCC - Condition code from the last function (such as delete) executed
    MAXCC - Max CC that was returned by any of the previous functions
    SET - resets CCs
    	SET LASTCC (or MAXCC) = value
    The maximum condition code is 16. A CC of 4 indicates a warning and a CC of 8 is usually encountered on a DELETE of a dataset that is not present.

    DEFINE CLUSTER

    The DEFINE CLUSTER command is used to define a VSAM file. Run the DEFINE USERCATALOG and DEFINE SPACE commands beforehand to set up the required catalog and the space needed for the file. However, if a catalog name is not specified in the DEFINE CLUSTER command, the system automatically selects a catalog based on the high-level qualifier (HLQ) of the dataset name being defined.
    	DEFINE  CLUSTER (NAME(entry-name)
    			[OWNER(owner-id)]
    			[NONINDEXED | INDEXED | NUMBERED ]
    			[RECORDSIZE(avg max)]
    			[SPANNED | NONSPANNED]
    			[KEYS(Length Offset)]
    			VOLUMES(volser ...)
    			{CYLINDERS/TRACKS/BLOCKS/RECORDS/KILO/MEGABYTES} (primary [secondary])
    			[UNIQUE | SUBALLOCATION]
    			[FREESPACE(ci   ca)]
    			[IMBED]
    			[SHAREOPTIONS(options)]
    			[MODEL(entry-name)]
    		 [DATA	( [NAME(entry-name) ]
    			 [VOLUMES(volser ...)]
    			 [{CYLINDERS | TRACKS | BLOCKS |RECORDS} (primary [secondary]) ]
    			 [CONTROLINTERVALSIZE(bytes) ] ) ]
    		 [INDEX  ( [NAME(entry-name) ]
    			   [VOLUMES(volser ...)]
    			   [{CYLINDERS | TRACKS | BLOCKS |RECORDS} (primary [secondary]) ] ) ]
    			   [CATALOG(name) ]
    Creating a KSDS dataset
    	//VKSDCRT1 EXEC PGM=IDCAMS
    	//SYSPRINT DD   SYSOUT=*
    	//SYSIN    DD   *
    	 DEFINE CLUSTER (NAME(VINCENT.SAMPLE.KSDS)	-
    			VOLUMES(V928 V277)	-
    			RECORDSIZE(80 80)	-
    			FREESPACE(10 15)	-
    			KEYS(6 0)	-
    			SHAREOPTIONS(2,3)	-
    			INDEXED)	-
    			DATA (NAME(VINCENT.SAMPLE.KSDS.DAT)	-
    				CYLINDERS(1000 600)	- 
    				FREESPACE(11 07)	-
    				CISZ(8192))	-
    				INDEX    (NAME(VINCENT.SAMPLE.KSDS.IDX)	-
    				TRACKS(1500 500)	-
    				CISZ(4096))
    	/*
    	//*
    Creating an ESDS
    	//IDCAMS1 EXEC PGM=IDCAMS,REGION=2048K
    	//SYSPRINT DD  SYSOUT=*
    	//SYSIN    DD  *
    	/* DELETE ESDS CLUSTER*/
    		DELETE VSSORT.CLUSTER CLUSTER ERASE PURGE
    	/* DEFINE ESDS CLUSTER*/
    		DEFINE CLUSTER (	-
    			NAME ( VSSORT.CLUSTER)	-
    			VOLUMES ( MVS804 )	-
    			RECORDSIZE ( 158 158 )	-
    			RECORDS( 150000 0 )	-
    			NONINDEXED	-
    					  )	-
    			DATA (	-
    			NAME ( VSSORT.DATA )	-
    				  )
    	IF LASTCC NE 0	-
    		THEN SET MAXCC = 16
    	//*
    ALTERNATE INDEX (AIX)

    An AIX is a file that allows access to a VSAM KSDS dataset by a key other than the primary one. Unlike the primary key, alternate keys need not be unique.
    	/* create an AIX on a KSDS file */
    	//S0001    EXEC PGM=IDCAMS
    	//SYSPRINT DD SYSOUT=(,)
    	//SYSIN    DD *
    	 ALTER   VIN.VSAM.KSDSFILE  NOREUSE
    	 DEFINE  ALTERNATEINDEX( -
    			NAME(VIN.VSAM.ALT) -	alternate index cluster
    			VOLUME(VO39M1) -
    			RECORDS(20 5) -
    			RECORDSIZE(75 115) -
    			NONUNIQUEKEY -
    			UPGRADE -		update aix along with base
    			KEYS(10 5)  -		alternate key
    			RELATE(VIN.VSAM.KSDSFILE) ) -	base cluster
    		  DATA(  NAME(VIN.VSAM.ALT.DATA))  -
    		  INDEX( NAME(VIN.VSAM.ALT.INDEX))
    Calculating the record length of alternate cluster
    Unique Case: 5 + ( alt-key-length + primary-key )
    Nonunique Case: 5 + ( alt-key-length + n * primary-key ), where n = # of duplicate records for the alternate key
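    As a worked example (assuming a 10-byte alternate key, a 6-byte primary key and 3 records sharing the same alternate key):

```text
	Unique:    5 + (10 + 6)     = 21 bytes
	Nonunique: 5 + (10 + 3 * 6) = 33 bytes
```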

    BLDINDEX

    BLDINDEX loads the Alternate Index dataset.

    Formats:
    BLDINDEX INFILE(ddname of base cluster) OUTFILE(ddname of AIX or path)
    BLDINDEX INDATASET(VSAM base cluster name) OUTDATASET(AIX file or path)


    IDCAMS uses internal sort work space when building an AIX; if this space is exhausted during the BLDINDEX task, it attempts to dynamically allocate external sort work datasets. To be assured of successful execution, allocate external sort work datasets explicitly, as below.
    	//IDCUT1 DD   DSN=aannV.WORK1,VOL=REF=clustername,DISP=OLD,AMP='AMORG'
    	//IDCUT2 DD   DSN=aannV.WORK2,VOL=REF=clustername,DISP=OLD,AMP='AMORG'
    
    	where:	aannV       is the HLQ of the VSAM base cluster
    		clustername is the associated VSAM base cluster dsname

    IDCAMS for building an AIX and Path

    	//STEP6     EXEC PGM=IDCAMS
    	//SYSPRINT  DD   SYSOUT=*
    	//DD1       DD   DISP=OLD,DSN=SDID.MVS.MEMBER.LISTV
    	//DD2       DD   DISP=OLD,DSN=SDID.MVS.MEMBER.LISTV.AIX
    	//SORTMSG   DD   DISP=SHR,DSN=SYS1.ICEISPM
    	//IDCUT1    DD   DISP=(,DELETE),AMP='AMORG',UNIT=SYSDA,SPACE=(CYL,12)
    	//IDCUT2    DD   DISP=(,DELETE),AMP='AMORG',UNIT=SYSDA,SPACE=(CYL,12)
    	//SYSIN     DD   *
    			BLDINDEX	-
    			SORTMESSAGEDD(SORTMSG)	-
    			EXTERNALSORT	-
    			INFILE(DD1) OUTFILE(DD2)
    		IF LASTCC = 0		-
    		THEN DO
    			DEFINE  PATH	-
    			(       NAME (SDID.MVS.MEMBER.LISTV.PATH)		-	path
    				PATHENTRY(SDID.MVS.MEMBER.LISTV.AIX)		-	aix cluster
    				NOUPDATE	)
    			END
    	/*
    	//*
    DEFINE PATH

    When the records of the base cluster are processed via the alternate keys stored in the alternate index, the base cluster is not accessed directly but through a catalog entry called a path.

    The DEFINE PATH command creates an access route from the alternate index through the primary index to the data record.
    Format:
    	DEFINE  PATH  ( NAME (entry-name)
    		PATHENTRY(entry-name from Define AIX)
    		[ {UPDATE | NOUPDATE} ]
    		[CATALOG  (name)]
    The NAME parameter specifies the name of the path. PATHENTRY specifies the name of the AIX to which this path is related. The third parameter specifies whether the dataset can be updated while being processed through this path. In general, a DEFINE PATH command is issued after the DEFINE AIX command.

    DEFINE GENERATIONDATAGROUP

    Creates a GDG base.
    	DEFINE  GENERATIONDATAGROUP
    		( NAME (VINCENT.SAMPLE.GDG) -
    		LIMIT (n) -             no. of generations allowed, max 255
    		[EMPTY/NOEMPTY ] -
    		[SCRATCH/NOSCRATCH]
    NOEMPTY removes only the oldest generation when the limit is reached. SCRATCH specifies that uncataloged GDG generations are to be scratched (physically deleted from the volume).
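    As a sketch, a new generation of the GDG defined above can be created in JCL by coding a relative generation number (the DD parameters shown are illustrative):

```text
	//STEP1   EXEC PGM=IEFBR14
	//NEWGEN  DD  DSN=VINCENT.SAMPLE.GDG(+1),
	//            DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
	//            SPACE=(TRK,(1,1)),
	//            DCB=(LRECL=80,RECFM=FB,BLKSIZE=0)
```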

    DELETE

    Deletes a VSAM dataset. If the dataset is date-protected - that is, the DEFINE specified a retention date that has not yet expired - the PURGE option is required to delete it. The default is NOPURGE.

    DELETE deletes the catalog entry of the cluster and marks the space used by the cluster as reclaimable. The data is not available but is still present until the area is reused. This introduces a security risk for sensitive data as it could be retrieved using some DUMP/RESTORE utilities. The ERASE function will write over the data area used by the cluster and the original data is destroyed. The default is NOERASE.
    	//	EXEC PGM=IDCAMS
    	//SYSPRINT DD  SYSOUT=*
    	//SYSIN    DD  *
    	 DELETE MFTEST.DATA.VKSD0080	-
    			PURGE	-
    			ERASE	-
    			CLUSTER
    	/*
    LISTCAT

    LISTCAT lists the cluster's catalog entry and has these command parameters:
  • ENTRIES identifies the entry to be listed
  • CLUSTER specifies that only the cluster entry is to be listed. If not specified, the cluster's data and index entries would also be listed
  • ALL specifies that all fields of the cluster entry are to be listed
    	LISTCAT -
    	ENTRIES(VINCEDS.EXAMPLE.KSDS1) -
    		CLUSTER -
    		ALL
    PRINT

    The PRINT command prints the contents of a dataset. It can also be used to detect an empty file, returning a condition code of 4 if the file is empty.
    	PRINT INFILE(DD1) CHAR COUNT(1)
    REPRO

    The REPRO command copies VSAM and non-VSAM data sets, copies catalogs, and unloads and reloads VSAM catalogs.
    	//STEP1 EXEC PGM=IDCAMS
    	//IFILE    DD DISP=SHR,DSN=VINCENT.SAMPLE.KSDS.KS
    	//OFILE    DD DSN=VINCENT.SAMPLE.FLATFILE,
    	//     DISP=(,CATLG,DELETE),DCB=(LRECL=140,RECFM=FB,BLKSIZE=0),
    	//     SPACE=(CYL,(2,1),RLSE),UNIT=(SYSDA)
    	//SYSIN    DD *
    		REPRO INFILE(IFILE) -
    			OUTFILE(OFILE)
    	/*
    INFILE/IFILE and OUTFILE/OFILE - source & target ddnames
    INDATASET/IDS and OUTDATASET/ODS - source & target data set names

    Optional parms:

    FROMADDRESS(address)
    TOADDRESS(address) where 'address' specifies the RBA of the input record
    FROMNUMBER(rrn)
    TONUMBER(rrn) where 'rrn' specifies the RRN of the RRDS record
    FROMKEY(key)
    TOKEY(key) where 'key' specifies the key of the input record
    SKIP(number)
    COUNT(number) where 'number' specifies the number of records to skip or copy
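    For example (dataset names and key values are illustrative), copying only a key range of a KSDS:

```text
	REPRO INDATASET(VINCENT.SAMPLE.KSDS)	-
	      OUTDATASET(VINCENT.SAMPLE.KSDS2)	-
	      FROMKEY(100000) TOKEY(199999)
```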

    VERIFY

  • The VERIFY command is used to verify, and if necessary, update the end of file information in the VSAM catalog in order to make the catalog information consistent with the data file
  • VERIFY cannot be used for an empty VSAM file where the high used RBA (Relative Byte Address) in its catalog record is 0
  • The creation of a low value record in an empty VSAM file is normally the first step VSAM performs after a file is defined.
  • When a VSAM dataset is closed in an update program, VSAM will update the EOF information in both the VSAM catalog and the data file. If an update program or the OS fails, VSAM may not close the file properly and be unable to update information in the file's catalog record
  • The VSAM file's catalog record has the high used RBA that specifies the EOF address. If this field is not updated, the information stored in the catalog record does not agree with the actual contents of the file
  • After an abnormal termination involving a VSAM file, VERIFY must be executed to correct the catalog record information before the file is used again
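    A minimal VERIFY job (the dataset name is illustrative):

```text
	//VERIFY1  EXEC PGM=IDCAMS
	//SYSPRINT DD  SYSOUT=*
	//SYSIN    DD  *
	 VERIFY DATASET(VINCENT.SAMPLE.KSDS)
	/*
```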


    Initializing VSAM files

    VSAM files present a problem: at least one data record must be loaded into the file before it can be opened for input or update processing. This is because VSAM issues a VERIFY upon opening a file to reset the EOF pointer; if the file has never been loaded, the VERIFY fails because the HI-USED-RBA is still zero. The HI-USED-RBA can be set to a non-zero value by writing a record to the VSAM file in "load" mode and then deleting that record, emptying the file while leaving the HI-USED-RBA non-zero.
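    A sketch of this initialization in COBOL (file, record and key names are hypothetical; the file is assumed to be a KSDS):

```text
	* Load mode: write one low-value record to set HI-USED-RBA
	    OPEN OUTPUT KSDS-FILE
	    MOVE LOW-VALUES TO KSDS-RECORD
	    WRITE KSDS-RECORD
	    CLOSE KSDS-FILE
	* Reopen for update and delete the dummy record, leaving an
	* empty file with a non-zero HI-USED-RBA
	    OPEN I-O KSDS-FILE
	    MOVE LOW-VALUES TO KSDS-KEY
	    READ KSDS-FILE
	    DELETE KSDS-FILE RECORD
	    CLOSE KSDS-FILE
```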

    Reorganization of VSAM files

    The reorganization process consists of unloading the data records to a sequential file, DELETEing and DEFINEing the VSAM file again, and reloading the data from the sequential file. This reorganizes the VSAM file for more efficient processing by redistributing the free space throughout the file and eliminating split data blocks.
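    The steps above can be sketched with IDCAMS (dataset names are illustrative; the DEFINE would repeat the parameters of the original cluster):

```text
	/* 1. Unload the KSDS to a sequential file */
	 REPRO INDATASET(VINCENT.SAMPLE.KSDS)	-
	       OUTDATASET(VINCENT.SAMPLE.UNLOAD)
	/* 2. Delete and redefine the cluster */
	 DELETE VINCENT.SAMPLE.KSDS CLUSTER PURGE
	 DEFINE CLUSTER (NAME(VINCENT.SAMPLE.KSDS) ... )
	/* 3. Reload the data in key sequence */
	 REPRO INDATASET(VINCENT.SAMPLE.UNLOAD)	-
	       OUTDATASET(VINCENT.SAMPLE.KSDS)
```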


    VSAM PROPERTIES

    Control Interval Size (CISZ)

    The Control interval (CI) is a unit of data that is transferred between auxiliary storage and virtual storage when an I/O request is made. It contains records, free space and control information. Max size of the CI is 32K.

    Control information consists of RDF (Record Descriptor Field) and CIDF (Control Interval Descriptor Field). Every CI contains one CIDF (the last 4 bytes of the CI) that contains the offset and the length of free space in the CI. In case of fixed size records, each CI contains two RDFs of 3 bytes length each. In case of variable size records, there is a separate RDF for each record in the CI. The size of the data portion of the CI should be a multiple of 512 bytes or 2,048 bytes and the size of the index portion could be 512, 1024, 2048, 4096 and so on.
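    As a worked example, for fixed-length 100-byte records in a 4,096-byte CI:

```text
	Control information: 1 CIDF (4 bytes) + 2 RDFs (3 bytes each) = 10 bytes
	Space for records:   4096 - 10  = 4086 bytes
	Records per CI:      4086 / 100 = 40 records (86 bytes left over)
```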

    Freespace

    Freespace is specified as Freespace(a b), where a is the percentage of freespace in each CI and b is the percent of free CIs in each CA. Use of freespace reduces CI/CA splits if used properly. Insufficient or excessive freespace can degrade performance and overutilize DASD. With insufficient freespace, CI/CA splits will degrade sequential processing as the dataset will not be stored in physical sequence. An excessive amount will waste DASD space due to the reduction of the effective blocking factor of the data. Also, extra I/O will be required to transfer partially filled CIs in sequential processing.

    When allocating freespace ensure that the record size plus the length of RDFs and CIDFs is taken into account. If there is an even distribution of inserts then it is best to specify the majority as CI freespace and a little CA freespace and if it is uneven then a little CI freespace and a lot of CA freespace.

    Shareoptions

    - a parameter in the DEFINE and specifies how an object can be shared among users.
    - coded as SHAREOPTIONS(a b), where a is the cross-region share option, i.e. how two or more jobs on a single system can share the file, and b is the cross-system share option, i.e. how two or more jobs on different MVS systems can share the file. The usual value is (2 3).

    Recovery/Speed

    SPEED and RECOVERY are mutually exclusive options that are only applicable for the initial load of VSAM files.

    When RECOVERY is used, IDCAMS loads the data into a cluster and also preformats another Control Area with EOF records. If the system crashes during the load, a subsequent read of the file can identify how far the load had progressed before the failure, and the load can be restarted from that point.

    Using SPEED does no preformatting of Control Areas with EOF records so the initial load takes place faster and fewer I/Os are performed.

    Apart from the overhead of RECOVERY, it also requires the user to work out where the failure occurred and restart the load from the correct point. Except for large initial loads, it takes more time to work out the restart logic than to simply rerun the whole job. Hence SPEED is preferable to RECOVERY when performing the initial load for all but very large files.


    VSAM STATUS CODES

    	00	Operation completed successfully
    	02	Duplicate key was found
    	04	Invalid fixed-length record
    	05	The file was created when opened - successful completion
    	07	CLOSE with REEL or NO REWIND executed for a non-tape dataset
    	10	EOF encountered
    	14	Attempted to READ a relative record outside file boundary
    	21	Invalid key - sequence error
    	22	Invalid key - duplicate key found
    	23	Invalid key - no record found
    	24	Invalid key - key outside boundary of file
    	30	Permanent I/O error
    	34	Permanent I/O error - record outside file boundary
    	35	OPEN, but file is empty
    	37	OPEN with wrong mode
    	38	Tried to OPEN a LOCKed file
    	39	OPEN failed, conflicting file (DCB) attributes
    	41	Tried to OPEN a file that is already open
    	42	Tried to CLOSE a file that is not OPEN
    	43	REWRITE without READing a record first
    	44	Tried to REWRITE a record of a different length
    	46	Tried to READ beyond EOF
    	47	READ from a file that was not opened I-O or INPUT
    	48	WRITE to a file that was not opened I-O or OUTPUT
    	49	DELETE or REWRITE to a file that was not opened I-O
    	91	Password or authorization failed
    	92	Logic error
    	93	Resource was not available (may be allocated to CICS or another user)
    	94	Sequential record unavailable or concurrent OPEN error
    	95	File information invalid or incomplete
    	96	No DD statement for the file
    	97	OPEN successful and file integrity verified
    	98	File is locked - OPEN failed
    	99	Record locked - record access failed

    The AMP parameter

    The AMP parameter is used to complete information in an access method control block (ACB) for a VSAM data set. The ACB is a control block for entry-sequenced, key-sequenced, and relative record data sets.

    The AMP parameter has the following subparameters
  • AMORG - It is required in the following cases
    - when dataset access is through the ISAM interface program and the DD statement contains VOLUME and UNIT parameters or contains a DUMMY parameter
    - To open an ACB for a VSAM dataset, if the dataset is not fully defined at the beginning of the job step

  • BUFND=n - Specifies the number of I/O buffers that VSAM is to use for data records. The minimum (and default) is 1 plus the STRNO subparameter number. This value overrides the BUFND value specified in the ACB or GENCB macro, or provides a value if one is not specified. If STRNO is omitted, BUFND must be at least 2.

  • BUFNI=n - Specifies the number of I/O buffers that VSAM is to use for index records. Default = 1

  • BUFSP=n - the maximum amount of buffer space in bytes for the data and index components.

  • OPTCD=I/L/IL - indicates how the ISAM interface program is to process records that the step's processing program flags for deletion.
    I - requests, when the DCB contains OPTCD=L, that the ISAM interface program is not to write into the dataset records marked for deletion by the processing program. Without OPTCD=L in the DCB, the system ignores deletion flags on records.

    L - requests that the ISAM interface program is to keep in the dataset records marked for deletion by the processing program. If records marked for deletion are to be kept but OPTCD=L is not in the DCB, AMP=('OPTCD=L') is required.

    IL - requests that the ISAM interface program is not to write into the dataset records marked for deletion by the processing program. If the processing program had read the record for update, the ISAM interface program deletes the record from the dataset. AMP=('OPTCD=IL') has the same effect as AMP=('OPTCD=I') coded with the OPTCD=L in the DCB.

  • RECFM=F/FB/V/VB - identifies the ISAM record format used by the processing program. Must be coded when the record format is not specified in the DCB.

  • STRNO=n - Indicates the number of request parameter lists the processing program uses concurrently

  • SYNAD=module - Names a SYNAD exit routine. The ISAM interface program is to load and exit to this routine if a physical or logical error occurs when the processing program is gaining access to the dataset



    System Managed Buffering

    In System Managed Buffering (SMB), the system decides how many buffers to use for data and index portions, and also whether to use direct or sequential buffering.

    SMB is invoked via JCL, using
    	//ddname  DD DSN=vsam.cluster.name,AMP=('ACCBIAS=SYSTEM'),DISP=SHR




    HL Assembler

    Assembler features

  • Efficient processing and memory usage
  • Bit-level processing
  • System programming

    Registers

    Registers are specialized high-speed storage areas within the mainframe's processor used to perform arithmetic and addressing operations. There are different types of registers; the General Purpose Registers (GPRs) are the most widely used. Each GPR is 64 bits (32 bits in older systems) in length. There are sixteen (16) GPRs, identified by the numbers 0 through 15.

    If a 64-bit GPR is used in 31-bit addressing mode, only the rightmost 32 bits are used. In most cases, the content of a 32-bit GPR is treated as a 31-bit numeric value in the rightmost (low-order) bit positions with a 1-bit sign in the leftmost (high-order) bit position (0 - positive, 1 - negative). Equivalently, the length (or width) of such a GPR can be expressed as 4 bytes or 1 fullword.

    Other types of registers are Floating Point Registers and Access Registers.


    Instructions

    Instruction format

    An Assembler language instruction has the following layout
    	LABEL space INSTRUCTION space OPERANDS space COMMENTS
  • The LABEL may be omitted.
  • The INSTRUCTION (operator or pseudo-op) conventionally goes in column 10.
  • The OPERANDS conventionally go in column 16.
  • Comments can go in the position shown above in the instruction layout, or there can be a whole comment line. A whole comment line is indicated with a '*' in column 1.

    Instructions can be classified into two categories
  • Machine Instructions
  • Assembler Instructions

    Machine Instructions

    Format

    The assembler requires the instructions (assembler/machine instructions and assembler macros) used in the source code to be coded in a specific syntax. Likewise, mainframe computers require that machine instructions in executable load modules follow a specific format.

    A machine instruction, in its internal (executable) form, contains an Operation Code (OpCode) and operands. The first bytes of each instruction contain the OpCode and the rest contain the operands. The OpCode indicates what action the CPU should take and the operands supply the information needed to carry it out. Operands are classified as registers, storage locations, or immediate data. This means that the data associated with the instruction is contained in a register, a storage (memory) location, or 'immediately' embedded within the instruction itself.

    Machine instructions, in their internal form, are either 1,2 or 3 halfwords (2,4 or 6 bytes) in length. So, in addition to telling the CPU what to do, the bit-configuration of the OpCode also indicates the length of the internal machine instruction.

    The first 2-bits of the OpCode indicate the length of the machine instruction.
    	
    	00		- one-halfword (2 bytes)
    	01 or 10	- two-halfwords (4-bytes)
    	11		- three-halfwords (6 bytes)
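    As an example, the BC (Branch on Condition) instruction has OpCode X'47':

```text
	X'47' = B'0100 0111'
	First two bits = 01  ->  two halfwords (4 bytes), matching BC's RX format
```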
    Types
    	Instruction type	Meaning			Instruction Length		Data processed 
    	RR		register to register 			2 bytes			binary
    	RX		register to storage with index		4 bytes			binary
    	RS		register to storage 			4 bytes			binary
    	SI		Storage to immediate 			4 bytes			Any data type
    	SS		storage to storage	 		6 bytes			Packed or character data
    A (Add) - Algebraically adds the value addressed by the second operand to the contents of the first operand register. If an overflow condition results because of the add, the carry into the sign bit is allowed and the carry out of the sign bit is ignored, and condition code 3 is set.

    Related Instructions
    AR (Add Registers), AH (Add Halfword), AL (Add Logical), ALR (Add Logical Registers), AP (Add Decimal/Add Packed), S (Subtract), SH, SL, SLR, SR, M, MH, MP, MR, D, DP, DR
    	A   R3,24(R5,R12)    Add quantity at R5 address + R12 address + 24 to R3
    	A   R14,FULLWORD     Add FULLWORD to R14
    	A   R0,=F'1'         Add 1 TO R0
    	A   R15,0(,R7)       Add value pointed to by R7 to R15
    	AP  COUNTER,=P'1'    Add 1 TO Counter

    BALR (Branch and Link Register) - used to:
    1. Branch to a subroutine in a different CSECT - provided that it can be invoked in the same addressing mode as the caller. In this case R14 and R15 are usually used for the return address and destination address respectively
    2. Branch to a subroutine in the same CSECT - provided that it can be invoked in the same addressing mode as the caller, which will normally be the case. It is not unusual to use registers other than (and less volatile than) R14 and R15 in this case
    3. Obtain addressability to the current routine

    Related Instructions
    BASR (Branch and Save Register), BCR, BAL, BASSM
    	BALR  R1,R2		Branch to the address in R2. Before the branch, place the address of the next instruction after BALR in R1

    BAKR (Branch and Stack) - Creates a linkage stack state entry and then branches to the address in the second operand register. The address in the first operand register when BAKR executes is saved in the newly created linkage stack entry as the return address; a PR (Program Return) instruction executed by the program invoked by BAKR will return to this saved return address.

    Related Instructions
    BALR (Branch and Link Register), BASR (Branch and Save Register), BAS, BAL
    	BAKR  R14,R15		Create stack entry & branch to address in R15, return to addr in R14
    	BAKR  R14,0		Create a stack entry but do not branch
    	BAKR  0,R11		Create stack entry & branch to address in R11, but return to next sequential instruction since 'R1' field is zero
    BC (Branch on Condition) - Branch to a storage location whose address is specified by the second operand. The branch occurs if the condition code value last set within the program maps to the mask bit value specified in the first operand.

    	format:	BC   M1,D2(X2,B2)	Opcode:  47
    	e.g:	BC    8,0(R1,R3)	Branch to address R1 + R3 if mask 8 matches (condition code 0)
    		BC    13,FIXIT		Branch to "FIXIT" label if condition code 0, 1 or 3 is set
    BCT (Branch on Count) - Subtracts 1 from the first operand; if the result is non-zero, branches to a storage location whose address is specified by the second operand.

    	format:	BCT   R1,D2(X2,B2)	Opcode:  46
    	e.g:	BCT   R5,LOOP		Subtract 1 from R5; branch to LOOP if the result is non-zero
    C (Compare) - Compares the value in the first register to the value at the second address, and sets the condition code based upon the result

    Related Instructions
    CR (Compare Registers), CH (Compare Halfword), CL (Compare Logical), CLR (Compare Logical Registers), CP (Compare Packed/Decimal)
    	CR  R4,R10		Compare values in R4 and R10
    	CP  0(2,R14),2(5,R14)	Compare 2-byte packed field at R14 + 0 to 5-byte packed field at R14 + 2
    	CP  COUNTER,=P'100'	Compare COUNTER to packed decimal 100
    	
    CLC (Compare Logical Character) - Compares from 1 to 256 bytes addressed by the first operand address to an identical number of bytes addressed by the second operand and set the condition code based upon the result
    	format:	CLC   D1(L1,B1),D2(B2)		Opcode:  D5
    	e.g:	CLC   0(256,R3),10(R12)	Compare 256 bytes at R3 address to 256 bytes at R12 address plus 10 decimal
    		CLC   =CL10' ',0(R1)	Compare 10 byte literal blank string to 10 bytes at address in R1
    CLI (Compare Logical Immediate) - Compares the value at the storage location addressed by the first operand with a byte of immediate data, and sets the CC based upon the result.

    	format:  CLI   D1(B1),I2     Opcode:  95
    	e.g:     CLI   0(R2),255		Compare value at R2 address to decimal 255
    		 CLI   BLANK,C' '		Compare value at BLANK label to ' '
    		 CLI   X'10'(R14),X'80'		Compare value at R14 address + X'10' to X'80'
    ICM (Insert Characters Under Mask) - Inserts the consecutive bytes addressed by the second operand into one or more of the four bytes in the right half of the first operand register. The bytes are mapped by 'one' bits in the mask (M3 value in instruction format). The mask bits correspond one for one with the 4 bytes in the rightmost 32 bits (bits 32-63) of the first operand register; if a bit is on, a byte from main storage is inserted into the corresponding first operand register byte; if the bit is off, no insertion into that register byte occurs.
    	format:  ICM   R1,M3,D2(B2)		Opcode:  BF
    	e.g:     ICM   R11,B'1010',0(R4)	Insert 2 bytes at address in R4 into bits 32-39 and bits 48-55 of R11
    		 ICM   R9,2,SWITCH		Insert byte at SWITCH label into byte 6 (bits 48-55) of R9
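    The mask-driven insertion can be sketched in Python (32-bit register model; the `icm` helper is invented for illustration):

```python
def icm(reg, mask, storage):
    """ICM semantics: insert consecutive storage bytes into the bytes
    of the (32-bit) register whose mask bit is 1, left to right; one
    storage byte is consumed per selected register byte."""
    out = bytearray(reg.to_bytes(4, "big"))
    it = iter(storage)
    for i in range(4):              # i = 0 is the leftmost of the 4 bytes
        if mask & (8 >> i):
            out[i] = next(it)
    return int.from_bytes(out, "big")

# ICM R11,B'1010',... inserts 2 bytes into bytes 0 and 2 of the word
assert icm(0x00000000, 0b1010, b"\xAA\xBB") == 0xAA00BB00
```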
    L (Load) - loads from a fullword in main storage into a GPR. The data to be loaded should be on a fullword boundary in main storage (the address of the word should be divisible by 4).

    Related Instructions
    LG (Load 64 bits), LR (Load Register), LGR (Load 64-bit Register)
    	L   R6,4(R5,R12)	Load R6 from R5 address + R12 address + 4
    	L   R14,FULLWORD	Load R14 from "FULLWORD"
    	L   R0,=A(MYDATA)	Load a literal ADCON
    	L   R15,X'22'(,R15)	Load FROM R15 address + X'22'
    	LG  R0,=FD'1077256'	Load doubleword literal value
    	LGR R6,R5
    LA (Load Address) - loads the address specified by the second operand into a GPR. The values in the X2 (index reg.), B2 (base reg.) and D2 (displacement value) portions of the second operand are added, following the rules of address arithmetic, and the sum is placed in the GPR.

    	LA  R6,4(R5,R12)	Load R6 with R5 address + R12 address + 4
    	LA  R14,1(,R14)		Add 1 to value in R14
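    LA's address arithmetic can be sketched in Python (24-bit mode shown; 31- and 64-bit modes differ only in the truncation width; the helper name is invented):

```python
def la(d2, x2=0, b2=0):
    """LA semantics: sum the displacement, index register value and
    base register value, truncated to the addressing width."""
    return (d2 + x2 + b2) & 0xFFFFFF    # 24-bit addressing shown

# LA R6,4(R5,R12): displacement 4 plus the values in R5 and R12
assert la(4, x2=0x1000, b2=0x2000) == 0x3004
# LA R14,1(,R14): the common add-1 idiom; wraps at the addressing limit
assert la(1, b2=0x00FFFFFF) == 0
```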
    LTR (Load and Test Register) - Loads the value in the second register into the first and sets the condition code based upon the magnitude of the transferred value. Both operands can specify the same register. Both registers' contents are treated as 32-bit signed binary values, occupying the right 32 bits of each of the first and second operand registers.
    	LTR   R14,R1	Load R1 contents into R14 and set CC
    	LTR   R0,R0	Test value in R0
    MVC (Move Character) - Moves from 1 to 256 bytes from one main storage location to another. The storage locations may overlap each other. If the length operand is not explicitly specified in the first operand, the implicit length of the first operand symbol is used.
    	format:  MVC  D1(L1,B1),D2(B2)		opcode:  D2
    	e.g:     MVC  10(256,R5),0(R5)		Move 256 bytes from R5 address to R5 plus 10(Decimal) address
    		 MVC  TITLE(7),=C'1 ERROR'	Move 7 byte literal to "TITLE" address
    
    	WORD	 DS   F
    		 MVC  WORD,=F'1'		Move 4 bytes to "WORD"
    MVCL (Move Character Long) - Moves from 1 to 2G bytes in main storage. Both operands designate the even register of an even-odd pair and hold the addresses of the receiving and sending fields. The odd registers hold the lengths of the receiving and sending fields.
    	format:  MVCL  R1,R2		opcode:  0E
    	e.g:     MVCL  R0,R6		Move data at address in R6 to address in R0 - R1 & R7 contain the move lengths
    OR (OR Registers) - Performs a boolean OR between two registers and stores the result in the first register.

    Related Instructions
    O (Or), OI (Or Immediate), OC (Or Characters), N (And), NI, NC , NR, X (Xor), XI, XC, XR
    	OR	R3,R14		OR R3 with R14
    	OR	R9,R9		OR R9 with itself
    	NI	BYTE1,X'0F'	AND value at BYTE1 with immediate byte of X'0F'
    	NI	0(R10),255	AND value AT R10 address plus 0 with immediate byte of 255 decimal
    	X	R3,0(,R6)		XOR R3 with 4 bytes at address in R6
    	X	R12,=X'00FF00FF'	XOR R12 with 4 byte hex Constant
    ST (Store) - stores the rightmost four bytes of a GPR at a fullword location in main storage.
    	ST  R6,4(R5,R12)	Store right 4 bytes in R6 at R5 address + R12 address + 4
    	ST  R14,FULLWORD	Store right 4 bytes in R14 at "FULLWORD"
    	ST  R0,0(,R3)		Store right 4 bytes in R0 at R3 address
    	ST  R15,X'22'(,R15)	Store right 4 bytes in R15 at R15 address + X'22'
    STCM (Store Characters under Mask) - Stores selected bytes from the first operand register into consecutive bytes addressed by the second operand. The bytes in R1 that are stored are mapped by one bits in the mask. The mask bits correspond one-for-one with the 4 bytes in the right half of R1; if a bit is on, a byte from R1 is stored into a byte at the second operand address; if the bit is off, no store of that byte occurs.
    	format: STCM  R1,M3,D2(B2)	opcode:  BE
    	e.g:    STCM  R8,B'1010',0(R4)	Store 4th and 6th bytes of R8 into 2 bytes at address in R4
                    STCM  R9,2,SWITCH	Store 6th byte of R9 at label SWITCH
                    STCM  R2,15,0(R3)	Store all four of the rightmost bytes in R2 at the address in R3
    TM (Test under Mask) - uses a one-byte mask to test bits in the byte at the first operand address.

    	format:  TM  D1(B1),I2			opcode:  91
    	e.g:     TM  0(R2),B'00001000'		Test bit 4 in the byte at the address in R2
    	         TM  BYTE1,X'82'		Test bits 0 and 6 in the byte at label BYTE1
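    TM's three outcomes - all selected bits zero, mixed, all ones - map to condition codes 0, 1 and 3 (tested with BZ, BM and BO). A Python sketch, with an invented helper name:

```python
def tm(byte, mask):
    """TM semantics: AND the mask with the byte and classify the
    selected bits: 'zero' (CC 0), 'mixed' (CC 1) or 'ones' (CC 3)."""
    hit = byte & mask
    if hit == 0:
        return "zero"                       # BZ branches
    return "ones" if hit == mask else "mixed"  # BO / BM branch

assert tm(0b00001000, 0b00001000) == "ones"   # TM 0(R2),B'00001000'
assert tm(0x80, 0x82) == "mixed"              # TM BYTE1,X'82'
assert tm(0x00, 0x82) == "zero"
```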
    UNPK (Unpack) - Converts the second operand field from packed to zoned decimal format by "unpacking" it into the field at the first operand address. Up to 16 bytes can be unpacked at once. The second operand is treated as though it has the packed format, but its digits and sign are not checked for validity.
                                              
    	Unpacked bytes: F1F2F3F4C5 		Packed Bytes: 12345C
    	Unpacked bytes: FAFBFCFDFE		Packed Bytes: ABCDEF
    The C at the far right of the packed field is the positive sign of the field. A, C, E and F are positive signs; B and D are negative signs.
    	format:  UNPK    D1(L1,B1),D2(L2,B2)    Opcode (Hex): F3
    
    	e.g	UNPK    16(16,R14),0(8,R14)	Unpack 8 byte field at R14 address to R14 address + 16 for length of 16
    		UNPK    DWORD,=P'4096'		Unpack packed literal "4096" into field at label DWORD
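    The nibble expansion above can be sketched in Python (a simplified model that ignores UNPK's length-driven padding and truncation; the helper name is invented):

```python
def unpk(packed):
    """Sketch of UNPK: each digit nibble becomes a byte with an F zone,
    except the last byte, where the sign nibble of the packed field
    becomes the zone of the final digit."""
    digits = []
    for b in packed:
        digits.append(b >> 4)
        digits.append(b & 0x0F)
    sign = digits.pop()                     # rightmost nibble is the sign
    out = bytes(0xF0 | d for d in digits[:-1])
    return out + bytes([(sign << 4) | digits[-1]])

# Packed 12345C unpacks to F1 F2 F3 F4 C5, as in the table above
assert unpk(bytes.fromhex("12345C")).hex().upper() == "F1F2F3F4C5"
```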
    ZAP (Zero and Add Packed) - Copies the packed field at the second operand address on top of the packed field at the first operand address. The second operand field must be in the packed format; the first operand field can be in any format. Zeros are used to fill in the first operand if it is longer than the second operand.

    The operands can overlap if the rightmost byte of the first operand packed field is at the same address or at a higher address than the rightmost byte of the second operand packed field.

    	format:	ZAP   D1(L1,B1),D2(L2,B2)	opcode:  F8
    	e.g:	ZAP   8(16,R3),0(8,R5)	copy 2nd packed field to first operand address
    		ZAP   ACCUM,=P'1'	zap a 1 into ACCUM field
    List of Other Machine Instructions
     
    	Instruction				Mnemonic	Hex	Format
    	Branch, Save and Set Mode		BASSM		0C	R1,R2
    	Branch on Condition Register		BCR		07	M1,R2
    	Branch on Count Register		BCTR		06	R1,R2
    	Branch and Set Mode			BSM		0B	R1,R2
    	Branch on Index High			BXH		86	R1,R3,D2(B2)
    	Branch on Index Low/Equal		BXLE		87	R1,R3,D2(B2)
    	Compare Double and Swap			CDS		BB	R1,R3,D2(B2)
    	Compare Logical Characters Long		CLCL		0F	R1,R2
    	Compare Logical under Mask		CLM		BD	R1,M3,D2(B2)
    	Compare and Swap			CS		BA	R1,R3,D2(B2)
    	Convert to Binary			CVB		4F	R1,D2(X2,B2)
    	Convert to Decimal			CVD		4E	R1,D2(X2,B2)
    	Edit					ED		DE	D1(L1,B1),D2(B2)
    	Edit and Mark				EDMK		DF	D1(L1,B1),D2(B2)
    	Execute					EX 		44	R1,D2(X2,B2)
    	Insert Character			IC		43	R1,D2(X2,B2)
    	Load Complement Registers		LCR		13	R1,R2
    	Load Halfword				LH		48	R1,D2(X2,B2)
    	Load Multiple				LM		98	R1,R3,D2(B2)
    	Load Negative				LNR		11	R1,R2
    	Load Positive				LPR		10	R1,R2
    	Move Inverse				MVCIN		E8	D1(L,B1),D2(B2)
    	Move Immediate				MVI		92	D1(B1),I2
    	Move Numerics				MVN		D1	D1(L,B1),D2(B2)
    	Move with Offset			MVO		F1	D1(L1,B1),D2(L2,B2)
    	Move Zones				MVZ		D3	D1(L,B1),D2(B2)
    	Pack					PACK		F2	D1(L1,B1),D2(L2,B2)
    	Shift Left Single			SLA		8B	R1,D2(B2)
    	Shift Right Single			SRA		8A	R1,D2(B2)
    	Shift Left Double			SLDA		8F	R1,D2(B2)
    	Shift Right Double			SRDA		8E	R1,D2(B2)
    	Shift Left Double Logical		SLDL		8D	R1,D2(B2)
    	Shift Right Double Logical		SRDL		8C	R1,D2(B2)
    	Shift Left Single Logical		SLL		89	R1,D2(B2)
    	Shift Right Single Logical		SRL		88	R1,D2(B2)
    	Shift and Round Decimal			SRP		F0	D1(L1,B1),D2(B2),I3
    	Store Character				STC		42	R1,D2(X2,B2)
    	Store Halfword				STH		40	R1,D2(X2,B2)
    	Store Multiple				STM		90	R1,R3,D2(B2)
    	Supervisor Call				SVC		0A	I1
    	Translate				TR		DC	D1(L1,B1),D2(B2)
    	Translate and Test			TRT		DD	D1(L1,B1),D2(B2)

    Extended Mnemonic Branch Instructions

    The extended mnemonic branch instructions can be used in place of the BC, BCR (Branch on Condition Register), BRC (Branch Relative on Condition) and BRCL (Branch Relative on Condition Long) instructions. They include
    General: B, BR, NOP, NOPR, J, JNOP, BRUL, JLU
    After Compare: BH, BHR, BL, BLR, BE, BER, JE, JH, JL
    After Arithmetic: BM, BP, BO, BNZ, BNZR
    After Test under Mask: BO, BM, BZ, BNO, BNM, BNZ

    Assembler Instructions

  • CSECT (Control Section) - The CSECT assembler instruction identifies the beginning of a Control Section and assigns a name to it
    	IEFBR14  CSECT
    		 SR   15,15
    		 BR   14
    		 END
    	
    The name of the above CSECT is IEFBR14. The CSECT is the smallest programmable unit that the Linkage Editor operates on. The Linkage Editor uses the assembler's output and creates an executable module. Multiple CSECT statements are allowed within a source module. If a CSECT statement is specified with the same label as a previous CSECT, the assembler assumes that the statements that follow the CSECT statement continue the definition of the named CSECT.

  • DC (Define Constant) - tells the assembler to define a constant (an initialized storage area). SAVEAREA DC 18F'0' directs the assembler to allocate a storage area named SAVEAREA the size of 18 fullwords, initialized to zeros.

  • DS (Define Storage) - Used to reserve storage within a program. The assembler reserves storage but does not initialize the storage to any value.

    The symbol in the name field is assigned the address of the first byte of the reserved storage.
    	CONST#4  DS F		Fullword Reserved Area
    	HEXTAB   DS 256X	256 bytes
    	
  • DSECT - The DSECT instruction identifies the beginning or continuation of a dummy control section. One or more dummy sections can be defined in a source module. The statements that appear in a dummy section are not assembled into object code.

    When establishing the addressability of a dummy section, the symbol in the name field of the DSECT instruction, or any symbol defined in the dummy section can be specified in a USING instruction.

  • END - Ends the assembly of a program. It can also be supplied an address in the operand field to which control may be transferred after the program is loaded. The END instruction must always be the last statement in the source program.

  • EQU - is used to equate the value of an expression to the symbol in the name field of the EQU instruction.

  • OPSYN - defines replacement symbols

  • PRINT - Controls the amount of detail printed in the listing of the program. e.g. PRINT NOGEN suppresses the printing of macro expansions.

  • PUNCH - makes the Assembler produce inline text strings of 1 to 80 characters within an object module. The PUNCHed text appears in the object deck immediately after the PUNCH statement.
    PUNCH is used to produce in-line linkage editor control statements in an object module or to produce output suitable for use by another programming language or function.

  • SPACE - tells the assembler to leave a blank line in the listing providing visual separation in the assembler listing of the print lines previous to and following the SPACE statement

  • TITLE - tells the assembler to skip to a new page in the assembly program listing and to use the text specified in the first operand as a title for all the following pages until another TITLE statement is encountered.

  • USING - tells the assembler how to generate storage addresses, in base register-displacement form, for the symbols used in the source code. The USING statement requires two operands: the first specifies the beginning storage address that the second operand register contains. For example, USING PROGRAM2,12 directs the assembler to use R12 as the base register when generating storage addresses and to treat the label PROGRAM2 as the first byte that R12 points to, i.e. PROGRAM2 is at displacement zero. The base-displacement form of the storage location represented by the label PROGRAM2 would thus be register 12 plus a displacement value of zero.

    Comments

    Comments are included in assembler source code by an asterisk (*) in column 1. e.g.
    	PROGRAM2 CSECT
    	* ENTRY LOGIC
    		STM   14,12,12(13)	Save the caller's registers
    		LR    12,15		Set GPR12 as program base register
    		USING PROGRAM2,12
    		LA    15,SAVEAREA	Temporarily point GPR15 to SAVE AREA
    		ST    13,4(,15)		SAVE THE ADDRESS OF THE CALLER'S    -
    						SAVE AREA IN SAVE AREA'S HSA SLOT
    		ST    15,8(,13)		SAVE THE ADDRESS OF SAVE AREA IN -
    						THE CALLER'S SAVE AREA'S LSA SLOT
    		LR    13,15		POINT GPR 13 TO SAVE AREA
    	*
    	* (Main-line logic goes here)
    	*

    Back


    MACROS

    Macros Definition

    Macros are sets of assembler instructions that can be invoked within assembler programs. One disadvantage of macros is that each invocation expands into a fresh copy of the generated instructions. System-supplied macros reside in the PDS SYS1.MACLIB.

    Macros are defined as follows:

    In the beginning of the program,
    	MACRO
    	macro-name &name1,&name2,...,&namen   [prototype statement]
    		.........
    	assembler statements containing &name1...&namen
    		.........
    	MEND
    In body of the program,
    	macro-name arg1,arg2,...,argn [macro call]
    Assembler statements will be copied substituting arg1 for &name1, arg2 for &name2,... argn for &namen.

    In the listing file, generated statements are flagged by a "+" in the left-hand column.
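    The substitution described above can be sketched in Python (a toy expander for illustration only; HLASM's actual macro processor is far richer, with keyword parameters, conditional assembly and more):

```python
def expand_macro(body_lines, params, args):
    """Sketch of macro expansion: positional arguments replace the
    &name symbols of the prototype, and each generated line is
    flagged with '+' as in the assembler listing."""
    subs = dict(zip(params, args))
    out = []
    for line in body_lines:
        for name, arg in subs.items():
            line = line.replace(name, arg)
        out.append("+" + line)
    return out

body = ["        L    R1,&name1", "        ST   R1,&name2"]
out = expand_macro(body, ["&name1", "&name2"], ["FIELDA", "FIELDB"])
assert "FIELDA" in out[0] and "FIELDB" in out[1]
assert all(line.startswith("+") for line in out)
```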

    System Macros

  • ABEND - The ABEND macro is used to initiate error processing for a task. ABEND can request a full or tailored dump of virtual storage areas and control blocks pertaining to the tasks being abnormally terminated.
  • CALL - The CALL macro passes control to a CSECT at a specified entry point
  • FREEMAIN - Used to free one or more areas of virtual storage. Also, can be used to free an entire virtual storage subpool if it is owned by the task under which the program is issuing the FREEMAIN.
  • GETMAIN - Used to request one or more areas of virtual storage.
  • SAVE - The SAVE macro stores the contents of the specified GP registers in the save area at the address contained in register 13. An entry point identifier may be specified, if desired. The SAVE macro should be written only at the entry point of a program because the code resulting from the macro expansion requires that register 15 contain the address of the SAVE macro prior to its execution. SAVE macro should not be used in a program interruption exit routine.

    The format of the SAVE macro is
    	SAVE  (reg1)
    	or
    	SAVE   (reg1,reg2)
    where reg1 and reg2 are decimal register numbers, taken in the order 14, 15, 0 through 12.

  • SNAP/SNAPX (Dump Virtual storage and Continue) - The SNAP macro can be used to obtain a dump of some or all of the storage assigned to the current job step. Can also dump some or all of the control program fields. The SNAP macro causes the specified storage to be displayed in the addressing mode of the caller.
  • SPIE (Specify Program Interruption Exit) - The SPIE macro specifies the address of an interruption exit routine and the program interruption types that are to cause the exit routine to get control.
  • STORAGE - The STORAGE macro requests that the system obtain or release an area of virtual storage in the primary address space. The two functions of the macro are
    - STORAGE OBTAIN, which obtains virtual storage in an address space
    - STORAGE RELEASE, which releases virtual storage in an address space.
  • WTO (Write to Operator) - writes messages to one or more operator consoles

    QSAM macro instructions

    An access method is a complete set of macros and modules that are used to perform I/O operations. QSAM is used to process sequential files.

    A data set is read into a buffer in memory one physical record or block at a time. One physical record is made up of a fixed number of logical records. The fixed number is known as the blocking factor.

    QSAM provides two services that some other access methods do not:
  • buffering - reading of one block at a time into a buffer in memory.
  • deblocking - dividing of the physical record into the logical records.
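    The deblocking arithmetic can be sketched in Python (RECFM=FB case; the function name is invented):

```python
def deblock(block, lrecl):
    """QSAM deblocking for fixed-length (FB) records: split one
    physical block into its logical records. BLKSIZE must be an
    exact multiple of LRECL (the multiple is the blocking factor)."""
    assert len(block) % lrecl == 0
    return [block[i:i + lrecl] for i in range(0, len(block), lrecl)]

# A 240-byte block of 80-byte records has a blocking factor of 3
records = deblock(b"A" * 80 + b"B" * 80 + b"C" * 80, 80)
assert len(records) == 3 and records[1] == b"B" * 80
```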

    QSAM is implemented by the use of the following macros

  • CLOSE - Disconnect program and data. The format of the CLOSE macro is
    	[label]  CLOSE     (address[,[(options)][,...]])
    				[,MODE={24|31}]
    				[,TYPE=T]
  • DCB - The data control block for a queued sequential access method (QSAM) dataset is constructed during assembly of the program. The format of the DCB macro for QSAM is
    	DCB   BLKSIZE=__,BUFNO=__,DDNAME=ddname__,DSORG=__,EODAD=__,	X
    		LRECL=__,MACRF=__,RECFM=__,DEVD=__
    Parameters

    dcbname - the name that will be referenced in the source code when opening or closing a file

    DDNAME=ddname
    - links the DCB to the actual file being read from / written to
    - ddname comes from the corresponding DD statement in the JCL

    DEVD - specifies the type of device that the file is on
    - Possible values: DA=direct access, TA=tape, PR=printer

    DSORG
    - specifies the type of organization of the data set
    - use PS (physical sequential)
    - MUST be coded in the DCB

    MACRF
    - specifies which format of the GET and PUT macros will be used in the program
    - 4 possible values
    GL (get locate) - the address of the next logical record to be read is placed in register 1. The data is not actually moved.
    GM (get move)   - the logical record is copied into a named area of storage
    PL (put locate) - the address of the next available buffer area for writing is returned in register 1. The program must move the logical record to that address.
    PM (put move)   - a logical record is written from a named area of storage

    - If a file will be used for both input and output, specify (G_,P_) as the macro format.
    - MUST be specified in the DCB

    RECFM
    - specifies the format of a record
    - 3 types (F=fixed length, V=variable length, U=undefined length)
    Fixed and variable can be further subdivided using: B (blocked: FB or VB), A (ASA carriage control character in 1st byte: FA or VA), M (machine carriage control character in 1st byte: FM or VM)

    LRECL - specifies the number of bytes in a logical record

    BLKSIZE - number of bytes in a block (blocking factor * LRECL)

    EODAD - specifies the name of an EOF routine that should be performed after all of the input records have been read. The last instruction in this routine is usually BR 14 since register 14 will have the address of the instruction that follows the read
    The DCB macro can be assembled into a program that resides above the 16 MB line, but the program must move it below the line before using it. Except for the DCBE, all areas that the DCB refers to, such as EXLST and EODAD, must be below the 16 MB line.

  • DCBE - The DCB extension (DCBE) provides functions that augment those provided by the DCB. A DCBE is optional. The DCBE must reside in storage that can be accessed and modified. This storage may be located above or below the 16 MB line independently of whether the program is executing in 31-bit addressing mode. The DCBE is specified using the DCBE parameter of the DCB macro.

    The DCBE must not be shared by multiple DCBs that are open. After the DCB is successfully closed, the user may open a different DCB pointing to the same DCBE. Program may refer to DCBE fields symbolically by using the IHADCBE mapping macro and the DCBDCBE address in the DCB (using the DCBD mapping macro).

  • GET - Retrieves a record.
  • OPEN - The OPEN macro completes the specified data control blocks and prepares for processing the data sets identified in the data control blocks.
  • PUT - Used to write (load) records to an empty data set, and insert or update records into an existing data set.

    Inner Macros

    Inner Macros are macros which may be called by another macro.
  • Public inner macros - can also be called by open code
  • System inner macros - intended for use by more than one macro, but not by open code
  • Private inner macros - used by only one macro and not by open code

    Back


    ASSEMBLER PROGRAMMING

    File handling

    Assembler routine to access a QSAM file

    This routine can be called by an external program to process the OPEN, CLOSE and GET actions on a QSAM file.
    	QSAMIOA1 CSECT
    
    	OPENRTN  EQU   *
    			 OPEN  (QSAMFILE,(INPUT))
    			 LTR   R15,R15			Was OPEN successful?
    			 BNZ   BADOPEN			If not, quit
    			 TM    QSAMFILE+48,X'10'	Is the DCB open flag on?
    			 BZ    BADOPEN			If not, post error
    			 ST    R15,8(,R8)		Set Return Field in Pass Area
    			 LA    R9,QSAMFILE		Get Address of DCB using R9
    			 MVC   12(2,R8),82(R9)		Get Record length from DCB
    			 B     RETURN			Return to Calling program
    
    	GETRTN   EQU   *
    			 L     R1,SAVER1
    			 L     R3,0(R1)
    			 LA    R3,14(,R3)		Get address of COBOL data buffer
    			 GET   QSAMFILE,(R3)
    			 LTR   R15,R15			Is RC = 0?
    			 BNZ   BADGET			If not, then post a message
    			 ST    R15,8(,R8)		Set user RC to ZERO
    			 LA    R9,QSAMFILE		Get Address of DCB using Reg-9
    			 MVC   12(2,R8),82(R9)		Get Record length from DCB
    			 B     RETURN
    
    	CLOSERTN EQU   *
    			 CLOSE (QSAMFILE)
    			 ST    R15,8(,R8)		Set user RC
    
    	QSAMFILE DCB   MACRF=G,EODAD=EODRTN,SYNAD=ERROR1,			X
    			DDNAME=QSAMFILE
    		 END
    Linkage conventions

    The program giving control provides, in R13, the address of an area in which to save the registers.
    	BEGIN	CSECT
    		STM   14,12,12(13)	Store R14 thru R12 to address in R13 + 12
    		BALR  12,0		Load address of next instr. to R12		
    		USING ENTRY,12
    	BASE	ST    13,SAVE+4		Save address in R13 to SAVE + 4
    		LA    13,SAVE		Store SAVE area address to R13
    		...
    		...
    		L     13,SAVE+4		Load address in SAVE+4 to R13
    		LM    14,12,12(13)	Load registers R14-R12 from address in R13+12
    		BCR   X'F',14		Branch unconditionally to address in R14
    		...	
    	SAVE	DS    18F
    To search an input area and process each non-blank character
    		LA  R14,AREA		R14 --> Start of input area
    		LA  R15,L'AREA		R15 = Length of input area
    	LOOP    CLI 0(R14),C' '		Is this byte blank ?
    		BE  NEXT           	Yes, Ignore it
    	*				No,  take appropriate action
    	*				R14 points to a non-blank character
    	NEXT    LA  R14,1(,R14)		R14 --> Next input byte
    		BCT R15,LOOP		Repeat until end of input area ?
    Validating a packed decimal field
    		XR    R2,R2		Clear TRT result register
    		UNPK  WORK(L*2+1),FIELD(L+1)	Put each nibble into a separate byte
    		TRT   WORK(L*2),PDVAL-C'0'	Look for a sign
    		BZ    BADSIGN		Error if no sign found at all
    		BL    BADDIGIT		Error if sign found before end
    		BCT   R2,NEGATIVE	Branch if negative sign found at end
    		B     POSITIVE		Must be positive sign at end
    	WORK    DS CL(L*2+1)
    	PDVAL   DC 10X'00',AL1(2,1,2,1,2,2)
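    The same validation can be sketched in Python (helper name invented; assumes the standard packed sign codes, where B and D denote negative):

```python
def valid_packed(field):
    """Sketch of the UNPK/TRT check above: every nibble except the
    last must be a digit (0-9) and the last must be a sign code
    (A-F). Returns (is_valid, is_negative)."""
    nibbles = []
    for b in field:
        nibbles.extend((b >> 4, b & 0x0F))
    *digits, sign = nibbles
    if any(d > 9 for d in digits) or sign < 0xA:
        return (False, False)
    return (True, sign in (0xB, 0xD))   # B and D are negative signs

assert valid_packed(bytes.fromhex("12345C")) == (True, False)
assert valid_packed(bytes.fromhex("12345D")) == (True, True)
assert valid_packed(bytes.fromhex("1A345C")) == (False, False)
```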
    Initializing field to spaces
    	MVI DATA80,X'40'
    	MVC DATA80+1(79),DATA80
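    The spaces idiom works because MVC moves one byte at a time, left to right, so an overlapping move with destination = source + 1 propagates the first byte across the whole field. A Python sketch of that semantics (memory modeled as a bytearray; helper name invented):

```python
def mvc(mem, dst, src, length):
    """MVC semantics: copy byte by byte, left to right, so an
    overlapping move where dst = src + 1 fans the fill byte out."""
    for i in range(length):
        mem[dst + i] = mem[src + i]

data = bytearray(80)
data[0] = 0x40                 # MVI DATA80,X'40'  (EBCDIC space)
mvc(data, 1, 0, 79)            # MVC DATA80+1(79),DATA80
assert data == bytearray(b"\x40" * 80)
```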
    First-time execution only para
    	NOBRANCH   BC X'00',ONEJUMP
    		   MVI NOBRANCH+1,X'F0'		modify Branch to be unconditional
    		   B NOBRANCH
    	ONEJUMP    EQU *
    Looping
    	BCTEX1    LA   R5,6		/* Load R5 with the value 6 */
    	LOOP      .			/* start of loop */
    		  .
    		  .
    		  BCT R5,LOOP		/* Do loop again until R5 = 0 */
    	DONE      .			/* done with the loop */
    
    	BCTEX2    LA   R5,6		/* Load R5 with the value 6 */
    		  LA   R6,LOOP		/* Load address of LOOP in R6 */
    	LOOP      .			/* start of loop */
    		  .
    		  .
    		  BCTR R5,R6		/* Do loop again until R5 = 0 */
    	DONE      .			/* Done with the loop */
    Calling subprograms

    	ASM1 CSECT
    		 SAVE  (14,12)
    		 BALR  12,0		Prepare a base register
    		 USING *,12		Establish base register
    		 ST    R13,SAVREG13
    	*
    		 WTO   '* ASM1 is starting, example of CALL macro...'
    
    	* call another program without passing parameters
    	*
    		 WTO   '* ASM1 calling ASMA without parameters...'
    		 LA    R13,SAVEAREA
    		 SR    R1,R1
    		 CALL  ASMA
    		 WTO   '* ASMA return...'
    	*
    	* Calling passing parameters. Parameters are passed via an address list
    	* Standard member-to-member linkage is used
    	*
    		 WTO   '* ASM1 calling ASMA with four parameters...'
    		 LA    R13,SAVEAREA
    		 CALL  ASMA,(PARM01,PARM02,PARM03,PARM04),VL
    		 WTO   '* ASMA return...'
    	*
    	EOJAOK   EQU   *
    		 WTO   '* ASM1 is complete, example of CALL macro......'
    		 L     R13,SAVREG13
    		 RETURN (14,12),RC=0
    	*
    	ABEND08  EQU   *
    		 WTO   '* ASM1 is abending...RC=0008'
    		 L     R13,SAVREG13
    		 RETURN (14,12),RC=8
    	*
    	* Define Constants and Equates
    	*
    		 DS    0F		* Force alignment
    	*
    	SAVEAREA EQU   *
    		 DC    F'0'
    		 DC    F'0'
    	SAVREG13 DC    F'0'
    		 DC    15F'0'		* Used by SAVE/RETURN functions
    	*
    	PARM01   DC    H'28',H'0',CL24'* ASM1 parameter 01 '
    	PARM02   DC    H'28',H'0',CL24'* ASM1 parameter 02 '
    	PARM03   DC    H'28',H'0',CL24'* ASM1 parameter 03 '
    	PARM04   DC    H'28',H'0',CL24'* ASM1 parameter 04 '
    
    	R0       EQU   0
    	R1       EQU   1 ....etc.
    		 END

    Back



    DB2

    The DB2 subsystem

    DB2 operates as a formal subsystem of z/OS. A DB2 subsystem is a distinct instance of a relational DBMS whose software controls the creation, organization and modification of a database and the access to the data that the database stores.

    z/OS processes are separated into regions that are called address spaces and DB2 processes execute in several different address spaces.

    The following jobs handle the operations of a DB2 subsystem (xxxx = name of the subsystem).
  • xxxxMSTR - runs the system services address space which is responsible for starting and stopping DB2 and for controlling local access to it
  • xxxxDBM1 - the database services address space is responsible for accessing relational databases controlled by DB2. The input and output to database resources is performed on behalf of SQL application programs in this space
  • xxxxIRLM - the Internal Resource Lock Manager (IRLM), responsible for controlling access to database resources
  • xxxxDIST - the distributed services address space is responsible for Distributed Data Facility (DDF) that provides distributed database capabilities
  • xxxxSPAS - the stored procedures address space is responsible for processing stored procedures

    DB2 CATALOG AND DIRECTORY

    The DB2 catalog is the central repository for DB2 object and user metadata. DB2 constantly refers to that metadata as it processes applications and queries. The physical condition of the tablespaces and indexes that comprise the DB2 catalog is therefore a major component in overall DB2 subsystem performance.

    The DB2 directory contains internal control structures such as DBDs, skeleton cursor tables, and skeleton package tables that can be accessed only by DB2 itself. The information in the DB2 directory is critical for database access, utility processing, plan and package execution and logging.

    Tablespaces

    Tablespaces are VSAM datasets that contain the rows of one or more DB2 tables. Utilities, commands, etc. are run against tablespaces, not tables.

    This sample creates a tablespace in a database named SAMDBASE.
    	CREATE TABLESPACE GUIDE1
    	IN SAMDBASE
    	USING STOGROUP STG3380A
    	PRIQTY 48	(48 kilobytes=12 4k pages=1 track)
    	SECQTY 48
    	ERASE NO
    	LOCKSIZE ANY
    	BUFFERPOOL BP0
    	CLOSE NO;
    Types of tablespaces

  • Simple Table Space
    - One to many tables
    - Useful for co-mingling rows of related tables
    - smallest unit of recovery is the tablespace
  • Segmented Table Space
    - Can contain multiple tables but rows are not co-mingled
    - Space is divided into groups of pages called segments
    - Each segment contains rows for only one table
    - Each table can have different locking strategy
    - Automatically reclaims space after drop table
    - Much more efficient for mass deletes
  • Partitioned Table Space
    - One table per tablespace
    - Each partition is a separate dataset, 1-254 partitions
    - Each partition can be on a separate volume
    - Data placement is controlled by partitioning index
    - Partition independence allows utilities to be run on individual partitions
    - Query parallelism

    Tables

    Tables are logical structures maintained by the database manager, made up of columns and rows. A base table is created with the CREATE TABLE statement and is used to hold persistent user data. A result table is a set of rows that the database manager selects or generates from one or more base tables to satisfy a query.

        CREATE TABLE VINCENT.PERSON_ACCT
                (ID         CHAR(8)        NOT NULL,
                 NAME       CHAR(36)       NOT NULL WITH DEFAULT,
                 BALANCE   DECIMAL(10,2)  NOT NULL WITH DEFAULT)
        IN TSDBASE.GDPSAC;
    Indexes

    An index is an ordered set of pointers to rows of a base table, based on the values of data in one or more columns. An index is an object that is separate from the data in the table, built and maintained by the database manager.

    The CREATE INDEX statement is used to create
  • An index on a DB2 table
  • An index specification: metadata that indicates to the optimizer that a data source table has an index
    	CREATE UNIQUE INDEX VINCE11.CUST_IDX ON VINCE11.CUSTTBL (CUST_ID);
    Keys

  • Unique Keys - The columns of a unique key cannot contain null values. The constraint is enforced by the database manager using a unique index. Thus, every unique key is a key of a unique index.

  • Partitioning Keys - A partitioning key is a key that is part of the definition of a table in a partitioned database. The partitioning key is used to determine the partition on which the row of data is stored. If a partitioning key is defined, unique keys and primary keys must include the partitioning key columns (they may have more columns).

    Views

    A view is a named specification of a result table. The specification is a SELECT statement that is executed whenever the view is referenced in an SQL statement. For retrieval, all views can be used just like base tables. Whether a view can be used in an insert, update, or delete operation depends on its definition.

    The CREATE VIEW statement creates a view on one or more tables, views or nicknames.
    	CREATE VIEW PRJ_LEADER
    	AS SELECT PROJNO, PROJNAME, DEPTNO, RESPEMP, LASTNAME FROM PROJECT, EMPLOYEE
    	WHERE RESPEMP = EMPNO;

    Back



    DATATYPES

    DB2 Datatypes and equivalent COBOL datatypes
    	 -------------------------------------------------------------------------
    	| Datatype      | Possible values                | Equivalent COBOL type    |
    	 -------------------------------------------------------------------------
    	| Small Integer | -32768 to +32767               | S9(4) USAGE COMP         |
    	| Integer       | -2147483648 to +2147483647     | S9(9) USAGE COMP         |
    	| Double, Float | 2.225E-307 to 1.79769E+308     | USAGE COMP-2             |
    	| Decimal(p,s)  | -10**31+1 to 10**31-1          | S9(p-s)V9(s) COMP-3      |
    	| Char          | up to 254 bytes                | X(1) to X(254)           |
    	| Varchar       | up to 4000 bytes               | 01 NAME.                 |
    	|               |                                |   49 NAME-LEN S9(4) COMP |
    	|               |                                |   49 NAME-TEXT X(N)      |
    	| Date          | Year, Month, Day               | X(10)                    |
    	| Time          | Hour, Minute, Second           | X(08)                    |
    	| Timestamp     | Date + Time + Microseconds     | X(26)                    |
    	 -------------------------------------------------------------------------
    Large Objects (LOBs)

    DB2 provides three built-in data types for storing large objects
  • BLOBs (Binary Large OBjects) - up to 2GB of binary data. Typical uses for BLOB data include photos, audio and video clips.
  • CLOBs (Character Large OBjects) - up to 2GB of single byte character data. CLOBs are ideal for storing large documents in a DB2 database
  • DBCLOBs (Double Byte Character Large OBjects) - up to 1GB of double byte character data (total of 2GB), useful for storing documents in languages that require double byte characters

    User-defined datatypes

    A distinct type is a user-defined data type that shares its internal representation with an existing type (its "source" type), but is considered to be separate and incompatible for most operations.
    	CREATE DISTINCT TYPE PAY AS DECIMAL(9,2) WITH COMPARISONS
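    As a sketch of the "separate and incompatible" behavior (the table name is invented for illustration): a PAY column cannot be compared directly with a plain DECIMAL value, but the cast function DB2 generates along with the distinct type can be used:
    	CREATE TABLE VINCE11.STAFF
    		(ID     INTEGER,
    		 SALARY PAY);
    	SELECT ID FROM VINCE11.STAFF WHERE SALARY > PAY(10000.00);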
    DB2 System tables

    SYSIBM.SYSTABLES, SYSTABLESPACE, SYSKEYS, SYSTABAUTH, SYSPACKAGE, SYSSTMT, SYSPLAN, SYSCOLUMNS, SYSPACKLIST, SYSINDEXES

    Catalog views

    SYSCAT.ATTRIBUTES, BUFFERPOOLS, CHECKS, COLUMNS, DATATYPES, DBAUTH, EVENTS, FUNCTIONS, INDEXES, NODEGROUPDEF, NODEGROUPS, PACKAGEAUTH, PACKAGES, PROCEDURES, REFERENCES, SERVEROPTIONS, SERVERS, STATEMENTS, TABAUTH, TABLES, TABLESPACES, TRIGGERS, VIEWS

    Back



    SQL

    Statements

  • ALTER
  • BEGIN DECLARE SECTION - marks the beginning of a host variable declare section
  • CALL
  • COMMENT ON
  • CONNECT
  • CREATE ALIAS/EVENT MONITOR/FUNCTION/FUNCTION MAPPING
  • CREATE NICKNAME/NODEGROUP/PROCEDURE/SCHEMA/SERVER
  • CREATE BUFFERPOOL/INDEX/TABLE/TABLESPACE/VIEW
  • CREATE TRIGGER/USER MAPPING/WRAPPER
  • CREATE TYPE - create a user-defined datatype
  • CREATE TYPE MAPPING - defines a mapping between data types
  • DECLARE CURSOR
  • DELETE
  • DESCRIBE
  • DISCONNECT
  • END DECLARE SECTION
  • EXECUTE/EXECUTE IMMEDIATE
  • EXPLAIN
  • FREE LOCATOR
  • FLUSH EVENT MONITOR
  • INCLUDE
  • LOCK TABLE
  • PREPARE
  • REFRESH TABLE
  • RELEASE - places one or more connections in the release pending state
  • RENAME TABLE
  • SET CONNECTION
  • SET CURRENT DEGREE/EXPLAIN MODE/EXPLAIN SNAPSHOT
  • SET CURRENT PACKAGESET/QUERY OPTIMIZATION/REFRESH AGE/EVENT MONITOR STATE
  • SET INTEGRITY/PASSTHRU/PASSTHRU RESET/PATH/SCHEMA/SERVER OPTION/transition-variable
  • SIGNAL SQLSTATE
  • WHENEVER

    Predicates

  • Basic Predicates - =, <, >, <=, >=, <>
  • Quantified Predicate - Compares a value or values with a collection of values - SOME, ANY, ALL
  • BETWEEN
  • EXISTS
  • IN
  • LIKE
  • NULL - tests for null values
  • TYPE - compares the type of an expression with one or more user-defined structured types
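    A few hedged examples of these predicates against the CUSTTBL table used earlier (the CUST_NAME column and the ORDTBL table are invented for illustration):
    	SELECT CUST_ID FROM VINCE11.CUSTTBL WHERE CUST_NAME LIKE 'SM%';
    	SELECT CUST_ID FROM VINCE11.CUSTTBL WHERE CUST_ID BETWEEN 1000 AND 5000;
    	SELECT CUST_ID FROM VINCE11.CUSTTBL C
    		WHERE EXISTS (SELECT 1 FROM VINCE11.ORDTBL O WHERE O.CUST_ID = C.CUST_ID);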

    Special registers

    A special register is a storage area that is defined for an application process by the database manager and is used to store information that can be referenced in SQL statements. A reference to a special register is a reference to a value provided by the current server. If the value is a string, its CCSID is the default CCSID of the current server.

  • CURRENT DATE
  • CURRENT PATH - this special register specifies the SQL path used to resolve unqualified distinct type names, function names and procedure names in dynamically prepared SQL statements. It is used to resolve unqualified procedure names that are specified as host variables in SQL CALL statements (CALL host-variable). The data type is VARCHAR with a length attribute that is the maximum length of a path
  • CURRENT SERVER
  • CURRENT TIME/TIMESTAMP/TIMEZONE
  • USER
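    Special registers can be queried directly; for example, using the one-row catalog table SYSIBM.SYSDUMMY1:
    	SELECT CURRENT DATE, CURRENT TIMESTAMP, USER
    		FROM SYSIBM.SYSDUMMY1;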


    Functions

    Column (Aggregate) functions

    The argument of a column function is a set of values derived from an expression. The expression can include columns, but cannot include a scalar-fullselect or another column function (SQLSTATE 42607).

  • SUM/MIN/AVG/MAX/COUNT/STDDEV/VARIANCE
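    A sketch using the EMPLOYEE table referenced earlier (the SALARY column is assumed for illustration):
    	SELECT DEPTNO, COUNT(*), AVG(SALARY), MAX(SALARY)
    		FROM EMPLOYEE
    		GROUP BY DEPTNO
    		HAVING COUNT(*) > 5;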

    Scalar functions

  • Trigonometric & Math functions
    	ABS or ABSVAL
    	CEILING or CEIL (returns smallest integer >= arg)
    	COS
    	EXP
    	FLOOR (Returns the largest integer value <= argument)
    	LN
    	LOG
    	LOG10
    	MOD
    	POWER
    	RAND
    	ROUND
    	SQRT
    	TAN
  • Date & Time
    	DATE
    	DAY
    	DAYNAME
    	DAYOFWEEK/DAYOFYEAR
    	DAYS
    	HOUR
    	JULIAN_DAY
    	MICROSECOND
    	MIDNIGHT_SECONDS  
    	MINUTE
    	MONTH
    	MONTHNAME
    	SECOND
    	WEEK
    	YEAR
  • Datatype conversion
    	BLOB/CLOB
    	CHAR/VARCHAR
    	DECIMAL/FLOAT
    	GRAPHIC
    	INTEGER/SMALLINT
    	VARGRAPHIC
    	
  • String manipulation
    	CONCAT
    	LEFT/RIGHT
    	LENGTH
    	LCASE or LOWER
    	LTRIM/RTRIM
    	REPLACE
    	SUBSTR
    	UCASE or UPPER
  • Other functions
    	ASCII
    	BIGINT
    	COALESCE
    	DBCLOB
    	DEGREES
    	DEREF
    	DIFFERENCE
    	DIGITS
    	DOUBLE
    	GENERATE_UNIQUE
    	HEX
    	INSERT
    	LOCATE
    	NODENUMBER
    	NULLIF
    	PARTITION
    	POSSTR
    	QUARTER
    	RADIANS
    	RAISE_ERROR
    	REAL
    	REPEAT
    	SIGN
    	SOUNDEX
    	SPACE
    	TABLE_NAME/TABLE_SCHEMA
    	TIME 
    	TIMESTAMP/TIMESTAMP_ISO/TIMESTAMPDIFF
    	TRANSLATE
    	TRUNCATE or TRUNC
    	TYPE_ID/TYPE_NAME/TYPE_SCHEMA
    	VALUE

    User-Defined functions (UDF)

    UDFs are extensions or additions to the existing built-in functions of the SQL language. A UDF can be a scalar function, which returns a single value each time it is called; a column function, which is passed a set of like values and returns a single value for the set; a row function, which returns one row; or a table function, which returns a table.

    A number of UDFs are provided in the SYSFUN and SYSPROC schemas.

    A UDF can be a column function only if it is sourced on an existing column function. For example, a scalar UDF called ADDRESS extracts the home address from resumes stored in script format. The ADDRESS function expects a CLOB argument and returns a VARCHAR(4000) value.
    	SELECT  EMPNO, ADDRESS(RESUME) FROM  EMP_RESUME WHERE  RESUME_FORMAT = 'SCRIPT';
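    For contrast, a hedged sketch of a column UDF, which must be sourced on an existing column function - here SUM is defined over the PAY distinct type created earlier by sourcing it on the built-in SUM:
    	CREATE FUNCTION SUM(PAY) RETURNS PAY
    		SOURCE SYSIBM.SUM(DECIMAL(9,2));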

    Back



    DB2 APPLICATION PROGRAMMING

    Host Variables

    Data items within the application program that are used to accept data from, or provide data to, the database are called host variables. Host variables are referenced by embedded SQL statements. A host variable is prefixed with a colon (:) when used in an SQL statement; the colon is omitted when the variable is used in a host-language statement.
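    A minimal sketch (names are invented for illustration) showing the colon rule - the same variable appears as :CUST-NAME in SQL and as CUST-NAME in COBOL:
    	01 CUST-ID      PIC S9(9) COMP.
    	01 CUST-NAME    PIC X(30).

    	EXEC SQL
    		SELECT CUST_NAME INTO :CUST-NAME
    		FROM CUSTTBL
    		WHERE CUST_ID = :CUST-ID
    	END-EXEC.
    	MOVE CUST-NAME TO RPT-NAME.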

    Indicator Variables

    Applications written in languages other than Java must prepare for receiving null values by associating an indicator variable with any host variable that can receive a null.

    Java applications compare the value of the host variable with Java null to determine whether the received value is null. An indicator variable is shared by both the database manager and the host application; therefore, the indicator variable must be declared in the application as a host variable of type SMALLINT.
    	01 cmind    PIC S9(4) COMP.
    
    	EXEC SQL
    	FETCH C1 INTO :cm INDICATOR :cmind
    	END-EXEC
    	IF cmind LESS THAN 0
    	DISPLAY 'Commission is NULL'.
    SQL Communications Area (SQLCA)

    SQLCA is a collection of variables that is updated after execution of every SQL statement. A program that contains executable SQL statements (except for DECLARE, INCLUDE, and WHENEVER) must provide exactly one SQLCA, though more than one SQLCA is possible by having one SQLCA per thread in a multi-threaded application.

    Important fields in the SQLCA
  • SQLCAID - CHAR(8) - Contains an 'eye catcher' for storage dumps, 'SQLCA'
  • SQLCABC - INTEGER - Contains the length of the SQLCA, 136
  • SQLCODE - INTEGER - contains an SQL return code
  • SQLWARN - array of CHAR(1) - Contains a set of warning indicators
  • SQLSTATE - CHAR(5) - A return code that indicates the outcome of the most recently executed SQL statement
  • SQLERRP - CHAR(8) - Begins with a three-letter identifier indicating the product (DSN for DB2 UDB for z/OS and OS/390, QSQ for DB2 UDB for iSeries, SQL for DB2 UDB for Linux, UNIX and Windows). If the SQLCODE indicates an error condition, then this field contains the name of the module that returned the error

    SQL Codes - returned in the SQLCODE field in the SQLCA

  • -117 - The number of values in INSERT does not match the number of columns
  • -181 - Invalid string representation of a datetime value
  • -204 - Object not defined in DB2
  • -205 - Column not in specified table
  • -206 - Column not found in table specified in the SELECT
  • -301 - Host variable cannot be used because of datatype mismatch
  • -302 - Host variable is too long for the target column or has invalid value
  • -310 - Decimal host variable contains non-decimal data
  • -404 - The SQL statement specifies a string that is too long
  • -501 - Cursor not open on fetch
  • -502 - Cursor is already open
  • -504 - Cursor not defined
  • -507 - Cursor not open on update or delete
  • -508 - Cursor not positioned on a row for update or delete
  • -530 - Invalid foreign key value
  • -533 - Invalid multiple row insert
  • -539 - Table does not have a primary key
  • -542 - Column identified in a PRIMARY KEY, UNIQUE KEY or REFERENCES clause is defined to allow null values
  • -803 - Duplicate primary key
  • -805 - DBRM or package not found in plan, or timestamp mismatch between load module and DBRM
  • -811 - More than one row retrieved in SELECT INTO statement
  • -818 - Plan and program: timestamp mismatch
  • -901 - A system error prevented successful execution of the current SQL statement, but does not prevent execution of further SQL statements
  • -904 - Resource unavailable
  • -905 - Resource limit was exceeded. Long-running queries typically produce this error
  • -911 - Deadlock or timeout. Rollback has been done
  • -913 - Program was the victim of a deadlock or timeout. NO rollback has been done and needs to be done
  • -922 - Authorization failure



    SQLDA

    SQL Descriptor Area is a collection of variables that is required for execution of the SQL DESCRIBE statement. The SQLDA variables are options that can be used by the PREPARE, OPEN, FETCH, EXECUTE, and CALL statements. An SQLDA communicates with dynamic SQL; it can be used in a DESCRIBE statement, modified with the addresses of host variables, and then reused in a FETCH statement.

    An SQLDA consists of four variables followed by an arbitrary number of occurrences of a sequence of variables collectively named SQLVAR. In OPEN, FETCH, EXECUTE, and CALL each occurrence of SQLVAR describes a host variable. In DESCRIBE and PREPARE, each occurrence of SQLVAR describes a column of a result table.

    There are two types of SQLVAR entries
  • Base SQLVARs - These entries are always present and contain the base information about the column or host variable such as data type code, length attribute, column name, host variable address, and indicator variable address
  • Secondary SQLVARs - These entries are only present if the number of SQLVAR entries is doubled. For user-defined types (distinct or structured), they contain the user-defined type name. For reference types, they contain that target type of the reference. For LOBs, they contain the length attribute of the host variable and a pointer to the buffer that contains the actual length


    Dynamic SQL

    Dynamic SQL statements are prepared and executed within an application program while it is executing. The SQL source is contained in host language variables rather than being coded into the application program and can change several times during the program's execution.

    In dynamic SQL statements, parameter markers are used instead of host variables. A parameter marker is a question mark (?) representing a position in a dynamic SQL statement where the application will provide a value. For example:
    	INSERT INTO DEPARTMENT VALUES (?, ?, ?, ?);
    Programs containing embedded dynamic SQL statements must be precompiled like those containing static SQL, but unlike static SQL, the dynamic SQL statements are constructed and prepared at run time. The SQL statement text is prepared and executed using either the PREPARE and EXECUTE statements or the EXECUTE IMMEDIATE statement. The statement can also be executed with the cursor operations if it is a SELECT statement.
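    A hedged sketch of the PREPARE/EXECUTE flow for the INSERT above (host variable declarations for the four DEPARTMENT columns are assumed):
    	01 STMT-STRING.
    	   49 STMT-LEN   PIC S9(4) COMP VALUE +42.
    	   49 STMT-TEXT  PIC X(42)
    	      VALUE 'INSERT INTO DEPARTMENT VALUES (?, ?, ?, ?)'.

    	EXEC SQL
    		PREPARE S1 FROM :STMT-STRING
    	END-EXEC.
    	EXEC SQL
    		EXECUTE S1 USING :DEPTNO, :DEPTNAME, :MGRNO, :ADMRDEPT
    	END-EXEC.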


    Cursors
    	EXEC SQL
    		DECLARE SCURSOR CURSOR FOR
    		SELECT  SN, SNAME, STATUS, CITY  FROM S WHERE STATUS  =  :NEW-STATUS
    	END-EXEC
    	
    	OPEN-SCURSOR.
    		EXEC SQL
    			OPEN SCURSOR
    		END-EXEC.
    		
    		PERFORM GET-ROW UNTIL SQLCODE  =  100
    
    	GET-ROW.
    		EXEC SQL
    			FETCH SCURSOR INTO :SN, :SNAME, :STATUS, :CITY
    		END-EXEC.
    
    		IF SQLCODE  =  100
    			MOVE SAVE-STATUS TO NEW-STATUS
    			EXEC SQL
    				CLOSE SCURSOR
    			END-EXEC.
    If a cursor is not declared with the WITH HOLD option, it may be closed prematurely: DB2 automatically closes all open cursors when it reaches a COMMIT point, i.e. when the unit of work is complete. The cursor can be opened again, but processing begins at the start of the result table.
    	DECLARE cursor-name CURSOR WITH HOLD FOR select-statement.
    Scrollable and non-scrollable cursors

    A scrollable cursor provides the ability to scroll forward and backward through the data once the cursor is open. This can be achieved using just SQL - no host language code (COBOL, C, etc.) is required. A scrollable cursor makes navigating through SQL result sets much easier. There are two types of DB2 scrollable cursors - SENSITIVE (where data can be changed) and INSENSITIVE (not updateable; will not show changes made).
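    A hedged sketch of a scrollable cursor, reusing the S table from the earlier cursor example - FETCH LAST positions at the end of the result set and FETCH PRIOR then scrolls backward:
    	EXEC SQL
    		DECLARE C1 INSENSITIVE SCROLL CURSOR FOR
    		SELECT SN, SNAME FROM S
    	END-EXEC.
    	EXEC SQL OPEN C1 END-EXEC.
    	EXEC SQL FETCH LAST FROM C1 INTO :SN, :SNAME END-EXEC.
    	EXEC SQL FETCH PRIOR FROM C1 INTO :SN, :SNAME END-EXEC.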

    The WHERE CURRENT OF CURSOR clause

    When a cursor is used for updates, the UPDATE statement must use a WHERE CURRENT OF CURSOR clause to indicate the rows to be processed. For example:
    	EXEC SQL
    		UPDATE CUSTINFO
    			SET CITY  =  :NEW-CITY
    			WHERE CURRENT OF CUSTCURSOR
    	END-EXEC.
    The WHERE CURRENT OF clause causes the row at which the cursor points to be updated. The update does not advance the cursor; a FETCH ... INTO statement is necessary for that.


    Locking

    DB2 uses row-level locking by default. In addition, DB2 provides four other lockable units: pages, tables, tablespaces and, for indexes, subpages.

    Locking can be done in two ways

  • Explicit locking - only two possibilities: SELECT ... FOR UPDATE and LOCK TABLE. The user or application programmer is responsible.
  • Implicit locking - more possibilities, through the four isolation levels. DB2 is responsible.
    The syntax of the LOCK statement is as follows:
    	EXEC SQL
    		LOCK TABLE <tablename> IN SHARE/EXCLUSIVE MODE
    	END-EXEC.

    Isolation levels

    The isolation level is specified as an attribute of a package and applies to the application processes that use the package. The isolation level is specified in the program preparation process. Depending on the type of lock, this limits or prevents access to the data by concurrent processes.

    DB2 allows four types of isolation levels

  • Repeatable Read (RR)
    Any row read during a UOW is not changed by other application processes until the UOW is complete
    Any row changed by another application process cannot be read until it is committed by that process

  • Read Stability (RS)

    Like level RR, level RS ensures that any row read during a UOW is not changed by other processes until the UOW is complete, and that any row changed by another process cannot be read until it is committed by that process.
    Unlike RR, RS does not completely isolate the application process from the effects of concurrent processes. At level RS, application processes that issue the same query more than once might see additional rows. These additional rows are called phantom rows.

  • Cursor Stability (CS)

    Like the RR level, CS ensures that any row that was changed by another process cannot be read until it is committed by that process.
    Unlike the RR level, CS only ensures that the current row of every updatable cursor is not changed by other processes. Thus, rows that were read during a UOW can be changed by other application processes.
    In addition to any exclusive locks, a process running at level CS has at least a share lock for the current row of every cursor.

  • Uncommitted Read (UR)

    It allows reading updates that have not been committed yet.

    UR can be invoked in two ways:
    - Bind the plan or package with ISOLATION(UR). All read-only statements in the plan or package will execute with UR.
    - Specify WITH UR in the select statement. This will override the isolation level with which the plan or package was bound.
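    A hedged sketch of the per-statement override (table and column names are reused from earlier examples); the statement reads uncommitted data regardless of the isolation level with which the package was bound:
    	SELECT CUST_ID, CUST_NAME FROM VINCE11.CUSTTBL
    		WHERE CUST_CATG = 'BUSI'
    		WITH UR;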


    Using Explain

    EXPLAIN is a tool provided with DB2 to analyze queries for their costs and observe the access paths for the Select part of the statements. EXPLAIN provides information about whether indexes or table scan will be used and what I/O methods are used to read the pages, join methods and type, order of joining tables etc.

    A table called PLAN_TABLE must be created to hold the results of EXPLAIN. EXPLAIN can be executed as
    	EXPLAIN PLAN SET QUERYNO = 1 (0 to 32767)
    		FOR SELECT * FROM VINCENT.TBLNAME
    			WHERE CITY_NAME = ? AND	STATE_NAME = 'MAINE' AND CTRY_NAME = 'USA';
    The ? is a parameter marker that replaces a host variable. The information returned by EXPLAIN can be retrieved by the following query
    	SELECT * FROM PLAN_TABLE WHERE QUERYNO = 1
    		ORDER BY QBLOCKNO, PLANNO, MIXOPSEQ;

    DB2 program preparation

  • DB2 PRECOMPILE - The DB2 precompile performs three functions. First, it checks the SQL in the program for errors. Second, it adds working storage areas and source code compatible statements that are used to invoke DB2. One of the working storage areas contains a literal 'timestamp' called a consistency token. Finally, all of the SQL statements are extracted from the program source and placed into a member called the Database Request Module (DBRM), which also contains the consistency token.

  • COMPILE - The modified source from the precompile is then compiled. The code is checked for errors and a compiled version of the code is created.

  • LINK-EDIT - The compiled code is link-edited, along with statically called source language and DB2 run-time modules, to create a load module in which the consistency token generated in the precompile is embedded. If multiple DB2 programs are statically linked together, the resulting load module contains a consistency token for each one.

  • DB2 BIND - The bind process reads the DBRM that was created in the precompile and prepares an access path to the data. This access path, along with the consistency token, is stored in the DB2 catalog as a package. Every package is bound into a package list or collection. The name of the collection is specified by the PACKAGE parameter. A collection is a group of packages that are included in one or more plans. The QUALIFIER parameter of the bind is used to direct the SQL to the specific set of DB2 objects (tables, views, aliases or synonyms) qualified by this name.

  • PROGRAM EXECUTION - When a task containing a DB2 program executes, the plan name must be specified. For online CICS programs, the plan name is specified by Tran ID in the Resource Control Table (RCT). For a batch program, the plan name is specified in the SYSTSIN input DD. The packages for all DB2 programs executed under a Tran ID or batch job step must be included in a collection bound into this plan.

    The load-module and plan must come from the same DB2 precompile. DB2 enforces this by performing a timestamp verification before an application is executed. When the first SQL statement of each program is executed, DB2 searches the collections within the plan using the package name and consistency token from the load module. If an exact match is not found, a -805 SQLCODE is returned.
    Also, if a DBRM is replaced because of a compile, then any plans which use that DBRM must be re-BOUND.

    Binding

    Binding is the process by which the output from the SQL precompiler is converted to a usable control structure, often called an access plan, application plan or package. During this process, access paths to the data are selected and some authorization checking is performed. The types of bind are
  • automatic bind - a process by which SQL statements are bound automatically (without a user issuing a BIND command) when an application process begins execution and the bound application plan or package it requires is not valid
  • dynamic bind - SQL statements are bound as they are entered
  • incremental bind - SQL statements are bound during the execution of an application process, because they could not be bound during the bind process, and VALIDATE(RUN) was specified
  • static bind - SQL statements are bound after they have been precompiled. All static SQL statements are prepared for execution at the same time

    Advantage of using packages over plan-direct binds

  • Improved availability - If an SQL in a program is changed, only one package has to be rebound which can be done quickly. A package cannot be executed while it's being rebound. If, on the other hand, programs are bound directly into plans, a change of one SQL requires that the plan be rebound. If a large number of DBRMs are bound into the plan, the rebind could take a fair amount of time, during which the plan cannot be executed.
  • Improved flexibility - Some DB2 shops with large databases undertake database segmentation for database manageability and availability reasons. Tables are distinguished by a high-level qualifier. For example, sales data would be divided between tables with fully qualified names such as REGION1.SALES, REGION2.SALES, REGION3.SALES, and so on. In this scenario, package binds allow one program to access multiple sets of tables.

    First, the program is written using unqualified table names (for example, SELECT TERRITORY FROM SALES). Then the program is bound into multiple collections, one for each database segment (REGION1 collection, REGION2 collection, and so on). For each of these multiple bind operations, the appropriate high-level qualifier is specified by way of the QUALIFIER option of the BIND PACKAGE command. Thus, the package bound into the REGION1 collection will, when executed, access tables in the REGION1 segment of the database.

    Determining the correct package to execute in a plan with a multi-collection package list

  • DB2 first checks to see whether a special register called CURRENT PACKAGESET (one such register is maintained for each DB2 thread) contains a nonblank value. If it does, DB2 will search for the package in the collection specified (and will find the package, assuming that each segment-related collection contains the same set of packages, distinguished only by the high-level qualifier specified at bind time). The value of CURRENT PACKAGESET is blank at the beginning of a transaction or batch job and can be updated by way of the SQL statement SET CURRENT PACKAGESET. Thus, if a program needs to access data in the REGION2 database segment, it can do so by issuing the statement SET CURRENT PACKAGESET = 'REGION2'

  • If the value of CURRENT PACKAGESET is blank, DB2 will check to see whether the package is already allocated to the thread. This could be the case if, for example, the thread is reused by multiple transactions (an example being a CICS-DB2 protected thread) and the package in question was bound with RELEASE(DEALLOCATE). If the package is already allocated to the thread, DB2 will use that package

  • If the value of CURRENT PACKAGESET is blank and the package is not already allocated to the thread, DB2 will search for the package in the collections listed in the plan's package list - searching in the order in which the collections are listed - until the package is found.

    Bind JCL
    	//S010    EXEC PGM=IKJEFT01,DYNAMNBR=20
    	//STEPLIB  DD  DSN=ACSNS.DB2.SDSNEXIT.DB2T,DISP=SHR
    	//         DD  DSN=ACSNS.DB2.SDSNLOAD,DISP=SHR
    	//SYSTSPRT DD  SYSOUT=*
    	//DBRMLIB  DD  DISP=SHR,DSN=DBDC.TEST.TATFDB2.DBRMLIB
    	//         DD  DISP=SHR,DSN=DBDC.PRODDB2.DBRMLIB
    	//SYSTSIN  DD  *
    		DSN S(DB3T)
    		BIND PACKAGE(ATS_TATJ) MEMBER(DATBASTD)		-
    			OWNER(TATJ) QUALIFIER(TATG) CURRENTDATA(NO)	-
    			VALIDATE(BIND) EXPLAIN(NO)		-
    			ISOLATION(CS) RELEASE(COMMIT)		-
    			DEGREE(1)		-
    			NOREOPT(VARS)		-
    			KEEPDYNAMIC(NO)	-
    			DBPROTOCOL(DRDA)	-
    			FLAG(I);
    		END
    	/*

    Running DB2 programs

    Before a program can pass SQL statements to DB2, a connection (thread) must be established between the task under which the program is running and the DB2 address space. In addition, a plan - a DB2 object that indicates what kind of processing the program will do - must be opened. DB2 provides several facilities for creating threads and opening plans.

    Running DB2 programs in the TSO batch mode using the DSN command processor

    This is done using the TSO terminal monitor batch interface program IKJEFT01.
    	//STEP002  EXEC PGM=IKJEFT01
    	//SYSTSPRT DD SYSOUT=*
    	//SYSPRINT DD SYSOUT=*
    	//SYSTSIN  DD *
    	 DSN SYSTEM(DB2SYSTEM)   -
    	 RUN  PROGRAM(PGMNAME) PLAN(PLANNAME) -
    	 PARM (PGMPARM)	
    	 END
    	//*

    Call Attachment Facility

    CAF is a DB2 attachment facility for application programs that run in TSO or MVS batch. The CAF is an alternative to the DSN command processor and provides greater control over the execution environment.

    The CAF is a general purpose facility that can be used to establish threads from nearly any MVS address space. There are two ways of using CAF
  • Implicit opens
    The SQL host command environment routines always verify that a plan has been opened prior to processing an SQL statement. If no thread is detected on the current task, a thread is created using a default plan name and a default DB2 subsystem name.
  • Using DSNALI
    Programmers can explicitly control the DB2 subsystem to which the program is connected and the name of the plan to be opened, using the DSNALI function. An example OPEN call is shown below
    	if dsnali("open", "dsna", "sdbr1010") <> 0 then do
    		say "Open for plan failed. RC="rc "REASON="reason
    		exit rc
    end

    Back



    DB2 UTILITIES & TOOLS

    The DB2I (Interactive) screen

  • 1 SPUFI - SQL Processor Using File Input is a facility for executing SQL statements interactively. The results of processing those statements are placed in a standard VSAM dataset. The ISPF facilities of Edit and Browse operate on the input and output datasets, and the panel flow between the ISPF facilities is automatic. Thus, SPUFI uses DB2 to operate on the data, and ISPF facilities to manage the input statements and the output results.
  • 2 DCLGEN (Declaration Generator) is an IBM provided utility which generates INCLUDE members for DB2 tables for use in COBOL Application programs. These INCLUDE members contain SQL table declarations and working storage structures.
  • 3 PROGRAM PREPARATION (Prepare a DB2 application program to run)
  • 4 PRECOMPILE (Invoke DB2 precompiler)
  • 5 BIND/REBIND/FREE (BIND, REBIND, or FREE application plans)
  • 6 RUN (RUN an SQL program)
  • 7 DB2 COMMANDS (Issue DB2 commands)
  • 8 UTILITIES (Invoke DB2 utilities)


    DB2 Utilities

    CHECK

    The CHECK utility tests whether indexes are consistent with the data they index, and also checks table spaces for violations of referential constraints.

    CHECK should be run after LOAD to verify both index and referential integrity. CHECK optionally deletes rows in violation of referential constraints and copies them to an exception table.

    Whenever the CHECK PENDING status is set for a table space, the CHECK utility should be run against that table space to determine the cause of the problem.

    COPY

    The DB2 COPY utility backs up table spaces, enabling the RECOVER utility to work from a static backup when it begins to recover a table space.

    The parameter FULL YES/NO determines whether a full or incremental image copy is being taken. A full copy copies the entire table space; an incremental copy only those pages that have changed since the last image copy.

    When a table space is in COPY PENDING status (which can be caused by a LOAD or REORG), a full image copy of the table space should be taken. The parameter SHRLEVEL determines whether the table space being copied is available for read-only or update access. The default is SHRLEVEL REFERENCE (read-only access).

    LOAD

    The LOAD utility is used to batch load data into DB2 tables. During this process LOAD performs all necessary data conversions (e.g. character to date format) and error processing (e.g. rejecting records with duplicate keys).

    Discarded input records are written to a sequential file, which can subsequently be examined to determine the reason or reasons for rejection. The LOAD utility will always produce a return code of at least '04' because a COPY is required following a LOAD.

    QUIESCE

    The QUIESCE utility establishes a quiesce point (the current log RBA) for a table space and inserts it into the catalog table SYSCOPY. This allows the recovery of a table space to a known point in time by running the RECOVER utility and specifying an RBA to fall back to.
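    As a hedged sketch (database, table space name and RBA value are illustrative), the utility control statements for establishing a quiesce point and later falling back to it would look like:
    	QUIESCE TABLESPACE DBVIN.TSCUST

    	RECOVER TABLESPACE DBVIN.TSCUST TORBA X'00000551BE70'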

    REORG

    The REORG utility reorganizes a table space to improve access performance and reorganizes indexes so that they are more efficiently clustered.

    If the parameters REORG UNLOAD ONLY or REORG UNLOAD PAUSE are specified, the REORG utility unloads data in a format that the LOAD utility can use as input.

    RUNSTATS

    RUNSTATS scans a table or index space to gather information about the utilization of space and the efficiency of indexes. This information is then stored in the DB2 Catalog and used by the SQL optimizer to select access paths to data.

    The RUNSTATS utility should be run
  • when a table is loaded
  • when an index is created
  • when a tablespace is reorganized
  • when there have been extensive updates, deletions, or insertions in a tablespace
  • after the recovery of a tablespace to a prior point in time
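    A typical control statement (database and table space names are illustrative) gathers statistics for the table space and all its tables and indexes:
    	RUNSTATS TABLESPACE DBVIN.TSCUST TABLE(ALL) INDEX(ALL)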

    Loading/unloading tables
    	//S010    EXEC PGM=IKJEFT01,DYNAMNBR=20
    	//SYSTSPRT DD SYSOUT=*
    	//SYSPRINT DD SYSOUT=*
    	//SYSUDUMP DD SYSOUT=*
    	//SYSREC00 DD DISP=(NEW,CATLG,DELETE),SPACE=(CYL,(20,15)),
    	//            DCB=(RECFM=FB,BLKSIZE=0),UNIT=SYSDA,DSN=&QUAL..TBL.MARKET
    	//SYSPUNCH DD SYSOUT=*
    	//SYSIN    DD *
    	  SELECT * FROM TEST.TBL_MARKET
    	  FOR FETCH ONLY;
    	/*
    	//SYSTSIN  DD *
    	  DSN SYSTEM(DB2T)
    	  RUN  PROGRAM(DSNTIAUL) PARMS('SQL')
    	/*
    
    	//S020    EXEC DSNUPROC,UID='LOADUSER',
    	//             SYSTEM=DB2T,SIZE=4096K
    	//SYSIN    DD *
    	  LOAD DATA RESUME YES LOG NO NOCOPYPEND INDDN SYSREC01
    		  INTO TABLE TEST.TBL_MARKET
    	   (
    	   FIELD1	POSITION( 1	)
    	   CHAR(	3) ,
    	   FIELD2	POSITION( 4	)
    	   CHAR(	6) ,
    	   FIELD3	POSITION( 10 )
    	   CHAR(	3)
    	   )
    	/*
    	//SYSREC01 DD DSN=TEST.TBL.MRKT.LOADDATA,DISP=SHR
    	//SYSDISC  DD DSN=&&SYSDISC,DISP=(MOD,PASS),
    	//            DCB=(BLKSIZE=1040,LRECL=104,RECFM=FB,DSORG=PS),
    	//            SPACE=(CYL,(1,1),RLSE),UNIT=SYSDA
    	//SYSUT1   DD DSN=&&SYSUT1,DISP=(MOD,DELETE,),UNIT=SYSDA,
    	//            DCB=(BLKSIZE=20400),SPACE=(CYL,(150,50),RLSE)
    	//SORTOUT  DD DSN=&&SORTOUT,DISP=(MOD,DELETE,),UNIT=SYSDA,
    	//            DCB=(BLKSIZE=20400),SPACE=(CYL,(150,50),RLSE)
    	//SORTWK01 DD DSN=&&SORTWK1,DISP=(MOD,DELETE,),SPACE=(CYL,(150,50),RLSE),UNIT=SYSDA
    	//SYSERR   DD DSN=&&SYSERR,DISP=(MOD,PASS),
    	//            SPACE=(CYL,(5,5),RLSE),UNIT=SYSDA
    	//SYSMAP   DD DSN=&&SYSMAP,DISP=(MOD,PASS),
    	//            SPACE=(CYL,(5,5),RLSE),UNIT=SYSDA

    CA-Platinum

    Platinum provides the following facilities
  • RC/Query - queries the DB2 catalog through a menu-driven process
  • RC/Migrator - migrates DB2 objects and/or data
  • Database Analyzer - automates DB2 utility execution
  • Plan Analyzer - facilitates DB2 plan analysis to improve performance
  • DB2 Command Processor - executes DB2 commands through menu-driven templates
  • Platinum Fastload - allows the rapid migration of data between flat files and DB2 tables

    Platinum FastUnload

    JCL to download data from a DB2 table to a flat file
    	//UNLOAD   EXEC PGM=PTLDRIVM,REGION=0M,
    	//         PARM='EP=UTLGLCTL/DB2P,RESTART(BYPASS),,&DBNAME&TSNAME'
    	//*
    	//STEPLIB  DD  DISP=SHR,DSN=CAI.DB2UTL.LOADLIB
    	//PTIPARM  DD  DISP=SHR,DSN=CAI.DB2UTL.PARMLIB
    	//*
    	//SYSREC01 DD  DISP=(,CATLG),DSN=TEST.DB2.UNLOAD,UNIT=(SYSDA,3),SPACE=(CYL,(5,5),RLSE)
    	//SYSDDL01 DD  DSN=&&DDL(&TSNAME),DISP=(,PASS),LRECL=80,RECFM=FB,BLKSIZE=0,SPACE=(TRK,(1,1,1),RLSE)
    	//SYSCTL01 DD  DSN=&&RELOAD(&TSNAME),DISP=(,PASS),LRECL=80,RECFM=FB,BLKSIZE=0,SPACE=(TRK,(1,1,1),RLSE)
    	//PTIMSG   DD  SYSOUT=*
    	//PTIIMSG  DD  SYSOUT=*
    	//SYSOUT   DD  SYSOUT=*
    	//SYSUDUMP DD  SYSOUT=*
    	//SYSIN    DD  *
    		FASTUNLOAD
    		LOAD-CONTROL      DB2LOAD
    		OUTPUT-FORMAT     DSNTIAUL
    		INPUT-FORMAT      TABLE
    		SORTSIZE          32M
    		DISPLAY-STATUS    1000000
    		EXCP              YES
    		SHRLEVEL          REFERENCE
    		DDL-CONTROL       INTABLE
    		SELECT CUST_ID, CUST_NAME, CUST_ADDR FROM CUST_TBL
    			WHERE CUST_CATG='BUSI';




    IMS (INFORMATION MANAGEMENT SYSTEM)

    - hierarchical database architecture widely used in IBM mainframes
    - data is arranged logically in a top-down format, grouped in records, which are subdivided into a series of segments
    - the structure of the database is designed to reflect logical dependencies
    - certain data is dependent on the existence of certain other data

    IMS database organization

    The nine types of databases supported by IMS can be grouped by their IMS access method.

    Hierarchic Sequential Databases

    The earliest IMS databases were organized based on sequential storage and access of database segments. The root and dependent segments of a record are related by physical adjacency. Access to dependent segments is always sequential. Deleted dependent segments are not physically removed but are marked as deleted. Hierarchic sequential databases can be stored on tape or DASD.

    Hierarchic sequentially accessed databases include

  • HSAM (Hierarchic Sequential Access Method) - In an HSAM database, the segments in each record are stored physically adjacent. Records are loaded sequentially, with root segments in ascending key sequence. Dependent segments are stored in hierarchic sequence. The record format is fixed-length and unblocked. An HSAM database is updated by rewriting the entire database. Although HSAM databases can be stored on DASD or tape, HSAM is basically a tape-based format.

    IMS identifies HSAM segments by creating a two-byte prefix, consisting of a segment code and a delete byte, at the beginning of each segment. HSAM segments are accessed through two operating system access methods - BSAM and QSAM. QSAM is always used as the access method when the system is processing online.
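    The prefix and the logical-delete behavior can be sketched as a toy model in Python (the segment codes and the 0xFF delete flag are illustrative assumptions, not IBM's actual encoding):

```python
# Illustrative model of HSAM segments: each segment carries a two-byte
# prefix -- a segment code identifying its type and a delete byte.
# Deletes are logical: the delete byte is flagged, the data stays in place.

SEG_CODES = {"EMPLOYEE": 1, "EDUCATION": 2}   # assumed codes for the sketch

def build_segment(seg_type, data):
    """Return a segment as [segment_code, delete_byte, data]."""
    return [SEG_CODES[seg_type], 0x00, data]

def delete_segment(segment):
    """HSAM-style delete: mark the delete byte, keep the bytes on the media."""
    segment[1] = 0xFF

def is_deleted(segment):
    return segment[1] != 0x00

db = [build_segment("EMPLOYEE", "D93821"), build_segment("EDUCATION", "MBA")]
delete_segment(db[1])
active = [s for s in db if not is_deleted(s)]   # what a read would return
```

    Note that the deleted segment's data is still physically present; only a full rewrite of the database reclaims the space.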

  • SHSAM - A Simple HSAM (SHSAM) database contains only one type of segment - a fixed-length root segment.

  • HISAM - Like HSAM, HISAM databases store segments within each record in physically adjacent sequential order. Unlike HSAM, each HISAM record is indexed, allowing direct access to each record. HISAM databases also provide a method for sequential access when required. HISAM databases are stored on DASD.

    A HISAM database is stored in a combination of two data sets. The database index and all segments in a database record that fit into one logical record are stored in a primary data set that is a VSAM KSDS. Remaining segments are stored in the overflow data set, which is a VSAM ESDS. The index points to the CI containing the root segment, and the logical record in the KSDS points to the logical record in the ESDS, if necessary.
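    The split between the two data sets can be sketched as follows (the capacity value is an illustrative assumption; real logical record lengths come from the DBD):

```python
# Rough sketch (not real VSAM) of how a HISAM database record is split:
# segments that fit in one primary (KSDS) logical record stay there; the
# rest spill into overflow (ESDS) records, reached via a pointer.

KSDS_CAPACITY = 100  # bytes per primary logical record -- illustrative value

def store_record(segments):
    """segments: list of (name, size) tuples in hierarchic sequence."""
    primary, overflow, used = [], [], 0
    for name, size in segments:
        if not overflow and used + size <= KSDS_CAPACITY:
            primary.append(name)
            used += size
        else:
            overflow.append(name)   # once a record spills, it stays sequential
    # the KSDS logical record points to its ESDS record, if one exists
    return {"ksds": primary, "esds": overflow, "ptr": bool(overflow)}

rec = store_record([("ROOT", 60), ("CHILD1", 30), ("CHILD2", 40)])
```

    Here the root and the first dependent fit the primary logical record; the remaining segment lands in overflow, with the pointer flag set.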

  • SHISAM - A Simple HISAM (SHISAM) database contains only a root segment, and its segment has no prefix portion. SHISAM databases can use only VSAM as their access method. The data must be stored in a KSDS.

  • GSAM (Generalized Sequential Access Method) - GSAM databases are designed to be compatible with MVS data sets. They are used primarily when converting from an existing MVS-based application to IMS because they allow access to both during the conversion process. To be compatible with MVS data sets, GSAM databases have no hierarchy, database records, segments, or keys. GSAM databases can be based on the VSAM or QSAM/BSAM MVS access methods.

    Hierarchic Direct Databases

    HD databases share these characteristics
  • Pointers are used to relate segments
  • Deleted segments are physically removed
  • VSAM ESDS or OSAM data sets are used for storage
  • HD databases are stored on DASD
  • HD databases are of a more complex organization than sequentially organized databases

    Hierarchic direct databases include

  • HDAM (Hierarchic Direct Access Method) - HDAM databases are useful when fast, direct access is needed to the root segment of the database record. In an HDAM database, the root segments of records are randomized to a storage location by an algorithm that converts a root's key into a storage location. No index or sequential ordering of records or segments is involved. The randomizing module reads the root's key and, through an arithmetic technique, determines the storage address of the root segment. The storage locations to which the roots are randomized are called root anchor points (RAPs). The randomizing algorithm attempts to achieve an even distribution of records across the data set. Theoretically, randomizing the location of records minimizes the number of accesses required to retrieve a root segment.

    The randomizing technique results in extremely fast retrieval of data, but it usually does not provide for sequential retrieval of records. This can be achieved in HDAM databases through the use of secondary indexes or by using a physical-key-sequencing randomizer module.

    The advantage of HDAM is that it does not require reading an index to access the database. The randomizing module provides fast access to root segments and to the paths of dependent segments using only the paths of the hierarchy needed to reach the segment being accessed. The disadvantage is that HDAM databases cannot be processed in key sequence unless the randomizing module stores root segments in physical key sequence.
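    The randomizing idea can be sketched in Python (real randomizing modules are installation-specific assembler routines; the area sizes and the hash below are illustrative assumptions):

```python
# Toy HDAM randomizer: convert a root key into a (block, RAP) location.

NUM_BLOCKS = 500       # illustrative size of the root addressable area
RAPS_PER_BLOCK = 4     # root anchor points per block -- assumed value

def randomize(root_key):
    """Hash the key arithmetically into a (block, RAP) storage location."""
    h = 0
    for ch in root_key:
        h = (h * 31 + ord(ch)) % (NUM_BLOCKS * RAPS_PER_BLOCK)
    return divmod(h, RAPS_PER_BLOCK)   # -> (block number, RAP within block)

loc1 = randomize("D93821")
loc2 = randomize("D93821")   # same key always randomizes to the same place
```

    The mapping is deterministic, so any retrieval by key goes straight to the right block with no index read; but adjacent keys generally land in unrelated blocks, which is why key-sequenced processing is lost unless the randomizer itself preserves physical key sequence.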

  • HIDAM (Hierarchic Indexed Direct Access Method) - Unlike HDAM, HIDAM databases use an index to locate root segments. They can be used to access database records both randomly and sequentially, and to access segments randomly within a record. The index and the data are stored in separate data sets - the index as a single VSAM KSDS, the data as a VSAM ESDS or OSAM data set. The index stores the value of the key of each root segment together with a four-byte pointer containing the address of the root segment.

    The root segment locations in the index are stored in sequential order, allowing HIDAM databases to be processed directly or sequentially. A disadvantage of HIDAM databases is that the additional step of scanning the index makes access slower than in HDAM databases.

    When accessing a record by root key, IMS searches for the key in the index and uses the pointer to go directly to the record. If the PTR=TB or PTR=HB (twin or hierarchic backward pointer) parameter is defined for the root, the root segments are chained together in ascending order. Sequential processing is done by following this pointer chain.

    In HIDAM, RAPs are generated only if the PTR=T or PTR=H (twin or hierarchic pointer) parameter is specified for the root. When either of these parameters is defined, IMS puts one RAP at the beginning of the CI or block. Root segments within the CI or block are chained by pointers from the most recently inserted back to the first root on the RAP. The result is that the pointers from one root to the next cannot be used to process roots sequentially. Sequential processing must be performed by using key values, which requires the use of the index and increases access time. Hence, PTR=TB or PTR=HB should be specified for root segments in HIDAM databases.

    PHDAM databases are partitioned HDAM databases. Each PHDAM database is divided into a maximum of 1001 partitions, which can be treated as separate databases. A PHDAM database is one form of High Availability Large Database (HALDB).

    Fast Path Databases

    Fast Path databases provide fast access with limited functionality. Two types of databases can be used with the Fast Path feature of IMS

  • DEDB (Data Entry databases) - similar in structure to an HDAM database, but with some important differences. DEDBs are stored in special VSAM data sets called areas. The unique storage attributes of areas are a key element of the effectiveness of DEDBs in improving performance. While other database types allow records to span data sets, a DEDB always stores all the segments that make up a record in a single area, so an area can be treated as a self-contained unit. Each area is also independent of the other areas: an area can be taken offline, if it fails or while it is being reorganized, without affecting the other areas.

    Areas of the same DEDB can be allocated on different volumes or volume types. Each area can have its own space management parameters. A randomizing routine chooses each record location, avoiding buildup on one device. These capabilities allow greater I/O efficiency and increase the speed of access to the data.

    An important advantage of DEDB areas is the flexibility they provide in storing and accessing self-contained portions of a database.

  • MSDB (Main Storage databases) - are so named because the entire database is loaded into main storage when processing begins. This makes them extremely fast as segments do not have to be retrieved from DASD. Most shops reserve MSDBs for a site's most frequently accessed data that requires a high transaction rate. The fact that MSDBs require memory storage limits their size.


    Segments

    A segment is the smallest structure of the database in the sense that IMS cannot retrieve data in an amount less than a segment. Segments can be broken down into smaller increments called fields, which can be addressed individually by application programs. A record is defined as a root segment with all its dependent segments. A database record can contain a maximum of 255 types of segments.

    In IMS, segments are defined by the order in which they occur and by their relationship with other segments.
  • Root segment - The first, or highest segment in the record. There can be only one root segment for each record. There can be many records in a database
  • Dependent segment - All segments in a database record except the root segment
  • Parent segment - A segment that has one or more dependent segments beneath it in the hierarchy
  • Child segment - A segment that is a dependent of another segment above it in the hierarchy
  • Twin segment - A segment occurrence that exists with one or more segments of the same type under a single parent
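    These relationships can be pictured as a tree, where the hierarchic sequence (top to bottom, front to back - the order in which unqualified Get Next calls would visit segments) is a preorder walk. A small Python sketch, with assumed segment names:

```python
# A database record modeled as a tree of (name, children) tuples.
# The hierarchic sequence is a preorder traversal of that tree.

def hierarchic_sequence(segment):
    """Yield segment names in hierarchic sequence (preorder)."""
    name, children = segment
    yield name
    for child in children:
        yield from hierarchic_sequence(child)

record = ("EMPLOYEE", [                  # root segment
    ("EDUCATION", [("DEGREE", [])]),     # EDUCATION is parent of DEGREE
    ("EDUCATION", []),                   # twin of the first EDUCATION
    ("ADDRESS", []),
])

order = list(hierarchic_sequence(record))
```

    The two EDUCATION occurrences under the same parent are twins; everything below EMPLOYEE is a dependent segment.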


    Database Description (DBD)

    The DBD describes the physical structure of the database and also the access methods to be used. It is a series of macro statements that define the type of the DB, all segments and fields, logical relationships and indexing.

    DBD statements are submitted to the DBDGEN utility, which generates a DBD control block and stores it in the IMS.DBDLIB library for use when an application program accesses the database.

    A sample DBD is given below
    	    DBD      NAME=EMPDBD,ACCESS=HIDAM
    	ROOTSEGM DATASET DD1=EMPDAT,SIZE=4096,FRSPC=(00,05),DEVICE=3390
    	    SEGM     NAME=EMPLOC,PARENT=0,PTR=TB,
    	             COMPRTN=(IMSHRINK,DATA,INIT),BYTES=768
    	    LCHILD   NAME=(X1LOCKEY,CUSTX1),PTR=INDX
    	    FIELD    NAME=(LOCKEY,SEQ,U),START=1,TYPE=C,BYTES=11
    	    LCHILD   NAME=(X2LOCKEY,CUSTX2),PTR=INDX
    	    XDFLD    NAME=MNENX2,SEGMENT=EMPLOC,SUBSEQ=/SX2,SRCH=(LOCMNEN,LOCTOWN)
    	    FIELD    NAME=/SX2
    	    FIELD    NAME=LOCMNEN,START=39,TYPE=C,BYTES=7
    	    FIELD    NAME=LOCCORP,START=1,TYPE=C,BYTES=3
    	    FIELD    NAME=LOCTOWN,START=4,TYPE=C,BYTES=3
    	    SEGM     NAME=EMPEDU,PARENT=((EMPLOC,SNGL)),PTR=T,
    	             COMPRTN=(IMSHRINK,DATA,INIT),BYTES=640
    	    FIELD    NAME=(EDUSQNO,SEQ,U),START=1,TYPE=C,BYTES=2
    	    FIELD    NAME=EDUSCHOOL,START=3,TYPE=C,BYTES=6
    	    FIELD    NAME=EDUDEGREE,START=47,TYPE=C,BYTES=8
    	    FIELD    NAME=EDUYEAR,START=55,TYPE=C,BYTES=8
    			-----etc----
    	    DBDGEN
    	    FINISH
    	    END
    The DBD contains the following statements

    DBD - names the database being described and specifies its organization
    DATASET - Defines the DDname and block size of a data set. One DATASET statement is required for each data set group.
    SEGM - Defines a segment type, its position in the hierarchy, its physical characteristics, and its relationship to other segments. Up to 15 hierarchic levels can be defined. The maximum number of segment types for a single database is 255.
    FIELD - Defines a field within a segment. The maximum number of fields per segment is 255. The maximum number of fields per database is 1,000
    LCHILD - Defines a secondary index or logical relationship between two segments. It also is used to define the relationship between a HIDAM index and the root segment of the database
    XDFLD - Used only when a secondary index exists. It is associated with the target segment and specifies the name of the indexed field, the name of the source segment, and the field to be used to create the secondary index
    DBDGEN - Indicates the end of statements defining the DBD
    END - Indicates to the assembler that there are no more statements.

    IMS operating modes

    IMS can be run in three modes:

    A. Batch DL/I mode - No data communication services or terminals are used. Transactions are generated in batch and saved in standard files. Application program runs are initiated with JCL. Processing output is in hard copy format. Databases are accessed offline.

    B. BMP (Batch Message Processing) mode - A combination of batch and online processing. There are two kinds of BMPs
  • Transaction oriented: access the online message queues, and process input from and output to OS/VS files and databases
  • Batch oriented: access online databases in batch mode, can send messages to the message queue, and are scheduled by the operator using JCL

    C. Teleprocessing Program mode, also called Message Processing Program (MPP) mode - Transactions are entered at a terminal and placed in the message queue file, and the IMS scheduler immediately schedules the appropriate program to process the transaction. Processing output may be in hard copy format or sent back as a screen message to the original or an alternate terminal. Databases are accessed online.

    The Control Region

    The Control (CTL) region is the address space in the MVS environment that holds the control program that runs continuously in the DB/DC environment. It is responsible for a number of online functions. It holds the IMS control program, which services all communications DL/I calls. It is responsible for Fast Path databases that are accessed by an online program and for Fast Path I/O. It performs all IMS command processing. The control region also
  • supervises processing for message queues
  • supervises communication traffic for all connected terminals
  • is responsible for restart and recovery information
  • is responsible for the operation of the system log

    IMS operator commands

    /ALLOCATE - causes IMS to allocate a conversation to the specified LUNAME and TPNAME if any output is queued in IMS for that destination.
    /DBRECOVERY - used to prevent transactions or programs from accessing DL/I databases, DEDBs, or DEDB areas.
    /START - makes IMS resources available for reference and use.
    /STOP - stops
    - the sending, receiving, or queuing of output messages to a particular communication line, terminal, user, or logical path
    - the scheduling or queuing of messages containing a specific transaction code
    - the execution of a specific program
    - the use of a given database
    /DISPLAY DATABASE - displays the status of specified databases.

    IMS Execution datasets

    	 //IMS        DD	- the location of the PSBLIB and DBDLIB libraries on the system
    	 //DFSRESLIB  DD	- resident library for IMS load modules
    	 //DFSVSAMP   DD	- buffer pool information for the DL/I buffer pools
    	 //IEFRDER    DD	- IMS writes a detailed log record of activity to the files defined in this DD
    	 //DATABASE   DD	- a unique DD name associated with each database dataset; these must be allocated before any DL/I call
    	 //RECON      DD	- datasets for recovery, needed by DBRC in a batch run. Not needed for BMP or IRC
    	 //IMSRESLIB  DD	- the RESLIB containing the HDAM randomizer and other required IMS programs
    	 //ACBLIB     DD	- the ACBLIB library. The ACBs are generated from the PSBs when running in IRC mode

    Data language/I (DL/I)

    DL/I is a command level language used in batch and online programs to access data stored in IMS databases. Application programs use DL/I calls to request data and DL/I then uses system access methods like VSAM to handle the physical transfer of data to and from the DB.

    Connection to IMS is established by using the ENTRY 'DLITCBL' statement. IMS gives control to an application program through this entry point. The entry point must refer to the PCBs in the order in which they have been defined in the PSB.
    	LINKAGE SECTION.
    
    	*  PCB MASK FOR THE DATA BASE DEFINED IN THE PSB
    	 01  EMPLOYEE-DB-PCB-MASK.
    		 05  PCB1-DBD-NAME	PIC X(08).
    		 05  PCB1-SEG-LEVEL	PIC X(02).
    		 05  PCB1-STATUS-CODE	PIC X(02).
    		 05  PCB1-PROC-OPT	PIC X(04).
    		 05  PCB1-RESV		PIC S9(05) COMP.
    		 05  PCB1-SEG-NAME	PIC X(08).
    		 05  PCB1-LEN-KEY	PIC S9(05) COMP.
    		 05  PCB1-SENS-SG	PIC S9(05) COMP.
    		 05  PCB1-FB-AREA	PIC X(08).
    
    	 01  CUSTOMER-DB-PCB-MASK.
    		 05  PCB2-DBD-NAME	PIC X(08).
    		 05  PCB2-SEG-LEVEL	PIC X(02).
    		........
    		........
    		 05  PCB2-FB-AREA	PIC X(08).
    
    	*
    	 PROCEDURE DIVISION.
    	 A000-MAIN-PROCESS.
    		ENTRY 'DLITCBL' USING EMPLOYEE-DB-PCB-MASK, CUSTOMER-DB-PCB-MASK.
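    The fixed layout of the PCB mask above can be mirrored with a struct format in Python. This is an illustrative sketch: the field sizes follow the COBOL picture clauses shown (S9(05) COMP taken as a 4-byte big-endian binary, and an assumed 8-byte key feedback area), and the packed bytes are made-up sample data:

```python
import struct

# Mirrors the COBOL PCB mask: 8-byte DBD name, 2-byte segment level,
# 2-byte status code, 4-byte processing options, 4-byte reserved binary,
# 8-byte segment name, 4-byte key length, 4-byte sensitive-segment count,
# then the key feedback area (8 bytes in this example).
PCB_FORMAT = ">8s2s2s4si8sii8s"   # big-endian, as on the mainframe

def parse_pcb(raw):
    fields = struct.unpack(PCB_FORMAT, raw)
    names = ("dbd_name", "seg_level", "status_code", "proc_opt",
             "reserved", "seg_name", "key_len", "num_sens_segs", "fb_area")
    return dict(zip(names, fields))

# A made-up PCB image after a successful call (status code = two blanks)
raw = struct.pack(PCB_FORMAT, b"EMPDBD  ", b"01", b"  ", b"AP  ",
                  0, b"EMPLOYEE", 6, 3, b"D93821  ")
pcb = parse_pcb(raw)
```

    After every DL/I call the program inspects fields like the status code and segment name through exactly this kind of fixed-offset view, which is why the mask layout must match IMS's PCB layout byte for byte.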

    Making DL/I calls

    To establish a DL/I interface from an application program, the CBLTDLI or PLITDLI procedure is used (for COBOL and PL/I respectively).
    	CALL 'CBLTDLI' USING DL1-FUNCTION-GN,
    			EMPLOYEE-PCB,
    			EMP-IO-AREA,
    			SSA-EMPLOYEE.
    Parameters in a CBLTDLI call

    An optional first parameter contains a count of the number of parameters being passed.

    The next parameter (the first, if no count is passed) is a four-character field containing the type of DL/I call being made.

    The next parameter is the PCB mask corresponding to the database to which the call is made.

    This is then followed by an I/O area to contain the record retrieved from (or to be written to) the database.

    Then follow a number of optional SSAs (Segment Search Arguments) that identify the record within the database to be accessed.

    If the program performs a GN or other call that does not fully qualify the record to be retrieved from the database then the information in the PCB area contains the location within the DB from which the database call will commence the search for the indicated record. The content of this area after the call gives the information about the current location of the pointer within the DB as well as the status of the last call. The status code field in this area can be checked following the call.

    SSAs

    An unqualified SSA gives only the name of the segment that the call should access. In an unqualified SSA, the segment name field is 8 bytes and must be followed by a 1-byte blank. If the actual segment name is fewer than 8 bytes long, it must be padded to the right with blanks. Examples of unqualified SSAs are
    	01  EMP-SSA		PIC X(09)  VALUE 'EMPLOYEE '.
    	01  PATIENT-SSA		PIC X(09)  VALUE 'PATIENT  '.
    In a qualified SSA, a qualification statement follows the segment name, specifying the key to be accessed. A qualified SSA has the structure below.
    	01  EMP-SSA.
    		05  SEGNAME		PIC X(08)  VALUE 'EMPLOYEE'.  
    		05  CMD-CD-DELIMITER	PIC X(01)  VALUE '*'.        
    		05  CMD-CD		PIC X(01)  VALUE '-'.        
    		05  FILLER		PIC X(01)  VALUE '('.        
    		05  KEY-FIELD-NAME	PIC X(08)  VALUE 'EMPKEY  '.
    		05  OPERATOR		PIC X(02)  VALUE 'EQ'.                   
    		05  EMP-KEY		PIC X(06)  VALUE 'D93821'.                                          
    		05  FILLER		PIC X(01)  VALUE ')'.
    Note:
  • If the SSA contains only the segment name, byte 9 must contain a blank
  • If the SSA contains one or more command codes - byte 9 must contain an asterisk (*). The last command code must be followed by a blank unless the SSA contains a qualification statement in which case, the command code is followed by the left parenthesis of the qualification statement
  • The operator can be one of the following: 'EQ','GT','LT','LE','GE','NE','= ',' =','> ', ' >','< ',' <','<=','=<','>=','=>','^=','=^'
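    Both SSA flavors are just fixed-format character strings, so they are easy to sketch. The builder below follows the layout rules above but omits command codes (the '*' form); the segment and field names are the sample ones from this section:

```python
# Build SSAs as fixed-format strings: 8-byte padded segment name, then
# either a single blank (unqualified) or a (field operator value)
# qualification statement (qualified). Command codes are omitted here.

def unqualified_ssa(segment):
    return f"{segment:<8} "             # name padded to 8 bytes, then a blank

def qualified_ssa(segment, field, op, value):
    # field name is also an 8-byte padded field; operator is 2 bytes
    return f"{segment:<8}({field:<8}{op:<2}{value})"

emp_unq = unqualified_ssa("EMPLOYEE")
emp_qual = qualified_ssa("EMPLOYEE", "EMPKEY", "EQ", "D93821")
```

    The qualified result reproduces, byte for byte, what the COBOL group item above lays out field by field.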

    Command Codes

    SSAs can include one or more command codes, which can change and extend the functions of DL/I calls. Available command codes are
    	C 	Supplies concatenated key in SSA 
    	D 	Retrieves or inserts a sequence of segments 
    	F 	Starts search with first occurrence 
    	L 	Locates last occurrence 
    	M 	Moves subset pointer forward to the next segment 
    	N 	Prevents replacement of a segment on a path call 
    	P 	Establishes parentage of present level 
    	Q 	Enqueues segment 
    	R 	Retrieves first segment in the subset 
    	S 	Sets subset pointer unconditionally 
    	U 	Maintains current position 
    	V 	Maintains current position at present level and higher 
    	W 	Sets subset pointer conditionally 
    	Z 	Sets subset pointer to 0 
    	- (null) Reserves storage positions for program command codes in SSA

    DL/I calls

    CHKP - checkpointing
    DLET - deletes the segment retrieved by the last get-hold call using the same PCB
    FLD - used to verify and optionally update the contents of one or more fields in an MSDB segment
    GU - Get Unique. If unqualified, retrieves the first segment in the PCB view (program view) of the DB. If SSAs are specified, retrieves the first segment that satisfies the SSAs
    GN - Get Next. If unqualified, retrieves the next segment in the hierarchical sequence of the DB. If SSAs are specified, retrieves the next segment that satisfies the SSAs
    GNP - Get Next within Parent. Like the GN call but restricted to the subtree of the current parent (the parent is described in the PCB)
    GHU - Get Hold Unique. Like the GU call but also holds the segment for the next update call that uses the same PCB
    GHN - Get Hold Next. Like GN but also holds the retrieved segment
    GHNP - Get Hold Next within Parent. Like GNP but also holds the segment
    INIT - gets the data availability status code
    ISRT - Insert. Adds new segments using the PCB specified
    LOG - writes a record to the system log
    PCB - schedules a PSB to allow database access
    POS - the POS (Position) call is used with a DEDB to retrieve the position of a specific (or the last inserted) sequential dependent segment, or to find out how much free space is available within a DEDB area
    REPL - replaces the segment held from the last get-hold call using the same PCB with an updated segment that is provided. The get-hold call must be the last DL/I call that used the same PCB
    ROLB - rollback
    ROLS - dynamically backs out changes to the database to the last sync point, then returns control to the application program
    SETS - used to set intermediate backout (sync) points while updating the database, allowing the program to restore the data to its original condition if errors are encountered
    STAT - gets IMS system statistical information
    TERM - terminates the current PSB and database access after committing all database changes
    XRST - restart from a checkpoint

    The GOBACK statement

    In an IMS COBOL program, GOBACK should be used because it returns control to DL/I, which needs to do some housekeeping before terminating. STOP RUN, on the other hand, does not return control to DL/I, causing unpredictable results.

    Checkpointing and Restarting

    Because some programs do not have built-in commit points, IMS provides a means whereby an application program can request a commit point through a Checkpoint (CHKP) call. A CHKP call tells IMS that the program has reached a commit point. A checkpoint provides a point from which the program can be restarted. Checkpoint calls are primarily used in the following programs
  • multiple-mode programs
  • batch-oriented BMPs
  • batch programs
  • programs running in a data sharing environment

    Checkpoint calls are not needed in the following programs
  • single-mode programs
  • database load programs
  • programs that access the database in read-only mode and with PROCOPT=GO that are short enough to be restarted from the beginning
  • programs that have exclusive use of the database

    A Checkpoint call produces the following results
  • IMS makes the changes to the database permanent
  • IMS releases the segment or segments it has locked since the last commit point
  • The current position in the database (except GSAM) is reset to the beginning of the database
  • IMS writes a log record (containing the checkpoint identification) to the system log
  • IMS sends a message (containing the checkpoint identification) to the system console operator and the IMS master terminal operator
  • IMS returns the next input message to the program's I/O area
  • If the program also accesses DB2, IMS tells DB2 that the changes the program has made to DB2 can be made permanent and DB2 obliges

    Backout

    IMS backs out changes to a database automatically if an MPP or BMP application program terminates abnormally before reaching a commit point. IMS also performs a backout if a program issues a Roll (ROLL), Roll Back (ROLB), or Roll Back to SETS (ROLS) call. So that the backout can be performed and users do not receive information that may be inaccurate, IMS holds output messages until a program reaches a commit point. On an abnormal termination of the program, IMS discards any output messages generated since the last commit point.


    If a program terminates abnormally while processing an input message, IMS may discard the input message, depending on the type of termination. In all cases, IMS backs out uncommitted changes and releases locks on any segments held since the last commit point. The following DL/I calls can be used to manually back out database updates:
  • ROLB
  • ROLL
  • ROLS


    Set a Backout Point (SETS)

    The ROLB, ROLL, and ROLS calls produce three common results:
  • All database changes since the last commit are backed out
  • All output messages (except EXPRESS PURG) since the last commit point are discarded
  • All segment locks are released


    ROLB, ROLL, and ROLS calls differ in the following ways:
  • ROLB returns control to the program and places the first segment of the first message after the last commit point into the I/O PCB
  • ROLL abends with user code 0778. All messages retrieved since the last commit point are discarded
  • ROLS abends with user code 3303. All messages retrieved since the last commit point are returned to the message queue

    A SETS call can be used to set up to nine intermediate backout points to be used by the ROLS call. It can also be used to cancel all existing backout points. SETS can be combined with a ROLS call to back out pieces of work between the intermediate backout points.

    The SETS call sets the intermediate backout point by using the I/O PCB and including an I/O area and a 4-byte token. The ROLS call backs out database changes and message activity that has occurred since a prior SETS call, by specifying the token that marks the selected backout point. IMS then backs out the database changes made since the SETS token specified. It also discards all non-express messages since the token.
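    The token mechanism can be modeled as a small stack of savepoints. This is a conceptual sketch only (the class, token values, and change strings are invented for illustration; real SETS/ROLS also involve message activity):

```python
# Toy model of SETS/ROLS: SETS stacks up to nine tokened backout points;
# ROLS with a token discards all work done since that token was set.

MAX_SETS = 9

class UnitOfWork:
    def __init__(self):
        self.changes = []        # uncommitted database changes
        self.sets_points = []    # (token, change-count when the point was set)

    def sets(self, token):
        if len(self.sets_points) >= MAX_SETS:
            raise RuntimeError("at most nine intermediate backout points")
        self.sets_points.append((token, len(self.changes)))

    def change(self, item):
        self.changes.append(item)

    def rols(self, token):
        """Back out everything done since the SETS call that used `token`."""
        for i, (tok, mark) in enumerate(self.sets_points):
            if tok == token:
                del self.changes[mark:]      # discard changes since the point
                del self.sets_points[i:]     # later backout points vanish too
                return
        raise KeyError(token)

uow = UnitOfWork()
uow.change("update A")
uow.sets(b"TOK1")
uow.change("update B")
uow.change("update C")
uow.rols(b"TOK1")        # update B and update C are backed out
```

    Rolling back to a token also invalidates any backout points set after it, mirroring how IMS discards intermediate sync points beyond the one specified.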

    A SETU call operates like a SETS call except that it ignores certain conditions under which the SETS call is rejected. A SETS call is not accepted when unsupported PCBs exist in the PSB (PCBs for DEDB, MSDB and GSAM organizations) or an external subsystem is used.

    Restart

    A Restart (XRST) call lets a program be restarted after an abnormal termination. It must be coded as the first call in the program. When the Restart call is used, the following actions occur
  • The last message is returned.
  • The database position is reestablished.
  • Up to seven specified data areas are returned to their condition at the last checkpoint.

    DL/I Status codes

    blank - call completed successfully
    AA - the alternate PCB contains a transaction code instead of a logical terminal as a destination
    AB - segment I/O area is missing from the call statement
    AC - hierarchical error on an insert or get call
    AD - function argument is not coded correctly
    AF - size of variable-length record is invalid for GSAM get access
    AH - invalid SSA encountered on an insert call
    AI - error opening the database
    AJ - SSA specified for the call is invalid
    AK - field name specified for a qualified SSA is incorrectly coded
    AT - I/O area specified is too small
    AU - length of the SSAs specified exceeds the maximum allowed
    DA - REPL or DLET attempted to change a segment key field
    DJ - get hold issued after REPL or DLET
    DX - DLET violated the delete rule for the segment
    FD - resource deadlock
    GB - end of database reached on a GN call
    GC - attempted to cross a unit-of-work boundary
    GD - position in database lost
    GE - segment not found
    GG - processing with a procopt of GON or GOT while concurrent update activity is occurring
    GK - call completed successfully, but a different segment type on the same level was retrieved for a GN or GNP call
    GL - LOG request has an invalid log code
    GP - GNP issued but parentage was not previously established
    II - attempt to insert a segment with a duplicate key
    IX - insert rule violation
    LB - attempt to load a segment that already exists
    LC - attempt to load a segment out of sequence
    LD - attempt to load a segment whose parent does not exist
    LE - hierarchical sequence in the DBD does not match that of the segment to be loaded
    RX - replace rule violation
    TI - path to the segment is invalid
    TJ - DL/I is not active
    VI - during an insert or update, the length of a variable-length segment is too long

    Program Specification Block (PSB)

    A PSB contains a series of macro statements that describe the data access characteristics of an application program.
    PSB specifies
  • all databases that the program will access
  • which segments in the database the program is sensitive to
  • how the program can use the segments (inquiry or update)

    To allow a program to access an IMS DB, a PSB has to be defined that includes all of the database references the program will be allowed to make. This PSB will be made up of a number of PCB (Program Communication Block) references - one for each IMS database pointer that the program requires.
    	PCB   TYPE=DB, DBDNAME=EMPDBD, PROCOPT=AP, KEYLEN=30, POS=MULTIPLE
    	SENSEG NAME=EMPDATA,  PARENT=0, PROCOPT=AP
    	SENSEG NAME=EMPEDU,  PARENT=EMPDATA, PROCOPT=AP
    	SENSEG NAME=EMPPERS,  PARENT=EMPDATA, PROCOPT=AP
                  ---etc----
    	PSBGEN LANG=ASSEM (or COBOL),PSBNAME=EMPPSB
    	END

    PSB Statements

    PCB - Defines the database to be accessed by the application program. The statement also defines the type of operations allowed by the application program. Each database requires a separate PCB statement. PSB generation allows for up to 255 database PCBs (less the number of alternate PCBs defined).

    SENSEG - Defines the segment types to which the application program will be sensitive. A separate SENSEG statement is required for each segment type. If a segment is defined as sensitive, all the segments in the path from the root to that segment must also be defined as sensitive. Specific segments in the path can be exempted from sensitivity by coding PROCOPT=K in the SENSEG statement.

    SENFLD - Defines the fields in a segment type to which the application program is sensitive. Can be used only in association with field-level sensitivity. The SENFLD statement must follow the SENSEG statement to which it is related.

    PROCOPT - Defines the type of access to a database or segment. PROCOPTs can be used on the PCB or SENSEG statements.

    Primary PROCOPT codes
    G - read only
    R - replace, includes G
    I - insert
    D - delete, includes G
    A - get and update, includes G, R, I, D
    K - used on SENSEG statement; program will have key-only sensitivity to this segment
    L - load database

    Secondary PROCOPT codes
    E - exclusive use of hierarchy or segments
    O - get only, does not lock data when in use
    P - must be used if program will issue path call using the D command code
    S - sequential (LS is required to load HISAM and HIDAM databases; GS gets in ascending sequence)

    The PCB Mask

    A program that accesses an IMS DB through a PSB needs to define an access area for each of the PCBs in the PSB used. The area is not updateable from within the program; instead it gives the program access to information such as the current position of the database pointer and the status of the last call. These PCB definitions go in the Linkage Section. There will be one PCB mask defined for each PCB listed in the PSB, and the order in which the masks appear in the Linkage Section must match the order of the PCBs in the PSB exactly.

    The format of each PCB mask is as follows

    	Database name			8 bytes
    	Segment level number		2 bytes
    	Status code			2 bytes
    	Processing options		4 bytes
    	Reserved			4 bytes
    	Segment name			8 bytes
    	Length of key feedback area	4 bytes
    	Number of sensitive segments	4 bytes 
    	Key feedback area		variable 
    
    	LINKAGE SECTION.                                   
    													   
    	01  PCB1.                                          
    		05  PCB1-DBDNAME	PIC X(8).
    		05  PCB1-SEG-LEVEL	PIC X(2).
    		05  PCB1-STATUS		PIC X(2).
    		05  PCB1-PROCOPT	PIC X(4).
    		05  FILLER		PIC S9(5) COMP.
    		05  PCB1-SEG-NAME	PIC X(8).
    		05  PCB1-KEYFB-LEN	PIC S9(5) COMP.
    		05  PCB1-NUM-SENSEGS	PIC S9(5) COMP.
    		05  PCB1-KEYFB		PIC X(30).
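
    As a sketch of how the mask fields line up, the fixed portion of a PCB mask can be unpacked byte for byte. This is a hypothetical Python illustration of the layout table above (field names and sample values are invented; real PCB data is EBCDIC, shown here as ASCII for readability):

```python
import struct

# Fixed portion of a PCB mask, per the layout table above:
# 8s=DBD name, 2s=segment level, 2s=status code, 4s=procopt,
# i=reserved, 8s=segment name, i=key feedback length, i=sensitive segments
PCB_FIXED = struct.Struct(">8s2s2s4si8sii")

def parse_pcb_mask(buf: bytes) -> dict:
    """Split the fixed part of a PCB mask into named fields."""
    (dbdname, seg_level, status, procopt,
     _reserved, seg_name, keyfb_len, num_sensegs) = PCB_FIXED.unpack_from(buf)
    return {
        "dbdname": dbdname.decode().rstrip(),
        "seg_level": seg_level.decode(),
        "status": status.decode(),
        "procopt": procopt.decode().rstrip(),
        "seg_name": seg_name.decode().rstrip(),
        "keyfb_len": keyfb_len,
        "num_sensegs": num_sensegs,
        # The variable-length key feedback area follows the fixed part
        "key_feedback": buf[PCB_FIXED.size:PCB_FIXED.size + keyfb_len],
    }

# Hypothetical mask contents after a successful call (blank status code)
sample = (b"EMPDBD  " b"01" b"  " b"AP  "
          + struct.pack(">i", 0)
          + b"EMPDATA " + struct.pack(">ii", 6, 3) + b"E00042")
mask = parse_pcb_mask(sample)
```

    The key feedback area holds the concatenated key of the segment last accessed, which is why its length is reported separately in the mask.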

    Bufferpool Allocation

    The DFSVSAMP dataset is used to allocate bufferpools for VSAM processing. IMS may build multiple VSAM pools and each pool may have multiple subpools.

    A VSAM pool is defined by a POOLID= statement. Subpools in the pool are defined by the VSRBF= statements which follow the POOLID= statement. The POOLID= statement is optional if only one pool is built. When creating multiple pools, a POOLID= statement for each pool must be included.

    A subpool has a buffer size of 512, 1024, 2048, or a multiple of 4096 up to 32768. A subpool may have up to 32,767 buffers. Subpools may be used for data components only, index components only, or both. The third positional parameter on the VSRBF= statement determines which type of components may use the subpool.
    	I - indicates that only index components may use the subpool
    	D - either data or index components may use the subpool (default)
    If there are any subpools defined with the I parameter, no subpool in the pool will be used for both index and data components.

    Sample pool specifications
    	POOLID=VSM1
    	VSRBF=4096,3000	==> all of these subpools may be used for both index and data components
    	VSRBF=12288,2000
    	VSRBF=32768,12
    
    
    	POOLID=VSM1		==>   Since the second VSRBF statement builds an index subpool, the first
    	VSRBF=4096,3000,D	==>   and third subpools may only be used for data components
    	VSRBF=12288,2000,I
    	VSRBF=32768,12,D
    
    
    	POOLID=VSM1
    	VSRBF=4096,3000	==> The first and third subpools may only be used for data components and the second for index components
    	VSRBF=12288,2000,I
    	VSRBF=32768,12
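
    The subpool selection rules above can be sketched as follows. This is an illustrative approximation, not IMS's actual buffer-handler algorithm: it assumes a dataset uses the smallest subpool whose buffer size can hold its control interval, and that when any I-subpool exists in the pool, index components use I-subpools only.

```python
def pick_subpool(subpools, ci_size, component):
    """Pick a buffer size for a VSAM component (illustrative sketch).
       subpools: list of (bufsize, count, type) with type 'I' or 'D'.
       component: 'index' or 'data'."""
    has_index_only = any(t == "I" for _, _, t in subpools)
    candidates = []
    for bufsize, count, ctype in subpools:
        if bufsize < ci_size:
            continue                      # buffer too small for the CI
        if ctype == "I" and component != "index":
            continue                      # I-subpools are index-only
        if has_index_only and ctype == "D" and component == "index":
            continue                      # index components go to I-subpools
        candidates.append(bufsize)
    return min(candidates) if candidates else None

# The second sample pool specification above
pool = [(4096, 3000, "D"), (12288, 2000, "I"), (32768, 12, "D")]
print(pick_subpool(pool, 4096, "data"))    # 4096
print(pick_subpool(pool, 8192, "index"))   # 12288
```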

    InSync

    The InSync utility aids in the manipulation of IMS databases. It is a menu-driven interactive system designed to run under ISPF/PDF, using panels and function keys similar to those used in ISPF.

    InSync Parameters screen
    	IMS System ==>  Enter valid subsystem
    	Checkpoint frequency  ===> 20
    
    	Use dynamic PSBs	===> YES
    			DBRC	===> N
    			IRLM	===> N
    			IRLMNAME ===>
    			AGN	===>
    			NBA	===> 20
    			OBA	===> 10
    
    	Refresh variable values from Install	===> N
    
    	IMS RESLIB
    	   Dsname 1  ===>
    	   Dsname 2  ===>
    	   Dsname 3  ===>
    	IMS DALIB
    	   Dsname    ===>
    	IMS DFSVSAMP
    	   Dsname    ===>
    	IMS Dynamic ACB Library
    	   Dsname    ===>
    	IMS RECON DATASETS
    	   Dsname 1  ===>
    	   Dsname 2  ===>
    	   Dsname 3  ===>
    	IMS Override datasets
    	   DBDLIB    ===>
    	   PSBLIB    ===>




    IDMS (INTEGRATED DATABASE MANAGEMENT SYSTEM)

    IDMS is a network model (CODASYL) database management system designed and developed for mainframes in the 1960s.

    The network model

    The main structuring concepts in this model are records and sets. Records essentially follow the COBOL pattern, consisting of fields of different types allowing complex internal structure such as repeating items and groups.

    A set represents a one-to-many relationship between records - one owner, many members. The fact that a record can be a member in many different sets is the key factor that distinguishes the network model from the earlier hierarchical model. As with records, each set belongs to a named set type (different set types model different logical relationships). Sets are in fact ordered, and the sequence of records in a set can be used to convey information. A record can participate as an owner and member of any number of sets.

    Records have identity, represented by a value known as a database key. The database key is directly related to the physical address of the record on disk, and database keys are also used as pointers to implement sets in the form of linked lists and trees. Records can be accessed directly by database key, by following set relationships, or by direct access using key values. Initially the only direct access was through hashing, a mechanism known as CALC access. In IDMS, CALC access is implemented through an internal set, linking all records that share the same hash value to an owner record that occupies the first few bytes of every disk page.
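
    The CALC placement idea can be sketched as below: a randomizing routine maps the CALC key to a target page within the area. The CRC-and-modulus scheme here is an assumption for illustration only, not IDMS's actual randomizing routine:

```python
import zlib

def calc_target_page(calc_key: str, first_page: int, page_count: int) -> int:
    """Hash a CALC key to a page in [first_page, first_page + page_count)."""
    h = zlib.crc32(calc_key.encode())     # stand-in for the randomizing routine
    return first_page + (h % page_count)

# Hypothetical area of 500 pages starting at page 1001
page = calc_target_page("EMP00123", first_page=1001, page_count=500)
assert 1001 <= page < 1501
```

    The essential property is determinism: the same CALC key always randomizes to the same target page, so a record can be retrieved later without any index.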

    Data storage

    A database comprises areas that are mapped to disk files. Areas are broken up into pages that contain the database records. A record is uniquely identified by the number of the page it resides on and a sequence number called the line number, which together make up the database key.
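
    The page/line composition of a database key can be sketched as a simple bit split. The 8-bit line field used here is an assumption for illustration; the actual split between page number and line number depends on the IDMS configuration:

```python
LINE_BITS = 8  # assumed split; the real page/line radix is configurable

def make_dbkey(page: int, line: int) -> int:
    """Pack page and line numbers into a single database key."""
    return (page << LINE_BITS) | line

def split_dbkey(dbkey: int) -> tuple:
    """Recover the (page, line) pair from a database key."""
    return dbkey >> LINE_BITS, dbkey & ((1 << LINE_BITS) - 1)

dbkey = make_dbkey(70211, 3)
print(split_dbkey(dbkey))  # (70211, 3)
```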

    Data definition

    A schema contains the record, set and area definitions for an IDMS database. A subschema defines the records, sets and areas that can be referenced by an application, and specifies whether each can be updated or only retrieved.

    IDMS provides a Device Media Control Language (DMCL) for describing the relationship between logical database structures and the physical files. DMCL maps the database areas to file blocks and describes the buffer storage required.

    Record attributes

    Record ID - unique numeric value assigned to the record within the schema
    Storage mode - fixed or variable size. Fixed is more desirable as variable records can get fragmented and take more I/O to retrieve
    Record length - total length of all data elements plus 4 bytes for each pointer (database key) associated with the record
    Location mode - the manner in which a record occurrence is physically located in an area of the database. The three modes are
  • CALC - the target page for storage is calculated by means of a randomizing routine executed against the value of the CALC key in the record
  • VIA - clusters member records in the same physical location as their owner for efficient access
  • DIRECT - populates an area in the order the records are loaded; best used for data which is static and will be retrieved in the order it physically resides in the database
    Area name - name of the database area the record is stored in

    Set-based relationships

    In an IDMS database, physical relationships between record types are achieved with pointers that correspond to sets. A set implements a one-to-many relationship between record types. In a set, one record type acts as the owner (the one side of the relationship) and one or more record types act as the members (the many side of the relationship). A single record type can participate in several set relationships as either the owner or the member.

    The representation of record types and set relationships within a database is called a Bachman diagram, also known as a data structure diagram. In a Bachman diagram, a record type is depicted as a box and a set as a line with an arrow. Set names appear as labels beside the arrows. The box that the arrow points to is the member record type. Triangles indicate indexes.

    Sets relate records to each other using a number of parameters
  • pointers - Next, Prior, Owner, Index, Index Owner
  • membership - Mandatory Automatic, Mandatory Manual, Optional Automatic, Optional Manual
  • order - First, Last, Next, Prior (unsorted sets) and Ascending or Descending by key (sorted sets)
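
    The pointer chains above can be illustrated with a minimal sketch of one set occurrence: the owner and its members form a circular chain via NEXT pointers, with PRIOR and OWNER pointers filled in as well. Record names and the LAST insertion order are invented for the example:

```python
class Record:
    """A record occurrence carrying the set's NEXT/PRIOR/OWNER pointers."""
    def __init__(self, name):
        self.name = name
        self.next = self.prior = self.owner = None

def connect_last(owner, member):
    """CONNECT-style insert at the end of the set occurrence (order LAST)."""
    member.owner = owner
    last = owner.prior           # last member, or the owner itself if empty
    last.next = member
    member.prior = last
    member.next = owner          # chain circles back to the owner
    owner.prior = member

dept = Record("DEPT-D01")
dept.next = dept.prior = dept    # empty set: owner points to itself
for emp in ("EMP-1", "EMP-2", "EMP-3"):
    connect_last(dept, Record(emp))

# Walk the set occurrence: follow NEXT from the owner until back at the owner
members, rec = [], dept.next
while rec is not dept:
    members.append(rec.name)
    rec = rec.next
print(members)  # ['EMP-1', 'EMP-2', 'EMP-3']
```

    Walking NEXT pointers corresponds to OBTAIN NEXT WITHIN set-name; the PRIOR and OWNER pointers make OBTAIN PRIOR and OBTAIN OWNER possible without rescanning the chain.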

    Currency

    IDMS keeps track of record occurrences being processed by area, set, record type and run-unit (program). The current record is usually the last record retrieved or updated. Currency is important for maintaining data integrity when updating a database.

    COBOL commands

    ACCEPT - retrieves database status information
    	ACCEPT [TASKCODE/TASKID/LTERMID/PTERMID/SYSVERSION/USERID/SCREENSIZE] INTO return_location.
    BIND - initiates a run-unit and establishes addressability in variable storage to the IDMS communication block, record types and optionally to procedure control information.
    	BIND [RUN-UNIT/record_name].
    COMMIT - makes updates permanent
    	COMMIT [ALL].
    CONNECT - establishes a record occurrence as a member of a set occurrence. The set must not be defined as Mandatory Automatic
    	CONNECT record-name TO set-name.
    DISCONNECT - removes a member record occurrence from a set but does not delete it from the database. The command is only valid for records which are optional members of a set
    	DISCONNECT record-name FROM set-name.
    ERASE - deletes a record occurrence from the database and optionally deletes records subordinate to it
    	ERASE record-name [ALL MEMBERS].
    FIND/OBTAIN - FIND locates a record occurrence in the database and OBTAIN locates the record and moves the associated data to the record buffers
    	FIND/OBTAIN CALC record-name.
    	FIND/OBTAIN CURRENT record-name [WITHIN set-name/area-name].
    	FIND/OBTAIN DB-KEY IS db-key.
    	FIND/OBTAIN [NEXT/PRIOR/FIRST/LAST] record-name WITHIN set-name/area-name.
    FINISH - causes database sessions to terminate
    	FINISH.
    IF - allows the program to test for the presence of member record occurrences in a set and to determine the membership status and perform further action
    	IF set-name [NOT] EMPTY statement.
    	IF [NOT] set-name MEMBER statement.
    MODIFY - replaces the contents of a record occurrence with the values in its corresponding variable storage. The record being modified must always be current of run-unit
    	MODIFY record-name.
    READY - prepares a database area for access by DML functions and specifies the usage mode
    	READY area_name USAGE-MODE [UPDATE/RETRIEVAL].
    ROLLBACK - rolls back uncommitted changes made through a run-unit. The CONTINUE option allows the run-unit to remain active after the changes have been backed out.
    	ROLLBACK [CONTINUE].

    Operating modes

    IDMS can be run in local or central mode. In local mode, the database is dedicated to a single application program. In central mode, IDMS provides a central monitor that queues requests from multiple applications, enabling them to access the database concurrently. All applications running within TP monitors, including DC/UCF, use central mode. Batch applications can access data in either mode.



    Maintained by: VINCENT KANDASAMY, Database Architect/Administrator (kandasf@hotmail.com)
    Last Update: Dec 19, 13:58