This is a three-part DataStage tutorial on the new version 8 Parameter Set functionality that shows how it works and offers some practical advice on using it.
DataStage 8.0.1 came with a great new feature called Parameter Sets that lets you group your DataStage and QualityStage job parameters and store default values in files.
There are some very good things about Parameter Sets:
I like the second benefit. An average project could have 10-20 job parameters per job once you have file, database, processing, date and source system job parameters. When you add a job onto a Sequence job canvas you've got to pass through every bloody parameter value via manual clicks - it can take 20 clicks to add and configure a job. Time consuming when you just want to throw together a Sequence job for some testing. With the arrival of Parameter Sets you set one value per Parameter Set rather than one value per parameter - far fewer clicks.
There are a couple of drawbacks to Parameter Sets that we will expand on later:
A Parameter Set is a way to group a set of parameters into a single re-usable definition that you can add to your jobs. Parameter sets let you save the values for a set in a file or in an object in the DataStage repository.
To create a Parameter Set use the "New" menu or toolbar option and find Parameter Sets under the "Other" or "Recently Used" folders. Choose a short Parameter Set name because you will need to use it throughout your job as a parameter prefix: #parameter_set.parameter_name#. Your parameter set name should be short and sweet and your parameter names can be longer and descriptive.
In the Add Parameter Set form, type the parameters that belong in a single set into the Add Parameter Set screen just like normal job parameters. Note that you have two fields for operators and/or developers: one for the Prompt you see when you run the job and one for the Help Text shown when you click "Property Help" from the Job Run screen:
I suggest these Prompt and Help Text fields be used as technical instructions on how to use the parameter.
On the "Values" tab you can specify one or more files to save values to, and by default it copies across the values from the previous tab. This tab is optional: you can put values into a file or keep them in the Parameter Set repository object:
You can add more than one file. The columns you see are the parameter values copied across from the previous tab. When you specify multiple files you are creating multiple scenarios to be selected at run time. A job can be run with a different set of file values depending on the parameter file name passed into the job or selected from a drop down list by the operator.
You could use this feature in a dev or test environment where you have multiple source databases to choose from - say, a small database for a quick test or a full database for a performance test. One file could be called DB_SMALL_DEV and the other DB_LARGE_DEV, with the DB connection values in each.
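For example, the two value files might look like this (the server and database names here are invented for illustration; a real values file contains only NAME=VALUE lines, and the separator headings below are not part of the file format):

```
--- DB_SMALL_DEV ---
MY_DB_SERVER=DEVBOX01
MY_DB_NAME=SALES_SAMPLE

--- DB_LARGE_DEV ---
MY_DB_SERVER=DEVBOX02
MY_DB_NAME=SALES_FULL
```

At run time the operator picks DB_SMALL_DEV for a quick test or DB_LARGE_DEV for a performance test, and every connection parameter switches with one selection.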
I'm not sure what use it could have in production where you want to be quite certain about what parameters a job should use and don't want an operator trying to choose the right file.
When you save your Parameter Set you choose a location in the DataStage repository for it. So I don't lose them, I create a "ParameterSets" folder under the Job folder to keep them close to the DataStage jobs. In DataStage 8.0.1 you can save Parameter Sets anywhere - in Jobs, Table Definitions or even Stage Types. There are lots of stupid places to save them, so create a folder for them that makes sense and put them all there.
Under the covers DataStage saves the optional Parameter Set value files in the location $PROJECTHOME/ParameterSets/ParameterSetName/ValueFileName. The values file is a plain text file where encrypted values are converted to printable mashed text:
MY_DB_SERVER=OVERTHERE
MY_DB_NAME=MYDATA
MY_DB_LOGIN=MURRAY
MY_DB_PASSWORD=L59@A;V1=9JM06E
You can see that this file would be easy to modify directly or manage from a simple GUI application - except for the encrypted password value.
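As a sketch of how simple that management could be, here is a minimal Python script that reads and updates the plain (non-encrypted) values in such a file. The file format is the NAME=VALUE layout shown above; the function names are my own, and a real script should leave the encrypted password line untouched:

```python
def read_value_file(path):
    """Parse a Parameter Set values file into a dict (one NAME=VALUE per line)."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and "=" in line:
                # Split on the first '=' only, since values may contain '='
                name, _, value = line.partition("=")
                values[name] = value
    return values


def write_value_file(path, values):
    """Write the dict back out in the same NAME=VALUE format."""
    with open(path, "w") as f:
        for name, value in values.items():
            f.write(f"{name}={value}\n")
```

Because the format is one NAME=VALUE pair per line, round-tripping the file like this is trivial - the encrypted password is the only value you cannot safely generate outside of DataStage.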
When you run a job that uses a Parameter Set you get three choices for setting job parameter values:
1. File Values - you run the job with parameter values from the Parameter Set file by choosing the Parameter Set file name from the "Value" field next to the Parameter Set name. At this point, if you have multiple files to choose from (e.g. multiple source databases or source instances), you can choose the right file.
2. Parameter Set object Values - you can go with the values stored on the Parameter Set object in the repository by choosing "pre-defined" from the Value field next to the Parameter Set name.
3. User override - the person running the job can override the value of any parameter by typing in a value next to the individual parameter names. This works with either file or pre-defined usage shown above.
One of the biggest improvements brought by Parameter Sets is the simplification of Sequence Jobs. In DataStage 6 you had to add a Server or Parallel job stage and then set every single parameter value, which could be quite time consuming if you had 20 or more parameters per job. You could then copy and paste the stage, switch job names and keep those parameter values. In DataStage 7.5.x they took away this copy and paste feature, and each time you changed the job name on a job stage the parameter values got wiped out. So for every job you added to a Sequence job you had to painstakingly set all the job parameter values - there were no shortcuts, no auto mapping or auto setting.
Parameter Sets make Sequence Jobs easy again by only requiring you to set a default behaviour for the Parameter Set to either "User-Defined" (take it from the Parameter Set object) or File (take it from the Parameter Set file):
If you have parameters that you want to generate dynamically - such as processing dates, last key used or process id - you would set these up as normal job parameters in the parallel job so they can be retrieved and set in the Sequence job and passed in as parameter overrides.
When you use a Parameter Set parameter in a job you refer to it using the Parameter Set name as a prefix: #parameter_set.parameter_name#. When a stage property window has an "Insert Parameter" button with a popup list of parameters you will see the list with both the parameter set name and parameter name in alphabetic order so it's easy to scroll to the right parameter set.
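For example, a file path property in a stage might combine parameters from a hypothetical PS_FILES parameter set (the set and parameter names here are invented for illustration):

```
#PS_FILES.SourceDirectory#/#PS_FILES.SourceFileName#
```

The short set name keeps the reference readable while the parameter names stay descriptive.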
When you call a job from the command line you can specify whether to take Parameter Set values from the object or from the file. To override individual values you need to refer to the full parameter name, with no spaces around the equals sign: dsjob -run -param ParameterSetName=ParameterFileName -param ParameterSetName.ParameterName=OverrideValue
If you try to override an encrypted parameter value from outside of a DataStage product you will lose the encryption - the override will work but the value will show up in the DataStage log in plain text.
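As a sketch, a wrapper script could build that dsjob command line as follows. The project, job, set and parameter names are placeholders, and the command is only printed here rather than executed - running it for real requires a DataStage server environment:

```python
def build_dsjob_run(project, job, value_file=None, overrides=None):
    """Build a dsjob -run argument list for a job that uses a Parameter Set."""
    cmd = ["dsjob", "-run"]
    if value_file:
        # Select a Parameter Set value file, e.g. PS_DB=DB_SMALL_DEV
        set_name, file_name = value_file
        cmd += ["-param", f"{set_name}={file_name}"]
    for name, value in (overrides or {}).items():
        # Override an individual parameter using its full
        # ParameterSetName.ParameterName form, no spaces around '='
        cmd += ["-param", f"{name}={value}"]
    cmd += [project, job]  # project and job name come last
    return cmd


print(" ".join(build_dsjob_run(
    "dev_project", "load_sales",
    value_file=("PS_DB", "DB_SMALL_DEV"),
    overrides={"PS_DB.MY_DB_NAME": "MYDATA"},
)))
# → dsjob -run -param PS_DB=DB_SMALL_DEV -param PS_DB.MY_DB_NAME=MYDATA dev_project load_sales
```

Keeping the overrides in a dict makes it easy to see, in one place, which individual parameters a given run is deviating from the Parameter Set defaults.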
The major drawback of Parameter Sets is that a DataStage support person who might have only a passing knowledge of the tools has no easy way to change encrypted values in the Parameter Set file. The primary tool for a DataStage Support Dude is the Director - and unfortunately IBM have forgotten to put any Parameter Set maintenance tools into the Director. You can set job parameter defaults in Director, but not if those parameters are in a Parameter Set. Parameter Sets are effectively invisible to the Director tool.
This leaves just two ways to change a Parameter Set default - modify the Parameter Set object or modify the underlying file.
If you modify the Parameter Set file directly you open it up to manual mistakes, and you cannot change encrypted values. Putting passwords into a file without any encryption is a big security no-no. Not only will they be exposed in this file, they may also turn up in DataStage Director log messages. So even if you did keep this file secure - which is difficult to do since DataStage needs read access to it - you can still expose it in log messages that almost anyone can get access to.
If you modify the Parameter Set values from the DataStage Designer you can use DataStage encryption to protect passwords and use the grid for data entry. It's a technically safe way to do it, but it means giving the Designer tool to your production support team, which is overkill. They need an operations tool, not a complex design tool.
What we need in DataStage 8.1 is a way to change parameter default values and encrypted values via the Director tool and/or the Information Server console, so that a member of a support team who doesn't really know much about DataStage can follow a set of instructions to log into a DataStage support tool and change a database password at regular intervals.
I covered Job Parameter Ideas in a previous post about some of the uses of normal job parameters. Now it's time to beef this up and organise parameters into parameter sets.
Here are some job parameters that should not be put in parameter sets, because you want to set them from a Sequence job or hard-code them to a default value in a parallel job rather than pull them from a central parameter set file:
These are the Parameter Set groupings. Every parameter set has a prefix of PS_ to make it easier to find in repository searches. One thing you can do with Parameter Sets is overload a job with parameters - adding parameters the job doesn't strictly need in order to cut down the number of Parameter Sets that have to be found. The overloading doesn't hurt, because you no longer need to set every single parameter value in the Sequence job:
In my next Parameter Set Series post I'll look at Environment Variables in Parameter Sets with a lot of new ideas for grouping parameters.