On the other hand, CL had been lying fallow for many years, stuck in a functional rut reminiscent of RPG-II. Over a period of several releases it has gotten some improvements that now make it more or less reminiscent of RPG-III.
| Feature             | Classic CL | Modern CL | RPG-II | RPG-III | RPG-IV |
|---------------------|------------|-----------|--------|---------|--------|
| Advanced data types |            |           |        |         | ✔      |
| Null field mapping  |            |           |        |         | ✔      |
2. Calling procedures is only supported in ILE CL.
3. Expressions are only supported in a few places, not everywhere. For instance, you can't have an expression as one of the parameters in a built-in function like %SST().
I'm heartened by the fact that today CL also has support for data pointers and ILE function interfaces, which RPG-III never had. It bodes well that maybe, just maybe, CL might, over several releases, get some more of the same basic functionality that would make it reminiscent of RPG-IV.
Towards this goal, and following in Brother Bob Cozzi's footsteps, I'm laying out my top 10 items I'd like to see in ILE CL:
- Procedure pointers: This is by far my number one annoyance. What were the minions of our lord Ib'm thinking when they gave us pointers but neglected procedure pointers? I used to envelop RPG programs with CL for file overrides. Now I find myself doing the opposite, enveloping CL programs with RPG, so that I can register abnormal-termination procedures. (The latter can't be done in CL because it doesn't have a *PPTR type or a %PADDR() built-in function.)
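If the minions ever do grant this wish, I imagine it looking something like the sketch below. The *PPTR type and %PADDR() are pure invention on my part; CEERTX (Register Call Stack Entry Termination User Exit) is a real ILE API that wants a procedure pointer, which is exactly what CL can't give it:

```
/* Hypothetical syntax -- none of this compiles in CL today       */
DCL        VAR(&CLEANUP) TYPE(*PPTR)                /* invented type     */
CHGVAR     VAR(&CLEANUP) VALUE(%PADDR('MYCLEANUP')) /* invented built-in */
CALLPRC    PRC(CEERTX) PARM(&CLEANUP *OMIT *OMIT)   /* real API          */
```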
- Expressions everywhere: This is my number two annoyance. Currently, mathematical and string expressions are not allowed in the parameters of built-in functions and many commands. Why is that? It would be wonderful if I could code something like this: CHGVAR VAR(&MYSTR) VALUE(%SST(&THATFIELD (&POS + &OFFSET) 10)).
- %PARMS() built-in function: I've been using the CEETSTA (Test for Omitted Argument) API in CL modules to test whether a parameter has been omitted. If the parameter wasn't passed, that function gets an error, which is a rather ugly way of doing things. It would be better if our most gracious lord Ib'm provided a %PARMS() built-in function.
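For anyone who hasn't tried it, the CEETSTA approach looks roughly like this sketch. The variable names and the SKIP2 label are mine, and you should check the API's parameter list before trusting it:

```
DCL        VAR(&PRESENT) TYPE(*INT) LEN(4)          /* 1 = passed, 0 = omitted */
DCL        VAR(&ARGNUM)  TYPE(*INT) LEN(4) VALUE(2) /* which parameter to test */
CALLPRC    PRC(CEETSTA) PARM(&PRESENT &ARGNUM *OMIT)
IF         COND(&PRESENT = 0) THEN(GOTO CMDLBL(SKIP2)) /* don't touch &PARM2   */
```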
- Arrays: I'm still trying to fathom how such a sophisticated scripting language has gone this long without any kind of array support. Before CL got pointer support, I had to declare large fields, do a lot of calculating, and use %SST() functions. Afterwards, I did the same calculations on large fields but replaced the %SST() functions with pointers. Either way, such machinations in CL can get really ugly. It'd be wonderful if a DIM parameter could be added to the DCL command. I also envision referencing array elements with parentheses, as is done in RPG: &MYFIELD(1), &THATFIELD(&I), etc.
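In my imagination the support would look something like this. The DIM() parameter and the parenthesized subscripts are hypothetical; DOWHILE is real CL:

```
DCL        VAR(&NAMES) TYPE(*CHAR) LEN(10) DIM(20)  /* hypothetical DIM()     */
DCL        VAR(&I)     TYPE(*INT)  LEN(4)  VALUE(1)
DOWHILE    COND(&I *LE 20)
   CHGVAR     VAR(&NAMES(&I)) VALUE('OPEN')         /* hypothetical subscript */
   CHGVAR     VAR(&I) VALUE(&I + 1)
ENDDO
```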
- Advanced data types: When Cardinal Klement wrote an encyclical on how to use the CPYNV MI instruction to convert floating-point data to decimal in CL, the minions of our lord Ib'm should have heard our prayers and provided support for advanced data types. That was back in '05; now it's '10 and our prayers have still gone unanswered. Basically, anything that is supported by DB2 and the ILE CEE functions should be supported in CL. Not just floating-point fields, but dates, times, timestamps, zoned decimal (especially for data structures), the larger decimals, and varying-length fields too.
- DB2 NULL field support: I've been using null-capable fields now for quite a while and it's supremely annoying that I can't find out in CL whether fields just read in are null or not. In CL, is the 00.00.00 in a time field a null value or is it midnight? We can't know unless we have that support. I envision something similar to the %NULLIND() built-in function in RPG.
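Something along these lines, where %NULLIND() and the file and field names are all products of my imagination:

```
DCLF       FILE(MYLIB/TIMELOG)                      /* made-up file           */
RCVF
IF         COND(%NULLIND(&ENDTIME) = '1') +         /* hypothetical built-in  */
             THEN(SNDPGMMSG MSG('End time is null, not midnight'))
```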
- %CHAR() built-in function: Have you ever run an English statement through an English to French translator, then run the resulting French sentence through a French to English translator? Has the final result ever looked anything like what you originally put in? This problem also crops up with CL and the CHGVAR command. If you take a number stored in an alphanumeric variable, say "65", and assign it to a 5,0 decimal variable, it'll work fine. If you then take that decimal variable and move it back to the alphanumeric field, it'll come out as "00065". This has always annoyed me. Over the years I've worked out various strategies to overcome this anomaly, but I was never completely satisfied with any of them. Recently I found the CEE4JNTS API, and it works great, but it only works with an 8-byte integer, so numbers with decimal digits can't be converted. It would be perfect if the archangels of our lord Ib'm could perform a minor miracle and give us beleaguered IBM i faithful a %CHAR() built-in function that worked not only with decimal numbers but with all of the advanced data types.
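With that minor miracle granted, the round trip would finally behave itself. A hypothetical sketch, since %CHAR() doesn't exist in CL:

```
DCL        VAR(&COUNT) TYPE(*DEC)  LEN(5 0) VALUE(65)
DCL        VAR(&MSG)   TYPE(*CHAR) LEN(50)
CHGVAR     VAR(&MSG) VALUE('Processed' *BCAT %CHAR(&COUNT) *BCAT 'records')
/* &MSG would hold 'Processed 65 records', not 'Processed 00065 records'     */
```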
- Larger field names: When RPG had 6-character field names, I used to think that the 4 extra characters allowed in CL were a bonus. Now that RPG has joined the rest of the world's computer languages in allowing huge field names, the 10-character limitation in CL looks really outdated. I've been using the larger names in RPG to write code that is more self-documenting, but it's pretty hard to do that in CL with only 10 characters at your disposal. Come on, you minions of Ib'm, there's no reason I can think of why CL can't have this support too.
- Named constants: These would be a very nice thing to have, especially in a copied-in include file. A named constant could be coded as: DCL VAR(&MYCONST) TYPE(*CONST) VALUE('THIS VALUE')
- Qualified fields: CL already allows qualification of fields for files declared on DCLF commands, so why not allow qualified fields on data structures too? It'd be relatively easy to add a new value to the STG parameter. One would code it thus: DCL VAR(&MYSUBFIELD) TYPE(*CHAR) STG(*QUALDEF) LEN(10) DEFVAR(&MYFIELD 15) and the field would be referenced as &MYFIELD_MYSUBFIELD, just as record format fields are referenced now.
- Type definition (typedef, TEMPLATE, LIKE(), or LIKEDS()) support: Type definition support built into CL would make include files that much more useful. One could create a template definition in the include file and reference it to declare actual variables. I haven't worked out all the kinks in my hypothesis yet, because data structures are defined in fundamentally different ways in CL and in other languages.
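One way it might look, with both the *TEMPLATE storage value and the LIKE() parameter being inventions of mine:

```
/* In the copied-in include member:                                          */
DCL        VAR(&CUSTADR_T) TYPE(*CHAR) LEN(64) STG(*TEMPLATE) /* hypothetical */
/* In each program that copies it in:                                        */
DCL        VAR(&SHIPADR) LIKE(&CUSTADR_T)                     /* hypothetical */
DCL        VAR(&BILLADR) LIKE(&CUSTADR_T)
```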
- Commands for procedures: Currently, if a CL program calls another program with the CALL command, you can instead write a command to call that program. The command documents the program's parameters and, through its help text, documents itself. Why can't we do the same with procedures called with CALLPRC? It would provide the same documentation benefits, and it could conceivably be relatively easy to implement. Commands that call procedures would only be allowed in CLLE programs. Whereas commands that call programs are needed at run time to resolve the parameters and call the program, commands that call procedures would only be needed at compile and program-creation time to resolve the linkages and parameters. The CRTCMD command would have to be changed to allow a service program, and the procedure within it, to be named as the command processing program (CPP). The only issues to work out are how the parameter passing and the optional return value would be defined. The PARM definition statement could be enhanced to specify pass-by-reference, pass-by-value, or pass-by-reference-const. As for the return value, I envision adding a parameter to the CMD definition statement to define it. Having a return value would allow these commands to appear in expressions, albeit inside a set of parentheses. If these commands appeared outside of expressions, like any other normal command, the CL program would ignore the return value.
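A command source sketch, in which the RTNVAL() parameter on CMD, the PASSBY() parameter on PARM, and naming a service-program procedure on CRTCMD's PGM parameter are all hypothetical:

```
CMD        PROMPT('Get customer name') RTNVAL(*CHAR 30)          /* RTNVAL invented */
PARM       KWD(CUSTNO) TYPE(*DEC) LEN(7 0) PASSBY(*BYVAL) MIN(1) /* PASSBY invented */
/* Hypothetical creation, naming the procedure inside a service program:     */
/* CRTCMD CMD(MYLIB/GETCUSTNAM) PGM(MYLIB/CUSTSRV(GETCUSTNAM))               */
```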
- Procedure prototyping: If we can't get commands for procedures then I'd settle for procedure prototyping similar to what is done to prototype programs and procedures in ILE RPG. It would save a lot of headaches.
- Internal procedures: Just as in any other procedure-based computer language, being able to define procedures in the same module would be an awesome addition to the CL pantheon. The PGM and ENDPGM statements would still be used to define the entry point of the program, and the procedures would appear following the ENDPGM statement. I envision the procedure statement looking something like this: PRC NAME(MYPROC) PARM((&ARG1 *BYREF) (&ARG2 *BYVALUE)) RTNVAL(&RTNCODE) EXPORT(*NO) and the closing statement would be ENDPRC. If there were no PGM/ENDPGM at the beginning, the module would have no main entry point and could only be used in a service program or as a module hard-linked into another program.