Under what circumstances would you code a SELECT construct instead of IF statements?
A) A SELECT construct is useful when one value or condition must be tested against several alternatives; it is cleaner than a long chain of IF/ELSE IF statements. For example:
data exam;
   set exam;
   select;
      when (physics > 60)  result = 'Pass';
      when (math > 100)    result = 'Pass';
      when (english = 50)  result = 'Pass';
      otherwise            result = 'Fail';
   end;
run;
What is the one statement to set the criteria of data that can be coded in any step?
A) Options statement.
What is the effect of the OPTIONS statement ERRORS=1?
A) The ERRORS= option sets the maximum number of observations for which complete data-error messages are printed to the log. With ERRORS=1, full error messages appear for only the first observation that contains a data error.
What's the difference between VAR A1 - A4 and VAR A1 -- A4?
A) There is a difference. VAR A1-A4 is a numbered range list: it refers to the variables A1, A2, A3, and A4 regardless of their position in the data set. VAR A1--A4 is a name range list: it refers to A1, A4, and every variable stored between them in the program data vector. If we submit VAR A1---A4, we will see an error message in the log, since three dashes is not a valid list form.
What do the SAS log messages "numeric values have been converted to character" mean? What are the implications?
A) It means a numeric value was used in a context that required a character value, so SAS converted it automatically (using the BEST12. format). The implications are a note in the log, a possible performance cost, and unexpected results such as leading blanks; explicit conversion with the PUT function avoids this.
Why is a STOP statement needed for the POINT= option on a SET statement?
A) Because POINT= reads only the specified observations, SAS cannot detect an end-of-file condition as it would if the file were being read sequentially.
How do you control the number of observations and/or variables read or written?
A) The FIRSTOBS= and OBS= options.
Approximately what date is represented by the SAS date value of 730?
A) 31st December 1961. SAS date values count days from 1 January 1960 (day 0); 1960 has 366 days, so day 730 is the last day of 1961.
Identify statements whose placement in the DATA step is critical.
A) INPUT, DATA and RUN…
Does SAS 'Translate' (compile) or does it 'Interpret'? Explain.
A) SAS compiles: each DATA or PROC step is compiled as a whole and then executed, rather than being interpreted line by line.
What does the RUN statement do?
A) The RUN statement marks a step boundary: when SAS sees it, it compiles and executes the preceding DATA or PROC step. If another DATA or PROC step follows, that new step also acts as a step boundary, so the RUN statement for the preceding step can be omitted.
Why is SAS considered self-documenting?
A) SAS is considered self-documenting because at compilation time it creates and stores descriptor information about the data set, such as the date and time of creation, the number of variables and observations, and variable attributes such as labels, inside the data set itself. You can view this information with PROC CONTENTS.
What are some good SAS programming practices for processing very large data sets?
A) Sort them only once, and use the FIRSTOBS= and OBS= options to limit the observations that are read.
What is the difference between functions and PROCs that calculate the same simple descriptive statistics?
A) Functions are used inside the DATA step and operate across variables on the same observation, whereas PROCs operate across observations and can output the results to a new data set.
If you were told to create many records from one record, show how you would do this using arrays and with PROC TRANSPOSE?
A) I would use PROC TRANSPOSE when there are only a few variables and arrays when there are many; the choice depends on the situation.
What is a method for assigning first.VAR and last.VAR to the BY group variable on unsorted data?
A) Use the NOTSORTED option on the BY statement; SAS then assigns FIRST.VAR and LAST.VAR for each run of consecutive equal BY values, without requiring the data to be sorted.
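A minimal sketch (the SALES data set and REGION variable are hypothetical): with NOTSORTED, the FIRST./LAST. flags mark each run of consecutive equal BY values.
data flagged;
   set sales;
   by region notsorted;  * groups are runs of consecutive equal values;
   if first.region then group_start = 1;
   if last.region  then group_end   = 1;
run;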
How do you debug and test your SAS programs?
A) First, look in the log for ERROR and WARNING messages, and in some cases NOTEs; you can also use the DATA step debugger.
What other SAS features do you use for error trapping and data validation?
A) Check the log; for data validation, use procedures such as PROC FREQ, PROC MEANS, or sometimes PROC PRINT to see how the data look.
How would you combine 3 or more tables with different structures?
A) Sort them by their common variables and combine them with a MERGE statement; if the structures differ, align the variables first with KEEP=, DROP=, and RENAME= data set options.
Other questions:
What areas of SAS are you most interested in?
A) BASE, STAT, GRAPH, ETS.
Briefly describe 5 ways to do a "table lookup" in SAS.
A) Match Merging, Direct Access, Format Tables, Arrays, PROC SQL
What versions of SAS have you used (on which platforms)?
A) SAS 9.1.3,9.0, 8.2 in Windows and UNIX, SAS 7 and 6.12
What are some good SAS programming practices for processing very large data sets?
A) Sample with the OBS= option or by subsetting, comment the code, and use DATA _NULL_ where no output data set is needed.
What are some problems you might encounter in processing missing values? In Data steps? Arithmetic? Comparisons? Functions? Classifying data?
A) The result of any arithmetic operation involving a missing value is a missing value. Most SAS statistical procedures exclude observations with any missing variable values from an analysis.
How would you create a data set with 1 observation and 30 variables from a data set with 30 observations and 1 variable?
A) Using PROC TRANSPOSE
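A minimal sketch, assuming an input data set NARROW with 30 observations of a single variable X:
proc transpose data=narrow out=wide(drop=_name_) prefix=x;
   var x;  * the 30 values of X become variables X1-X30 on one observation;
run;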
What is the difference between functions and PROCs that calculate the same simple descriptive statistics?
A) A PROC has a wider scope and can send its results to a separate output data set; a function operates within the DATA step, on the current observation of the existing data set.
If you were told to create many records from one record, show how you would do this using array and with PROC TRANSPOSE?
A) Declare an array for the variables in the record and use a DO loop with an OUTPUT statement; or use PROC TRANSPOSE with a VAR statement.
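A minimal sketch of the array version, assuming one input record in data set WIDE holds twelve monthly scores S1-S12:
data long(keep=id month score);
   set wide;
   array s{12} s1-s12;
   do month = 1 to dim(s);
      score = s{month};
      output;              * one observation written per month;
   end;
run;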
What are _numeric_ and _character_ and what do they do?
A) They are variable list shortcuts: _NUMERIC_ refers to all the numeric variables and _CHARACTER_ to all the character variables in the data set, and either can be used wherever a variable list is accepted for reading or writing those variables.
How would you create multiple observations from a single observation?
A) Using the double trailing @@ in the INPUT statement, which reads multiple observations from a single input line.
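A minimal sketch: the double trailing @@ holds the input line across iterations, so several observations are built from one line of raw data.
data scores;
   input id score @@;
   datalines;
1 85 2 90 3 78
;
run;
This step creates three observations from the single data line.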
For what purpose would you use the RETAIN statement?
A) The RETAIN statement is used to hold the values of variables across iterations of the DATA step. Normally, all variables in the DATA step are set to missing at the start of each iteration.
What is the order of evaluation of the comparison operators: + - * / ** ()?
A) (), **, *, /, +, -
How could you generate test data with no input data?
A) Using a DATA step with DO loops and OUTPUT statements (no INPUT needed), or DATA _NULL_ with PUT statements to write test lines.
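A minimal sketch (variable names are illustrative): DO loops with OUTPUT generate observations without reading any input.
data test;
   do id = 1 to 100;
      if ranuni(123) < .5 then group = 'A';
      else group = 'B';
      value = 50 + 10*rannor(123);   * random values around 50;
      output;
   end;
run;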
How do you debug and test your SAS programs?
A) Using OBS=0 and system options to trace program execution in the log.
What can you learn from the SAS log when debugging?
A) It displays the execution of the whole program and its logic. It also displays errors with line numbers, so that you can find and edit the program.
What is the purpose of _error_?
A) _ERROR_ is an automatic variable that takes only two values: 1 when there is a data error in the current observation and 0 when there is not.
How can you put a "trace" in your program?
A) By using ODS TRACE ON
How does SAS handle missing values in: assignment statements, functions, a merge, an update, sort order, formats, PROCs?
A) In an assignment statement, any expression involving a missing value yields a missing result. In sort order, a numeric missing value (.) sorts before any number; among missing values, the underscore (._) sorts lowest, followed by . and then .A through .Z.
How do you test for missing values?
A) Use the MISSING function, or compare against . for numeric and ' ' for character values, within IF-THEN/ELSE, WHERE, or SELECT logic.
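A minimal sketch of each style of test (the PATIENTS data set and its variables are hypothetical):
data check;
   set patients;
   if age = . then put 'Missing numeric age for ' id=;
   if sex = ' ' then put 'Missing character sex for ' id=;
   if missing(weight) then put 'Missing weight for ' id=;  * MISSING works for both types;
run;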
How are numeric and character missing values represented internally?
A) Character missing values are represented internally as a blank (' ') and numeric missing values as a period (.).
Which date function advances a date, time, or datetime value by a given interval?
A) INTNX.
In the flow of DATA step processing, what is the first action in a typical DATA Step?
A) When you submit a DATA step, SAS first compiles it and then executes it, so the first action is the compilation phase (creation of the input buffer and the program data vector), followed by the execution phase.
What are SAS/ACCESS and SAS/CONNECT?
A) SAS/ACCESS provides interfaces to external databases such as Oracle, SQL Server, and MS Access. SAS/CONNECT provides client/server connectivity between SAS sessions on different machines.
What is the one statement to set the criteria of data that can be coded in any step?
A) OPTIONS Statement, Label statement, Keep / Drop statements.
What is the purpose of using the N=PS option?
A) The N=PS option creates a buffer in memory which is large enough to store PAGESIZE (PS) lines and enables a page to be formatted randomly prior to it being printed.
What are the scrubbing procedures in SAS?
A) PROC SORT with the NODUPKEY option, because it eliminates observations with duplicate BY values.
What are the new features included in the new version of SAS i.e., SAS9.1.3?
A) The main advantage of version 9 is faster execution of applications and centralized access of data and support.
There have been many changes in version 9 compared with version 8. The following are a few. SAS version 9 supports format and informat names longer than 8 bytes, which is not possible in version 8:
· Numeric format names can be up to 32 characters in version 9, versus 8 in version 8.
· Character format names can be up to 31 characters, versus 8 in version 8.
· Numeric informat names can be up to 31 characters, versus 8 in version 8.
· Character informat names can be up to 30 characters, versus 8 in version 8.
Three new informats are available in version 9 to convert various date, time, and datetime forms of data into a SAS date or SAS time:
· ANYDTDTEw. - converts to a SAS date value.
· ANYDTTMEw. - converts to a SAS time value.
· ANYDTDTMw. - converts to a SAS datetime value.
The CALL SYMPUTX routine was added in version 9; it creates a macro variable at execution time in the DATA step, trimming leading and trailing blanks and automatically converting a numeric value to character.
A new ODS COLUMNS option is included to create multiple columns in the output.
What differences did you find among versions 6, 8, and 9 of SAS?
A) The SAS 9 architecture is fundamentally different from any prior version of SAS. In the SAS 9 architecture, SAS relies on a new component, the Metadata Server, to provide an information layer between the programs and the data they access. Metadata, such as security permissions for SAS libraries and where the various SAS servers are running, are maintained in a common repository.
What has been your most common programming mistake?
A) Missing semicolons and not checking the log after submitting a program;
not using debugging techniques and not making full use of FSVIEW.
Name several ways to achieve efficiency in your program.
A) Efficiency and performance strategies can be classified into 5 different areas:
· CPU time
· Data storage
· Elapsed time
· Input/output
· Memory
CPU time and elapsed time are the baseline measurements.
A few examples of efficiency violations: retaining unwanted data sets; not subsetting early to eliminate unwanted records.
Efficiency-improving techniques:
· Use KEEP and DROP statements to retain only the necessary variables, and macros to reduce the code.
· Use IF-THEN/ELSE statements to process data conditionally.
· Use the SQL procedure to reduce the number of programming steps.
· Use LENGTH statements to reduce variable sizes and so reduce data storage.
· Use DATA _NULL_ steps when no output data set is needed, to save data storage.
What other SAS products have you used and consider yourself proficient in using?
A) DATA _NULL_ steps, PROC MEANS, PROC REPORT, PROC TABULATE, PROC FREQ, PROC PRINT, PROC UNIVARIATE, etc.
What is the significance of the 'OF' in X=SUM (OF a1-a4, a6, a9);
A) Without OF, the dash is read as a minus sign: SUM(A1-A4, A6, A9) sums the difference a1 minus a4, plus a6 and a9, not the whole range a1 through a4 plus a6 and a9. OF tells SAS to treat A1-A4 as a variable list. The same is true for the MEAN function.
What do the PUT and INPUT functions do?
A) The INPUT function converts character data values to numeric values: INPUT(source, informat).
The PUT function converts numeric values to character values: PUT(source, format).
Note that the INPUT function requires an informat and the PUT function requires a format.
If we omit the INPUT or PUT function during a data conversion, SAS detects the mismatched variables and tries an automatic character-to-numeric or numeric-to-character conversion. But sometimes this doesn't work, because a $ sign in the value prevents such a conversion. It is therefore always advisable to include INPUT and PUT functions in your programs when conversions occur.
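A minimal sketch of both explicit conversions (the values are illustrative):
data convert;
   char_date = '24/11/2008';
   num_date  = input(char_date, ddmmyy10.);  * character -> numeric SAS date;
   num_val   = 1234.5;
   char_val  = put(num_val, 8.1);            * numeric -> character;
   format num_date date9.;
run;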
Which date function advances a date, time or datetime value by a given interval?
INTNX:
INTNX function advances a date, time, or datetime value by a given interval, and returns a date, time, or datetime value. Ex: INTNX(interval,start-from,number-of-increments,alignment)
INTCK: INTCK(interval, start-of-period, end-of-period) is an interval function that counts the number of intervals between two given SAS date, time, or datetime values.
DATETIME () returns the current date and time of day.
DATDIF (sdate,edate,basis): returns the number of days between two dates.
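A minimal sketch of the two interval functions (dates are illustrative):
data _null_;
   next_month   = intnx('month', '15MAR2008'd, 1);               * advances to the start of the next month;
   months_apart = intck('month', '15MAR2008'd, '15JUL2008'd);    * month boundaries crossed: 4;
   put next_month= date9. months_apart=;
run;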
What do the MOD and INT functions do? What do the PAD and DIM functions do?
MOD:
A) Given a numeric value and a modulo (a constant or numeric variable), the function returns the remainder after the numeric value is divided by the modulo.
INT: It returns the integer portion of a numeric value truncating the decimal portion.
PAD: it pads each record with blanks so that all data lines have the same length. It is used in the INFILE statement. It is useful only when missing data occurs at the end of the record.
CATX: concatenate character strings, removes leading and trailing blanks and inserts separators.
SCAN: it returns a specified word from a character value. Scan function assigns a length of 200 to each target variable.
SUBSTR: extracts a substring or replaces character values. Extraction of a substring: middleinitial = substr(middlename,1,1); Replacing character values: substr(phone,1,3) = '433'; If the SUBSTR function is on the left side of an assignment statement, it replaces the contents of the character variable.
TRIM: trims the trailing blanks from the character values.
SCAN vs. SUBSTR: SCAN extracts words within a value that is marked by delimiters. SUBSTR extracts a portion of the value by stating the specific location. It is best used when we know the exact position of the sub string to extract from a character value.
How might you use MOD and INT on numeric to mimic SUBSTR on character Strings?
A) The first argument to the MOD function is a numeric, the second is a non-zero numeric; the result is the remainder when the integer quotient of argument-1 is divided by argument-2. The INT function takes only one argument and returns the integer portion of an argument, truncating the decimal portion. Note that the argument can be an expression.
DATA NEW;
   A = 123456;
   X = INT(A/1000);
   Y = MOD(A, 1000);
   Z = MOD(INT(A/100), 100);
   PUT A= X= Y= Z=;
RUN;
The log shows: A=123456 X=123 Y=456 Z=34
In ARRAY processing, what does the DIM function do?
A) DIM returns the number of elements in an array. When you use DIM as the stop value of an iterative DO statement, you do not have to respecify the stop value if the dimension of the array changes.
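A minimal sketch (the SCORES data set and Q1-Q5 are hypothetical): because DIM supplies the stop value, the DO statement needs no editing if the array grows.
data capped;
   set scores;
   array q{*} q1-q5;
   do i = 1 to dim(q);
      if q{i} > 100 then q{i} = 100;  * cap each element at 100;
   end;
   drop i;
run;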
How would you determine the number of missing or nonmissing values in computations?
A) To determine the number of missing values that are excluded in a computation, use the NMISS function.
data _null_;
m = . ;
y = 4 ;
z = 0 ;
N = N(m , y, z);
NMISS = NMISS (m , y, z);
run;
The above program results in N = 2 (Number of non missing values) and NMISS = 1 (number of missing values).
Do you need to know if there are any missing values?
A) The MISSING function tests a single value: missing_flag=MISSING(field1); returns 1 if the value is missing and 0 otherwise. If you need to know how many missing values you have across several numeric fields, use num_missing=NMISS(field1,field2,field3);
You can also find the number of non-missing values with non_missing=N(field1,field2,field3);
What is the difference between: x=a+b+c+d; and x=SUM (of a, b, c ,d);?
A) You might wonder why you wouldn't just use total=field1+field2+field3;
The difference is in how missing values are handled. The SUM function returns the sum of the non-missing values, whereas plain addition returns a missing result if any of the fields is missing. Which is appropriate depends on your needs. There is also an advantage to the SUM function even if you want missing results: with more than a couple of fields you can use shortcut variable lists. If your fields are not numbered sequentially but are stored together in the program data vector, you can write total=SUM(of fielda--zfield); just remember the "of" and the double dashes, or your code will run but not give the intended results. MEAN is another function that calculates differently from the written-out formula when there are missing values.
There is a field containing a date. It needs to be displayed in the format "ddmonyy" if it's before 1975, "dd mon ccyy" if it's after 1985, and as 'Disco Years' if it's between 1975 and 1985.
How would you accomplish this in data step code?
Using only PROC FORMAT.
data new;
   input date ddmmyy10.;
   cards;
01/05/1955
01/09/1970
01/12/1975
19/10/1979
25/10/1982
10/10/1988
27/12/1991
;
run;
proc format;
   value dat  low          -  '01JAN1975'd = [date7.]
              '01JAN1975'd <- '01JAN1985'd = 'Disco Years'
              '01JAN1985'd <- high         = [date9.];
run;
proc print;
   format date dat.;
run;
In the following DATA step, what is needed for 'fraction' to print to the log?
data _null_;
x=1/3;
if x=.3333 then put 'fraction';
run;
A) The ROUND function: x holds the full-precision value 0.333333..., so the test against .3333 is never true. Changing the comparison to if round(x,.0001)=.3333 makes 'fraction' print.
What is the difference between calculating the 'mean' using the mean function and PROC MEANS?
A) By default, PROC MEANS calculates summary statistics such as N, mean, standard deviation, minimum, and maximum across observations, whereas the MEAN function computes only the mean, within the DATA step.
What are some differences between PROC SUMMARY and PROC MEANS?
PROC MEANS by default prints its output in the output window; you can suppress this with the NOPRINT option and send the results to a separate data set with OUTPUT OUT=. PROC SUMMARY produces no printed output by default: you must give an OUTPUT statement explicitly, and then print the resulting data set (or use the PRINT option) to see the results.
What is a problem with merging two data sets that have variables with the same name but different data?
A) Understanding the basic algorithm of MERGE will help you understand how the step processes. There are still a few common scenarios whose results sometimes catch users off guard. Here is one of the most frequent 'gotchas':
1. BY variables have different lengths. It is possible to perform a MERGE when the lengths of the BY variables differ, but if the data set with the shorter version is listed first on the MERGE statement, the shorter length is used for the BY variable during the merge. Due to this shorter length, truncation occurs and unintended combinations can result. In Version 8, a warning is issued to point out this data integrity risk, regardless of which data set is listed first:
WARNING: Multiple lengths were specified for the BY variable name by input data sets. This may cause unexpected results.
Truncation can be avoided by naming the data set with the longest BY-variable length first on the MERGE statement, but the warning message is still issued. To prevent the warning, check the lengths with PROC CONTENTS and ensure the BY variables have the same length before combining them: either place a LENGTH statement in the merge DATA step before the MERGE statement, or recreate the data sets with identical lengths for the BY variables.
Note: when merging, we should not have MERGE and IF-THEN statements in one DATA step if the IF-THEN statement involves two variables that come from the two different merging data sets. If it is not completely clear when MERGE and IF-THEN can be used together safely, it is best simply to separate them into different DATA steps; following this recommendation ensures an error-free merge result.
Which data set is the controlling data set in the MERGE statement?
A) The data set with the smaller number of observations controls the data set in the MERGE statement.
How do the IN= variables improve the capability of a MERGE?
A) The IN= variables. What if you want to keep in the output data set of a merge only the matches (only those observations to which both input data sets contribute)? SAS sets up special temporary variables, called the IN= variables, so that you can do this and more. Here's what you have to do: signal to SAS on the MERGE statement that you need the IN= variables for the input data set(s), then use them in the DATA step appropriately. So, to keep only the matches in the match-merge, ask for the IN= variables and use them:
data three;
merge one(in=x) two(in=y); /* x & y are your choices of names */
by id;                     /* for the IN= variables for data  */
if x=1 and y=1;            /* sets one and two respectively   */
run;
What techniques and/or PROCs do you use for tables?
A) Proc Freq, Proc univariate, Proc Tabulate & Proc Report.
Do you prefer PROC REPORT or PROC TABULATE? Why?
A) I prefer to use PROC REPORT unless I have to create cross-tabulation tables, because it gives me many options to modify the layout of my table (for example, the WIDTH option changes the width of each column), whereas PROC TABULATE is unable to produce some of the things I need in my table; for example, TABULATE doesn't produce n (%) in the desired format.
How experienced are you with customized reporting and use of DATA _NULL_ features?
A) I have very good experience in creating customized reports as well as with the DATA _NULL_ step. It is a DATA step that generates a report without creating a data set, thereby saving development time. Another advantage is that any compilation error in the step is detected and written to the log, so errors can be found by checking the log after submission. DATA _NULL_ is also used to create macro variables from a data set.
What is the difference between nodup and nodupkey options?
A) NODUP compares all the variables in our dataset while NODUPKEY compares just the BY variables.
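A minimal sketch contrasting the two options (the PATIENTS data set and its ID and VISIT variables are hypothetical):
proc sort data=patients out=no_full_dups nodup;
   by id visit;  * removes observations that are complete duplicates;
run;
proc sort data=patients out=one_per_id nodupkey;
   by id;        * keeps only the first observation for each ID;
run;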
What is the difference between compiler and interpreter?
Give any one example (software product) that acts as an interpreter.
A) Both are similar as they achieve similar purposes, but inherently different as to how they achieve that purpose. The interpreter translates instructions one at a time, and then executes those instructions immediately. Compiled code takes programs (source) written in SAS programming language, and then ultimately translates it into object code or machine language. Compiled code does the work much more efficiently, because it produces a complete machine language program, which can then be executed.
Code the TABLES statement for a single-level frequency.
A) proc freq data=lib.dataset;
tables var; * mention a single variable, or multiple variables separated by spaces, to get one-way frequencies;
run;
What is the main difference between rename and label?
A) 1. LABEL can be used in either a PROC or a DATA step, whereas RENAME is used in the DATA step (or as a data set option). 2. If we rename a variable, the old name is lost; if we label a variable, its short name (old name) exists along with its descriptive label.
What is Enterprise Guide? What is the use of it?
A) SAS Enterprise Guide is a point-and-click Windows client interface to SAS (bundled with Base SAS from version 9 onward). It is used to access and import data, including text files, run tasks, and build projects without hand-coding.
What other SAS features do you use for error trapping and data validation?
What are the validation tools in SAS?
A) For data sets: the DEBUG and STMTCHK options (e.g., data name / debug;).
For macros, the options: MPRINT MLOGIC SYMBOLGEN.
How can you put a "trace" in your program?
A) ODS TRACE ON (and ODS TRACE OFF); the trace records information about each output object in the log.
How would you code a merge that will keep only the observations that have matches from both data sets?
A) Using "IN" variable option. Look at the following example.
data three;
merge one(in=x) two(in=y);
by id;
if x=1 and y=1;
run;
or
data three;
merge one(in=x) two(in=y);
by id;
if x and y;
run;
What are input dataset and output dataset options?
A) Input data set options include OBS=, FIRSTOBS=, WHERE=, and IN=; output data set options include COMPRESS= and REUSE=. KEEP=, DROP=, and RENAME= can be used as both input and output data set options.
How can you create a zero-observation data set?
A) By using the LIKE clause in PROC SQL:
proc sql;
create table latha.emp like oracle.emp;
quit;
The LIKE clause copies the structure of the existing table to the new table, so the result is an empty table.
Have you ever linked SAS code? If so, describe the link and any required statements used to either process the code or the step itself.
A) In the editor window we write:
%include 'path of the sas file';
run;
In a non-windowing environment there is no need to give the RUN statement.
How can you import a .CSV file into SAS? Tell me the syntax.
A) A CSV file can be created by typing the values in Notepad and saving it with a .csv extension. To import it:
proc import datafile='E:\age.csv' out=sarath dbms=csv replace;
getnames=yes;
run;
proc print data=sarath;
run;
What is the use of PROC SQL?
A) PROC SQL is a powerful tool in SAS that combines the functionality of DATA and PROC steps. PROC SQL can sort, summarize, subset, join (merge), and concatenate data sets, create new variables, and print the results or create a new data set, all in one step. PROC SQL often uses fewer resources than the corresponding DATA and PROC steps, and joining files in PROC SQL does not require sorting the data beforehand, which is a must for a DATA step merge.
What is SAS GRAPH?
A) SAS/GRAPH software creates and delivers accurate, high-impact visuals that enable decision makers to gain a quick understanding of critical business issues.
Why is a STOP statement needed for the POINT= option on a SET statement?
A) When you use the POINT= option, you must include a STOP statement to stop DATA step processing, programming logic that checks for an invalid value of the POINT= variable, or both. Because POINT= reads only those observations that are specified in the DO statement, SAS cannot read an end-of-file indicator as it would if the file were being read sequentially. Because reading an end-of-file indicator normally ends a DATA step automatically, failure to substitute another means of ending the DATA step when you use POINT= can cause the DATA step to go into a continuous loop.
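A minimal sketch of the standard pattern (MASTER is a hypothetical data set); here every third observation is read directly:
data subset;
   do p = 1 to totobs by 3;
      set master point=p nobs=totobs;  * NOBS= is set at compile time;
      output;
   end;
   stop;  * POINT= never sees end-of-file, so STOP must end the step;
run;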
What is the difference between nodup and nodupkey options?
A) The NODUP option checks for and eliminates duplicate observations. The NODUPKEY option checks for and eliminates duplicate observations by variable values.
SAS Interview Questions & Answers: Clinical Trials
1. Describe the phases of clinical trials?
Ans:- These are the four phases of clinical trials:
Phase 1: A new drug or treatment is tested in a small group of people (20-80) to evaluate its safety.
Phase 2: The experimental drug or treatment is given to a larger group of people (100-300) to evaluate its effectiveness for that indication.
Phase 3: The experimental drug or treatment is given to a large group of people (1000-3000) to confirm its effectiveness, monitor side effects, and compare it to commonly used treatments.
Phase 4: Phase 4 studies are post-marketing studies that gather information on the drug's risks, benefits, and optimal use.
2. Describe the validation procedure? How would you perform the validation for TLG as well as analysis data set?
Ans:- The validation procedure is used to check the output of a SAS program generated by the source programmer. In this process, the validator independently writes a program and generates the output. If this output matches the output generated by the source programmer, the program is considered valid. We can perform this validation for TLGs by checking the output manually, and for analysis data sets it can be done using PROC COMPARE.
3. How would you perform the validation for the listing, which has 400 pages?
Ans:- It is not feasible to validate a listing of 400 pages manually. To do this, we read the listing output back into data sets and then compare them using PROC COMPARE.
4. Can you use PROC COMPARE to validate listings? Why?
Ans:- Yes, we can use PROC COMPARE to validate listings, because when there are many entries (pages) it is not practical to check them manually.
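A minimal sketch of a double-programming comparison (the PROD and QC librefs and the ADSL data set are hypothetical):
proc compare base=prod.adsl compare=qc.adsl listall;
   id usubjid;  * both data sets must be sorted by the ID variable;
run;
A clean PROC COMPARE report (no unequal values, no unmatched observations) is the usual evidence that the output validates.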
5. How would you generate tables, listings and graphs?
Ans:- We can generate listings by using PROC REPORT. Similarly, we can create tables by using PROC FREQ, PROC MEANS, PROC TRANSPOSE, and PROC REPORT. We would generate graphs using PROC GPLOT, etc.
6. How many tables can you create in a day?
Ans:- It depends on the complexity of the tables; if they are of the same type, we can create two or three tables in a day.
7. What are all the PROCs you have used in your experience?
Ans:- I have used many procedures, such as PROC REPORT, PROC SORT, and PROC FORMAT. I have used PROC REPORT to generate list reports, with SUBJID as the order variable and TRT_GRP, SBD, and DBD as display variables.
8. Describe the data sets you have come across in your life?
Ans:- I have worked with demographic, adverse event, laboratory, analysis, and other data sets.
9. How would you submit the docs to FDA? Who will submit the docs?
Ans:- We can submit the docs to the FDA by e-submission. Docs can be submitted to the FDA in define.pdf or define.xml format. In this doc we have the documentation about macros and programs, and e-records as well. The statistician or project manager will submit this doc to the FDA.
10. What docs do you submit to FDA?
Ans:- We submit ISS and ISE documents to FDA.
11. Can you share your CDISC experience? What version of CDISC SDTM have you used?
Ans: I have used version 1.1 of the CDISC SDTM.
12. Tell me the importance of the SAP?
Ans:- This document contains detailed information regarding study objectives and statistical methods to aid in the production of the Clinical Study Report (CSR) including summary tables, figures, and subject data listings for Protocol. This document also contains documentation of the program variables and algorithms that will be used to generate summary statistics and statistical analysis.
13. Tell me about your project group? To whom you would report/contact?
My project group consists of six members: a project manager, two statisticians, a lead programmer, and two programmers.
I usually report to the lead programmer. If I have any problem regarding the programming, I contact the lead programmer.
If I have any doubt about the values of variables in a raw data set, I contact the statistician. For example, in a data set related to menopause symptoms in women, if the variable SEX has values like F and M, the M values must be wrong; in that type of situation I would contact the statistician.
14. Explain SAS documentation.
SAS documentation includes the program header, comments, titles, footnotes, etc. Whatever we type in the program to make it easily readable and understandable is called SAS documentation.
15. How would you know whether the program has been modified or not?
I would know whether the program has been modified by looking at the modification history in the program header.
16. Project status meeting?
It is a regular meeting of all the project managers to discuss the present status of the project in hand and to discuss new ideas and options for improving the way it is being performed.
17. Describe Clintrial database and Oracle Clinical.
Clintrial is the market's leading Clinical Data Management System (CDMS). Oracle Clinical (OC) is a database management system designed by Oracle to provide data management, data entry, and data validation functionality for the clinical trials process.
18. Tell me about MedDRA and what version of MedDRA did you use in your project?
MedDRA is the Medical Dictionary for Regulatory Activities. Version 10.
19. Describe SDTM?
CDISC’s Study Data Tabulation Model (SDTM) has been developed to standardize what is submitted to the FDA.
20. What is CRT?
Case Report Tabulation. Whenever a pharmaceutical company submits an NDA, the company has to send the CRTs to the FDA.
21. What is annotated CRF?
The CRF (case report form) is the collection of the forms of all the patients in the trial; an annotated CRF has the data set and variable names written next to each field, linking the forms to the database.
22. What do you know about 21 CFR Part 11?
Title 21 CFR Part 11 of the Code of Federal Regulations deals with the FDA guidelines on electronic records and electronic signatures in the United States. Part 11, as it is commonly called, defines the criteria under which electronic records and electronic signatures are considered to be trustworthy, reliable and equivalent to paper records.
23. Have you done validation in your projects?
I did validation of fellow programmers' work to ensure that the logic and intent of the program were correct and that data errors would be detected. For example:
Verify that error and warning messages are generated when the macro is called more than 10 times, which means adding more than 10 titles.
Verify the error message when the TITLENUM parameter is invalid. Verify that a warning message is generated if the total length of the texts specified in the input parameters LEFT, CENTER, and RIGHT is greater than 32 characters.
Also verify that precedence is given to the string in input parameter LEFT if the total string length is more than 32 characters. Verify that no error/warning message is generated if the macro is used within a DATA step and all input parameters are valid.
24. What are the contents of the AE data set? What is its purpose? What are the variables in adverse event data sets?
The adverse event data set contains SUBJID, the body system of the event, the preferred term for the event, and the event severity. The purpose of the AE data set is to summarize the adverse events for all the patients in the treatment arms, to aid in the inferential safety analysis of the drug.
25. What are the contents of lab data? What is the purpose of data set?
The lab data set contains the SUBJID, week number, and category of lab test, standard units, low normal and high range of the values. The purpose of the lab data set is to obtain the difference in the values of key variables after the administration of drug.
26. How did you do data cleaning? How do you change the values in the data on your own?
I used proc freq and proc univariate to find the discrepancies in the data, which I reported to my manager.
27. Have you created CRTs? If you have, tell me what you have done in that.
Yes, I have created patient profile tabulations at the request of my manager and the statistician. I have used PROC REPORT and PROC SQL to create simple patient listings containing all the information for a particular patient, including age, sex, race, etc.
28. Have you created transport files?
Yes, I have created SAS XPORT transport files using PROC COPY and the DATA step for FDA submissions. These are version 5 files: we use the XPORT LIBNAME engine and the COPY procedure, with one data set per transport file. The version 5 constraints are: labels no longer than 40 bytes, variable names no longer than 8 bytes, and character variables no wider than 200 bytes. If these constraints are violated, the COPY procedure may terminate, because the SAS XPORT format must be in compliance with SAS version 5 data sets.
libname sdtm "c:\sdtm_data";
libname dm xport "c:\dm.xpt";
proc copy in=sdtm out=dm;
select dm;
run;
29. How did you do data cleaning? How do you change the values in the data on your own?
I used proc freq and proc univariate to find the discrepancies in the data, which I reported to my manager.
30. Definitions?
CDISC - Clinical Data Interchange Standards Consortium. It has different data models, which define clinical data standards for the pharmaceutical industry.
SDTM - defines the data tabulation data sets that are to be sent to the FDA for regulatory submissions.
ADaM - (Analysis Data Model) defines data set definition guidance for creating analysis data sets.
ODM - an XML-based data model that allows transfer of XML-based data.
Define.xml - a machine-readable data definition file (counterpart of define.pdf).
ICH E3: Guideline, Structure and Content of Clinical Study Reports
ICH E6: Guideline, Good Clinical Practice
ICH E9: Guideline, Statistical Principles for Clinical Trials
Title 21 CFR Part 312.32: Investigational New Drug (IND) safety reports
31. Have you ever done any edit-check programs in your project? If you have, tell me what you know about edit-check programs.
Yes, I have done edit-check programs. Edit-check programs perform data validation.
1. Data validation - proc means, proc univariate, proc freq. Data cleaning - finding errors.
2. Checking for invalid character values:
proc freq data=patients;
tables gender dx ae / nocum nopercent;
run;
which gives frequency counts of the unique character values.
3. PROC PRINT with a WHERE statement to list invalid data values (systolic blood pressure 80 to 200, diastolic blood pressure 60 to 120).
4. PROC MEANS, UNIVARIATE, and TABULATE to look for outliers. PROC MEANS - min, max, n, and mean. PROC UNIVARIATE - five highest and lowest values, stem-and-leaf plots and box plots.
5. PROC FORMAT - range checking.
6. Data analysis - SET, MERGE, UPDATE, KEEP, DROP in the DATA step.
7. Create data sets - PROC IMPORT and DATA steps from flat files.
8. Extract data - LIBNAME.
9. SAS/STAT - PROC ANOVA, PROC REG.
10. Duplicate data - PROC SORT with NODUPKEY or NODUPLICATE. NODUPKEY only checks for duplicates in the BY variables; NODUPLICATE checks the entire observation (matches all variables). To get the duplicate observations, first sort with NODUPKEY, merge back to the original data set, and keep only the records in the original (see the sketch after this list).
11. For creating analysis data sets from the raw data sets I used PROC FORMAT, and RENAME and LENGTH statements, to make changes and finally build an analysis data set.
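A minimal sketch of isolating the duplicates mentioned in item 10, using FIRST./LAST. as a simpler equivalent (the RAW data set and ID variable are hypothetical):
proc sort data=raw out=sorted;
   by id;
run;
data dups;
   set sorted;
   by id;
   if not (first.id and last.id);  * keeps every ID that occurs more than once;
run;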
32. What is Verification?
The purpose of verification is to ensure the accuracy of the final tables and the quality of the SAS programs that generated them. According to the SOP instructions and the SAP, I selected a subset of the final summary tables for verification, e.g., the adverse event table and the baseline and demographic characteristics table. The verification results were compared against the original final tables, and any discrepancies were documented.
33. What is ANNOTATED CRF?
An annotated CRF is a CRF in which the variable names are written next to the spaces provided for the investigator. It serves as a link between the database/data sets and the questions on the CRF.
34. What is Program Validation?
It is the same as macro validation, except that here we validate programs: according to the SOP, I first determine what the program is supposed to do, see whether it works as intended, and create a validation document stating whether the program works properly, setting the status to pass or fail. Pass the input parameters to the program and check the log for errors.
35. What do you know about ISS and ISE? Have you ever produced these reports?
ISS (Integrated Summary of Safety): integrates safety information from all sources (animal, clinical pharmacology, controlled and uncontrolled studies, epidemiologic data). "ISS is, in part, simply a summation of data from individual studies and, in part, a new analysis that goes beyond what can be done with individual studies."
ISE (Integrated Summary of Efficacy).
ISS & ISE are critical components of the safety and effectiveness submission and are expected to be submitted in the application in accordance with regulation. FDA's guidance Format and Content of Clinical and Statistical Sections of Application gives advice on how to construct these summaries. Note that, despite the name, these are integrated analyses of all relevant data, not mere summaries.
36. Explain the process and how to do Data Validation?
I have done data validation and data cleaning to check whether the data values are correct and conform to the standard set of rules. A very simple approach to identifying invalid character values is to use PROC FREQ to list all the unique values of these variables, which gives the total number of invalid observations. After identifying the invalid data, we have to locate the observations so that we can report the particular patient numbers to the manager. Invalid data can be located using DATA _NULL_ programming.
The following is an example:
DATA _NULL_;
INFILE "C:\PATIENTS.TXT" PAD;
FILE PRINT; ***SEND OUTPUT TO THE OUTPUT WINDOW;
TITLE "LISTING OF INVALID DATA";
***NOTE: WE WILL ONLY INPUT THOSE VARIABLES OF INTEREST;
INPUT @1 PATNO $3. @4 GENDER $1. @24 DX $3. @27 AE $1.;
***CHECK GENDER;
IF GENDER NOT IN ('F','M',' ') THEN PUT PATNO= GENDER=;
***CHECK DX;
IF VERIFY(DX,' 0123456789') NE 0 THEN PUT PATNO= DX=;
***CHECK AE;
IF AE NOT IN ('0','1',' ') THEN PUT PATNO= AE=;
RUN;
For data validation of numeric values like out of range or missing values I used proc print with a where statement.
PROC PRINT DATA=CLEAN.PATIENTS;
WHERE (HR NOT BETWEEN 40 AND 100 AND HR IS NOT MISSING) OR
      (SBP NOT BETWEEN 80 AND 200 AND SBP IS NOT MISSING) OR
      (DBP NOT BETWEEN 60 AND 120 AND DBP IS NOT MISSING);
TITLE "OUT-OF-RANGE VALUES FOR NUMERIC VARIABLES";
ID PATNO;
VAR HR SBP DBP;
RUN;
If we have a range of character values '001' - '999', we can first define a user-written format and then use PROC FREQ to determine the invalid values.
PROC FORMAT;
VALUE $GENDER 'F','M'      = 'VALID'
              ' '          = 'MISSING'
              OTHER        = 'MISCODED';
VALUE $DX     '001'-'999'  = 'VALID'
              ' '          = 'MISSING'
              OTHER        = 'MISCODED';
VALUE $AE     '0','1'      = 'VALID'
              ' '          = 'MISSING'
              OTHER        = 'MISCODED';
RUN;
One of the simplest ways to check for invalid numeric values is to run either PROC MEANS or PROC UNIVARIATE. We can use the N and NMISS options in PROC MEANS to check for missing and invalid data (the defaults are n, mean, min, max, and std dev). The main advantage of PROC UNIVARIATE (defaults: n, mean, std, skewness, kurtosis) is that it reports the extreme values, i.e., the five lowest and five highest values, which we can inspect for data errors. To see the patient IDs for those particular observations, add an ID PATNO statement to the UNIVARIATE procedure.
37. Roles and responsibilities?
Programmer:
Develop programming for report formats (ISS & ISE shells) required by the regulatory authorities. Update the ISS/ISE shell when required.
Clinical Study Team:
Provide information on safety and efficacy findings when required. Provide updates on safety and efficacy findings for periodic reporting.
Study Statistician
Draft the ISS and ISE shells. Update the shells when appropriate. Analyze and report data in the approved format, to meet periodic reporting requirements.
38. Explain Types of Clinical trials study you come across?
Single Blind Study
When the patients are not aware of which treatment they receive.
Double Blind Study
When the patients and the investigator are unaware of the treatment group assigned.
Triple Blind Study
Triple blind study is when patients, investigator, and the project team are unaware of the treatments administered.
39. What are the domains/datasets you have used in your studies?
Demog
Adverse Events
Vitals
ECG
Labs
Medical History
PhysicalExam etc
40. Can you list the variables in all the domains?
Demog: Usubjid, Patient Id, Age, Sex, Race, Screening Weight, Screening Height, BMI etc
Adverse Events: Protocol no, Investigator no, Patient Id, Preferred Term, Investigator Term, (Abdominal dis, Freq urination, headache, dizziness, hand-food syndrome, rash, Leukopenia, Neutropenia) Severity, Seriousness (y/n), Seriousness Type (death, life threatening, permanently disabling), Visit number, Start time, Stop time, Related to study drug?
Vitals: Subject number, Study date, Procedure time, Sitting blood pressure, Sitting Cardiac Rate, Visit number, Change from baseline, Dose of treatment at time of vital sign, Abnormal (yes/no), BMI, Systolic blood pressure, Diastolic blood pressure.
ECG: Subject no, Study Date, Study Time, Visit no, PR interval (msec), QRS duration (msec), QT interval (msec), QTc interval (msec), Ventricular Rate (bpm), Change from baseline, Abnormal.
Labs: Subject no, Study day, Lab parameter (Lparm), lab units, ULN (upper limit of normal), LLN (lower limit of normal), visit number, change from baseline, Greater than ULN (yes/no), lab related serious adverse event (yes/no).
Medical History: Medical Condition, Date of Diagnosis (yes/no), Years of onset or occurrence, Past condition (yes/no), Current condition (yes/no).
PhysicalExam: Subject no, Exam date, Exam time, Visit number, Reason for exam, Body system, Abnormal (yes/no), Findings, Change from baseline (improvement, worsening, no change), Comments
41. Give me examples of edit checks you made in your programs.
Examples of edit checks:
Demog: Weight is outside the expected range. Body mass index is below expected (check weight and height).
Age is not within expected range.
DOB is greater than the visit date.
Gender value is not a valid one, etc.
Adverse Event
Stop date is before the start or visit date. Start date is before the birth date. Study medicine discontinued due to adverse event but completion indicated (COMPLETE=1).
Labs
Result is within the normal range but the abnormal flag is not blank or 'N'. Result is outside the normal range but the abnormal flag is blank.
Vitals
Diastolic BP > Systolic BP
Medical History
Visit date prior to screen date.
Physical Exam
Physical exam is normal but a comment is included.
42. What are the advantages of using SAS in clinical data management? Why should not we use other software products in managing clinical data?
ADVANTAGES OF USING A SAS®-BASED SYSTEM
Less hardware is required.
A typical SAS®-based system can utilize a standard file server to store its databases and does not require one or more dedicated servers to handle the application load. PC SAS® can easily be used to handle processing, while data access is left to the file server. Additionally, the SAS® product SAS/SHARE® can be used to provide a dedicated server to handle data transactions.
Fewer personnel are required.
Systems that use complicated database software often require the hiring of one or more DBAs (database administrators) who make sure the database software is running, make changes to the structure of the database, etc. These individuals often require special training or background experience in the particular database application being used, typically Oracle. Additionally, consultants are often required to set up the system and/or studies, since dedicated servers and specific expertise requirements often complicate the process. Users with even casual SAS® experience can set up studies; novice programmers can build the structure of the database and design screens. Organizations that are involved in data management almost always have at least one SAS® programmer already on staff. SAS® programmers will have an understanding of how the system actually works, which allows them to extend its functionality by directly accessing SAS® data from outside the system.
Speed of setup is dramatically reduced. By keeping studies on a local file server and making the database and screen design processes extremely simple and intuitive, setup time is reduced from weeks to days.
All phases of the data management process become homogeneous. From entry to analysis, data reside in SAS® data sets, often the end goal of every data management group. Additionally, SAS® users are involved in each step, instead of having specialists from different areas hand off pieces of studies during the project life cycle.
No data conversion is required. Since the data reside in SAS® data sets natively, no conversion programs need to be written.
Data review can happen during the data entry process, on the master database. As long as records are marked as being double-keyed, data review personnel can run edit-check programs and build queries on some patients while others are still being entered.
Tables and listings can be generated on live data. This helps speed up the development of table and listing programs and allows programmers to avoid making continual copies or extracts of the data during testing.
43. Have you ever had to follow SOPs or programming guidelines?
An SOP describes the process to assure that standard coding activities, which produce tables, listings and graphs, functions and/or edit checks, are conducted in accordance with industry standards and are appropriately documented. It is normally used whenever new programs are required or existing programs require modification during the set-up, conduct, and/or reporting of clinical trial data.
44. Describe the types of SAS programming tasks that you performed: Tables? Listings? Graphics? Ad hoc reports? Other?
Prepared programs required for the ISS and ISE analysis reports. Developed and validated programs for preparing ad-hoc statistical reports for the preparation of the clinical study report. Wrote analysis programs in line with the specifications defined by the study statistician. Base SAS (MEANS, FREQ, SUMMARY, TABULATE, REPORT, etc.) and SAS/STAT procedures (REG, GLM, ANOVA, UNIVARIATE, etc.) were used for summarization, cross-tabulations, and statistical analysis. Created statistical reports using PROC REPORT, DATA _NULL_ and SAS macros. Created, derived, merged and pooled data sets, listings and summary tables for Phase I and Phase II clinical trials.
45. Have you been involved in editing the data or writing data queries?
If your interviewer asks this question, you should ask what he means by editing the data and data queries.
46. Are you involved in writing the inferential analysis plan? Table’s specifications?
47. What do you feel about hardcoding?
Programmers sometimes hardcode when they need to produce a report urgently, but it is always better to avoid hardcoding, as it overrides the database controls in clinical data management. Data often change in a trial over time, and a hardcode that is written today may not be valid in the future. Unfortunately, a hardcode may be forgotten and left in the SAS program, and that can lead to an incorrect database change.
48. How do you write a test plan?
Before writing "Test plan" you have to look into on "Functional specifications". Functional specifications itself depends on "Requirements", so one should have clear understanding of requirements and functional specifications to write a test plan.
49. What is the difference between verification and validation?
Although verification and validation are close in meaning, "verification" has more the sense of testing the truth or accuracy of a statement by examining evidence or conducting experiments, while "validation" has more the sense of declaring a statement to be true and marking it with an indication of official sanction.
50. What other SAS features do you use for error trapping and data validation?
Conditional statements, if then else.
Put statement
Debug option.
51. What is PROC CDISC?
It is a new SAS procedure, available as a hot fix for SAS 8.2 and included with SAS 9.1.3.
PROC CDISC is a procedure that allows us to import (and export) XML files that are compliant with the CDISC ODM version 1.2 schema.
For more details refer SAS programming in the Pharmaceutical Industry text book.
52) What is LOCF?
Pharmaceutical companies conduct longitudinal studies on human subjects that often span several months. It is unrealistic to expect patients to keep every scheduled visit over such a long period of time. Despite every effort, patient data are not collected for some time points, and these eventually become missing values in a SAS data set. For reporting purposes, the most recent previously available value is substituted for each missing visit; this is called Last Observation Carried Forward (LOCF). LOCF doesn't mean the last SAS data set observation carried forward; it means the last non-missing value carried forward. It is the values of individual measures that are the "observations" in this case, and if multiple variables contain these values, they are carried forward independently.
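A minimal sketch of LOCF within subject, assuming a data set VISITS sorted by SUBJID and VISIT with one measure VALUE (all names are hypothetical):
data locf;
   set visits;
   by subjid;
   retain last_value;
   if first.subjid then last_value = .;         * reset at each new subject;
   if not missing(value) then last_value = value;
   else value = last_value;                     * carry the last non-missing value forward;
   drop last_value;
run;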
53) ETL process:
Extract, Transform and Load.
Extract: The first part of an ETL process is to extract the data from the source systems. Most data warehousing projects consolidate data from different source systems. Each separate system may also use a different data organization/format. Common data source formats are relational databases and flat files, but they may include non-relational database structures such as IMS, or other data structures such as VSAM or ISAM. Extraction converts the data into a format for transformation processing. An intrinsic part of the extraction is the parsing of the extracted data, resulting in a check of whether the data meet an expected pattern.
Transform: The transform stage applies a series of rules or functions to the extracted data to derive the data to be loaded to the end target. Some data sources will require very little or even no manipulation of data. In other cases, one or more of the following transformation types may be required to meet the business and technical needs of the end target:
· Selecting only certain columns to load (or selecting null columns not to load).
· Translating coded values (e.g., if the source system stores 1 for male and 2 for female, but the warehouse stores M for male and F for female); this is called automated data cleansing; no manual cleansing occurs during ETL.
· Encoding free-form values (e.g., mapping "Male" to "1" and "Mr" to M).
· Joining together data from multiple sources (e.g., lookup, merge, etc.).
· Generating surrogate key values.
· Transposing or pivoting (turning multiple columns into multiple rows or vice versa).
· Splitting a column into multiple columns (e.g., putting a comma-separated list specified as a string in one column into individual values in different columns).
· Applying any form of simple or complex data validation; if it fails, a full, partial or no rejection of the data occurs, so that none, part or all of the data is handed over to the next step, depending on the rule design and exception handling. Most of the above transformations may themselves raise an exception, e.g. when a code translation parses an unknown code in the extracted data.
Load: The load phase loads the data into the end target, usually the data warehouse (DW). Depending on the requirements of the organization, this process varies widely. Some data warehouses might weekly overwrite existing information with cumulative, updated data, while other DWs (or even other parts of the same DW) might add new data in a historized form, e.g. hourly. The timing and scope to replace or append are strategic design choices dependent on the time available and the business needs. More complex systems can maintain a history and audit trail of all changes to the data loaded in the DW. As the load phase interacts with a database, the constraints defined in the database schema, as well as triggers activated upon data load, apply (e.g. uniqueness, referential integrity, mandatory fields), which also contribute to the overall data quality performance of the ETL process.
source: wikipedia
Ans:- These are the following four phases of the clinical trials:
Phase 1: Test a new drug or treatment to a small group of people (20-80) to evaluate its safety.
Phase 2: The experimental drug or treatment is given to a large group of people (100-300) to see that the drug is effective or not for that treatment.
Phase 3: The experimental drug or treatment is given to a large group of people (1000-3000) to see its effectiveness, monitor side effects and compare it to commonly used treatments.
Phase 4: The 4 phase study includes the post marketing studies including the drug's risk, benefits etc.
2. Describe the validation procedure? How would you perform the validation for TLG as well as analysis data set?
Ans:- Validation procedure is used to check the output of the SAS program, generated by the source programmer. In this process validator write the program and generate the output. If this output is same as the output generated by the SAS programmer's output then the program is considered to be valid. We can perform this validation for TLG by checking the output manually and for analysis data set it can be done using PROC COMPARE.
3. How would you perform the validation for the listing, which has 400 pages?
Ans:- It is not possible to perform the validation for the listing having 400 pages manually. To do this, we convert the listing in data sets by using PROC RTF and then after that we can compare it by using PROC COMPARE.
4. Can you use PROC COMPARE to validate listings? Why?
Ans:- Yes, we can use PROC COMPARE to validate the listing because if there are many entries (pages) in the listings then it is not possible to check them manually. So in this condition we use PROC COMPARE to validate the listings.
5. How would you generate tables, listings and graphs?
Ans:- We can generate the listings by using the PROC REPORT. Similarly we can create the tables by using PROC FREQ, PROC MEANS, and PROC TRANSPOSE and PROC REPORT. We would generate graph, using proc Gplot etc.
6. How many tables can you create in a day?
Ans:- Actually it depends on the complexity of the tables if there are same type of tables then, we can create 1-2-3 tables in a day.
7. What are all the PROCS have you used in your experience?
Ans:- I have used many procedures like proc report, proc sort, proc format etc. I have used proc report to generate the list report, in this procedure I have used subjid as order variable and trt_grp, sbd, dbd as display variables.
8. Describe the data sets you have come across in your life?
Ans:- I have worked with demographic, adverse event, laboratory, analysis, and other data sets.
9. How would you submit the docs to FDA? Who will submit the docs?
Ans:- We can submit the docs to the FDA by e-submission, using the define.pdf or define.xml formats. These documents describe the macros, programs, and electronic records. The statistician or project manager will submit the docs to the FDA.
10. What are the docs you submit to the FDA?
Ans:- We submit ISS and ISE documents to FDA.
11. Can you share your CDISC experience? What version of CDISC SDTM have you used?
Ans: I have used version 1.1 of the CDISC SDTM.
12. Tell me the importance of the SAP.
Ans:- The SAP (Statistical Analysis Plan) contains detailed information regarding the study objectives and statistical methods to aid in the production of the Clinical Study Report (CSR), including summary tables, figures, and subject data listings for the protocol. This document also contains documentation of the program variables and algorithms that will be used to generate summary statistics and statistical analysis.
13. Tell me about your project group. To whom would you report/contact?
My project group consists of six members: a project manager, two statisticians, a lead programmer, and two programmers.
I usually report to the lead programmer. If I have any problem regarding the programming I would contact the lead programmer.
If I have any doubt about the values of variables in a raw data set, I would contact the statistician. For example, in a data set related to menopause symptoms in women, if the variable SEX has values of both F and M, I would consider the M values wrong; in that type of situation I would contact the statistician.
14. Explain SAS documentation.
SAS documentation includes the program header, comments, titles, footnotes, etc. Whatever we type in the program to make it easily readable and understandable is called SAS documentation.
15. How would you know whether the program has been modified or not?
I would know whether the program has been modified by checking the modification history in the program header.
16. Project status meeting?
It is a periodic meeting of all the project managers to discuss the present status of the project in hand and to discuss new ideas and options for improving the way it is presently being performed.
17. Describe Clintrial database and Oracle Clinical.
Clintrial is the market's leading Clinical Data Management System (CDMS). Oracle Clinical (OC) is a database management system designed by Oracle to provide data management, data entry, and data validation functionality to the clinical trials process.
18. Tell me about MedDRA and what version of MedDRA did you use in your project?
MedDRA is the Medical Dictionary for Regulatory Activities. I used version 10.
19. Describe SDTM?
CDISC’s Study Data Tabulation Model (SDTM) has been developed to standardize what is submitted to the FDA.
20. What is CRT?
Case Report Tabulation. Whenever a pharmaceutical company submits an NDA, the company has to send the CRTs to the FDA.
21. What is an annotated CRF?
An annotated CRF is a case report form on which the data set variable names are written next to the corresponding fields, linking the questions on the CRF to the database/data sets.
22. What do you know about 21CRF PART 11?
Title 21 CFR Part 11 of the Code of Federal Regulations deals with the FDA guidelines on electronic records and electronic signatures in the United States. Part 11, as it is commonly called, defines the criteria under which electronic records and electronic signatures are considered to be trustworthy, reliable and equivalent to paper records.
23. Have you done validation in your projects?
I did validation of fellow programmers' work to ensure that the logic and intent of the program were correct and that data errors were detected. For example, for a titles macro I would:
Verify that error and warning messages are generated when the macro is called more than 10 times, which means adding more than 10 titles.
Verify the error message when the TITLENUM parameter is invalid.
Verify that a warning message is generated if the total length of the texts specified in the input parameters LEFT, CENTER, and RIGHT is greater than 32 characters, and that precedence is given to the string in the input parameter LEFT in that case.
Verify that no error/warning message is generated if the macro is used within a DATA step and all input parameters are valid.
24. What are the contents of the AE data set? What is its purpose?
The adverse event data set contains SUBJID, the body system of the event, the preferred term for the event, and the event severity. The purpose of the AE data set is to give a summary of the adverse events for all the patients in the treatment arms to aid the inferential safety analysis of the drug.
25. What are the contents of lab data? What is the purpose of the data set?
The lab data set contains SUBJID, the week number, the category of lab test, standard units, and the low and high limits of the normal range of the values. The purpose of the lab data set is to obtain the difference in the values of key variables after the administration of the drug.
26. How did you do data cleaning? How do you change the values in the data on your own?
I used PROC FREQ and PROC UNIVARIATE to find discrepancies in the data, which I reported to my manager.
27. Have you created CRTs? If you have, tell me what you have done in that.
Yes, I have created patient profile tabulations at the request of my manager and the statistician. I used PROC REPORT and PROC SQL to create simple patient listings which had all the information for a particular patient, including age, sex, race, etc.
28. Have you created transport files?
Yes, I have created SAS XPORT transport files for FDA submissions using PROC COPY (or a DATA step) with the XPORT LIBNAME engine. These are version 5 transport files, with one data set per transport file. Version 5 constraints: data set labels no longer than 40 bytes, variable names no longer than 8 bytes, and character variables no wider than 200 bytes. If these constraints are violated, PROC COPY may terminate with an error, because the SAS XPORT format must comply with SAS version 5 data set restrictions.
libname sdtm "c:\sdtm_data";
libname dm xport "c:\dm.xpt";
proc copy in=sdtm out=dm;
   select dm;
run;
30. Definitions?
CDISC - Clinical Data Interchange Standards Consortium. It has different data models which define clinical data standards for the pharmaceutical industry.
SDTM - Study Data Tabulation Model. It defines the data tabulation data sets that are to be sent to the FDA for regulatory submissions.
ADaM - Analysis Data Model. It defines data set definition guidance for creating analysis data sets.
ODM - Operational Data Model. An XML-based data model that allows the transfer of XML-based data.
define.xml - A machine-readable data definition file (the successor to define.pdf).
ICH E3: Guideline, Structure and Content of Clinical Study Reports
ICH E6: Guideline, Good Clinical Practice
ICH E9: Guideline, Statistical Principles for Clinical Trials
Title 21 CFR Part 312: Investigational New Drug Application
31. Have you ever done any edit check programs in your project? If you have, tell me what you know about edit check programs.
Yes, I have done edit check programs. Edit check programs perform data validation:
1. Data validation - PROC MEANS, PROC UNIVARIATE, PROC FREQ. Data cleaning - finding errors.
2. Checking for invalid character values:
proc freq data=patients;
   tables gender dx ae / nocum nopercent;
run;
which gives frequency counts of the unique character values.
3. PROC PRINT with a WHERE statement to list invalid data values [systolic blood pressure - 80 to 100] [diastolic blood pressure - 60 to 120].
4. PROC MEANS, UNIVARIATE, and TABULATE to look for outliers. PROC MEANS - MIN, MAX, N, and MEAN. PROC UNIVARIATE - the five highest and lowest values [stem-and-leaf plots and box plots].
5. PROC FORMAT - range checking.
6. Data analysis - SET, MERGE, UPDATE, KEEP, and DROP in the DATA step.
7. Create data sets - PROC IMPORT and DATA steps from flat files.
8. Extract data - LIBNAME.
9. SAS/STAT - PROC ANOVA, PROC REG.
10. Duplicate data - PROC SORT with NODUPKEY or NODUPLICATE. NODUPKEY only checks for duplicates in the BY variables; NODUPLICATE checks the entire observation (matches all variables). To get the duplicate observations, first sort with NODUPKEY and merge the result back to the original data set to flag the records removed by the sort (see the sketch after this list for a simpler approach).
11. For creating analysis data sets from the raw data sets, I used PROC FORMAT and the RENAME and LENGTH statements to make changes and finally create an analysis data set.
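A minimal sketch of the duplicate check (the LAB data set and BY variable names are hypothetical). Since SAS 9, PROC SORT's DUPOUT= option writes the removed duplicate records to a separate data set directly, which avoids the manual merge-back:
proc sort data=lab out=lab_clean nodupkey dupout=lab_dups;
   by subjid visit lbtest;   /* duplicates are judged on these key variables only */
run;
proc print data=lab_dups;
   title "Duplicate records removed from LAB";
run;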
32. What is Verification?
The purpose of verification is to ensure the accuracy of the final tables and the quality of the SAS programs that generated them. According to the SOP instructions and the SAP, I selected a subset of the final summary tables for verification, e.g., the adverse event table and the baseline and demographic characteristics table. The verification results were compared against the original final tables, and any discrepancies were documented.
33. What is ANNOTATED CRF?
An annotated CRF is a CRF in which the variable names are written next to the spaces provided for the investigator. It serves as a link between the database/data sets and the questions on the CRF.
34. What is Program Validation?
It is the same as macro validation, except that here we validate programs: according to the SOP, I first determine what the program is supposed to do, check whether it works as it is supposed to, and create a validation document stating whether the program works properly, setting the status as pass or fail. Pass the input parameters to the program and check the log for errors.
35. What do you know about ISS and ISE? Have you ever produced these reports?
ISS (Integrated Summary of Safety): integrates safety information from all sources (animal, clinical pharmacology, controlled and uncontrolled studies, epidemiologic data). "The ISS is, in part, simply a summation of data from individual studies and, in part, a new analysis that goes beyond what can be done with individual studies." ISE (Integrated Summary of Efficacy): the ISS and ISE are critical components of the safety and effectiveness submission and are expected to be submitted in the application in accordance with regulation. The FDA's guidance Format and Content of Clinical and Statistical Sections of Application gives advice on how to construct these summaries. Note that, despite the name, these are integrated analyses of all relevant data, not summaries.
36. Explain the process and how to do Data Validation?
I have done data validation and data cleaning to check whether the data values are correct and conform to the standard set of rules. A very simple approach to identifying invalid character values in a file is to use PROC FREQ to list all the unique values of the variables, which gives us the total number of invalid observations. After identifying the invalid data, we have to locate the observations so that we can report the particular patient numbers to the manager. Invalid data can be located using DATA _NULL_ programming.
Following is an example:
data _null_;
   infile "c:\patients.txt" pad;
   file print;   ***send output to the Output window;
   title "Listing of Invalid Data";
   ***note: we input only those variables of interest;
   input @1  patno  $3.
         @4  gender $1.
         @24 dx     $3.
         @27 ae     $1.;
   ***check GENDER;
   if gender not in ('F','M',' ') then put patno= gender=;
   ***check DX;
   if verify(dx,' 0123456789') ne 0 then put patno= dx=;
   ***check AE;
   if ae not in ('0','1',' ') then put patno= ae=;
run;
For data validation of numeric values, such as out-of-range or missing values, I used PROC PRINT with a WHERE statement.
proc print data=clean.patients;
   where (hr  not between 40 and 100 and hr  is not missing) or
         (sbp not between 80 and 200 and sbp is not missing) or
         (dbp not between 60 and 120 and dbp is not missing);
   title "Out-of-Range Values for Numeric Variables";
   id patno;
   var hr sbp dbp;
run;
If we have a range of valid values, such as '001'-'999', we can first create user-defined formats and then use PROC FREQ to determine the invalid values (see the PROC FREQ step after the format definitions below).
proc format;
   value $gender 'F','M'       = 'Valid'
                 ' '           = 'Missing'
                 other         = 'Miscoded';
   value $dx     '001' - '999' = 'Valid'
                 ' '           = 'Missing'
                 other         = 'Miscoded';
   value $ae     '0','1'       = 'Valid'
                 ' '           = 'Missing'
                 other         = 'Miscoded';
run;
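The formats can then be applied in PROC FREQ so that every value falls into Valid, Missing, or Miscoded (this sketch assumes the GENDER, DX, and AE variables in the CLEAN.PATIENTS data set used in the earlier examples):
proc freq data=clean.patients;
   tables gender dx ae / nocum nopercent;
   format gender $gender. dx $dx. ae $ae.;
run;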
One of the simplest ways to check for invalid numeric values is to run either PROC MEANS or PROC UNIVARIATE. We can use the N and NMISS options in PROC MEANS to check for missing and invalid data (the default statistics are N, mean, standard deviation, minimum, and maximum). The main advantage of PROC UNIVARIATE (default: N, mean, standard deviation, skewness, kurtosis) is that we get the extreme values, i.e., the five lowest and five highest values, which we can inspect for data errors. To see the patient IDs for those particular observations, state an ID PATNO statement in the PROC UNIVARIATE step.
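A short sketch of both checks, using the same CLEAN.PATIENTS data set and variables as above:
proc means data=clean.patients n nmiss mean std min max;
   var hr sbp dbp;
run;
proc univariate data=clean.patients;
   id patno;
   var hr sbp dbp;
run;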
37. Roles and responsibilities?
Programmer:
Develop programming for the report formats (ISS and ISE shells) required by the regulatory authorities. Update the ISS/ISE shells when required.
Clinical Study Team:
Provide information on safety and efficacy findings when required. Provide updates on safety and efficacy findings for periodic reporting.
Study Statistician
Draft the ISS and ISE shells and update them when appropriate. Analyze and report data in the approved format to meet periodic reporting requirements.
38. Explain the types of clinical trial studies you have come across.
Single Blind Study
When the patients are not aware of which treatment they receive.
Double Blind Study
When the patients and the investigator are unaware of the treatment group assigned.
Triple Blind Study
Triple blind study is when patients, investigator, and the project team are unaware of the treatments administered.
39. What are the domains/datasets you have used in your studies?
Demog
Adverse Events
Vitals
ECG
Labs
Medical History
PhysicalExam etc
40. Can you list the variables in all the domains?
Demog: Usubjid, Patient Id, Age, Sex, Race, Screening Weight, Screening Height, BMI etc
Adverse Events: Protocol no, Investigator no, Patient Id, Preferred Term, Investigator Term, (Abdominal dis, Freq urination, headache, dizziness, hand-food syndrome, rash, Leukopenia, Neutropenia) Severity, Seriousness (y/n), Seriousness Type (death, life threatening, permanently disabling), Visit number, Start time, Stop time, Related to study drug?
Vitals: Subject number, Study date, Procedure time, Sitting blood pressure, Sitting Cardiac Rate, Visit number, Change from baseline, Dose of treatment at time of vital sign, Abnormal (yes/no), BMI, Systolic blood pressure, Diastolic blood pressure.
ECG: Subject no, Study Date, Study Time, Visit no, PR interval (msec), QRS duration (msec), QT interval (msec), QTc interval (msec), Ventricular Rate (bpm), Change from baseline, Abnormal.
Labs: Subject no, Study day, Lab parameter (LPARM), lab units, ULN (upper limit of normal), LLN (lower limit of normal), visit number, change from baseline, greater than ULN (yes/no), lab-related serious adverse event (yes/no).
Medical History: Medical condition, Date of diagnosis (yes/no), Years of onset or occurrence, Past condition (yes/no), Current condition (yes/no).
PhysicalExam: Subject no, Exam date, Exam time, Visit number, Reason for exam, Body system, Abnormal (yes/no), Findings, Change from baseline (improvement, worsening, no change), Comments
41. Give me examples of edit checks you made in your programs.
Examples of edit checks:
Demog:
Weight is outside the expected range.
Body mass index is below the expected range (check weight and height).
Age is not within the expected range.
DOB is greater than the visit date.
Gender value is invalid.
Adverse Event:
Stop date is before the start or visit date.
Start date is before the birth date.
Study medicine discontinued due to adverse event but completion indicated (COMPLETE = 1).
Labs:
Result is within the normal range but the abnormal flag is not blank or 'N'.
Result is outside the normal range but the abnormal flag is blank.
Vitals:
Diastolic BP > systolic BP.
Medical History:
Visit date prior to screening date.
Physical:
Physical exam is normal but a comment is included.
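As an illustration, one such check can be written as a short DATA step that writes suspect records to the log (the VITALS data set and its variable names here are hypothetical):
data _null_;
   set vitals;
   if dbp > sbp then
      put "Possible error - diastolic exceeds systolic: " subjid= visitnum= sbp= dbp=;
run;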
42. What are the advantages of using SAS in clinical data management? Why should we not use other software products to manage clinical data?
ADVANTAGES OF USING A SAS®-BASED SYSTEM
Less hardware is required.
A typical SAS-based system can utilize a standard file server to store its databases and does not require one or more dedicated servers to handle the application load. PC SAS can easily be used to handle processing, while data access is left to the file server. Additionally, the SAS/SHARE product can be used to provide a dedicated server to handle data transactions.
Fewer personnel are required.
Systems that use complicated database software often require the hiring of one or more DBAs (database administrators) who make sure the database software is running, make changes to the structure of the database, etc. These individuals often require special training or background experience in the particular database application being used, typically Oracle. Additionally, consultants are often required to set up the system and/or studies, since dedicated servers and specific expertise requirements often complicate the process. With a SAS-based system, users with even casual SAS experience can set up studies: novice programmers can build the structure of the database and design screens. Organizations that are involved in data management almost always have at least one SAS programmer already on staff. SAS programmers will have an understanding of how the system actually works, which allows them to extend the functionality of the system by directly accessing SAS data from outside of the system.
Speed of setup is dramatically reduced.
By keeping studies on a local file server and making the database and screen design processes extremely simple and intuitive, setup time is reduced from weeks to days.
All phases of the data management process become homogeneous.
From entry to analysis, data reside in SAS data sets, often the end goal of every data management group. Additionally, SAS users are involved in each step, instead of having specialists from different areas hand off pieces of studies during the project life cycle.
No data conversion is required.
Since the data reside in SAS data sets natively, no conversion programs need to be written.
Data review can happen during the data entry process, on the master database.
As long as records are marked as being double-keyed, data review personnel can run edit check programs and build queries on some patients while others are still being entered.
Tables and listings can be generated on live data.
This helps speed up the development of table and listing programs and allows programmers to avoid having to make continual copies or extracts of the data during testing.
43. Have you ever had to follow SOPs or programming guidelines?
An SOP describes the process that assures that standard coding activities, which produce tables, listings and graphs, functions and/or edit checks, are conducted in accordance with industry standards and are appropriately documented. It is normally used whenever new programs are required or existing programs require modification during the set-up, conduct, and/or reporting of clinical trial data.
44. Describe the types of SAS programming tasks that you performed: Tables? Listings? Graphics? Ad hoc reports? Other?
Prepared programs required for the ISS and ISE analysis reports. Developed and validated programs for preparing ad hoc statistical reports for the preparation of the clinical study report. Wrote analysis programs in line with the specifications defined by the study statistician. Base SAS (MEANS, FREQ, SUMMARY, TABULATE, REPORT, etc.) and SAS/STAT procedures (REG, GLM, ANOVA, UNIVARIATE, etc.) were used for summarization, cross-tabulations, and statistical analysis. Created statistical reports using PROC REPORT, DATA _NULL_, and SAS macros. Created, derived, merged, and pooled data sets, listings, and summary tables for Phase I and Phase II clinical trials.
45. Have you been involved in editing the data or writing data queries?
If your interviewer asks this question, you should ask what is meant by editing the data and by data queries.
46. Are you involved in writing the inferential analysis plan? Table’s specifications?
47. What do you feel about hardcoding?
Programmers sometimes hardcode when they need to produce a report urgently. But it is always better to avoid hardcoding, as it overrides the database controls in clinical data management. Data often change in a trial over time, and a hardcode written today may not be valid in the future. Unfortunately, a hardcode may be forgotten and left in the SAS program, and that can lead to an incorrect database change.
48. How do you write a test plan?
Before writing "Test plan" you have to look into on "Functional specifications". Functional specifications itself depends on "Requirements", so one should have clear understanding of requirements and functional specifications to write a test plan.
49. What is the difference between verification and validation?
Although verification and validation are close in meaning, "verification" has more of a sense of testing the truth or accuracy of a statement by examining evidence or conducting experiments, while "validation" has more of a sense of declaring a statement to be true and marking it with an indication of official sanction.
50.What other SAS features do you use for error trapping and data validation?
Conditional statements (IF-THEN/ELSE).
The PUT statement.
The DATA step debugger (DEBUG option).
51. What is PROC CDISC?
It is a new SAS procedure that was made available as a hot fix for SAS 8.2 and comes as part of SAS 9.1.3.
PROC CDISC is a procedure that allows us to import (and export) XML files that are compliant with the CDISC ODM version 1.2 schema.
For more details, refer to the SAS Programming in the Pharmaceutical Industry textbook.
52) What is LOCF?
Pharmaceutical companies conduct longitudinal studies on human subjects that often span several months. It is unrealistic to expect patients to keep every scheduled visit over such a long period of time. Despite every effort, patient data are not collected for some time points, and these eventually become missing values in a SAS data set. For reporting purposes, the most recent previously available value is substituted for each missing visit; this is called Last Observation Carried Forward (LOCF). LOCF does not mean the last SAS data set observation carried forward: it means the last non-missing value carried forward. It is the values of the individual measures that are the "observations" in this case, and if multiple variables contain these values, they are carried forward independently.
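A minimal LOCF sketch (the ADLB data set and the USUBJID and AVAL variable names are hypothetical; the data are assumed to be sorted by subject and visit):
data adlb_locf;
   set adlb;
   by usubjid;
   retain _last;
   if first.usubjid then _last = .;        /* reset at each new subject */
   if not missing(aval) then _last = aval; /* remember the latest non-missing value */
   else aval = _last;                      /* carry it forward into missing visits */
   drop _last;
run;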
53) ETL process:
Extract, Transform, and Load.
Extract:
The first part of an ETL process is to extract the data from the source systems. Most data warehousing projects consolidate data from different source systems.
Each separate system may also use a different data organization / format. Common data source formats are relational databases and flat files, but may include non-relational database structures such as IMS or other data structures such as VSAM or ISAM.
Extraction converts the data into a format suitable for transformation processing. An intrinsic part of the extraction is the parsing of the extracted data, resulting in a check of whether the data meet an expected pattern.
Transform:
The transform stage applies a series of rules or functions to the extracted data to derive the data to be loaded into the end target. Some data sources require very little or even no manipulation of data. In other cases, one or more of the following transformation types may be required to meet the business and technical needs of the end target:
• Selecting only certain columns to load (or selecting null columns not to load)
• Translating coded values (e.g., if the source system stores 1 for male and 2 for female, but the warehouse stores M for male and F for female); this is called automated data cleansing; no manual cleansing occurs during ETL
• Encoding free-form values (e.g., mapping "Male" to "1" and "Mr" to M)
• Joining together data from multiple sources (e.g., lookup, merge, etc.)
• Generating surrogate key values
• Transposing or pivoting (turning multiple columns into multiple rows or vice versa)
• Splitting a column into multiple columns (e.g., turning a comma-separated list stored as a string in one column into individual values in different columns)
• Applying any form of simple or complex data validation; if it fails, the data may be rejected in full, in part, or not at all, and thus none, some, or all of the data is handed over to the next step, depending on the rule design and exception handling. Many of the above transformations may themselves result in an exception, e.g., when a code translation parses an unknown code in the extracted data.
Load:
The load phase loads the data into the end target, usually the data warehouse (DW).
Depending on the requirements of the organization, this process varies widely. Some data warehouses might overwrite existing information weekly with cumulative, updated data, while other DWs (or even other parts of the same DW) might add new data in historized form, e.g., hourly. The timing and scope of replacing or appending data are strategic design choices that depend on the time available and the business needs. More complex systems can maintain a history and audit trail of all changes to the data loaded into the DW.
As the load phase interacts with a database, the constraints defined in the database schema, as well as triggers activated upon data load, apply (e.g., uniqueness, referential integrity, mandatory fields), which also contributes to the overall data quality performance of the ETL process.
source: wikipedia
SAS Interview Questions: Macros
1. Have you used macros? For what purpose you have used?
Yes, I have. I used macros in creating data sets and tables where it was necessary to make a small change throughout the program and where it was necessary to use the same code again and again.
2. How would you invoke a macro?
After I have defined a macro, I can invoke it by adding the percent sign prefix to its name, like this: %macroname. A semicolon is not required when invoking a macro, though adding one generally does no harm.
3. How can we call macros within a DATA step?
We can call the macro with CALL SYMPUT.
4. How do you identify a macro variable?
By the ampersand (&) prefix.
5. How do you define the end of a macro?
The end of a macro is defined by the %MEND statement.
6. For what purposes have you used SAS macros?
We use macros when we want to execute the same DATA or PROC step on multiple data sets. We can accomplish repetitive tasks quickly and efficiently: a macro program can be reused many times, and parameters passed to the macro program customize the results without having to change the code within the macro program. Macros let us make a small change in one place and have SAS echo that change throughout the program.
7. What is the difference between %LOCAL and % Global?
A %LOCAL macro variable is defined inside a macro, whereas a %GLOBAL macro variable is defined in open code (outside any macro) and can be used anywhere.
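A small sketch of the difference in scope (the macro and variable names are illustrative):
%global gvar;
%let gvar = visible anywhere;

%macro scope;
   %local lvar;
   %let lvar = visible only inside SCOPE;
   %put Inside the macro: &lvar / &gvar;
%mend scope;

%scope
%put Outside the macro: &gvar;
%* a %put of &lvar here would not resolve, because LVAR was local to SCOPE;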
8. How long can a macro variable be? A token?
A macro variable's value can be from 0 to 65,534 characters long. A component of SAS known as the word scanner breaks the program text into fundamental units called tokens. Tokens are passed on demand to the compiler; the compiler requests tokens until it receives a semicolon and then performs the syntax check on the statement.
9. If you use a SYMPUT in a DATA step, when and where can you use the macro variable?
A macro variable created with CALL SYMPUT in a DATA step cannot be used in the same DATA step, because SAS does not assign its value until the step executes (at a step boundary). It can be used in any step after that boundary.
10. What do you code to create a macro? End one?
%MACRO and %MEND
11. What is the difference between %PUT and SYMBOLGEN?
%PUT is used to display user-defined messages in the log after the execution of a program, whereas SYMBOLGEN is a system option used to print the resolved values of macro variables to the log.
12. How do you add a number to a macro variable?
Using the %EVAL function.
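For example:
%let count = 5;
%let count = %eval(&count + 1);
%put Count is now &count;
The log then shows: Count is now 6.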
13. Can you execute a macro within a macro? Describe.
Yes. Such macros are called nested macros: a macro can be defined or invoked inside another macro, with each macro delimited by its own %MACRO and %MEND statements.
14. If you need the value of a variable rather than the variable itself what would you use to load the value to a macro variable?
If we need the value to be available everywhere in the program, we define the macro variable as global. There are different ways of assigning a global macro variable; the simplest method is %LET.
Ex: A is a macro variable. Use the following statements to assign the value of A rather than the variable itself, e.g.:
%let a=xyz;
x="&a";
This will assign the text "xyz" to x, not the variable XYZ to x.
15. Can you execute macro within another macro? If so, how would SAS know where the current macro ended and the new one began?
Yes, I can execute a macro within a macro; this is called nesting of macros, which is allowed. Every macro's beginning is identified by the keyword %MACRO and its end by %MEND.
16. How are parameters passed to a macro?
A macro variable defined in parentheses in a %MACRO statement is a macro parameter. Macro parameters allow you to pass information into a macro.
Here is a simple example:
%macro plot(yvar= ,xvar= );
proc plot;
plot &yvar*&xvar;
run;
%mend plot;
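For example, the macro could be invoked as %plot(yvar=height, xvar=weight), which generates PROC PLOT code for HEIGHT by WEIGHT (these variable names are just for illustration).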
17. How would you code a macro statement to produce information on the SAS log?
This statement can be coded anywhere:
OPTIONS MPRINT MLOGIC MERROR SYMBOLGEN;
18. How we can call macros with in data step?
We can call the macro with
CALL SYMPUT,
Proc SQL and
%LET statement.
19. Tell me about call symput?
CALL SYMPUT takes a value from a data step and assigns it to a macro variable. I can then use this macro variable in later steps. To assign a value to a single macro variable,
I use CALL SYMPUT with this general form:
CALL SYMPUT (“macro-variable-name”, value);
where macro-variable-name, enclosed in quotation marks, is the name of a macro variable, either new or old, and value is the value I want to assign to that macro variable. Value can be the name of a variable whose value SAS will use, or it can be a constant value enclosed in quotation marks.
CALL SYMPUT is often used in if-then statements such as this:
If age>=18 then call symput (“status”,”adult”);
Else call symput (“status”,”minor”);
These statements create a macro variable named &status and assign it a value of either adult or minor depending on the variable age. Caution: we cannot create a macro variable with CALL SYMPUT and use it in the same DATA step, because SAS does not assign a value to the macro variable until the DATA step executes. A DATA step executes when SAS encounters a step boundary, such as a subsequent DATA, PROC, or RUN statement.
20. Tell me about % include and % eval?
The %INCLUDE statement, despite its percent sign, is not a macro statement and is always executed in SAS, though it can be conditionally executed in a macro. It can be used for setting up a macro library, but this is the least preferred approach. The use of %INCLUDE does not actually set up a library: the %INCLUDE statement points to a file, and when it executes, the indicated file (be it a full program, a macro definition, or a statement fragment) is inserted into the calling program at the location of the call. When using %INCLUDE to build a macro library, the included file will usually contain one or more macro definitions. %EVAL is a widely used yet frequently misunderstood SAS macro language function due to its seemingly simple form.
However, when its actual argument is a complex macro expression interlaced with special characters, mixed arithmetic and logical operators, or macro quotation functions, its usage and result become elusive and problematic. A %IF condition in a macro is evaluated by %EVAL, which reduces it to true or false.
21. Describe the ways in which you can create macro variables?
There are the 5 ways to create macro variables:
%Let
%Global
Call Symput
PROC SQL (SELECT ... INTO :)
Parameters.
22. Tell me more about the parameters in macro?
Parameters are macro variables whose values you set when you invoke a macro. To add parameters to a macro, you simply name the macro variables in parentheses in the %MACRO statement.
Syntax:
%MACRO macro-name (parameter-1= , parameter-2= , ... parameter-n= );
   macro-text
%MEND macro-name;
23. What is the maximum length of the macro variable?
A macro variable name can be up to 32 characters long; its value can be up to 65,534 characters.
24. Automatic variables for macro?
Every time we invoke SAS, the macro processor automatically creates certain macro variables, e.g., &SYSDATE and &SYSDAY.
25. What system options would you use to help debug a macro?
The SAS System offers a number of useful system options to help debug macro issues and problems. The results associated with using these options are automatically displayed in the SAS log. Specific options related to macro debugging appear in alphabetical order below:
MEMRPT: specifies that memory usage statistics be displayed in the SAS log.
MERROR: SAS will issue warning if we invoke a macro that SAS didn’t find. Presents Warning Messages when there are misspellings or when an undefined macro is called.
SERROR: SAS will issue warning if we use a macro variable that SAS can’t find.
MLOGIC: SAS prints details about the execution of the macros in the log.
MPRINT: displays the SAS statements generated by macro execution in the SAS log for debugging purposes.
SYMBOLGEN: prints the values of macro variables in the log, displaying the text that results from expanding macro variables.
31. What are SYMGET and SYMPUT?
SYMPUT puts a value from a DATA step into a macro variable, whereas SYMGET gets the value of a macro variable into a DATA step variable.
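A small sketch of the pair (the DEMOG data set and AGE variable are hypothetical):
data _null_;
   call symput('cutoff', '65');   /* DATA step value -> macro variable */
run;

data elderly;
   set demog;
   /* macro variable -> DATA step value; SYMGET returns character, so convert */
   if age >= input(symget('cutoff'), 8.) then output;
run;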
32. What are the macros you have used in your programs?
I have used macros for various purposes; a few of them are:
1) Macros written to determine the list of variables in a dataset:
%macro varlist (dsn);
proc contents data = &dsn out = cont noprint;
run;
proc sql noprint;
select distinct name into :varname1-:varname22 from cont;
quit;
%do i =1 %to &sqlobs;
%put &i &&varname&i;
%end;
%mend varlist;
%varlist(adverse)
2) Distribution of missing / non-missing values:
%macro missrep(dsn, vars=_numeric_);
proc freq data=&dsn.;
tables &vars. / missing;
format _character_ $missf. _numeric_ missf.;
title1 'Distribution of Missing / Non-Missing Values';
run;
%mend missrep;
%missrep(study.demog, vars=age gender bdate);
3) Macro written to sort common variables in various data sets:
%macro sortit (datasetname, pid, investigator, timevisit);
proc sort data = &datasetname;
   by &pid &investigator;
run;
%mend sortit;
4) Macro written to split the observations of a data set:
%macro split (dsnorig, dsnsplit1, dsnsplit2, obs1);
data &dsnsplit1;
   set &dsnorig (obs = &obs1);
run;
data &dsnsplit2;
   set &dsnorig (firstobs = %eval(&obs1 + 1));
run;
%mend split;
%split(sasuser.admit, admit4, admit5, 2)
33. What is auto call macro and how to create a auto call macro? What is the use of it? How to use it in SAS with macros?
SAS enables the user to call macros that have been stored as SAS programs.
The autocall macro facility allows users to access the same macro code from multiple SAS programs. Rather than repeating the same macro code in each program where it is required, with an autocall macro the code is in one location. This permits faster updates and better consistency across all the programs. Macro set-up: the first step is to set up a program that contains a macro desired to be used in multiple programs. Although the program may contain other macros and/or open code, it is advisable to include only one macro.
Set MAUTOSOURCE and SASAUTOS:
Before one can use an autocall macro within a SAS program, the MAUTOSOURCE option must be set and the SASAUTOS option should be assigned. MAUTOSOURCE indicates to SAS that the autocall facility is to be activated; SASAUTOS tells SAS where to look for the macros. For example:
options mautosource sasautos='g:\busmeas\internal\macro\';
34. What does %PUT do?
It displays text, including resolved macro variable values, in the log, e.g., %put My first macro variable is &myvar; (where &myvar is any macro variable). The statement %put _automatic_; displays all the SAS automatic macro variables, including &SYSDATE and &SYSTIME.
SAS Interview Questions and Answers: CDISC, SDTM, ADaM, etc.
1) What do you know about CDISC and its standards?
CDISC stands for the Clinical Data Interchange Standards Consortium, which was established to bring a great deal of efficiency to the entire drug development process. CDISC improves data quality and speeds up the whole drug development process; to do that, CDISC developed a series of standards, which include the Operational Data Model (ODM), the Study Data Tabulation Model (SDTM), and the Analysis Data Model (ADaM).
2) Why are people talking more about CDISC these days, and what advantages does it bring to the pharmaceutical industry?
A) Generally speaking, only about 30% of programming time is used to generate statistical results with SAS; the rest of the programming time is used to become familiar with the data structure, check data accuracy, and tabulate/list raw data and statistical results into certain formats. This non-statistical programming time will be significantly reduced after implementing the CDISC standards.
3) What are the challenges as SAS programmer you think you will face when you first implement CDISC standards in you company?
A) With the new requirements for electronic submission, CRT data sets need to conform to a set of standards to facilitate the review process; they are no longer created solely for the programmer's convenience. The SDS will be treated as the specification of the data sets to be submitted, and potentially as a reference for CRF design. Therefore, statistical programming may need to start from this common ground. All existing programs/macros may also need to be remapped based on CDISC, so that one can validate submission information using the same tools reviewers may use, accelerating the review process without providing unnecessary data, tables, and listings. With the new requirements of electronic submission and CDISC implementation, understanding only SAS may not be enough for the final deliverables. It is time to expand and enhance job skills in various ways so that SAS programmers can gain a competitive advantage and continue to play a main role in both statistical analysis and reporting for drug development.
References:
PharmaSUG 2007, paper FC05
PharmaSUG 2003, FDA Compliance, paper FDA055
1) What do you understand about SDTM and its importance?
SDTM stands for the Study Data Tabulation Model, which defines a standard structure for study data tabulations that are to be submitted as part of a product application to a regulatory authority such as the United States Food and Drug Administration (FDA) 2.
In July 2004 the Clinical Data Interchange Standards Consortium (CDISC) published standards on the design and content of clinical trial tabulation data sets, known as the Study Data Tabulation Model (SDTM). According to the CDISC standard, there are four ways to represent a subject in a clinical study: tabulations, data listings, analysis datasets, and subject profiles6.
Before SDTM:
There were different names for each domain, and domains did not have a standard structure. There was no standard variable list for each and every domain.
Because of this, FDA reviewers had to take great pains to familiarize themselves with different data, domain names, and variable names in each analysis data set. Reviewers would spend much of their valuable time rearranging the data into a standard format rather than reviewing it for accuracy. This delayed the drug development process.
After SDTM:
There will be standard domain names and a standard structure for each domain, and there will be a list of standard variables and names for each and every data set. Because of this, it becomes easy to find and understand the data, and reviewers need less time to review it than data without SDTM standards. This improves consistency in reviewing the data, and it can be time efficient.
The purpose of creating SDTM domain data sets is to provide Case Report Tabulation (CRT) data to the FDA in a standardized format. If we follow these standards, we can greatly reduce the effort necessary for data mapping. Improper use of CDISC standards, such as using a valid domain or variable name incorrectly, can slow the metadata mapping process and should be avoided4.
2) PROC CDISC for SDTM 3.1 Format 2?
Syntax
The PROC CDISC syntax for CDISC SDTM is presented below. The DATA= parameter specifies the location of your SDTM-conforming data source.
proc cdisc model=sdtm;
   sdtm SDTMVersion="3.1";
   domaindata data=results.ae domain=AE category=EVENT;
run;
3) What are the capabilities of PROC CDISC 2?
PROC CDISC performs the following checks on domain content of the source:
Verifies that all required variables are present in the data set
Reports as an error any variables in the data set that are not defined in the domain
Reports a warning for any expected domain variables that are not in the data set
Notes any permitted domain variables that are not in the data set
Verifies that all domain variables are of the expected data type and proper length
Detects any domain variables that are assigned a controlled terminology specification by the domain and do not have a format assigned to them.
The procedure also performs the following checks on domain data content of the source on a per observation basis:
Verifies that all required variable fields do not contain missing values
Detects occurrences of expected variable fields that contain missing values
Detects the conformance of all ISO-8601 specification assigned values; including date, time, date time, duration, and interval types
Notes the correctness of yes/no and yes/no/null responses.
4) What are the different approaches for creating the SDTM 3?
There are 3 general approaches to create the SDTM datasets:
a) Build the SDTM entirely in the CDMS,
b) Build the SDTM entirely on the “back-end” in SAS,
c) or take a hybrid approach and build the SDTM partially in the CDMS and partially in SAS.
BUILD THE SDTM ENTIRELY IN THE CDMS
It is possible to build the SDTM entirely within the CDMS. If the CDMS allows for broad structural control of the underlying database, then you could build your eCRF or CRF based clinical database to SDTM standards.
Advantages:
• Your “raw” database is equivalent to your SDTM which provides the most elegant solution.
• Your clinical data management staff will be able to converse with end-users/sponsors about the data easily, since your clinical data manager and the end-user/sponsor will both be looking at SDTM datasets.
• As soon as the CDMS database is built, the SDTM datasets are available.
Disadvantages:
• This approach may be cost prohibitive. Forcing the CDMS to create the SDTM structures may simply be too cumbersome to do efficiently.
• Forcing the CDMS to adapt to the SDTM may cause problems with the operation of the CDMS which could reduce data quality.
BUILD THE SDTM ENTIRELY ON THE “BACK-END” IN SAS
Assuming that SAS is not your CDMS solution, another approach is to take the clinical data from your CDMS and manipulate it into the SDTM with SAS programming.
Advantages:
• The great flexibility of SAS will let you transform any proprietary CDMS structure into the SDTM. You do not have to work around the rigid constraints of the CDMS.
• Changes could be made to the SDTM conversion without disturbing clinical data management processes.
• The CDMS is allowed to do what it does best which is to enter, manage, and clean data.
Disadvantages:
• There would be additional cost to transform the data from your typical CDMS structure into the SDTM. Specifications, programming, and validation of the SAS transformation would be required.
• Once the CDMS database is up, there would be a subsequent delay while the SDTM is created in SAS. This delay would slow down the production of analysis datasets and reporting. This assumes that you follow the linear progression of CDMS -> SDTM -> analysis datasets (ADaM).
• Since the SDTM is a derivation of the “raw” data, there could be errors in translation from the “raw” CDMS data to the SDTM.
• Your clinical data management staff may be at a disadvantage when speaking with end-users/sponsors about the data since the data manager will likely be looking at the CDMS data and the sponsor will see SDTM data.
BUILD THE SDTM USING A HYBRID APPROACH
Again, assuming that SAS is not your CDMS solution, you could build some of the SDTM within the confines of the CDMS and do the rest of the work in SAS. Some things can be done easily in the CDMS, such as naming data tables the same as SDTM domains, using SDTM variable names in the CDMS, and performing simple derivations (such as age). More complex SDTM derivations and manipulations can then be performed in SAS.
Advantages:
• The changes to the CDMS are easy to implement.
• The SDTM conversions to be done in SAS are manageable and much can be automated.
Disadvantages:
• There would still be some additional cost needed to transform the data from the SDTM-like CDMS structure into the SDTM. Specifications, programming, and validation of the transformation would be required.
• There would be some delay while the SDTM-like CDMS data is converted to the SDTM.
• Your clinical data management staff may still have a slight disadvantage when speaking with end-users/sponsors about the data, since the clinical data manager will be looking at the SDTM-like data and the sponsor will see the true SDTM data.
5) What do you know about SDTM domains?
A basic understanding of the SDTM domains, their structure and their interrelations is vital to determining which domains you need to create and in assessing the level to which your existing data is compliant. The SDTM consists of a set of clinical data file specifications and underlying guidelines. These different file structures are referred to as domains. Each domain is designed to contain a particular type of data associated with clinical trials, such as demographics, vital signs or adverse events.
The CDISC SDTM Implementation Guide provides specifications for 30 domains. The SDTM domains are divided into six classes.
The 21 clinical data domains are contained in three of these classes: Interventions, Events, and Findings.
The trial design class contains seven domains and the special-purpose class contains two domains (Demographics and Comments).
The trial design domains provide the reviewer with information on the criteria, structure and scheduled events of a clinical trial. The only required domain is Demographics.
There are also two special-purpose relationship data sets: the Supplemental Qualifiers (SUPPQUAL) data set and the Related Records (RELREC) data set. SUPPQUAL is a highly normalized data set that allows you to store virtually any type of information related to one of the domain data sets. SUPPQUAL also accommodates values longer than 200 characters: the first 200 characters are stored in the domain variable and the remainder is stored in SUPPQUAL [5].
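A minimal sketch of that 200-character split (the input dataset source, its long variable longtxt, and the output names are assumptions; a real SUPPQUAL record also carries keys such as STUDYID, USUBJID, RDOMAIN and IDVAR):

data co supp;
   set source;                        /* longtxt assumed declared long, e.g. $1000 */
   length qnam $8 coval qval $200;
   coval = substr(longtxt, 1, 200);   /* first 200 characters stay in the domain */
   output co;
   if length(longtxt) > 200 then do;  /* remainder goes to a SUPPQUAL-style record */
      qnam = 'COVAL1';
      qval = substr(longtxt, 201);
      output supp;
   end;
run;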
6) What are the general guidelines for SDTM variables?
Each of the SDTM domains has a collection of variables associated with it.
There are five roles that a variable can have: Identifier, Topic, Timing, Qualifier, and, for trial design domains, Rule. Using lab data as an example, the subject ID, domain ID and sequence (e.g. visit) are identifiers; the name of the lab parameter is the topic; the date and time of sample collection are timing variables; the result is a result qualifier; and the variable containing the units is a variable qualifier.
Variables that are common across domains include the basic identifiers study ID (STUDYID), a two-character domain ID (DOMAIN) and unique subject ID (USUBJID).
In studies with multiple sites that are allowed to assign their own subject identifiers, the site ID and the subject ID must be combined to form USUBJID.
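A minimal sketch of that derivation, assuming the source variables are named STUDYID, SITEID and SUBJID:

data dm;
   set raw_dm;                                    /* hypothetical source data */
   length usubjid $40;
   usubjid = catx('-', studyid, siteid, subjid);  /* e.g. ABC123-01-0001 */
run;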
All other variable names are generally formed by prefixing a standard variable-name fragment with the two-character domain ID (e.g. the --STDTC fragment becomes AESTDTC in the AE domain).
The SDTM specifications do not require all of the variables associated with a domain to be included in a submission. In regard to complying with the SDTM standards, the implementation guide specifies each variable as being included in one of three categories:
Required, Expected, and Permissible [4].
REQUIRED – These variables are necessary for the proper functioning of standard software tools used by reviewers. They must be included in the data set structure and should not have a missing value for any observation.
EXPECTED – These variables must be included in the data set structure; however it is permissible to have missing values.
PERMISSIBLE – These variables are not a required part of the domain and they should not be included in the data set structure if the information they were designed to contain was not collected.
7) Can you tell me more about SDTM domains? [5]
SDTM Domains are grouped by classes, which is useful for producing more meaningful relational schemas. Consider the following domain classes and their respective domains.
• Special Purpose Class – Pertains to unique domains concerning detailed information about the subjects in a study.
Demographics (DM), Comments (CO)
• Findings Class – Collected information resulting from a planned evaluation to address specific questions about the subject, such as whether a subject is suitable to participate or continue in a study.
Electrocardiogram (EG)
Inclusion / Exclusion (IE)
Lab Results (LB)
Physical Examination (PE)
Questionnaire (QS)
Subject Characteristics (SC)
Vital Signs (VS)
• Events Class – Incidents independent of planned study evaluations that happen to the subject during the lifetime of the study.
Adverse Events (AE)
Patient Disposition (DS)
Medical History (MH)
• Interventions Class – Treatments and procedures that are intentionally administered to the subject, such as treatment coincident with the study period, per protocol, or self-administered (e.g., alcohol and tobacco use).
Concomitant Medications (CM)
Exposure to Treatment Drug (EX)
Substance Usage (SU)
• Trial Design Class – Information about the design of the clinical trial (e.g., crossover trial, treatment arms) including information about the subjects with respect to treatment and visits.
Subject Elements (SE)
Subject Visits (SV)
Trial Arms (TA)
Trial Elements (TE)
Trial Inclusion / Exclusion Criteria (TI)
Trial Visits (TV)
8) Can you tell me how to do the mapping for existing domains?
The first step is to compare the existing metadata with the SDTM domain metadata. If the data coming from data management are largely compliant with the SDTM metadata, automated mapping can be used as a first step.
If the data management metadata are not compliant with the SDTM, avoid auto-mapping; instead, map the datasets to the SDTM datasets manually, mapping each variable to the appropriate domain.
The whole process of mapping includes:
• Read the corporate data standards into a database table.
• Assign a CDISC domain prefix to each database module.
• Attach a combo box containing the SDTM variables for the selected domain to a new mapping variable field.
• Search each module, and within each module select the most appropriate CDISC variable.
• Search for variables mapped to the wrong type (character mapped to numeric, or numeric mapped to character); see the conversion sketch after this list.
• Review the mapping to see whether any conflicts can be resolved by mapping to a more appropriate variable.
• Verify that the mapped variable is appropriate for each role.
• Finally, ensure that all ‘required’ variables are present in the domain [6].
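A minimal sketch of fixing such a type mismatch (all dataset and variable names are assumptions): INPUT converts character to numeric and PUT converts numeric to character.

data vs_fixed;
   set vs_raw;                                /* hypothetical source data */
   vsstresn = input(vsorres, ?? best12.);     /* character result to numeric; ?? suppresses notes */
   visit_c  = strip(put(visitnum, best12.));  /* numeric code to character */
run;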
9) What do you know about the SDTM Implementation Guide? Have you used it, and if so, which version have you used?
The SDTM Implementation Guide provides documentation of the metadata (data about data) for the domain datasets, including the filenames, variable names, variable types, labels, etc. I have used SDTM Implementation Guide version 3.1.1.
10) Can you identify which variables we have to include in each domain?
A) SDTM Implementation Guide V3.1.1 specifies that each variable belongs to one of three categories:
REQUIRED – They must be included in the data set structure and should not have a missing value for any observation.
EXPECTED – These variables must be included in the data set; however it is permissible to have missing values.
PERMISSIBLE – These variables are not a required part of the domain and they should not be included in the data set structure if the information they were designed to contain was not collected.
11) Can you give some examples of mapping? [6]
Here are some examples of SDTM mapping issues:
• Character variables defined as numeric
• Numeric variables defined as character
• Variables collected without an obvious corresponding domain in the CDISC SDTM; these must go into SUPPQUAL
• Several corporate modules that map to one corresponding domain in CDISC SDTM
• Core SDTM being a subset of the existing corporate standards
• Vertical versus horizontal structure (e.g. Vitals); see the PROC TRANSPOSE sketch after this list
• Dates – combining dates and times; partial dates
• Data-collapsing issues, e.g. Adverse Events and Concomitant Medications
• Adverse Events maximum intensity
• Metadata needed for laboratory data standardization
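A minimal sketch of the vertical-versus-horizontal item, assuming a wide vitals dataset with one column per measurement; PROC TRANSPOSE stacks it into the one-test-per-record shape the VS domain uses (in a real conversion VSORRES would then be converted to character and further derivations would follow):

proc sort data=vitals_wide;
   by usubjid visitnum;          /* assumed key variables */
run;

proc transpose data=vitals_wide
               out=vitals_tall(rename=(_name_=vstestcd col1=vsorres));
   by usubjid visitnum;
   var sysbp diabp pulse;        /* assumed measurement columns */
run;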
12) Explain the process of SDTM mapping.
A list of basic variable mapping types is given below [4]; a combined sketch follows the list.
DIRECT: a CDM variable is copied directly to a domain variable without any changes other than assigning the CDISC standard label.
RENAME: only the variable name and label change; the contents remain the same.
STANDARDIZE: reported values are mapped to standard units or standard terminology.
REFORMAT: the actual value being represented does not change, only the format in which it is stored, such as converting a SAS date to an ISO 8601 character string.
COMBINING: two or more CDM variables are directly combined to form a single SDTM variable.
SPLITTING: a CDM variable is divided into two or more SDTM variables.
DERIVATION: a domain variable is created from a computation, algorithm, series of logic rules, or decoding using one or more CDM variables.
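A minimal combined sketch of several of these mapping types in one DATA step (source names such as cdm_ae, ae_text, ae_start_date, ae_start_time and rfstdt are assumptions; the E8601TM. format requires SAS 9.1.3 or later):

data ae_sdtm;
   set cdm_ae;
   length aestdtc $19;
   /* RENAME: same content, stored under the standard name */
   aeterm = ae_text;
   /* REFORMAT: numeric SAS date to an ISO 8601 character string */
   if not missing(ae_start_date) then
      aestdtc = put(ae_start_date, yymmdd10.);
   /* COMBINING: append the collected time, when present */
   if not missing(ae_start_time) then
      aestdtc = catx('T', aestdtc, put(ae_start_time, e8601tm8.));
   /* DERIVATION: SDTM study day relative to the first-dose date rfstdt */
   if not missing(rfstdt) and not missing(ae_start_date) then
      aestdy = ae_start_date - rfstdt + (ae_start_date >= rfstdt);
run;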
13) Can you explain ADaM and ADaM datasets? [7]
The Analysis Data Model describes the general structure, metadata, and content typically found in analysis datasets and their accompanying documentation. The three types of metadata associated with analysis datasets (analysis dataset metadata, analysis variable metadata, and analysis results metadata) are described, and examples are provided. (Source: CDISC Analysis Data Model, Version 2.0)
Analysis datasets (ADs) are typically developed from the collected clinical trial data and used to create statistical summaries of efficacy and safety data. These ADs are characterized by the creation of derived analysis variables and/or records. The derived data may represent a statistical calculation of an important outcome measure, such as change from baseline, or may represent the last observation for a subject while under therapy. As such, these datasets are one of the types of data sent to a regulatory agency such as the FDA.
The CDISC Analysis Data Model (ADaM) defines a standard for analysis datasets to be submitted to the regulatory agency. It makes clear the content, source, and quality of the datasets submitted in support of the statistical analysis performed by the sponsor.
In ADaM, the descriptions of the ADs build on the nomenclature of the SDTM, with the addition of the attributes, variables, and data structures needed for statistical analyses. Achieving the principle of clear and unambiguous communication relies on clear AD documentation, which provides the link between the general description of the analysis found in the protocol or statistical analysis plan and the source data.
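As a minimal BDS-style sketch of such a derivation (names such as lb_analysis, PARAMCD, AVAL and ABLFL follow common ADaM conventions, but the input dataset is an assumption), change from baseline could be computed like this:

proc sort data=lb_analysis;
   by usubjid paramcd adt;          /* baseline record assumed to sort first */
run;

data adlb;
   set lb_analysis;
   by usubjid paramcd;
   retain base;
   if first.paramcd then base = .;   /* reset per subject and parameter */
   if ablfl = 'Y' then base = aval;  /* capture the flagged baseline value */
   if ablfl ne 'Y' and not missing(base) then chg = aval - base;  /* change from baseline */
run;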
References:
1) http://support.sas.com/rnd/base/xmlengine/proccdisc/cdiscsdtm.html
2) http://www.fda.gov/oc/datacouncil/meetings/oliva.pdf
3) http://www.lexjansen.com/pharmasug/2005/fdacompliance/fc01.pdf
4) http://www2.sas.com/proceedings/forum2008/207-2008.pdf
5) http://analytics.ncsu.edu/sesug/2006/PO08_06.PDF
6) http://www.lexjansen.com/phuse/2005/cd/cd11.pdf
7) http://www.pharmasug.org/2005/FC03.pdf
contd.....................
1) What is CDISC?
A) CDISC stands for the Clinical Data Interchange Standards Consortium. It was formed to bring a great deal of efficiency to the entire drug development process by improving data quality and speeding up the process. To do that, CDISC developed a series of standards, which include the Operational Data Model (ODM), the Study Data Tabulation Model (SDTM) and the Analysis Data Model (ADaM).
2) Why are people talking so much about CDISC these days, and what advantages does it bring to the pharmaceutical industry?
A) Generally speaking, only about 30% of programming time is used to generate statistical results with SAS®; the rest is spent becoming familiar with the data structure, checking data accuracy, and tabulating/listing raw data and statistical results in certain formats. This non-statistical programming time will be significantly reduced after implementing the CDISC standards.
3) What challenges do you think you will face as a SAS programmer when you first implement CDISC standards in your company?
A) With the new requirements for electronic submission, CRT datasets need to conform to a set of standards that facilitate the review process; they are no longer created solely for the programmer's convenience. The SDS will be treated as the specification for the datasets to be submitted, and potentially as a reference for CRF design, so statistical programming may need to start from this common ground. All existing programs/macros may also need to be remapped to CDISC, so that submission information can be validated with the same tools a reviewer may use, accelerating the review process without providing unnecessary data, tables and listings. With the new requirements of electronic submission and CDISC implementation, understanding only SAS® may not be enough for the final deliverables. It is time to expand and enhance job skills in various directions so that SAS® programmers can keep a competitive advantage and continue to play a main role in both statistical analysis and reporting for drug development.
References:
1) PharmaSUG/2007/FC/FC05
2) PharmaSUG/2003/FDA Compliance/FDA055