
Custom Scripts

  Requires Decipher Cloud

1: Get files from upload folder [get-ftp]

From time to time we need to copy files from the upload directory (/home/jaminb/www/misc/int/upload) into another directory. Just run the script from the directory where you would like to copy the file. See below for usage instructions.

You can now copy and paste the links directly from email to copy files:

Example:

get-ftp https://v2.decipherinc.com/misc/int/upload/file%20name.txt

This will copy files from /www/misc/int/upload/, just like normal.

get-ftp <filename>

Copies <filename> from the upload directory
(/www/misc/int/upload) into whatever
directory you are currently in.

get-ftp <filename> <new filename>

Copies <filename>, renaming it to <new filename>,
from the upload directory (/www/misc/int/upload)
into whatever directory you are currently in.

Note: If your filename has spaces in it, quote the filename or escape it:
        get-ftp 'file name.txt'
        get-ftp file\ name.txt
        get-ftp https://v2.decipherinc.com/misc/int/upload/file%20name.txt

2: Replace All for Multiple Files

We all like to use the "replace all" function in NoteTab. Now there's a way to do it across multiple files in the shell. The command below lets you search for and replace text across multiple files in a folder. No longer do you have to change a link one by one across 25 video files. Big thanks to Erwin and Louis.

perl -i.bak -pe 's,from,to,gi' files*.txt

Basically, you run the command above in the folder containing the files you want to update. "from" specifies the current text and "to" is what you want to change it to; "files*.txt" are the files that you want to change. The files have to be plain text files. Also, running the command creates a .bak backup of each file updated. Let me know if you guys have any questions.

Also note that this is a case-insensitive search. If you want it to be case sensitive, change the "gi" part of the command to just "g". So the command would be:

perl -i.bak -pe 's,from,to,g' files*.txt

This will match the 'from' string exactly as you have specified.
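
A concrete (hypothetical) example: to repoint the link in 25 video pages from an old host to a new one, you might run something like:

perl -i.bak -pe 's,http://oldcdn.example.com,https://newcdn.example.com,g' video*.html

The old and new URLs and the video*.html pattern are just placeholders; substitute your own "from", "to", and file pattern.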

3: Finding files

You can search for files system-wide, or run this inside some directory to narrow your results:

find . -name "FILENAME" -print

Simply replace FILENAME with the file you are looking for. You can also use find . -iname "*something*" if you want to search case-insensitively (-iname instead of -name) for "something" anywhere within the filename.
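
For example, a case-insensitive search for any file with "respondent" anywhere in its name (the name here is just a placeholder) would be:

find . -iname "*respondent*" -print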

4: Creating email-send lists (bulk-prepare)

This command allows you to take tab-delimited files and create email-send lists with them. For more information, please go to bulk-prepare.

5: Generating data sets from the command line

You can generate data sets from the command line using the generate script. This allows you to create them in different formats and you can specify which variables you would like to appear in the data set. For more information, please go to generate.

6: Removing duplicates (deduping)

If you have 2 files that you need to dedupe against each other, you can use the dedupe command in the shell to do this. A detailed explanation of its usage can be found at dedupe.

7: merge-keywords

You have a main survey (survey) and sub-surveys survey/r1 and survey/r2 where you have done some keyword coding (and you only have buckets for those questions; the rest are blank). If you now do:

merge-keywords survey/r1 survey
merge-keywords survey/r2 survey

This will copy the bucket data for all the questions that have any in survey/r1 and survey/r2 on top of the data in survey.

Because any save from the keyword coder saves over all of the keyword coding data, no one must be editing the survey in the keyword coder at that point.

Also, make sure you take a backup of the file buckets.bin. merge-keywords will do that as well, writing to buckets.bin.bak.
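
If you want an extra copy of your own before running the merge, a minimal sketch (assuming buckets.bin lives in the destination survey directory) would be:

cp survey/buckets.bin survey/buckets.bin.manual-backup
merge-keywords survey/r1 survey
merge-keywords survey/r2 survey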

8: copy-text

copy-text is used to copy the alts from one survey into another. This is useful to get an English report for a translated survey, provided both surveys are the same, though the script has some error checking built-in. Use -o to overwrite any existing alts in the destination survey.

Syntax:

copy-text -o [source dir] [destination dir]

Example:

copy-text -o demo/recode demo/recode/de

9: Differences between two tab-delimited files

If you have two tab-delimited files which have the same structure but slightly different contents, you can see the differences by running tabdiff file1.txt file2.txt, which will display output such as:

     2 list '22' != '2'
     3 id 'xx' != ''

denoting that on line 2 of the file the list field is mismatched, and on line 3 the id field has different values.

10: itabcut

 Note: Obsoleted by "tabcut -i".

11: Extract/Reorder Tab-Delimited Fields (tabcut)

If you need to extract or reorder fields in a tab delimited file, you can use tabcut. For example:

tabcut list.txt source

will output only the source column of the list.txt file, while:

tabcut list.txt email source

would output the email column, then source.

The output goes to your terminal by default; if you want it in a file, use >, or use | to pipe it to less:

tabcut list.txt source > list-with-source.txt

By default this does not keep column headers. Use the --header flag to keep them.

tabcut --header list.txt source > list-with-source.txt
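
You can also pipe the output to less for a quick look instead of writing a file (the column names here are just examples):

tabcut --header list.txt email source | less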

You can also select your fields interactively using -i. Note: an output file must be given.

tabcut --header -i list.txt

1)  user_id
2)  segment
3)  user_slctd id
4)  user name
5)  email
6)  purchases
7)  gmb_usd
8)  items_sold
9)  gmv_usd
10)  source
11)  list
Which field(s) would you like? (ex: 1-11)

 Note: If you choose fields which are in sequential order (1,2,3,4), then the unix utility "cut" is used, which keeps the headers by default.

12: Encoding & Decoding files

The first thing you need to do is find out what your target encoding is. UTF-8 is useful for most international files, i.e. ones that could contain characters the English language doesn't have; if some parts of a file look odd in your editor or in the console, it should probably be in UTF-8. The other encoding we commonly use is iso-8859-1.

Use "guess" to try to figure out the encoding of a file. If the file is not the encoding you would like, use recode,

$ guess list.txt
guessed:  (0, 'iso-8859-1')
$ recode iso-8859-1..utf-8 list.txt
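
recode uses the same from..to form in either direction, so converting a UTF-8 file back to iso-8859-1 would look like:

$ recode utf-8..iso-8859-1 list.txt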

13: Converting between excel and tab-delimited files

There are two complementary scripts that provide the ability to convert from excel to tab-delimited files and from tab-delimited to excel files.

13.1:  xls2td

xls2td will convert an xls or xlsx file into a tab-delimited file or files. Every worksheet will become a separate tab-delimited file. xls2td takes the name of the Excel file you intend to convert. Run xls2td to see the available command-line options.

For example:

xls2td original.xls

or for XLSX/Excel 2007:

xls2td original.xlsx

The command above will create one tab-delimited file for every worksheet with content in the original.xls(x) file.

If you want to know what those files will be named before you run xls2td to produce them, you can run

xls2td -l original.xls

The command above will print a list of all the files that would be created if you ran xls2td on original.xls(x), or you can use it in combination with -s to print a single sheet.

If your output has only one sheet, you can specify the filename to write with -ofilename. The special option -o- stands for "write to stdout", so you can use xls2td in a unix pipeline. This only works when there is one worksheet in the output.
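
For example, assuming original.xls contains only a single worksheet, you could name the output file explicitly or stream it into a pipeline:

xls2td -olist.txt original.xls
xls2td -o- original.xls | less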

13.2: td2xls

As you may have already guessed, td2xls is the xls2td process in reverse. td2xls will convert a tab-delimited file or files into an excel file, with each tab-delimited file becoming its own worksheet within the new excel file. td2xls takes at least two arguments: the name of the new excel file (with the extension .xls) and one or more tab-delimited text files as sources.

For example:

td2xls mynewexcelfile worksheet1.txt worksheet2.txt worksheet3.txt

The example above will create a file called "mynewexcelfile.xls" that has worksheets called worksheet1, worksheet2, and worksheet3. You can also reference the same worksheet source more than once if you want to start with duplicate worksheets in the same excel file.

Some of your original tab-delimited text files may have interesting or non-Latin characters that cause encoding issues. In this case you can specify the encoding from which td2xls will be decoding with the encoding option:

td2xls --encoding 'elbonianate-5' mynewexcelfile elbonianEmailList.txt

You can also just use -e.

td2xls -e 'elbonianate-5' mynewexcelfile elbonianEmailList.txt

td2xls will make an attempt to notice when columns in your text file look like numbers, and make them numbers in the Excel sheet it produces. If you would like it to treat them as strings instead, try --strings-only as an option on the command line.
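
For example, to keep ID-like columns from being turned into numbers (file names here are hypothetical):

td2xls --strings-only mynewexcelfile idList.txt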

You can now add styles to individual rows.

td2xls --font=[bold,italic,underline] --font-color=[color] --highlight-color=[color] <outputxls> <input tab delimited file>

Valid colors: black, white, red, green, blue, yellow, magenta, and cyan (a valid Excel color palette index number can be used instead of a color name).

td2xls --font=bold --font-color=red --highlight-color=yellow out.xls in.txt:1,3 # adds the style to row 1 and 3

td2xls --style="font: color red bold on" out.xls in.txt #in a string format

td2xls --style="font: height 400, name Arial Black, color blue, bold on; align: wrap on, vert centre, horiz center; borders: top double, bottom double, left double, right double" in.xls out.txt:1 #more complex style

14: Random Lines

This will randomize the order of the lines in a file.

rl list.txt > newlist.txt

15: Sending a single email

At times, for example when you want to automatically notify someone that a particular task is done, you want to send only a single email, to a handful of fixed recipients. mailimp has been designed for this purpose.

mailimp is a simple command-line program for generating emails. It allows you to set the "from" address and add attachments, making it superior to the command-line "mail" program. Sender, recipients, subject, extra headers, and attachments are all specified with command-line arguments. The body of the email is read from stdin.

Examples:

echo "<html><body><h1>This is an h1 with text</h1><br/>Such foobar</body></html>" | mailimp -s "HTML email" -Z user@decipherinc.com
  • Send an email notification to your account manager and PM
    echo "Horton heard a who today."  | mailimp -s "re: horton" acc_manager@decipherinc.com proj_manager@decipherinc.com
    
  • Send an email notification to your account manager, from a different user. (Shows a different way to pipe stdin, also.)
    mailimp -s "re: this job" -f otheruser@decipherinc.com acc_manager@decipherinc.com << EOF
    This job has been completed.
    EOF
    
  • Send an email with an attachment
    echo "Your XLS file is attached." | mailimp -s "re: xls file" -F newList.xls user@decipherinc.com
    
  • Send an email with HTML markup
  • Specify custom headers
    echo "Beware! Snodgrass." | mailimp -s "re: snodgrass" -H X-Ninjas=deadly user@decipherinc.com
    

mailimp normally prints a message saying how many recipients received the email; use -q to suppress all normal output.
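
For example, -q is handy at the end of an automated job where you only want the notification itself (addresses are placeholders):

echo "Nightly export finished." | mailimp -q -s "export done" user@decipherinc.com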

See mailimp --help for the full list of options.

16: Dupe-checking script

This script can check for duplicate answers/expressions for any number of surveys and any number of expressions. The expression can be any Python code, so this can be used to ask some complex data questions.

Syntax: dupecheck -s survey1 -s survey2 -s survey3 -e expr1 -e expr2

Example:

dupecheck -s demo/survey1 -s demo/survey2 -e "q1.val.lower() + q1.val.lower()" -e "q3.val[[5]].lower()"

17: Excel variable condition logic import and export

With the xlscond script, you can import and export excel variable condition logic.

Syntax: xlscond [options] <survey path> <XLS file>

The first step is to export a conditions excel file.

xlscond -e demo/survey conditions.xls
xlscond demo/survey conditions.xls    #also exports
xlscond -e . conditions.xls           #if you are in the survey folder

The conditions.xls file will have the headers label, title, cond, rowCond, colCond, choiceCond, and sbase.

Please note that changes in the title column will not be imported into the survey; it is for reference only.

This script does not check the validity of the logic; it can only be tested by running the survey. So be careful when adding conditions and make sure you have entered the right variables.

After making the changes, you can import them by running:

xlscond -i demo/survey conditions.xls
xlscond -i . conditions.xls    #if you are in the survey folder

You can import and export XLS files from any location; it does not have to be the survey folder.
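
For example (paths are hypothetical), exporting to and importing from a folder outside the survey directory:

xlscond -e demo/survey ~/conditions/demo-conditions.xls
xlscond -i demo/survey ~/conditions/demo-conditions.xls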

18: coladd

This tool adds dynamic or static variables to tab-delimited files, just like bulk-prepare. You can specify logic in an addcolumn.py file, just like in a prepare.py.

  • $ coladd -v sampleFile.txt:list=1:batch=2:co=us
    

This would output the following:

 

source       email                    user_name          user_slctd_id   list  batch  co
iM4xkETQ0F   someone@aol.com          Some Name          someone123      1     2      us
8vYKtSwnrr   user@hotmail.com         John Doe           jnamed0ne       1     2      us
EQgwNxbLHM   herro@aol.com            Herro              hi_there3       1     2      us
tR8x08GbY7   contact@surfbest.net     C. Guy             c_surfsup       1     2      us
YoPgsSMTcz   name@sirruss.com         Y. S. Serious      the_joker_xx    1     2      us

18.1:  addcolumn.py

  • By creating an addcolumn.py file, you can utilize conditional record processing and create new variables using Python logic. An addcolumn.py file is not mandatory.

$ coladd list.txt

An example addcolumn.py:

    #ADDS CONDITIONAL RECORD PROCESSING.  IF top_buyer_bucket == '', THEN THAT RECORD IS SKIPPED.
    assert top_buyer_bucket != ''

    #WE CAN CHECK THE VALUE OF VARIABLES IN THE FILE(S) YOU ARE PROCESSING AND CREATE NEW VARIABLES BASED ON THOSE VALUES.
    if top_buyer_bucket < 100:
        #IF THE VALUE OF top_buyer_bucket IS LESS THAN 100, A NEW VARIABLE IS CREATED WITH A VALUE OF 1
        row.list = 1
    else:
        #IF THE VALUE OF top_buyer_bucket IS GREATER THAN OR EQUAL TO 100, A NEW VARIABLE IS CREATED WITH A VALUE OF 2
        row.list = 2

19: Changing a User's v2 password from the command line

Note:  Users should be encouraged to use the "reset password" link to change their password when they forget it. This is to be used only when reset password has failed.

here reset-v2-password <email or shared user name>

After running this command, you will be prompted for a new password.

20: Listing Selfserve Client Folders from the Command Line

The ccd command can be used to switch to a client's selfserve folder.
For example:

$ ccd jwt

or

$ ccd group

will list and select any client with a “group” in its name. ccd can also accept a URL to a survey or old report.

21: DCM Survey Code Generator

dcmbuilder is a DCM (Discrete Choice Model) trade-off question code generator script.

It reads in the DCM data and rules from an Excel (XLS) file and outputs the generated survey code to stdout (the screen). The generated code includes the survey labels, plan, question macro, and macro calls. The programmer is responsible for merging the DCM code into the survey.

dcmbuilder --xlspath=<XLS PATH> --label=<QUESTION LABEL> --title=<QUESTION TITLE> --comment=<QUESTION COMMENT>

--xlspath

  • Path to the XLS file that contains the Design and Levels worksheets. Mandatory.

--label

  • Base question label. Ex. 'q1' will help create question labels such as q1a, q1b, q1c... Mandatory.

--title

  • Question title. Used in the <title></title> tag. Optional.

--comment

  • Question comment. Used in the <comment></comment> tag. Optional.

The XLS file must contain two worksheets: Design and Levels.

  • Design columns:
    • A=Version, B=Scenario, C=Alternative, D+=the question options (as found in Levels). Row 1 is expected to contain the worksheet headers and will be skipped.
  • Levels columns:
    • A=Attribute, B+=levels (the various options that can be used for that attribute). Row 1 is expected to contain the worksheet headers and will be skipped.

The generated DCM macro is a fairly generic representation of the question layout. The programmers can modify the generated code to suit their needs.

Example:

scripts/tradeoff/dcmbuilder.py --xlspath='../../Documents/DCM Example.xls' --label=q1 --title='This is my question title' --comment='This is my question comment'

Produces 3 sections of code:

1. Resources

<!-- Start of DCM code -->
<!-- Make sure to use compat=7 or higher -->

<res label="q1labela">Reporting style</res>
<res label="q1labelb">Main Cover, Section A</res>
<res label="q1labelc">Inclusion of AP/wire stories</res>
<res label="q1labeld">Local News Coverage</res>
<res label="q1labele">Editorial/Opinion</res>
<res label="q1labelf">Sports News Coverage</res>
<res label="q1labelg">Religious Focus</res>
<res label="q1labelh">LDS Church - General News Coverage</res>
<res label="q1labeli">LDS Feature Coverage</res>
<res label="q1labelj">Controversial LDS Issues</res>
<res label="q1labelk">Special Sections</res>
<res label="q1labell">Special Offers</res>
<res label="q1labelm">Deseret News Online</res>
<res label="q1labeln">Church News (Print)</res>
<res label="q1labelo">Church News (Online)</res>
<res label="q1a1">"I want a quick &amp; broad summary of the daily news."</res>
<res label="q1a2">"I want an overview of the daily news, with more depth and context."</res>
<res label="q1a3">"I want to know how the news impacts/relates directly to me and my neighborhood."</res>
<res label="q1b1">As-is</res>
<res label="q1b2">More national/international coverage</res>
<res label="q1b3">More local coverage</res>
<res label="q1b4">More human interest stories</res>
<res label="q1b5">More LDS church news</res>
<res label="q1c1">As-is</res>
<res label="q1c2">More AP/wire articles</res>
<res label="q1c3">Fewer AP/wire articles</res>
<res label="q1d1">As-is</res>
<res label="q1d2">More local coverage with increased Salt Lake focus</res>
<res label="q1d3">More local coverage with increased Utah and neighboring County focus</res>
<res label="q1d4">More Wasatch Front coverage rather than entire state</res>
<res label="q1d5">More coverage but from community writers and contributors</res>
<res label="q1e1">As is</res>
<res label="q1e2">Same topics but with greater variety of perspectives</res>
<res label="q1e3">Broader range of topics with current mix of perspectives</res>
<res label="q1e4">Both broader range of topics and perspectives</res>
<res label="q1f1">As-is</res>
<res label="q1f2">More BYU coverage</res>
<res label="q1f3">More U of U coverage</res>
<res label="q1f4">More national sports  coverage</res>
<res label="q1f5">More local high school sports coverage</res>
<res label="q1f6">More Utah Jazz coverage</res>
<res label="q1g1">As-is</res>
<res label="q1g2">More LDS focus</res>
<res label="q1g3">Less LDS focus</res>
<res label="q1g4">More links to non-DMN web articles and sites with LDS news</res>
<res label="q1g5">More LDS perspectives on news stories throughout paper</res>
<res label="q1h1">As-is</res>
<res label="q1h2">Less LDS Church coverage</res>
<res label="q1h3">More on how the nation and others view the Church</res>
<res label="q1h4">More on how to bridge the understanding gap with non-LDS (Church doctrine, policy, history, etc.)</res>
<res label="q1h5">More stories on successful interactions (e.g. joint relief efforts) with other religions</res>
<res label="q1h6">More details on upcoming church events locally and nationally</res>
<res label="q1h7">Daily excerpts of General Authorities' talks around the world</res>
<res label="q1i1">As-is</res>
<res label="q1i2">Interviews with prominent LDS members  on experiences being LDS and in the public eye</res>
<res label="q1i3">More profiles and columns from non-LDS writers: stories &amp; experiences on not being LDS and living in Utah (humorously or tastefully done)</res>
<res label="q1i4">Regular columns or blogs from prominent LDS figures on their life experiences and viewpoints</res>
<res label="q1j1">As-is</res>
<res label="q1j2">More coverage of controversial issues involving LDS Church</res>
<res label="q1j3">Less coverage of controversial issues involving LDS Church</res>
<res label="q1k1">As-is</res>
<res label="q1k2">Parenting:
Guidance on helping parents teach moral values, address youth challenges (drugs, sex, dress, etc.)</res>
<res label="q1k3">Kids:
Including games, art projects, children's stories, jokes and riddles, etc.</res>
<res label="q1k4">Inspirational Stories:
Overcoming challenges, making the world a "better place", profiles of local/LDS heroes</res>
<res label="q1k5">Religious Outreach:
Provide specific information to help explain non-LDS misconceptions, show how to engage others with opposing/erroneous viewpoints, success stories of understanding</res>
<res label="q1k6">Week in Review:
Summary of recent headline articles over the past few days to catch up on news</res>
<res label="q1k7">Religion in the Home:
Guidance on helping parents teach basic religious values and ideals in the home</res>
<res label="q1l1">None</res>
<res label="q1l2">Daily News E-mail</res>
<res label="q1l3">Free subscription to LDS library</res>
<res label="q1m1">As-is</res>
<res label="q1m2">Improve the classified section (similar to KSL)</res>
<res label="q1m3">More cross-references in the print news to easily follow stories in-depth online</res>
<res label="q1m4">Improve the website's search engine</res>
<res label="q1m5">Improved ad placement to reduce visual clutter</res>
<res label="q1m6">Improved layout</res>
<res label="q1m7">Add wikis or forums (interactive; moderated, more formal than current comments)</res>
<res label="q1n1">As-is; Delivery with Saturday DMN paper</res>
<res label="q1n2">Delivery with Sunday DMN paper</res>
<res label="q1n3">Pick-up at local convenience stores (no DMN subscription required)</res>
<res label="q1n4">Newspaper dispenser (no DMN subscription required)</res>
<res label="q1n5">Weekly by mail; must sign up (no DMN subscription required)</res>
<res label="q1n6">Weekly by mail; free with Ensign subscription (no DMN subscription required)</res>
<res label="q1o1">As is with weekly updates, access for DMN subscribers</res>
<res label="q1o2">Weekly updates, no DMN subscription login requirement</res>
<res label="q1o3">Daily updates, access for DMN subscribers</res>
<res label="q1o4">Daily updates, no DMN subscription login requirement</res>
<exec>
dcm = Struct()
dcm.q1_labels = ['', res.q1labela, res.q1labelb, res.q1labelc, res.q1labeld, res.q1labele, res.q1labelf, res.q1labelg, res.q1labelh, res.q1labeli, res.q1labelj, res.q1labelk, res.q1labell, res.q1labelm, res.q1labeln, res.q1labelo]
dcm.q1_attributes_a = ['', res.q1a1, res.q1a2, res.q1a3]
dcm.q1_attributes_b = ['', res.q1b1, res.q1b2, res.q1b3, res.q1b4, res.q1b5]
dcm.q1_attributes_c = ['', res.q1c1, res.q1c2, res.q1c3]
dcm.q1_attributes_d = ['', res.q1d1, res.q1d2, res.q1d3, res.q1d4, res.q1d5]
dcm.q1_attributes_e = ['', res.q1e1, res.q1e2, res.q1e3, res.q1e4]
dcm.q1_attributes_f = ['', res.q1f1, res.q1f2, res.q1f3, res.q1f4, res.q1f5, res.q1f6]
dcm.q1_attributes_g = ['', res.q1g1, res.q1g2, res.q1g3, res.q1g4, res.q1g5]
dcm.q1_attributes_h = ['', res.q1h1, res.q1h2, res.q1h3, res.q1h4, res.q1h5, res.q1h6, res.q1h7]
dcm.q1_attributes_i = ['', res.q1i1, res.q1i2, res.q1i3, res.q1i4]
dcm.q1_attributes_j = ['', res.q1j1, res.q1j2, res.q1j3]
dcm.q1_attributes_k = ['', res.q1k1, res.q1k2, res.q1k3, res.q1k4, res.q1k5, res.q1k6, res.q1k7]
dcm.q1_attributes_l = ['', res.q1l1, res.q1l2, res.q1l3]
dcm.q1_attributes_m = ['', res.q1m1, res.q1m2, res.q1m3, res.q1m4, res.q1m5, res.q1m6, res.q1m7]
dcm.q1_attributes_n = ['', res.q1n1, res.q1n2, res.q1n3, res.q1n4, res.q1n5, res.q1n6]
dcm.q1_attributes_o = ['', res.q1o1, res.q1o2, res.q1o3, res.q1o4]
</exec>

2. Plan

<exec>
dcm.plan = {1: {1: {1: [1, 3, 2, 3, 1, 6, 1, 4, 2, 1, 5, 1, 7, 3, 1],
         2: [2, 4, 3, 4, 2, 5, 5, 2, 1, 2, 4, 2, 6, 5, 3]},
     2: {1: [2, 5, 3, 3, 1, 2, 5, 5, 4, 3, 4, 3, 3, 1, 4],
         2: [3, 4, 2, 5, 4, 1, 4, 3, 2, 2, 7, 1, 5, 5, 3]},
     3: {1: [1, 5, 3, 3, 2, 1, 4, 7, 4, 1, 7, 3, 1, 5, 2],
         2: [2, 4, 2, 1, 3, 5, 5, 1, 3, 3, 1, 1, 7, 2, 1]},
     4: {1: [1, 5, 3, 2, 4, 2, 4, 2, 3, 3, 5, 1, 2, 1, 1],
         2: [3, 1, 1, 1, 1, 6, 3, 5, 4, 2, 7, 2, 4, 4, 4]},
     5: {1: [3, 2, 2, 2, 3, 2, 1, 7, 2, 1, 6, 1, 4, 6, 3],
         2: [1, 3, 3, 5, 2, 5, 2, 5, 3, 2, 3, 3, 5, 6, 4]},
     6: {1: [3, 5, 1, 4, 4, 5, 2, 6, 2, 1, 1, 3, 1, 5, 4],
         2: [2, 4, 3, 3, 3, 2, 3, 4, 3, 2, 7, 2, 2, 4, 1]},
     7: {1: [2, 4, 3, 1, 1, 4, 5, 4, 1, 1, 3, 1, 1, 6, 1],
         2: [3, 3, 1, 3, 4, 3, 3, 7, 2, 2, 4, 2, 5, 3, 4]},
     8: {1: [2, 3, 3, 2, 1, 5, 4, 7, 3, 2, 2, 1, 6, 4, 4],
         2: [1, 4, 1, 5, 3, 3, 3, 1, 4, 3, 3, 3, 4, 2, 3]},
     9: {1: [1, 2, 2, 5, 1, 4, 2, 3, 4, 1, 1, 2, 3, 4, 2],
         2: [3, 1, 3, 3, 4, 5, 4, 4, 1, 3, 6, 3, 1, 2, 2]},
     10: {1: [1, 3, 2, 1, 1, 3, 1, 2, 1, 2, 1, 3, 1, 1, 2],
          2: [2, 1, 1, 4, 2, 1, 2, 7, 3, 3, 5, 2, 4, 6, 1]},
     11: {1: [2, 2, 2, 2, 4, 6, 2, 5, 1, 2, 2, 1, 2, 2, 3],
          2: [1, 1, 3, 3, 3, 4, 1, 1, 3, 3, 7, 2, 6, 6, 4]},
     12: {1: [3, 4, 1, 3, 2, 4, 2, 1, 2, 3, 2, 3, 5, 1, 2],
          2: [1, 5, 2, 4, 1, 3, 4, 5, 3, 1, 3, 2, 6, 5, 1]}},
 2: {1: {1: [1, 3, 2, 1, 3, 5, 5, 6, 4, 3, 4, 3, 2, 4, 3],
         2: [3, 1, 1, 4, 2, 3, 1, 4, 1, 2, 1, 1, 7, 1, 4]},
     2: {1: [1, 1, 1, 1, 4, 5, 5, 1, 2, 1, 6, 1, 1, 3, 4],
         2: [3, 4, 2, 5, 1, 4, 4, 2, 1, 3, 4, 2, 4, 5, 4]},
     3: {1: [1, 1, 3, 5, 2, 2, 2, 3, 4, 3, 2, 1, 1, 5, 1],
         2: [2, 5, 1, 3, 4, 1, 5, 5, 1, 1, 1, 2, 2, 2, 4]},
     4: {1: [3, 2, 3, 4, 3, 1, 1, 1, 2, 2, 7, 3, 1, 6, 2],
         2: [1, 3, 1, 3, 1, 3, 5, 7, 1, 1, 3, 1, 3, 5, 3]},
     5: {1: [2, 2, 3, 5, 2, 4, 2, 6, 3, 1, 6, 2, 6, 1, 2],
         2: [1, 5, 1, 2, 1, 1, 5, 3, 2, 2, 2, 3, 7, 6, 4]},
     6: {1: [1, 2, 2, 1, 2, 1, 1, 4, 2, 1, 2, 3, 3, 4, 1],
         2: [2, 1, 3, 2, 3, 4, 5, 7, 4, 2, 1, 1, 5, 3, 1]},
     7: {1: [2, 4, 2, 2, 1, 3, 1, 5, 3, 1, 5, 1, 1, 5, 2],
         2: [3, 2, 1, 4, 2, 2, 4, 4, 4, 3, 3, 3, 7, 1, 3]},
     8: {1: [1, 5, 1, 2, 2, 2, 3, 6, 1, 2, 7, 1, 5, 2, 1],
         2: [2, 3, 3, 5, 4, 4, 4, 1, 2, 1, 2, 2, 2, 1, 4]},
     9: {1: [3, 4, 1, 2, 3, 1, 5, 6, 2, 3, 6, 2, 3, 5, 1],
         2: [2, 2, 2, 4, 4, 3, 2, 3, 4, 2, 5, 3, 4, 3, 3]},
     10: {1: [2, 1, 3, 5, 1, 1, 3, 1, 4, 2, 5, 3, 3, 6, 3],
          2: [3, 5, 2, 3, 2, 3, 1, 3, 2, 3, 4, 1, 6, 6, 1]},
     11: {1: [2, 5, 2, 5, 3, 5, 4, 4, 1, 2, 6, 2, 7, 4, 2],
          2: [1, 4, 1, 4, 4, 2, 1, 2, 3, 1, 2, 1, 2, 3, 2]},
     12: {1: [1, 4, 2, 4, 2, 5, 3, 3, 3, 1, 1, 2, 2, 3, 1],
          2: [2, 3, 3, 2, 4, 3, 1, 6, 4, 3, 3, 1, 7, 6, 2]}}}
</exec>

3. Macro

@define q1DCM label=xx version=xx scenario=xx
<exec>
p.versionPipe = repr($(version))
p.scenarioPipe = repr($(scenario))
</exec>
<comment label="com_q1$(label)" where="survey" style="dev" cond="gv.isStaff()">This is Version: [versionPipe] Scenario [scenarioPipe]<br /><br /></comment>
<radio label="q1$(label)" style="dcm" sbase="*">
  <title>This is my question title</title>
  <comment>This is my question comment</comment>
<exec>
p.q1_alts = dcm.plan[$(version)][$(scenario)]
p.q1_html = """
<table class="dcm">
  <tr>
    <td />
    <td>Option 1</td>
    <td>Option 2</td>
  </tr>
  <tr>
    <td class="dcmFirstCol">${dcm.q1_labels[1]}</td>
    <td>${dcm.q1_attributes_a[p.q1_alts[1][0]]}</td>
    <td>${dcm.q1_attributes_a[p.q1_alts[2][0]]}</td>
  </tr>
  <tr>
    <td class="dcmFirstCol">${dcm.q1_labels[2]}</td>
    <td>${dcm.q1_attributes_b[p.q1_alts[1][1]]}</td>
    <td>${dcm.q1_attributes_b[p.q1_alts[2][1]]}</td>
  </tr>
  <tr>
    <td class="dcmFirstCol">${dcm.q1_labels[3]}</td>
    <td>${dcm.q1_attributes_c[p.q1_alts[1][2]]}</td>
    <td>${dcm.q1_attributes_c[p.q1_alts[2][2]]}</td>
  </tr>
  <tr>
    <td class="dcmFirstCol">${dcm.q1_labels[4]}</td>
    <td>${dcm.q1_attributes_d[p.q1_alts[1][3]]}</td>
    <td>${dcm.q1_attributes_d[p.q1_alts[2][3]]}</td>
  </tr>
  <tr>
    <td class="dcmFirstCol">${dcm.q1_labels[5]}</td>
    <td>${dcm.q1_attributes_e[p.q1_alts[1][4]]}</td>
    <td>${dcm.q1_attributes_e[p.q1_alts[2][4]]}</td>
  </tr>
  <tr>
    <td class="dcmFirstCol">${dcm.q1_labels[6]}</td>
    <td>${dcm.q1_attributes_f[p.q1_alts[1][5]]}</td>
    <td>${dcm.q1_attributes_f[p.q1_alts[2][5]]}</td>
  </tr>
  <tr>
    <td class="dcmFirstCol">${dcm.q1_labels[7]}</td>
    <td>${dcm.q1_attributes_g[p.q1_alts[1][6]]}</td>
    <td>${dcm.q1_attributes_g[p.q1_alts[2][6]]}</td>
  </tr>
  <tr>
    <td class="dcmFirstCol">${dcm.q1_labels[8]}</td>
    <td>${dcm.q1_attributes_h[p.q1_alts[1][7]]}</td>
    <td>${dcm.q1_attributes_h[p.q1_alts[2][7]]}</td>
  </tr>
  <tr>
    <td class="dcmFirstCol">${dcm.q1_labels[9]}</td>
    <td>${dcm.q1_attributes_i[p.q1_alts[1][8]]}</td>
    <td>${dcm.q1_attributes_i[p.q1_alts[2][8]]}</td>
  </tr>
  <tr>
    <td class="dcmFirstCol">${dcm.q1_labels[10]}</td>
    <td>${dcm.q1_attributes_j[p.q1_alts[1][9]]}</td>
    <td>${dcm.q1_attributes_j[p.q1_alts[2][9]]}</td>
  </tr>
  <tr>
    <td class="dcmFirstCol">${dcm.q1_labels[11]}</td>
    <td>${dcm.q1_attributes_k[p.q1_alts[1][10]]}</td>
    <td>${dcm.q1_attributes_k[p.q1_alts[2][10]]}</td>
  </tr>
  <tr>
    <td class="dcmFirstCol">${dcm.q1_labels[12]}</td>
    <td>${dcm.q1_attributes_l[p.q1_alts[1][11]]}</td>
    <td>${dcm.q1_attributes_l[p.q1_alts[2][11]]}</td>
  </tr>
  <tr>
    <td class="dcmFirstCol">${dcm.q1_labels[13]}</td>
    <td>${dcm.q1_attributes_m[p.q1_alts[1][12]]}</td>
    <td>${dcm.q1_attributes_m[p.q1_alts[2][12]]}</td>
  </tr>
  <tr>
    <td class="dcmFirstCol">${dcm.q1_labels[14]}</td>
    <td>${dcm.q1_attributes_n[p.q1_alts[1][13]]}</td>
    <td>${dcm.q1_attributes_n[p.q1_alts[2][13]]}</td>
  </tr>
  <tr>
    <td class="dcmFirstCol">${dcm.q1_labels[15]}</td>
    <td>${dcm.q1_attributes_o[p.q1_alts[1][14]]}</td>
    <td>${dcm.q1_attributes_o[p.q1_alts[2][14]]}</td>
  </tr>
</table>
"""
</exec>
  <group label="g1">[q1_html]</group>
  <col label="c0" groups="g1">Option 1</col>
  <col label="c1" groups="g1">Option 2</col>
</radio>

<suspend />
@end

@q1DCM label=a version=1 scenario=1
@q1DCM label=b version=1 scenario=2
@q1DCM label=c version=1 scenario=3
@q1DCM label=d version=1 scenario=4
@q1DCM label=e version=1 scenario=5
@q1DCM label=f version=1 scenario=6
@q1DCM label=g version=1 scenario=7
@q1DCM label=h version=1 scenario=8
@q1DCM label=i version=1 scenario=9
@q1DCM label=j version=1 scenario=10
@q1DCM label=k version=1 scenario=11
@q1DCM label=l version=1 scenario=12

<!-- End of DCM code -->

22: Additional Scripts

You can find some useful V2 functions here.

23: Program to generate variables.xls

The program get-variables can be used to generate a variables.xls file. This is mostly not necessary, as you can download it from the drop-down menu on the report page (Excel Variables). Syntax:

get-variables <survey path> [excel] [pure]

You should always specify the excel option; otherwise a variables.txt file will be created, which we generally do not use.

The pure option generates an XLS file which has ONLY entries for variables, and none for questions.
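
For example, assuming a survey at demo/survey, following the syntax above:

get-variables demo/survey excel
get-variables demo/survey excel pure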

If you run get-variables and a variables.xls file already exists in that survey directory, it will update the variables.xls file if any new questions have been added. The new question variables will be appended.

24: Generating a LIVE Survey Test Link

Released in M19, this feature lets you generate a test link to your LIVE survey that allows you to test the survey while remaining logged out. Once created, the test link remains active for 1 week by default, and you will not be able to submit any data using this URL.

Use the following command to generate the test link:

here dev/maketoken .

That will generate a link similar to what is shown below:

http://v2.decipherinc.com/survey/selfserve/9d3/140227?v2test=1393979476,c2f6377402dd89f96589c80b4cf5e82b

You can use the -d option to the maketoken script if you want to specify the number of days until the test link expires.

Example:

here dev/maketoken -d 10 .

In this case the link will expire in 10 days instead of 7.

25: Reloading a Survey [reload]

When making changes to your survey outside the survey.xml file (e.g., in the nstyles), you may need to force a reload of the survey.

There are two ways to accomplish this:

[user@host] touch survey.xml

This updates the modification time of survey.xml so that the survey is reloaded the next time it is loaded/refreshed in the web browser.

The below alternative forces a reload despite an unchanged survey.xml:

[user@host] reload <path>

26: Expand the Survey XML [save]

Many attributes and elements that actually make up the survey object are absent from the survey.xml file in your project directory. The save script exports the expanded survey object as XML so you can see, among other things, expanded loops, various attributes, and elements that are added to the survey dynamically (e.g., various virtual questions.)

[user@host] save -tc <path>

-t forceTransient (i.e., dynamically-created elements)
-c forceCleanXml (i.e., remove survey attributes, blackbox code, builder IDs)

27: Generate State (Partial) Data [state-dump]

Dump state (partial) data from a survey for a single uuid, for some other extra variable, or for all respondents:

[user@host] here dev/state-dump . wmeqf4bukg7v0tcz
[user@host] here dev/state-dump . ipAddress=184.106.203.86
[user@host] here dev/state-dump . all

28: Generate a Dummy Sample File of Random(ish) Data [generate-random]

Allows you to create dummy sample files of random(ish) data.

here dev/generate-random -n 100 source email name segment:1,2,3 > out.txt

The script lacks a built-in "help"; the options are described below.

"source", "email", and "name" are special fields which will get random source IDs, email addresses, and first & last names.

-n  Number of records to generate

29:  Listing/Resetting autosave links

The autosave command allows you to list or delete autosave sessions. This is useful in the event you need a particular respondent or a list of respondents to start from the beginning of the survey without having to click the "start over" button.

Usage: autosave <path> <list|delete> [deletion file]

Using list will list out all available keys to resume surveys.
Using delete, you can specify a file of key:values to delete. This will force respondents to start from the beginning of the survey. Enter each key:value on a separate line.

Tip: You can quickly force all users to reset by generating the list output and then running the delete command on that file.

Example:

autosave . list > allkeys.txt
autosave . delete allkeys.txt

 
