I haven’t posted a blog entry in a while, due mostly to spring break and an excess of midterms; after this Monday, things will hopefully die down and I’ll be able to focus on the current issues rather than on three exams.
Before break, I met with Presley to review the survey representation code currently up on GitHub. We discussed with Emma the question of whether the library should output JSON or a CSV, and determined that emitting the JSON directly bypasses a large portion of the SurveyMan Java backend (I’m not sure of the terminology), so it might be better to generate some different intermediate form to pass to the Java side. I’m leaving it as is for now until we figure out something better. Emma created an issue before I left for break about calling the Java program directly from the Python code, but we need to fix the problems I’ve been having with SurveyMan on Windows before I can do this.
Presley also suggested that I create an untested branch to push to, but Emma said it was probably unnecessary, since only tested/working changes would be pulled into the main project from my fork anyway. I had been holding off on pushing anything until I figured out how to create an untested branch, but after discussing it with Emma I just committed and pushed my changes to the master branch.
My most recent commits include an exceptions module that defines exceptions for bad branching and for references to questions/options/blocks that don’t exist. I implemented a first pass at a validate method for the survey that checks that every block referenced in the constraints actually exists in the survey; I haven’t implemented checks for backwards branching yet. In a few places where I had originally been printing out error messages, I now throw exceptions instead. I hope to get started next week on the test module to verify that all of this actually works correctly.
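To give a feel for the idea, here is a minimal sketch of the exceptions-plus-validate pattern. The class and attribute names are stand-ins, not the actual code in my fork:

```python
class SurveyException(Exception):
    """Base class for survey validation errors."""

class NoSuchBlockException(SurveyException):
    """Raised when a constraint branches to a block that isn't in the survey."""

class Survey:
    def __init__(self, blocks, constraints):
        self.blocks = blocks            # list of block ids in the survey
        self.constraints = constraints  # list of (question_id, target_block_id)

    def validate(self):
        # Every branch target must name a real block; a backwards-branching
        # check would also live here once implemented.
        known = set(self.blocks)
        for question_id, target in self.constraints:
            if target not in known:
                raise NoSuchBlockException(
                    "question %r branches to unknown block %r"
                    % (question_id, target))
```

The point of raising exceptions instead of printing errors is that callers (and the test module) can actually detect and handle the failure.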
I also created another sample survey based on a survey I found in data/samples. This one was meant to demonstrate subblocks, since my original survey did not make use of them. However, I haven’t really done anything with it beyond printing out the survey structure and eyeballing it to make sure it looks right. Again, I should probably create more samples with different properties.
Last week, I did a bit of planning on how best to implement the test module. I had originally been working with unittest, but during our meeting on Friday, Presley mentioned another module called py.test, which she said worked well and was simple to use. I’ll have to look into it and see which one better fits our needs. In terms of testing, I realized that I need more sample surveys that exercise all the features (i.e., subblocks and more intricate branching); the one I have now doesn’t have any subblocking. I also thought it would be a good idea to create a module of survey exceptions to be thrown when the user tries to create an invalid survey with bad blocking or branching.

Emma suggested adding a validation check at some point before the JSON is created, to make sure all of the survey components are valid. I’ve determined that the best place for this check is the top-level Survey object, as a validate() function called from the JSON method. It should check for invalid blocking or branching and throw the appropriate exceptions; my test module should then verify that these exceptions are thrown for deliberately invalid surveys. I considered throwing exceptions as soon as an invalid branch is created, but determining whether a branch is invalid requires access to the entire list of blocks; since that list lives in the Survey object, it is easiest to check these things at the Survey level rather than at the Question, Block, or Constraint level. This is all still in progress, and I hope to get more done on it this week. I am meeting with Presley tomorrow to go over the current Python code and discuss what needs to be done and how best to do it.
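Whichever framework we pick, the tests for deliberately invalid surveys would look roughly like this. This is a unittest-style sketch with stand-in Survey/exception classes defined inline so it’s self-contained (the real classes live in the survey representation module); a py.test version would be nearly identical, using `pytest.raises` instead of `assertRaises`:

```python
import unittest

# Stand-ins for the real survey classes, just for illustration.
class NoSuchBlockException(Exception):
    pass

class Survey:
    def __init__(self, blocks, constraints):
        self.blocks = blocks
        self.constraints = constraints

    def validate(self):
        known = set(self.blocks)
        for _question, target in self.constraints:
            if target not in known:
                raise NoSuchBlockException(target)

    def jsonize(self):
        self.validate()  # validation runs before any JSON is produced
        return {"blocks": self.blocks}

class TestSurveyValidation(unittest.TestCase):
    def test_branch_to_missing_block_raises(self):
        bad = Survey(["b1"], [("q1", "b99")])
        with self.assertRaises(NoSuchBlockException):
            bad.jsonize()

    def test_valid_survey_jsonizes(self):
        good = Survey(["b1", "b2"], [("q1", "b2")])
        self.assertEqual(good.jsonize(), {"blocks": ["b1", "b2"]})
```

A file like this runs under either runner (`python -m unittest` or `py.test`), which is one reason the choice between the two frameworks isn’t urgent.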
Emma and I also talked about whether it would be better to have the Python code output a CSV representation of the survey rather than just spit out JSON, since the CSV is more like the “bytecode” of the survey than the JSON is (the JSON is just for transmitting the information, and can’t really be written by hand). CSVs are also easier to validate than the JSON. If I do add this functionality, I’ll probably add methods similar to the jsonize methods to produce a CSV (I won’t get rid of the jsonize methods unless it’s clear we don’t need them).
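The shape of such a method would be something like the sketch below. The function name and the column layout here are assumptions for illustration, not the actual SurveyMan CSV format, which I’d need to match exactly:

```python
import csv
import io

def csvize(questions):
    """Hypothetical CSV counterpart to jsonize.

    questions: list of (block_id, question_text, [option, ...]) tuples.
    Emits one row per question/option pair, spreadsheet-style.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["BLOCK", "QUESTION", "OPTIONS"])
    for block, text, options in questions:
        for opt in options:
            writer.writerow([block, text, opt])
    return buf.getvalue()
```

Keeping this alongside jsonize (rather than replacing it) means we can compare the two outputs on the same sample surveys before committing to one representation.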