Since last week, I got the jsonized survey (with blocking, but not with branching) to pass the HTML tests, and the resulting survey HTML appears to work correctly. I then made an attempt at adding branching, but I'm still working on getting it to produce the correct JSON for the branch map.

My original plan, based on a discussion between Emma and Presley, was to make a Constraint object, which takes a question as a parameter and allows the user to specify branches from the question's options (by their id or index). The survey would then contain a list of constraints, along with the top-level block list. However, the JSON schema suggests that the branch map (Constraint) JSON is meant to be generated within the question, as a question property (rather than a top-level survey property). I am dealing with this by creating a question attribute that references the constraint when a constraint is constructed for that question; questions with no constraints won't have this attribute, since it is created and assigned within the constraint constructor. When producing the Question JSON, I check whether the question has a constraint attribute, and add the appropriate BranchMap JSON if it does.
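The attach-the-constraint-to-the-question approach above can be sketched roughly like this. All names here (`jsonize`, `branching`, `add_branch`, the JSON keys) are illustrative placeholders, not necessarily the repo's actual API:

```python
class Question:
    def __init__(self, qtext, options):
        self.qtext = qtext
        self.options = options

    def jsonize(self):
        output = {"qtext": self.qtext,
                  "options": list(self.options)}
        # Only questions that had a Constraint constructed for them
        # carry the branching attribute, and thus a branchMap entry.
        if hasattr(self, "branching"):
            output["branchMap"] = self.branching.jsonize()
        return output


class Constraint:
    def __init__(self, question):
        self.question = question
        self.branch_map = {}  # maps an option id to a destination block id
        # Attach the constraint to its question so Question.jsonize()
        # can find it; unconstrained questions never get this attribute.
        question.branching = self

    def add_branch(self, option_id, block_id):
        self.branch_map[option_id] = block_id

    def jsonize(self):
        return dict(self.branch_map)
```

The point of assigning `question.branching` inside the constructor is that the Question class itself never has to know about branching; it just checks for the attribute at JSON-generation time.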
The current code on my GitHub repo includes an attempt at a branching survey representation, but I'm not sure that the branching JSON is correct. It passes my validators, but it is no longer passing the HTML tests. I will have to try to fix this.
In terms of milestones, I accomplished a portion of what I wanted to get done by the end of this week, but I got less of the testing work done than I would have liked; I haven't really started on any automated tests yet. Until I have more testing material, I figured it would make more sense to focus on the branching. Unfortunately, I won't be able to meet with Emma and Presley today because of travel issues for track (which came up unexpectedly), so hopefully I can get back on track when I get home on Sunday and be a bit more productive next week; I've been struggling with other classes' homework.
At this point, Emma and I have established a few milestones for me to accomplish over the next few weeks. The first priority this week was to create a sample survey based on a short MTurk questionnaire (https://github.com/etosch/SurveyMan/blob/master/data/Ipierotis.csv), generate a JSON file from the survey object, and check that the outputted JSON agrees with the schema. My example survey does not have any branching yet; I will start implementing that as soon as I determine that everything else functions properly. While creating the survey, I ran into a few bugs in my survey objects, mainly that adding options to one question would somehow add them to all subsequent questions. Although I fixed this by making an option list a required argument when creating a question, I'm not entirely sure why it was happening. I had some difficulty validating the outputted JSON against the schema, due to a few syntax errors in the schema which I have since resolved (and the fact that I was pulling from the wrong branch). At this point, it appears that the outputted JSON from the example survey is valid, and the next step is to make sure it produces the correct HTML and that everything looks and functions properly. Emma sent me a few HTML tests, and I've been copying in the JSON for my example survey to try to verify that it works with the HTML. So far, I haven't been able to make it work.
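For what it's worth, the symptom described above (options added to one question showing up on all subsequent questions) is exactly what Python's mutable-default-argument pitfall produces, so that may well have been the cause, though I can't confirm it was here. A minimal illustration, with placeholder class names:

```python
# Buggy version: the default list is created once, when the function is
# defined, so every question created without an explicit option list
# shares that single list object.
class BuggyQuestion:
    def __init__(self, qtext, options=[]):
        self.qtext = qtext
        self.options = options


# Fixed version: use None as the sentinel and build a fresh list per call.
# (Requiring the option list as an argument, as described above, also
# avoids the shared default entirely.)
class FixedQuestion:
    def __init__(self, qtext, options=None):
        self.qtext = qtext
        self.options = [] if options is None else options


q1 = BuggyQuestion("first")
q2 = BuggyQuestion("second")
q1.options.append("yes")   # q2.options now also contains "yes"

q3 = FixedQuestion("third")
q4 = FixedQuestion("fourth")
q3.options.append("yes")   # q4.options is unaffected
```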
For next Friday, the goal is to have the branching implemented, as well as some tests for blocking and branching; basically, I should have a first pass at a Python survey library done. I’m not sure how much I’ll get done over the weekend, since I’ll be away, but hopefully if I can look at the HTML stuff today, I’ll be in good shape to get the required things done for next week.
Last week, I met with Emma and Presley to discuss some of the desired behavior and features of the Python library, as well as the general status of SurveyMan. In terms of the Python library, the linguists want it to eventually provide R integration.
After Monday’s meeting, I started working on a simple Python surface representation for creating survey objects. My current approach is to design a general skeleton of what the user sees, and to deal with any issues/scaling when they become relevant. Emma and I discussed two options for how a user of the library could create surveys: they could write a script that statically creates the survey using the Python library, or I could write a REPL that the user would interact with to define the survey components. I will address the question of how surveys are created once I have the survey component objects implemented.
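To make the first option concrete, a user's survey-definition script in the static style might read top to bottom like the sketch below. The `Survey`/`Question` classes and method names are placeholders I'm assuming for illustration, not the library's settled API:

```python
# Minimal stand-in component classes, just enough to show the usage style.
class Question:
    def __init__(self, qtext, options):
        self.qtext = qtext
        self.options = options


class Survey:
    def __init__(self):
        self.questions = []

    def add_question(self, question):
        self.questions.append(question)


# The user's script builds the survey statically, one component at a time.
survey = Survey()
survey.add_question(Question("Do you take online surveys?", ["Yes", "No"]))
survey.add_question(Question("How often?", ["Daily", "Weekly", "Rarely"]))
```

A REPL, by contrast, would drive the same component constructors interactively instead of from a saved script.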
I first created a simple outline of the behavior and attributes of Survey, Question, and Option objects, based on the previous Python and Java implementations. I began a code skeleton based on this outline, which I recently pushed to my repository on GitHub. After pushing, I attempted to implement some of the behavior, which led to my changing the skeleton a bit. I added a new class called idGenerator meant to generate unique ids for the components, but I'm not sure it's the right way to go about generating ids; I'm trying to figure out how to make it a static/singleton class so that there is just one instance which keeps track of which ids it has already assigned to components. I will post an update and maybe push again once I figure this issue out and have more of the functions implemented.
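One common Python answer to the static/singleton question is to skip instances entirely and keep the counters as class attributes behind a classmethod, so there is only ever one shared id source. This is a hedged sketch of that pattern, not the repo's actual idGenerator (the names and id format are made up):

```python
class IdGenerator:
    # Class-level state shared by all callers; no instance is ever created.
    _counters = {}

    @classmethod
    def next_id(cls, prefix):
        # Each component type ("q", "opt", ...) gets its own counter,
        # so ids are unique within a type and easy to read.
        n = cls._counters.get(prefix, 0) + 1
        cls._counters[prefix] = n
        return "%s_%d" % (prefix, n)
```

Calling `IdGenerator.next_id("q")` repeatedly yields `"q_1"`, `"q_2"`, and so on, while `IdGenerator.next_id("opt")` counts independently; because the state lives on the class, every module that imports it sees the same counters.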
UPDATE: I have the id generator working, and have most of the functions implemented. I'm currently creating simple question, option, and survey objects to test that everything works properly so far. Pushing the current version to my repo.