Category Archives: Conferences

Posts relating to conferences I have attended or plan to attend.

Celebrities : they’re just like us!

Hopefully this post will be short and sweet, since I’m trying to get back on EST.

Highlights from this year’s PLDI:

  • Year of PLASMA! John Vilk’s DoppioJVM won Best Artifact. His talk was great, and he even got a mid-talk round of applause for a meta-circular evaluator joke. Nothing like Scheme to whet the appetites of PL nerds! (I admit it, I clapped and laughed, too.)
  • Year of PLASMA! In a surprising turn of events, my work on SurveyMan won top prize in the Graduate category of the ACM Student Research Competition. This means I’ll submit a short paper in a few months to compete in the Grand Finals! Exciting!
  • The APPROX workshop was awesome! It was very exciting to see current work presented across approximate computing and probabilistic programming. Emery chaired the event. Given the amount of discussion it engendered, I would say it was a resounding success.
  • I met a bunch of new people, and connected with those I haven’t seen in a while. Shoutouts to Adrian Sampson and Michael Carbin. I’ll be following Adrian’s blog now, and pestering Michael about formalizing the SurveyMan semantics (using his work on reasoning about relaxed programs as a guide).
  • A cheeky dig at the New York Times led to Phil Wadler telling me that I had the best teaser! Famous professors : they’re just like us!
  • Shriram Krishnamurthi declared he’d read this blog.

In other news, I need to try uploading my VM for the OOPSLA artifact evaluation, now that I have reasonable internet again. But first, I need to sleep (though I did set aside time to watch GoT — OMG, the ending was awesome! Arya’s face! That exchange! WTF just happened?!?!? Also, shit’s finally starting to get real, north of the wall! You know nothing, Jon Snow…)

Reading Rainbow

It’s only a few short weeks until PLDI 2014. Oh, the tedious and expensive travel! Just kidding (well, not really — it will involve quite a few trains and many, many dollars).

Inspired by Alex Passos’s yearly NIPS reading list, I’m going to throw together one of my own. Rather than listing abstracts, I’ll just post an ordered list of the papers I plan to read, and write about individual papers as I see fit.

Tier 1 : Authors I know

Unless the conference is massively multi-tracked, I find having to ask someone I’ve actually met and spoken with IRL whether they have a paper at the conference a bit tactless. This isn’t to say I haven’t done it, or that I’ve done so in a completely shameless way. I do, however, recognize that refraining from such behavior is A Good Thing.

  1. Doppio: Breaking the Browser Language Barrier
    John Vilk, University of Massachusetts, Amherst; Emery Berger, University of Massachusetts, Amherst.
  2. Expressing and Verifying Probabilistic Assertions
    Adrian Sampson, University of Washington; Pavel Panchekha, University of Washington; Todd Mytkowicz, Microsoft Research; Kathryn S McKinley, Microsoft Research; Dan Grossman, University of Washington; Luis Ceze, University of Washington.
  3. Resugaring: Lifting Evaluation Sequences through Syntactic Sugar
    Justin Pombrio, Brown University; Shriram Krishnamurthi, Brown University.
  4. Taming the Parallel Effect Zoo: Extensible Deterministic Parallelism with LVish
    Lindsey Kuper, Indiana University; Aaron Todd, Indiana University; Sam Tobin-Hochstadt, Indiana University; Ryan R. Newton, Indiana University.
  5. Introspective Analysis: Context-Sensitivity, Across the Board
    Yannis Smaragdakis, University of Athens; George Kastrinis, University of Athens; George Balatsouras, University of Athens.
  6. Dynamic Space Limits for Haskell
    Edward Z. Yang, Stanford University; David Mazières, Stanford University.

Tier 2 : Authors my advisor knows

It’s a reasonable assumption that my advisor probably knows at least one author on each paper, so we can also call this category “Authors whom I might reasonably expect to be introduced to by My Advisor.” These papers include authors whose work I’ve read before and whose names I know from discussions with my advisor. Reading these papers will help prevent the awkward standing-there thing that happens when someone who is much more comfortable than you are (er, than I am) is deep in a conversation and you (I) have nothing to add. It’ll also provide a conversational hook that’s more socially acceptable than whatever random thought happens to be passing through your (my) head. Genius, this plan is!

  1. Fast: A Transducer-Based Language for Tree Manipulation
    Loris D’Antoni, University of Pennsylvania; Margus Veanes, Microsoft Research; Benjamin Livshits, Microsoft Research; David Molnar, Microsoft Research.
  2. Automatic Runtime Error Repair and Containment via Recovery Shepherding
    Fan Long, MIT CSAIL; Stelios Sidiroglou-Douskos, MIT CSAIL; Martin Rinard, MIT CSAIL.
  3. Adapton: Composable, Demand-Driven Incremental Computation
    Matthew A. Hammer, University of Maryland, College Park; Yit Phang Khoo, University of Maryland, College Park; Michael Hicks, University of Maryland, College Park; Jeffrey S. Foster, University of Maryland, College Park.
  4. FlashExtract: A Framework for Data Extraction by Examples
    Vu Le, UC Davis; Sumit Gulwani, Microsoft Research Redmond.
  5. Test-Driven Synthesis
    Daniel Perelman, University of Washington; Sumit Gulwani, Microsoft Research Redmond; Dan Grossman, University of Washington; Peter Provost, Microsoft Corporation.
  6. Consolidation of Queries with User Defined Functions
    Marcelo Sousa, University of Oxford; Isil Dillig, Microsoft Research; Dimitrios Vytiniotis, Microsoft Research; Thomas Dillig, UCL; Christos Gkantsidis, Microsoft Research.
  7. Atomicity Refinement for Verified Compilation
    Suresh Jagannathan, Purdue University; Vincent Laporte, INRIA Rennes; Gustavo Petri, Purdue University; David Pichardie, INRIA Rennes; Jan Vitek, Purdue University.

Tier 3 : The Competition

The Student Research Competition, that is. Some of those presenting at SRC are also presenting work at the main event. Since we’ll presumably have some forced socialization, it’s probably a good call to get an idea of what some of their other work is about.

  1. A Theory of Changes for Higher-Order Languages – Incrementalizing Lambda-Calculi by Static Differentiation
    Yufei Cai, Philipps-Universität Marburg; Paolo G. Giarrusso, Philipps-Universität Marburg; Tillmann Rendel, Philipps-Universität Marburg; Klaus Ostermann, Philipps-Universität Marburg.
  2. Commutativity Race Detection
    Dimitar Dimitrov, ETH Zurich; Veselin Raychev, ETH Zurich; Martin Vechev, ETH Zurich; Eric Koskinen, New York University.
  3. Code Completion with Statistical Language Models
    Veselin Raychev, ETH Zurich; Martin Vechev, ETH Zurich; Eran Yahav, Technion.
  4. Verification Modulo Versions: Towards Usable Verification
    Francesco Logozzo, Microsoft Research; Manuel Fahndrich, Microsoft Research; Shuvendu Lahiri, Microsoft Research; Sam Blackshear, University of Colorado at Boulder.
  5. Adaptive, Efficient Parallel Execution of Parallel Programs
    Srinath Sridharan, University of Wisconsin-Madison; Gagan Gupta, University of Wisconsin-Madison; Gurindar Sohi, University of Wisconsin-Madison.
  6. Globally Precise-restartable Execution of Parallel Programs
    Gagan Gupta, University of Wisconsin-Madison; Srinath Sridharan, University of Wisconsin-Madison; Gurindar S. Sohi, University of Wisconsin-Madison.

There are 13 SRC participants in total. Five are presenting at the conference proper (one is an author on a paper in another tier).

Tier 4 : Pure Interest

No motivation, except that the papers look interesting.

  1. Improving JavaScript Performance by Deconstructing the Type System
    Wonsun Ahn, University of Illinois at Urbana Champaign; Jiho Choi, University of Illinois at Urbana Champaign; Thomas Shull, University of Illinois at Urbana Champaign; Maria Garzaran, University of Illinois at Urbana Champaign; Josep Torrellas, University of Illinois at Urbana Champaign.
  2. Automating Formal Proofs for Reactive Systems
    Daniel Ricketts, UC San Diego; Valentin Robert, UC San Diego; Dongseok Jang, UC San Diego; Zachary Tatlock, University of Washington; Sorin Lerner, UC San Diego.
  3. Tracelet-Based Code Search in Executables
    Yaniv David, Technion; Eran Yahav, Technion.
  4. Getting F-Bounded Polymorphism into Shape
    Benjamin Lee Greenman, Cornell University; Fabian Muehlboeck, Cornell University; Ross Tate, Cornell University.
  5. Compositional Solution Space Quantification for Probabilistic Software Analysis
    Mateus Borges, Federal University of Pernambuco; Antonio Filieri, University of Stuttgart; Marcelo D’Amorim, Federal University of Pernambuco; Corina S. Pasareanu, Carnegie Mellon Silicon Valley, NASA Ames; Willem Visser, Stellenbosch University.
  6. Test-Driven Repair of Data Races in Structured Parallel Programs
    Rishi Surendran, Rice University; Raghavan Raman, Oracle Labs; Swarat Chaudhuri, Rice University; John Mellor-Crummey, Rice University; Vivek Sarkar, Rice University.
  7. VeriCon: Towards Verifying Controller Programs in Software-Defined Networks
    Thomas Ball, Microsoft Research; Nikolaj Bjorner, Microsoft Research; Aaron Gember, University of Wisconsin-Madison; Shachar Itzhaky, Tel Aviv University; Aleksandr Karbyshev, Technical University of Munich; Mooly Sagiv, Tel Aviv University; Michael Schapira, Hebrew University of Jerusalem; Asaf Valadarsky, Hebrew University of Jerusalem.
  8. AEminium: A permission based concurrent-by-default programming language approach
    Sven Stork, Carnegie Mellon University; Karl Naden, Carnegie Mellon University; Joshua Sunshine, Carnegie Mellon University; Manuel Mohr, Karlsruhe Institute of Technology; Alcides Fonseca, University of Coimbra; Paulo Marques, University of Coimbra; Jonathan Aldrich, Carnegie Mellon University.
  9. First-class Runtime Generation of High-performance Types using Exotypes
    Zachary DeVito, Stanford University; Daniel Ritchie, Stanford University; Matt Fisher, Stanford University; Alex Aiken, Stanford University; Pat Hanrahan, Stanford University.

Why is pure interest ranked last?

People say that a talk should be an advertisement for the paper. If I don’t get through the papers in tier 4 before PLDI, I’ll at least know which talks I want to go to and perhaps prune that list accordingly. Since a conference is actually a social event, it seems like a better use of time to target papers that I would expect to come up in conversation. I haven’t tried this tactic before, so we’ll see how things go!

Finally, I’d like to thank the NSF, the ACM, and President Obama for help on my upcoming travel.

SurveyMan’s debut

Dear Internet Diary,

This past weekend, we presented the SurveyMan work for the first time, at the Off the Beaten Track workshop at POPL. I first want to say that PLASMA seriously represented. We had talks in each of the sessions. Though I didn’t have the chance to see Charlie’s talk on Causal Profiling, Dan said it definitely engendered discussion and that people in the audience were “nodding vigorously” in response to the work. Dimitar presented Data Debugging, which people clearly found provocative.

I was surprised by the audience’s response to my talk; I know Emery had said that people he talked to were excited about this space, but sometimes that’s hard to believe when you’re a grad student chugging away at the implementation and theory behind the work. It was invigorating to describe what we’ve done so far and hear enthusiastic feedback. In all my practice talks, I had focused on the language itself, but for OBT, at the behest of my colleagues, I took the debugging angle instead. Most of the people in the audience had used surveys for their research and were quite familiar with these problems. While language designers have tried to tackle surveys before, they frequently come at the problem from the perspective of embedding surveys in a language *they* already use. The approach we take leverages tools that our target audience already uses. We limit the expressivity of the language and make statistical guarantees, which is what our users care about most.

I had a few really interesting questions about system features. Someone made the point that bias cannot be entirely removed through redundancy — that we can’t know if we’ve found enough ways of expressing a question to control for the underlying different interpretations. In response, I suggested that we could think about using approaches from cross-language models to determine whether we have categorically the same questions. The idea is that if a set of questions produces the same distribution of responses, it is sufficiently similar. Of course, this approach neglects the non-local effects of question wording. Whether or not this can be controlled through question order randomization is something I’ll have to think about more.
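To make the “same distribution” criterion concrete, here is a minimal sketch (my own illustration, not part of SurveyMan) of how one might test whether two wordings of a question elicit indistinguishable response distributions, using a chi-squared test of homogeneity. The response counts and the 0.05 threshold are hypothetical.

```python
# Sketch: treat two wordings of a question as candidates for the same
# underlying concept if their response distributions are statistically
# indistinguishable (chi-squared test of homogeneity).
from scipy.stats import chi2_contingency

# Hypothetical response counts for two wordings of a Likert-style question:
# one row per wording, one column per answer option.
responses = [
    [12, 30, 45, 28, 10],  # wording A
    [15, 27, 41, 31, 11],  # wording B
]

chi2, p, dof, expected = chi2_contingency(responses)

# A high p-value means we fail to distinguish the two distributions,
# so the wordings are (provisionally) treated as the same question.
ALPHA = 0.05  # assumed significance threshold
same_concept = p > ALPHA
print(f"chi2={chi2:.2f}, p={p:.3f} -> same concept? {same_concept}")
```

Of course, failing to reject homogeneity is weak evidence at best, which is exactly why the non-local effects of wording mentioned above remain a concern.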

As a followup question, I was also asked if we could reverse-engineer the distributions we get from the variants to identify different concepts. This was definitely not something I had considered before. I wasn’t sure we would, in practice, have sufficient variants and responses to produce meaningful results, but it’s something to consider as future work.
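For what it’s worth, here is one way that reverse-engineering might look. This is a hedged sketch (again my illustration, not an implemented feature) that clusters question variants by the Jensen-Shannon distance between their response distributions, so that variants landing in the same cluster become candidates for the same latent concept. The distributions and the clustering threshold are made up.

```python
# Sketch: discover latent "concepts" by clustering question variants
# whose response distributions are close in Jensen-Shannon distance.
import numpy as np
from scipy.spatial.distance import pdist, jensenshannon
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical normalized response distributions, one row per variant.
variants = np.array([
    [0.10, 0.25, 0.40, 0.20, 0.05],  # variant 1
    [0.12, 0.23, 0.38, 0.22, 0.05],  # variant 2 (close to variant 1)
    [0.40, 0.30, 0.15, 0.10, 0.05],  # variant 3 (a different concept?)
])

# Pairwise Jensen-Shannon distances between the distributions.
dists = pdist(variants, metric=jensenshannon)

# Average-linkage clustering, cut at an (assumed) distance threshold.
labels = fcluster(linkage(dists, method="average"), t=0.1, criterion="distance")
print(labels)  # e.g., [1 1 2]: variants 1 and 2 share a concept
```

Whether real surveys would yield enough variants and responses for these clusters to be meaningful is, as I said, an open question.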

A lot of the other questions I had were about features of the system that I did not highlight. For example, I did not go into any detail about the language and its control flow. I was also asked if we were considering adding clustering and other automated domain-independent analyses, which I am working on right now. Quite a few of the concerns are addressed by our preference for breakoff over item-nonresponse. There was also an interesting ethics question about using our system to manipulate results. Of course, SurveyMan requires active participation from the survey designer; the idea is not to prevent the end-user from adding bias, but to illuminate its presence.