Copy of the mailing list post:
With our R package "exams" (http://CRAN.R-project.org/package=exams) we can (among other things) generate QTI 1.2 XML code for the specification of e-learning exercises. We support various kinds of exercises: single choice, multiple choice, numeric, string, and cloze (= combinations of the previous types). All of these essentially work very well with both OLAT and OpenOLAT, even though some of the types are not offered in OpenOLAT's own test editor. We have already used this system successfully for conducting large-scale exams in our university's OLAT system (Universität Innsbruck).
There is one small nuisance, though, concerning the display of numeric exercises (<response_num> in QTI) in the detailed test results. Attached is a screenshot of the detailed results for a simple geometric exercise. It shows "Your answer: 2.828", the solution I entered (which is correct), followed by "Correct answer: ". The correct answer is unfortunately left blank where it should display either the interval of answers accepted as correct, [2.818, 2.838], or the corresponding midpoint, 2.828. Currently, neither is shown.
The QTI specification of the exercise is attached as dist-num.zip (containing the .xml file) which follows the QTI standard recommended on the official IMS homepage:
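For illustration, the relevant resprocessing in such a numeric exercise might look roughly like the following sketch (the `ident`/`respident` values, score variable, and point value are hypothetical, not taken from the attached file; the interval bounds are those from the screenshot):

```xml
<!-- Hedged sketch of a QTI 1.2 numeric response with a tolerance interval.
     Identifiers and point values are illustrative only. -->
<response_num ident="NUM_1" rcardinality="Single">
  <render_fib fibtype="Decimal"/>
</response_num>
<!-- ... -->
<resprocessing>
  <outcomes>
    <decvar varname="SCORE" vartype="Decimal" defaultval="0"/>
  </outcomes>
  <respcondition title="Correct">
    <conditionvar>
      <and>
        <!-- accept any answer within the tolerance interval -->
        <vargt respident="NUM_1">2.818</vargt>
        <varlt respident="NUM_1">2.838</varlt>
      </and>
    </conditionvar>
    <setvar action="Add" varname="SCORE">1</setvar>
  </respcondition>
</resprocessing>
```

With this specification the scoring works as intended, but the results display has no single value it can report as the "correct answer".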
We have found two workarounds for this: (1) We give up on the tolerance interval and accept only one specific number as the correct solution, for example by using a string exercise (<response_str> in QTI) in combination with <varequal> in the resprocessing. (2) In addition to the <varlt> and <vargt> resprocessing (which yields the tolerance interval), we add two <varequal> statements that add and subtract a small number of points. Both of these solutions "work" but are not clean with respect to the QTI specification.
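A minimal sketch of how workaround (2) could be specified, under the same hypothetical identifiers as above (the midpoint value 2.828 comes from the screenshot; the small point value is illustrative): two <varequal> conditions on the midpoint first add and then subtract the same amount, leaving the total score unchanged while giving the results display a concrete value to report.

```xml
<!-- Hedged sketch of workaround (2): appended after the interval
     respcondition shown earlier. The two conditions cancel out in the
     score but expose 2.828 as a displayable "correct answer". -->
<respcondition>
  <conditionvar>
    <varequal respident="NUM_1">2.828</varequal>
  </conditionvar>
  <setvar action="Add" varname="SCORE">0.001</setvar>
</respcondition>
<respcondition>
  <conditionvar>
    <varequal respident="NUM_1">2.828</varequal>
  </conditionvar>
  <setvar action="Subtract" varname="SCORE">0.001</setvar>
</respcondition>
```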