`quizzable` provides an easy-to-implement interface for building educational quiz apps on top of Python. The `quizzable` library lets you create quizzes consisting of MCQ, FRQ, true-or-false, or matching questions, so you can build educational apps that leverage the power of Python with ease. The full documentation is described below.
- Quickstart
- Classes
- Exceptions
- Authors
To get started, install the `quizzable` package through `pip` on a supported version of Python (`quizzable` currently supports Python 3.9+):

```shell
$ python -m pip install quizzable
```
Next, import the `Terms` class from the `quizzable` module:

```python
from quizzable import Terms
```
Then, create a list of terms:

```python
data = {
    "painter": "la pintura",
    "brush": "el pincel",
    "sculpture": "la escultura",
    "palette": "la paleta",
    "self-portrait": "el autorretrato",
    "abstract": "abstracto/a",
    # more terms...
}
terms = Terms(data)
```
or create one from JSON data:

```python
import json

with open("vocabulary.json") as terms_file:
    terms = Terms(json.loads(terms_file.read()))
```
Afterwards, you can generate random types of questions using the `get_random_question` method:

```python
question = terms.get_random_question()
```
generate an entire quiz of questions using the `get_quiz` method:

```python
quiz = terms.get_quiz(
    types=["mcq", "match", "tf"],
    prompt="What is the translation of {term}?",
)  # customize question types and prompt
```
Or create different types of questions manually like so:

```python
frq = terms.get_frq_question()  # free-response
mcq = terms.get_mcq_question()  # multiple-choice
tf = terms.get_true_false_question()  # true-or-false
matching = terms.get_match_question()  # matching
```
A question has different properties, depending on its type:

```python
print(mcq.prompt)
for option in mcq.options:
    print(option)
print()
answer = input("Answer: ")
```
To score a question, simply use its `check_answer` method:

```python
correct, actual = mcq.check_answer(answer)
if correct:
    print("Correct!")
else:
    print(f"Incorrect...the answer was {actual}")
```
If you'd like, you can convert a question or quiz back to its raw data at any time:

```python
print(question.to_dict())
print(quiz.to_data())
```
A list of terms. Should be a dictionary mapping terms to definitions, where a term represents a question or vocabulary term, and a definition refers to the answer or vocabulary definition. For example, here is a list of terms in which each term is an English word and its definition is its Spanish translation:

```python
{
    "painter": "la pintura",
    "brush": "el pincel",
    "sculpture": "la escultura",
    "palette": "la paleta",
    "self-portrait": "el autorretrato",
    "abstract": "abstracto/a"
}
```
Parameters:

- `answer_with = "def"`: can be `"term"`, `"def"`, or `"both"`; how the question should be answered (see Functions)

Returns the dictionary `terms` modified based on the value for `answer_with`. May be useful for making flashcards for which terms and definitions may need to be swapped on-demand.
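The effect of `answer_with` can be pictured as a plain dictionary transformation. The sketch below is purely illustrative, not the library's internal implementation, and `orient_terms` is a hypothetical helper name:

```python
def orient_terms(terms, answer_with="def"):
    """Illustrative sketch of answer_with on a terms dictionary.

    "def": answer with definitions, so the mapping stays term -> definition.
    "term": answer with terms, so keys and values are swapped.
    ("both", which picks an orientation per question, is omitted here.)
    """
    if answer_with == "def":
        return dict(terms)
    if answer_with == "term":
        # Swap each pair so the definition becomes the prompt
        return {definition: term for term, definition in terms.items()}
    raise ValueError('this sketch handles only "term" and "def"')

flashcards = orient_terms({"painter": "la pintura"}, answer_with="term")
print(flashcards)  # {'la pintura': 'painter'}
```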
Returns an `FRQQuestion` object with a random FRQ-format question generated from `terms`.

Parameters:

- `prompt = "{term}"`: question prompt (use `"{term}"` to reference the question term in custom prompts)
Parameters:

- `n_options = 4`: number of options per question
- `prompt = "{term}"`: question prompt (use `"{term}"` to reference the question term in custom prompts)

Returns an `MCQQuestion` object with a random MCQ-format question generated from `terms`.
Returns a `TrueFalseQuestion` object with a random true-or-false format question generated from `terms`.

Parameters:

- `prompt = "{term}"`: question prompt (use `"{term}"` to reference the question term in custom prompts)
Parameters:

- `n_terms = 5`: how many terms have to be matched
- `prompt = "{term}"`: question prompt (use `"{term}"` to reference the question term in custom prompts)

Returns a `MatchQuestion` object with a random matching-format question generated from `terms`.
Parameters:

- `types = ["mcq", "frq", "tf"]`: list that can contain `"mcq"`, `"frq"`, `"tf"`, or `"match"`; types of questions that appear on the quiz
- `n_options = 4`: (if MCQs are involved) number of options per MCQ question
- `n_terms = 5`: (if matching questions are involved) number of terms to match per matching question
- `prompt = "{term}"`: question prompt (use `"{term}"` to reference the question term in custom prompts)
- `prompts = {}`: prompt map to define specific prompts for specific questions

Returns a `Question` object of a random-format question generated from `terms`.
Returns a `Quiz` object with random questions based on the parameters below.

Parameters:

- `terms`: map of terms and definitions for the quiz (see `Terms`)
- `types = ["mcq", "frq", "tf"]`: list that can contain `"mcq"`, `"frq"`, `"tf"`, or `"match"`; types of questions that appear on the quiz
- `length = 10`: number of questions on the quiz
- `answer_with = "def"`: can be `"term"`, `"def"`, or `"both"`; how the question should be answered (see below)
- `n_options = 4`: (if MCQs are involved) number of options per MCQ question
- `n_terms = 5`: (if matching questions are involved) number of terms to match per matching question
- `prompt = "{term}"`: question prompt (use `"{term}"` to reference the question term in custom prompts)
- `prompts = {}`: prompt map to define specific prompts for specific questions

`answer_with` describes how the user should answer the question: `"term"` means a question should be answered by giving the term, `"def"` means the question should be answered by providing the definition, and `"both"` means there is a 50/50 chance of the question requiring a term or a definition as input.
Arbitrary quiz object.
List of questions within the quiz, represented by a list of arbitrary Question
objects.
Reconstructs a `Quiz` object from a list-like representation. See `Quiz.to_data()` for more information on formatting.
Returns a list-like representation of the quiz, with each `Question` object represented as its dictionary representation. For example, it could look like this:
```python
[
    {
        "_type": "tf",
        "term": "la iglesia",
        "definition": "shop",
        "answer": "church",
        "prompt": "la iglesia"
    },
    {
        "_type": "mcq",
        "term": "la playa",
        "options": {
            "beach": True,
            "park": False,
            "downtown": False,
            "museum": False
        },
        "prompt": "la playa"
    },
    {
        "_type": "frq",
        "term": "park",
        "answer": "el parque",
        "prompt": "park"
    }
]
```
Please see the documentation for `MCQQuestion`, `FRQQuestion`, `TrueFalseQuestion`, and `MatchQuestion` for more information on the format of the above questions.
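As a sketch of how this data might be consumed, the following grades a list of user responses against a `Quiz.to_data()`-style list. The `grade_quiz` helper is hypothetical, not part of `quizzable`:

```python
def grade_quiz(quiz_data, responses):
    """Count correct responses against Quiz.to_data()-style dictionaries."""
    correct = 0
    for question, response in zip(quiz_data, responses):
        if question["_type"] == "mcq":
            # For MCQs, the correct choice is the option flagged True
            answer = next(o for o, ok in question["options"].items() if ok)
        else:
            answer = question["answer"]
        if response == answer:
            correct += 1
    return correct

quiz_data = [
    {"_type": "frq", "term": "park", "answer": "el parque", "prompt": "park"},
    {"_type": "mcq", "term": "la playa",
     "options": {"beach": True, "park": False}, "prompt": "la playa"},
]
print(grade_quiz(quiz_data, ["el parque", "beach"]))  # 2
```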
Generic question object used for reconstruction of a question from JSON data.
Parameters:

- `_type`: question type
- `term`: question term
- `answer`: question answer
- `prompt = "{term}"`: question prompt (use `"{term}"` to reference the question term in custom prompts)
- `**kwargs`: other question data (e.g. `options`, `definition`, etc.)
Term that the question is based on.
Correct answer to the prompt `term`.
Prompt displayed to the user.
Returns a reconstructed `Question` object made from `data`. Please see `MCQQuestion.to_dict()`, `FRQQuestion.to_dict()`, `TrueFalseQuestion.to_dict()`, and `MatchQuestion.to_dict()` for more information on formatting.
Parameters:

- `data`: dictionary containing question data
Returns a tuple: the first item is a boolean that is `True` if `answer` matches the question's `answer` attribute and `False` otherwise, and the second item is the value of the question's `answer` attribute.
Parameters:

- `answer`: answer provided by the user
Returns a dictionary representation of the question. Each question has a `_type` key that can be used to determine how to render the question on the frontend (i.e. display multiple options for MCQ, a textbox for FRQ, etc.), and a `term` key which represents the term the user is prompted with. Please see `MCQQuestion.to_dict()`, `FRQQuestion.to_dict()`, `TrueFalseQuestion.to_dict()`, and `MatchQuestion.to_dict()` for more information on formatting.
Representation of an MCQ-format question. Has the same attributes as `Question` objects, with some additional properties.
Parameters:

- `term`: question term
- `options`: question options
- `answer`: question answer
- `prompt = "{term}"`: question prompt (use `"{term}"` to reference the question term in custom prompts)
List of potential answer choices.
The dictionary representation returned by the `to_dict` method of an `MCQQuestion` object looks like this:

```python
{
    "_type": "mcq",
    "term": "term",
    "options": {
        "option1": False,
        "option2": False,
        "option3": True,
        "option4": False
    },
    "answer": "answer"
}
```
Here's a brief overview:

- `term` is what the user will be prompted with, whether that be to choose a term's definition or vice versa.
- `options` is the list of potential answer choices.
- `answer` is the correct choice out of `options`.
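Note that under this format the correct choice is recoverable from `options` alone: it is the key mapped to `True`. A small illustrative sketch (not library code):

```python
mcq_data = {
    "_type": "mcq",
    "term": "term",
    "options": {
        "option1": False,
        "option2": False,
        "option3": True,
        "option4": False,
    },
    "answer": "answer",
}

def correct_option(options):
    """Return the choice flagged True in an MCQQuestion-style options map."""
    return next(choice for choice, is_correct in options.items() if is_correct)

print(correct_option(mcq_data["options"]))  # option3
```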
Representation of an FRQ-format question. Has the same attributes as `Question` objects, with some additional properties.
Parameters:

- `term`: question term
- `answer`: question answer
- `prompt = "{term}"`: question prompt (use `"{term}"` to reference the question term in custom prompts)
The dictionary representation returned by the `to_dict` method of an `FRQQuestion` object looks like this:

```python
{
    "_type": "frq",
    "term": "term",
    "answer": "answer"
}
```
Here's a brief overview:

- `term` is what the user will be prompted with, whether that be to give a term's definition or vice versa.
- `answer` is the response that will be accepted as correct given the user's prompt.
Representation of a true-or-false format question. Has the same attributes as `Question` objects, with some additional properties.
Parameters:

- `term`: question term
- `definition`: question definition (what the user has to determine is True or False)
- `answer`: question answer
- `prompt = "{term}"`: question prompt (use `"{term}"` to reference the question term in custom prompts)
What the user has to determine is True or False.
The dictionary representation returned by the `to_dict` method of a `TrueFalseQuestion` object looks like this:

```python
{
    "_type": "tf",
    "term": "term",
    "definition": "definition",
    "answer": "answer"
}
```
Here's a brief overview:

- `term` is what the user will be prompted with, whether that be to select True or False if the given definition matches a specific term, or vice versa.
- `definition` is what the user has to determine is True or False.
- `answer` is the actual definition that matches the given `term`.
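In other words, the displayed `definition` is true for `term` exactly when it equals `answer`. The sketch below checks a user's True/False response against this format; it is illustrative only, not the library's `check_answer` implementation:

```python
def check_true_false(question, user_says_true):
    """Return whether the user's True/False response is correct for a
    TrueFalseQuestion-style dictionary: the shown definition is "True"
    exactly when it equals the real answer."""
    definition_is_true = question["definition"] == question["answer"]
    return user_says_true == definition_is_true

tf_data = {"_type": "tf", "term": "la iglesia",
           "definition": "shop", "answer": "church"}
print(check_true_false(tf_data, False))  # True: "shop" was indeed false
```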
Representation of a matching-format question. Has the same attributes as `Question` objects, with some additional properties.
Parameters:

- `term`: question term
- `definitions`: question definitions (what the user has to match with the terms)
- `answer`: question answer
- `prompt = "{term}"`: question prompt (use `"{term}"` to reference the question term in custom prompts)
What the user has to match with the corresponding terms.
The dictionary representation returned by the `to_dict` method of a `MatchQuestion` object looks like this:

```python
{
    "_type": "match",
    "term": [
        "term1",
        "term2",
        "term3",
        "term4"
    ],
    "definitions": [
        "definition4",
        "definition2",
        "definition1",
        "definition3"
    ],
    "answer": {
        "term1": "definition1",
        "term2": "definition2",
        "term3": "definition3",
        "term4": "definition4"
    }
}
```
Here's a brief overview:

- `term` is what the user will be prompted with, whether that be to match the term with the definition or vice versa.
- `definitions` is what the user has to match with the corresponding terms.
- `answer` maps each term in `term` to its actual definition in `definitions`.
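Because `answer` is a mapping, a user's pairings can be scored with plain dictionary lookups. A sketch under that assumption (the `grade_matches` helper is hypothetical, not part of `quizzable`):

```python
def grade_matches(question, user_matches):
    """Count how many of the user's term -> definition pairings agree
    with a MatchQuestion-style answer map."""
    answer = question["answer"]
    return sum(1 for term, definition in user_matches.items()
               if answer.get(term) == definition)

match_data = {
    "_type": "match",
    "term": ["term1", "term2"],
    "definitions": ["definition2", "definition1"],
    "answer": {"term1": "definition1", "term2": "definition2"},
}
print(grade_matches(match_data, {"term1": "definition1",
                                 "term2": "definition1"}))  # 1
```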
The base exception for all `quizzable` errors.
The length specified is not valid (i.e. too short or too long).

Parameters:

- `length`: invalid length of the quiz
The number of options (for MCQs) specified is not valid (i.e. too small or too large).

Parameters:

- `n_options`: invalid number of options per MCQ question
The number of terms (for matching questions) specified is not valid (i.e. too small or too large).

Parameters:

- `n_terms`: invalid number of terms per matching question
The type of question specified is not valid (should only be `"mcq"`, `"frq"`, `"tf"`, or `"match"`).

Parameters:

- `question`: invalid type of question
The data passed into the constructor for `Question` is incomplete. See `MCQQuestion.to_dict()`, `FRQQuestion.to_dict()`, `TrueFalseQuestion.to_dict()`, and `MatchQuestion.to_dict()` for how the data for different types of questions should be formatted.

Parameters:

- `data`: incomplete data