Welcome to the Ikizamini application, a comprehensive tool to help you practice and prepare for the theory portion of your driving test.
Features:

- Interactive quiz to test your knowledge of driving rules, regulations, and safety.
- Real-time feedback on your answers.
- Timer to simulate the actual exam conditions.
- Detailed results page showing correct and incorrect answers.
The application is built with:

- React: A JavaScript library for building user interfaces.
- Vite: A build tool that provides a fast development environment.
- TypeScript: A typed superset of JavaScript that compiles to plain JavaScript.
- Firebase Hosting: Fast and secure hosting for web applications.
- Tailwind CSS: A utility-first CSS framework for rapid UI development.
- React Router: Declarative routing for React applications.
- Redux Toolkit: The official, recommended way to write Redux logic.
This application uses questions generated by crawling the Rwanda Traffic Guide website. Here is how they were produced:
This script crawls pages from the Rwanda Traffic Guide website, extracts questions, options, correct answers, and associated images, then saves this data in a JSON file.
Ensure you have the following libraries installed:

- `requests`
- `beautifulsoup4`
- `lxml`
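The snippets below rely on the following imports at the top of the script (inferred from the functions shown; note that `lxml` is listed as a prerequisite, while the parsing snippet below uses Python's built-in `html.parser`). If any library is missing, `pip install requests beautifulsoup4 lxml` installs all three.

```python
import os
import re
import json

import requests
from bs4 import BeautifulSoup
```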
- Base URL and Starting URL: The base URL and the starting URL for the crawling process are defined.

  ```python
  base_url = "https://rwandatrafficguide.com/"
  start_url = "https://rwandatrafficguide.com/rtg001-ikinyabiziga-cyose-cyangwa-ibinyabiziga-bigomba-kugira/"
  ```
- Directory to Save Images: Creates a directory to save downloaded images.

  ```python
  os.makedirs("downloaded_images", exist_ok=True)
  ```
- Custom Headers: Custom headers are defined to mimic a browser request and avoid being blocked by the website.

  ```python
  headers = {
      "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
  }
  ```
- Data Storage: An empty list is created to store the extracted data.

  ```python
  data = []
  ```
- Extract ID from URL: Extracts the numerical ID from the URL using a regular expression (a short usage check follows this list).

  ```python
  def extract_id_from_url(url):
      match = re.search(r'rtg(\d+)', url)
      return int(match.group(1)) if match else None
  ```
- Fetch Page: Fetches the HTML content of the page.

  ```python
  def fetch_page(url):
      response = requests.get(url, headers=headers)
      if response.status_code == 200:
          return response.text
      else:
          print(f"Failed to retrieve the page. Status code: {response.status_code}")
          return None
  ```
- Parse Page: Parses the HTML content to extract the question, options, correct answer, and image; downloads the image; appends the record to `data`; and returns the URL of the next page, or `None` when there is none (the option-key mapping is illustrated after this list).

  ```python
  def parse_page(html, url):
      soup = BeautifulSoup(html, 'html.parser')
      entry_content = soup.find('div', class_='entry-content clr')
      if not entry_content:
          return None

      # Question text
      question_tag = entry_content.find('p', class_='question')
      question = question_tag.text.strip() if question_tag else ""

      # Options; the correct one is marked with <strong class="colored">
      options_list = entry_content.find('ul', class_='list')
      options = {}
      answer = ""
      if options_list:
          for idx, li in enumerate(options_list.find_all('li'), start=1):
              option_text = li.text.strip()
              option_key = chr(96 + idx)  # 'a', 'b', 'c', 'd'
              options[option_key] = option_text
              if li.find('strong', class_='colored'):
                  answer = option_key

      # Question image, if any: download it and keep only the file name
      image_url = ""
      image_tag = entry_content.find('figure', class_='wp-block-image')
      if image_tag and image_tag.find('img'):
          img_src = image_tag.find('img')['src']
          img_name = os.path.basename(img_src)
          img_response = requests.get(img_src, headers=headers)
          with open(os.path.join("downloaded_images", img_name), 'wb') as f:
              f.write(img_response.content)
          image_url = img_name

      # Store the extracted record
      question_id = extract_id_from_url(url)
      data.append({
          "id": question_id,
          "question": question,
          "image": image_url,
          "options": options,
          "answer": answer
      })

      # Return the URL of the next question page, or None when there is none
      next_page_tag = soup.find('div', class_='nav-next')
      next_page_url = next_page_tag.a['href'] if next_page_tag and next_page_tag.a else None
      return next_page_url
  ```
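As a quick usage check of `extract_id_from_url`: applied to the starting URL above, whose slug begins with `rtg001`, it returns `1`:

```python
print(extract_id_from_url(start_url))  # 1, parsed from "rtg001" in the slug
```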
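And the `chr(96 + idx)` expression in `parse_page` maps 1-based option positions onto lowercase letter keys, since `chr(97)` is `'a'`:

```python
for idx in range(1, 5):
    print(idx, chr(96 + idx))  # 1 a, 2 b, 3 c, 4 d
```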
The script starts crawling from the initial URL and keeps following next-page links until no further pages are found.
```python
url = start_url
while url:
    html = fetch_page(url)
    if html:
        url = parse_page(html, url)
    else:
        break
```
After crawling, the script saves the extracted data in a JSON file.
```python
with open('questions.json', 'w', encoding='utf-8') as json_file:
    json.dump(data, json_file, ensure_ascii=False, indent=4)
print("Crawling completed and data saved to questions.json")
```
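As a quick sanity check, the output can be loaded back with the standard `json` module (a minimal sketch, not part of the crawler itself):

```python
import json

with open('questions.json', encoding='utf-8') as f:
    questions = json.load(f)

print(f"Loaded {len(questions)} questions")
print(questions[0]["question"])  # text of the first crawled question
```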
- Ensure you have the required libraries installed.
- Save the script to a Python file, e.g., `crawl_questions.py`.
- Run the script:

  ```bash
  python crawl_questions.py
  ```

The script will create a directory named `downloaded_images` to save any images it downloads. It will also create a JSON file named `questions.json` containing the crawled data.
An example entry from `questions.json`:

```json
[
    {
        "id": 22,
        "question": "Itara ryo guhagarara ry’ibara ritukura rigomba kugaragara igihe ijuru rikeye nibura mu ntera ikurikira",
        "image": "RTGQ398-Ibibazo-Nibisubizo-Byamategeko-Yumuhanda-Rwanda-Traffic-Guide-Com-ni-ikihe-icyapa-gisobanura-umuhanda-w-icyerekezo-kimwe-icyapa-e-a.jpg",
        "options": {
            "a": "Metero 100 ku manywa na metero 20 mu ijoro",
            "b": "Metero 150 ku manywa na metero 50 mu ijoro",
            "c": "Metero 200 ku manywa na metero 100 mu ijoro",
            "d": "Nta gisubizo cy’ukuri kirimo"
        },
        "answer": "d"
    }
]
```