This is the documentation for the Tempo AI Ventures assessment.
Live url: https://chat-app-t5i4.onrender.com
Since Render's free tiers are used for the web server, Redis, and Postgres, there are bound to be some delays, especially after a period of inactivity. The first requests might therefore take a few seconds.
To set up this service for local or remote deployment, follow the steps below.
```bash
git clone https://github.com/Fuad28/chat-app.git
cd chat-app
docker-compose -f docker-compose.dev.yml up -d --build
```
Note: this assumes you already have Docker installed and running on your machine.
Now that the containers are built, you're ready to start making requests to http://127.0.0.1:8000/api/v1/ and ws://127.0.0.1:8000/ws/conversations/.
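As a quick smoke test against the local API, here is a minimal sketch using Python's requests library. It assumes the default Djoser JWT routes are mounted under /api/v1/auth/ and that a conversations endpoint exists at /api/v1/conversations/; the prefixes, credentials, and header scheme are assumptions, so adjust them to the actual routes.

```python
import requests

BASE = "http://127.0.0.1:8000/api/v1"

# Obtain a JWT pair. /auth/jwt/create/ is Djoser's default route; the prefix,
# credentials, and payload keys here are placeholders.
resp = requests.post(
    f"{BASE}/auth/jwt/create/",
    json={"email": "user@example.com", "password": "secret"},
)
resp.raise_for_status()
access = resp.json()["access"]

# List conversations with the access token. The endpoint name and the
# "Bearer" scheme (it may be "JWT" depending on settings) are assumptions.
conversations = requests.get(
    f"{BASE}/conversations/",
    headers={"Authorization": f"Bearer {access}"},
)
print(conversations.status_code, conversations.json())
```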
To run the tests, run the following from the root directory:

```bash
docker-compose -f docker-compose.dev.yml exec web ./run_tests.sh
```
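For orientation, the tests executed by run_tests.sh are plain pytest functions. The sketch below is purely hypothetical (the endpoint and expected status codes are placeholders) and only illustrates the style:

```python
import pytest
from rest_framework.test import APIClient


# Hypothetical example of the kind of test run_tests.sh executes; the endpoint
# and the expected status codes are placeholders.
@pytest.mark.django_db
def test_conversation_list_requires_auth():
    client = APIClient()
    response = client.get("/api/v1/conversations/")
    # Unauthenticated access should be rejected (401 or 403 depending on the
    # configured authentication classes).
    assert response.status_code in (401, 403)
```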
In addition, a GitHub workflow named run_tests.yml has been set up to automatically run the tests on every push or pull request to the master branch.
HTTP endpoints: https://documenter.getpostman.com/view/20100124/2sA3Qs9roh
WS endpoint: ws://127.0.0.1:8000/ws/conversations/?token=...
An image is attached instead, since WebSocket requests can't be published on Postman.
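Since the WebSocket flow can't be shared through Postman, here is a minimal sketch of connecting to the endpoint with the third-party websockets library. The token query parameter mirrors the URL above, while the payload keys are assumptions; check the consumer code for the exact message shape.

```python
import asyncio
import json

import websockets  # pip install websockets


async def main() -> None:
    # Replace the placeholder with a real access token.
    uri = "ws://127.0.0.1:8000/ws/conversations/?token=<access-token>"

    async with websockets.connect(uri) as ws:
        # Illustrative payload; the consumer may expect different keys.
        await ws.send(json.dumps({"type": "chat_message", "message": "Hello!"}))

        # Print whatever the server broadcasts back (messages, joins, exits).
        print(await ws.recv())


asyncio.run(main())
```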
To deploy this service on Render, create a Render web service, select a Python environment as the deployment environment, and set the required env variables so they become available to the Docker context. Check the .env.example file for the list of required env variables. Connect the right GitHub repository to the Render service, set the branch to master and the root path to ./, and then proceed to deploy.
In addition, a CI/CD pipeline has been set up to redeploy changes pushed to the master branch. This is subject to the tests mentioned above passing.
Data is first persisted in the Postgres database and then cached in Redis. This is because of the in-memory nature of Redis, and we understand how important users' chats are; we certainly don't want users to end up with missing messages.
At any point in time, we cache at most 500 messages per conversation in Redis, which gives users enough to read before a further request becomes necessary.
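A minimal sketch of how such a cap can be maintained with a Redis list is shown below; the key naming, client wiring, and function boundaries are assumptions for illustration, not the project's actual code.

```python
import json

import redis

# "redis" matches a typical docker-compose service name; adjust as needed.
r = redis.Redis(host="redis", port=6379, db=0)

MAX_CACHED_MESSAGES = 500


def cache_message(conversation_id: int, message: dict) -> None:
    """Cache a message that has already been persisted to Postgres, keeping
    only the newest 500 entries per conversation in Redis."""
    key = f"conversation:{conversation_id}:messages"
    r.lpush(key, json.dumps(message))
    r.ltrim(key, 0, MAX_CACHED_MESSAGES - 1)
```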
A simple schema is adopted to capture all the necessary data, and an API was subsequently built around it so that the relevant data can be shown conveniently while keeping network calls to a minimum.
PostgreSQL was selected for its scalability, performance, and feature set.
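For orientation, a stripped-down sketch of what such a schema could look like is shown below; the field and model names are illustrative, so refer to the actual models for the real definitions.

```python
from django.conf import settings
from django.db import models


class Conversation(models.Model):
    # Illustrative fields only; the real model may differ.
    name = models.CharField(max_length=255, blank=True)
    members = models.ManyToManyField(
        settings.AUTH_USER_MODEL, related_name="conversations"
    )
    created_at = models.DateTimeField(auto_now_add=True)


class Message(models.Model):
    conversation = models.ForeignKey(
        Conversation, on_delete=models.CASCADE, related_name="messages"
    )
    sender = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    text = models.TextField()
    created_at = models.DateTimeField(auto_now_add=True)
    # Soft delete: rows are flagged rather than removed (see the notes below).
    is_deleted = models.BooleanField(default=False)
```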
- User management and authentication are handled by Djoser.
- The WebSocket layer is implemented with django-channels.
- Conversation activities such as joins and exits are broadcast.
- Tests are written with pytest.
- Local development is handled with Docker.
- Logging uses Django's built-in logging, with both file and console handlers.
- Monitoring is handled by Sentry. This can be further configured to send logs to a Slack channel.
- Throttling is implemented project-wide and also specifically on messages. The throttle rate for users is 1000 requests per day and that for messages is 60 messages per minute; a settings sketch follows after this list. These values are placeholders, and subsequent observation and data analytics can help establish sensible ones.
- The API is deployed on Render.
- Emails are sent using the Mailjet API in production. In development, we use smtp4dev, a development SMTP server; its inbox can be accessed at http://127.0.0.1:8001/.
- Soft delete is implemented for messages. Further discussion can be had on how to handle them.
- Searching is implemented on the conversation list.
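As referenced in the throttling note above, such rates are typically declared through Django REST Framework settings. The sketch below mirrors the values mentioned, but the scope names and wiring are assumptions rather than the project's exact configuration.

```python
# settings.py (sketch): project-wide user throttling plus a dedicated
# "messages" scope. The scope name and wiring are assumptions; only the rates
# mirror the values mentioned above.
REST_FRAMEWORK = {
    "DEFAULT_THROTTLE_CLASSES": [
        "rest_framework.throttling.UserRateThrottle",
        "rest_framework.throttling.ScopedRateThrottle",
    ],
    "DEFAULT_THROTTLE_RATES": {
        "user": "1000/day",
        "messages": "60/min",
    },
}

# views.py (sketch): the message endpoint would opt into the scoped rate with
# `throttle_scope = "messages"` on its viewset.
```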
- We can write a lot more tests.
- We can extend the documentation to be more idiomatic.
- Upon user registration, we can implement an account activation feature for more security.
- We can give more responsibilities to conversation admins. For now, admins can add and remove people from the group (although a group creator can't be removed). Admins can also delete a conversation member's messages.
- We can also look into data encryption so that users can be assured of their privacy.