The back-end (service API) component for the Boilerplate.
This project includes several components in its scaffolding, such as:
- Phoenix
- GraphQL
- Data Source
  - PostgreSQL (Primary)
  - Elasticsearch FTS (Secondary)
  - Redis Cache
- Heartcheck
- Kafka
- Elixir releases
- Dotenv
- Docker
- Kubernetes
- k3d
- sealed-secrets
- HPA
- CI/Travis
- credo
- Telemetry
- Datadog
- Spandex
- CORS
- Formatter
- Tests
General high-level overview
The app is fully wrapped by Docker. You can start the containers with:
make up
If you want to use the FTS (e.g. userSearch), follow these instructions.
Then, run the app
make run
The server will be available by default at http://localhost:4000
make tests
make coveralls
make credo
make format
These instructions will get a copy of the project up and running on your local machine for development and testing, without Docker running the app itself, but still using Docker for the supporting services (e.g. databases, Kafka).
Install the versions of Elixir and Erlang defined in .tool-versions:
asdf install
In the project root dir, create a file .env.local (which will override the default .env):
BOILERPLATE_ELASTIC_URL=http://localhost:9200
BOILERPLATE_TEST_ELASTIC_URL=http://localhost:9200
BOILERPLATE_DATABASE_URL=postgres://user:password@localhost:5435/boilerplate_dev
BOILERPLATE_TEST_DATABASE_URL=postgres://user:password@localhost:5435/boilerplate_test
BOILERPLATE_REDIS_CACHE_URL=redis://localhost:6381/0
BOILERPLATE_TEST_REDIS_CACHE_URL=redis://localhost:6381/1
BOILERPLATE_REDIS_PUBSUB_URL=redis://localhost:6381/2
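Dotenv files are plain KEY=VALUE lines, so they can also be sourced in a shell session to reuse the same endpoints with client tools. A sketch, where the file is a minimal copy of the .env.local above:

```shell
# Minimal copy of the .env.local above, recreated here for illustration.
cat > .env.local <<'EOF'
BOILERPLATE_ELASTIC_URL=http://localhost:9200
BOILERPLATE_DATABASE_URL=postgres://user:password@localhost:5435/boilerplate_dev
EOF
set -a            # auto-export every variable assigned while sourcing
. ./.env.local
set +a
echo "$BOILERPLATE_DATABASE_URL"
# Client tools can now reuse the same endpoints (commented; requires the services):
# psql "$BOILERPLATE_DATABASE_URL"
# curl -s "$BOILERPLATE_ELASTIC_URL/_cluster/health"
```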
Start the services (using Docker)
docker-compose up --scale app=0
Download the dependencies
mix deps.get
Start the web server
mix phx.server
The server will be available by default at http://localhost:4000
MIX_ENV=test mix test
MIX_ENV=test mix coveralls
mix credo
mix format --check-formatted
Since the project uses mix releases to create production images, Mix tasks cannot be run in non-dev environments (staging, production): Mix is not available in the images produced for them.
To run any kind of task in these environments, define a function that runs it in Boilerplate.ReleaseTasks, then trigger it in the environment with a command.
For a concrete example, you can run migrations in these environments with the command:
$ /app/boilerplate/rel/boilerplate/bin/boilerplate eval "Boilerplate.ReleaseTasks.migrate()"
The migration task call will create the needed indexes on Elasticsearch and prepare its document mapping.
You can check more details on that in the Phoenix documentation.
BOILERPLATE_HOST - The host the server uses to generate internal URLs. Ex: boilerplate.com
BOILERPLATE_PORT - The port the server will listen on. Ex: 4000
BOILERPLATE_ALLOWED_ORIGINS - Comma-separated list of allowed origins used in CORS validation.
BOILERPLATE_SECRET_KEY_BASE - Key used to sign Phoenix cookie sessions. Ex: 9Gf55vJqIr89bFKq0gHXTlO2iNzP/gyE/jVeLLKn2rAPhH1q2ePO+3wT5VGbZE50
Elasticsearch
BOILERPLATE_ELASTIC_URL - The Elasticsearch URL. Ex: localhost:9200
BOILERPLATE_TEST_ELASTIC_URL - The Elasticsearch URL used for tests.
BOILERPLATE_USERS_ELASTIC_INDEX - The Elasticsearch index used to store users data. Ex: postgresql.public.users
BOILERPLATE_TEST_USERS_ELASTIC_INDEX - The Elasticsearch index used to store users data for tests. Ex: postgresql.public.users-test
BOILERPLATE_USERS_ELASTIC_DOCUMENT - The Elasticsearch users document. Ex: users
BOILERPLATE_TEST_USERS_ELASTIC_DOCUMENT - The Elasticsearch users document for tests. Ex: users-test
BOILERPLATE_ELASTIC_SEED_FILE - The file used to bulk index users. Ex: data/seed.json
BOILERPLATE_ELASTIC_DEFAULT_SIZE - Default page size limit for Elasticsearch queries. Ex: 10
Datadog
BOILERPLATE_DATADOG_PORT - Port of the Datadog agent REST API. Ex: 8126
BOILERPLATE_DATADOG_HOST - Host of the Datadog agent. Ex: localhost
BOILERPLATE_ENABLE_DATADOG - Feature flag; set it to true to enable Datadog tracing and metrics.
BOILERPLATE_DATADOG_UDP_PORT - Port of the Datadog agent UDP API. Ex: 8125
BOILERPLATE_DATADOG_SERVICE_NAME - Service name used by Datadog. Ex: users_staging
BOILERPLATE_DATADOG_ENVIRONMENT_NAME - Datadog environment name. Ex: staging, production
BOILERPLATE_SPANDEX_BATCH_SIZE - Number of traces sent in each batch to Datadog. Ex: 100
BOILERPLATE_SPANDEX_SYNC_THRESHOLD - How many simultaneous HTTP pushes can be in flight to Datadog. Ex: 100
PostgreSQL
BOILERPLATE_DATABASE_URL - The primary datasource (PostgreSQL) URL. Ex: postgres://user:password@postgresql:5432/boilerplate_dev
BOILERPLATE_TEST_DATABASE_URL - The primary datasource (PostgreSQL) URL for tests. Ex: postgres://user:password@postgresql:5432/boilerplate_test
Redis
BOILERPLATE_REDIS_CACHE_URL - The Redis URL used for caching. Ex: redis://redis:6379/0
BOILERPLATE_TEST_REDIS_CACHE_URL - The Redis URL used for caching in the test environment. Ex: redis://redis:6379/1
HOSTNAME - Ex: boilerplate_dev
BOILERPLATE_REDIS_PUBSUB_URL - The Redis URL used for Phoenix PubSub. Ex: redis://redis:6379/2
The naming scheme must include BOILERPLATE, the name of the service and the variable name itself.
For example: BOILERPLATE_FOO=bar
The idea is to avoid naming collisions in the Kubernetes environment.
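That convention can also be checked mechanically, for instance with a grep over a dotenv file that flags any variable missing the prefix. A sketch; the file contents here are illustrative:

```shell
# Illustrative dotenv file with one non-conforming entry.
cat > .env.check <<'EOF'
BOILERPLATE_FOO=bar
SOME_VAR=baz
EOF
# List entries that don't follow the BOILERPLATE_* naming scheme:
grep -v '^BOILERPLATE_' .env.check   # prints: SOME_VAR=baz
```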
Dotenv will override the env vars in the following order (a higher-priority file overrides lower ones):
Hierarchy Priority | Filename | Environment | Should I .gitignore it? | Notes
---|---|---|---|---
1st (highest) | .env.local.$MIX_ENV | dev/test/prod | Yes! | Local overrides of environment-specific settings.
2nd | .env.$MIX_ENV | dev/test/prod | No. | Overrides of environment-specific settings.
3rd | .env.local | All Environments | Yes! | Local overrides
Last | .env | All Environments | No. | The Original®
Note: $MIX_ENV refers to the environment of the build, such as dev, test and prod. Ex: .env.local.test will be loaded only when the environment is test.
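The priority order can be illustrated with a plain shell sketch: sourcing the files from lowest to highest priority means the highest-priority file wins. File names as in the table; BOILERPLATE_FOO is a hypothetical variable:

```shell
# Create the four dotenv files, each defining the same variable.
MIX_ENV=dev
echo 'BOILERPLATE_FOO=from_env'           > .env
echo 'BOILERPLATE_FOO=from_env_local'     > .env.local
echo 'BOILERPLATE_FOO=from_env_dev'       > ".env.$MIX_ENV"
echo 'BOILERPLATE_FOO=from_env_local_dev' > ".env.local.$MIX_ENV"
# Source lowest-priority first, so the last (highest-priority) file wins:
for f in .env .env.local ".env.$MIX_ENV" ".env.local.$MIX_ENV"; do . "./$f"; done
echo "$BOILERPLATE_FOO"   # from_env_local_dev
```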
By running
make heartcheck
you should see something like:
[
{
"elastic": {
"status": "ok"
},
"time": 9.703
},
{
"database": {
"status": "ok"
},
"time": 8.188
},
{
"cache_redis": {
"status": "ok"
},
"time": 3.817
}
]
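In scripts (e.g. a CI smoke test) the same output can be checked mechanically. A sketch that counts healthy checks in a captured response; the JSON is the sample above, and with the server running you would capture the output of make heartcheck instead:

```shell
# Sample heartcheck response, as shown above.
RESPONSE='[{"elastic":{"status":"ok"},"time":9.703},{"database":{"status":"ok"},"time":8.188},{"cache_redis":{"status":"ok"},"time":3.817}]'
# Count "ok" statuses; all three checks healthy means a count of 3.
OK_COUNT=$(echo "$RESPONSE" | grep -o '"status":"ok"' | wc -l)
echo "$OK_COUNT"
```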
NAME=John make graphql-mutation-users-create
NAME="John Lennon" make graphql-mutation-users-create
UUID=USER_UUID make graphql-query-users-find
make graphql-query-users-list
UUID=USER_UUID NAME="John Wick" make graphql-mutation-users-update
QUERY=oh make graphql-query-users-search
UUID=USER_UUID make graphql-mutation-users-remove
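Each target presumably wraps a plain GraphQL HTTP request. As an illustration, the create mutation could be posted with curl roughly like this; the mutation name, field names and endpoint path below are assumptions, not taken from the actual schema:

```shell
NAME="John Lennon"
# Build a GraphQL request body; createUser, uuid and name are hypothetical names.
PAYLOAD='{"query":"mutation { createUser(name: \"'"$NAME"'\") { uuid name } }"}'
echo "$PAYLOAD"
# With the server up, post it (commented out; the endpoint path is an assumption):
# curl -s -X POST -H 'Content-Type: application/json' \
#   -d "$PAYLOAD" http://localhost:4000/api/graphql
```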
If you prefer, with the server running, go to http://localhost:4000/graphiql in your preferred browser.
For Infra and Kubernetes info, please check this.
This boilerplate also includes a secondary data source for FTS purposes (Elasticsearch). Behind the scenes it is a CDC replication from the primary data source (PostgreSQL), using Kafka Connect Source <> Sink connectors ingesting into Elasticsearch.
To make it work you need to perform a simple setup; you can find the instructions here.
For more info access
The Front-end React app for this project can be found here.