This is the mono-repository for the backend services and their common codebase.
Follow the instructions below to get the project ready to use as quickly as possible.
Currently, there are two ways to run the project: production mode, which uses a virtual environment where all the necessary libraries are installed, and development mode, which uses Docker and docker-compose. Development mode is recommended; use production mode only in special cases.
For both modes it is necessary to have the following requirements installed:
- Python 3 (3.8 or lower recommended) in your PATH. It will install pip automatically as well.
- A virtual environment named .venv.
- Optionally, to run Azure Functions locally: Azure Functions Core Tools.
Regardless of the mode you are going to use, first execute the following command at the root of the project:
python -m venv .venv
Note: you can replace python with python3 or python3.8 according to the version you have installed, but do not forget the initial requirements.
Activate the environment
Windows:
.venv\Scripts\activate.bat
On Unix-based operating systems:
source .venv/bin/activate
The configuration required for each of the modes is as follows:
Development Mode
In addition to the initial requirements, you need to have the following installed:
- Docker: you can follow the instructions below to install it on your operating system.
- Docker Compose: to install Docker Compose, choose the operating system you use and follow the steps here.
Once Docker and Docker Compose are installed, we must create a .env file in the root of our project with the following environment variables:
export MS_AUTHORITY=XXXX
export MS_CLIENT_ID=XXXX
export MS_SCOPE=XXXX
export MS_SECRET=XXXX
export MS_ENDPOINT=XXXX
export DATABASE_ACCOUNT_URI=XXXX
export DATABASE_MASTER_KEY=XXXX
export DATABASE_NAME=XXXX
export FLASK_APP=XXXX
export AZURE_APP_CONFIGURATION_CONNECTION_STRING=XXXX
export FLASK_DEBUG=XXXX
export REQUESTS_CA_BUNDLE=XXXX
Please contact the project development team for the values of the variables mentioned above.
Once all the project configuration is done, run the following command in the terminal from the root folder of the project:
docker-compose up --build
This command builds all the images with the necessary configuration for each one and also starts the Cosmos emulator together with the backend. You can now open in the browser:
http://127.0.0.1:5000/ to open the backend API.
https://127.0.0.1:8081/_explorer/index.html to open the Cosmos DB emulator.
If you have already run docker-compose up --build in this project before, you do not need to run it with the --build flag again; instead, run:
docker-compose up
It is also important to clarify that if packages or any extra configuration are added to the image build, you need to run docker-compose up --build again. You can find more information about this flag here.
To generate fake data for testing functionality or fixing errors, we have built a CLI called 'Time Tracker CLI', which is in charge of generating fake information inside the Cosmos emulator.
To learn how this CLI works, you can see the instructions here.
It is important to note that the Time Tracker CLI only works in development mode.
We are using Pytest for tests. The tests are located in the tests package and follow the conventions for Python test discovery.
Remember that to run any of the available test commands, the containers must be up (docker-compose up).
This command runs all tests:
./time-tracker.sh pytest -v
Run a single test:
./time-tracker.sh pytest -v -k name-test
To check the coverage of the tests execute:
./time-tracker.sh coverage run -m pytest -v
To get a report table:
./time-tracker.sh coverage report
To get a full report in html:
./time-tracker.sh coverage html
Then open htmlcov/index.html to see it.
To erase previously collected coverage data, execute:
./time-tracker.sh coverage erase
Production Mode
Install the requirements:
python3 -m pip install -r requirements/<app>/<stage>.txt
If you use Windows, use this command instead:
python -m pip install -r requirements/<app>/<stage>.txt
Where <app> is one of the executable app namespaces, e.g. time_tracker_api or time_tracker_events (note: currently, only time_tracker_api is used). The <stage> can be:
- dev: used for working locally
- prod: used for anything deployed
Bear in mind that, by convention, the requirements for time_tracker_events must be located in its local requirements.txt.
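For example, to install the requirements of time_tracker_api for local development, you would run:
python3 -m pip install -r requirements/time_tracker_api/dev.txt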
If you use Bash or Git Bash, you should create a .env file and add the following variables:
export MS_AUTHORITY=XXX
export MS_CLIENT_ID=XXX
export MS_SCOPE=XXX
export MS_SECRET=XXX
export MS_ENDPOINT=XXX
export DATABASE_ACCOUNT_URI=XXX
export DATABASE_MASTER_KEY=XXX
export DATABASE_NAME=XXX
export FLASK_APP=XXX
export AZURE_APP_CONFIGURATION_CONNECTION_STRING=XXX
export FLASK_DEBUG=True
If you use PowerShell, you should create a .env.ps1 file and add the following variables:
$env:MS_AUTHORITY="XXX"
$env:MS_CLIENT_ID="XXX"
$env:MS_SCOPE="XXX"
$env:MS_SECRET="XXX"
$env:MS_ENDPOINT="XXX"
$env:DATABASE_ACCOUNT_URI="XXX"
$env:DATABASE_MASTER_KEY="XXX"
$env:DATABASE_NAME="XXX"
$env:FLASK_APP="XXX"
$env:AZURE_APP_CONFIGURATION_CONNECTION_STRING="XXX"
$env:FLASK_DEBUG="True"
If you use Command Prompt, you should create a .env.bat file and add the following variables:
set "MS_AUTHORITY=XXX"
set "MS_CLIENT_ID=XXX"
set "MS_SCOPE=XXX"
set "MS_SECRET=XXX"
set "MS_ENDPOINT=XXX"
set "DATABASE_ACCOUNT_URI=XXX"
set "DATABASE_MASTER_KEY=XXX"
set "DATABASE_NAME=XXX"
set "FLASK_APP=XXX"
set "AZURE_APP_CONFIGURATION_CONNECTION_STRING=XXX"
set "FLASK_DEBUG=True"
Important: ask the development team for the values of the environment variables. Also, you must set the environment variables each time the application is run.
- Start the app:
flask run
- Open
http://127.0.0.1:5000/
in a browser. The presented UI includes a link to the swagger.json with the definition of the API.
We are using Pytest for tests. The tests are located in the tests package and follow the conventions for Python test discovery.
This command runs all tests:
pytest -v
Note: if you get the error "No module named azure.functions", execute the command:
pip install azure-functions
To run a single test:
pytest -v -k name-test
To check the coverage of the tests execute:
coverage run -m pytest -v
To get a report table:
coverage report
To get a full report in html:
coverage html
Then open htmlcov/index.html to see it.
To erase previously collected coverage data, execute:
coverage erase
We use the pre-commit library to manage local Git hooks.
This library allows you to execute code right before the commit, for example:
- Check that the commit has the correct formatting.
- Format modified files based on a style guide such as PEP 8.
To install and use pre-commit in development mode, we have to run the following command:
python3 -m pip install pre-commit
Once the pre-commit library is installed, we just need to run the following in our virtual environment:
pre-commit install -t pre-commit -t commit-msg
Remember to execute these commands with the virtual environment active.
For more details, see section Development > Git hooks.
With this command the library will take the configuration from .pre-commit-config.yaml and set up the hooks for us.
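For illustration only, a .pre-commit-config.yaml typically looks roughly like the sketch below; the repository already ships its own configuration, and the hook shown here (black) is just a placeholder, not necessarily one of the project's actual hooks:
repos:
  - repo: https://github.com/psf/black
    rev: 22.3.0
    hooks:
      - id: black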
Use the following commit message style. e.g:
'feat: TT-123 Applying some changes'
'fix: TT-321 Fixing something broken'
'feat(config): TT-00 Fix something in config files'
The value TT-###
refers to the Jira issue that is being solved. Use TT-00 if the commit does not refer to any issue.
For example, if your task in Jira is TT-48 implement semantic versioning, your branch name is:
TT-48-implement-semantic-versioning
The project time_tracker_events is an Azure Functions project. Its main responsibility is to respond to calls related to events, such as those triggered by the Change Feed.
Every time a write action (create, update, soft-delete) is performed in Cosmos DB, these functions are called thanks to bindings. You can also run them on your local machine:
- You must have the Azure CLI and the Azure Functions Core Tools installed on your local machine.
- Be sure to authenticate with the Azure CLI if you have not already:
az login
- Execute the project
cd time_tracker_events
source run.sh
You will see a large console log appear, ending with a message like:
Now listening on: http://0.0.0.0:7071
Application started. Press Ctrl+C to shut down.
- Now you are ready to start generating events. Just make any change through your API and you will see logs being generated by the console app you started before. For instance, this is the log generated when a time entry was restarted:
[04/30/2020 14:42:12] Executing 'Functions.handle_time_entry_events_trigger' (Reason='New changes on collection time_entry at 2020-04-30T14:42:12.1465310Z', Id=3da87e53-0434-4ff2-8db3-f7c051ccf9fd)
[04/30/2020 14:42:12] INFO: Received FunctionInvocationRequest, request ID: 578e5067-b0c0-42b5-a1a4-aac858ea57c0, function ID: c8ac3c4c-fefd-4db9-921e-661b9010a4d9, invocation ID: 3da87e53-0434-4ff2-8db3-f7c051ccf9fd
[04/30/2020 14:42:12] INFO: Successfully processed FunctionInvocationRequest, request ID: 578e5067-b0c0-42b5-a1a4-aac858ea57c0, function ID: c8ac3c4c-fefd-4db9-921e-661b9010a4d9, invocation ID: 3da87e53-0434-4ff2-8db3-f7c051ccf9fd
[04/30/2020 14:42:12] {"id": "9ac108ff-c24d-481e-9c61-b8a3a0737ee8", "project_id": "c2e090fb-ae8b-4f33-a9b8-2052d67d916b", "start_date": "2020-04-28T15:20:36.006Z", "tenant_id": "cc925a5d-9644-4a4f-8d99-0bee49aadd05", "owner_id": "709715c1-6d96-4ecc-a951-b628f2e7d89c", "end_date": null, "_last_event_ctx": {"user_id": "709715c1-6d96-4ecc-a951-b628f2e7d89c", "tenant_id": "cc925a5d-9644-4a4f-8d99-0bee49aadd05", "action": "update", "description": "Restart time entry", "container_id": "time_entry", "session_id": null}, "description": "Changing my description for testing Change Feed", "_metadata": {}}
[04/30/2020 14:42:12] Executed 'Functions.handle_time_entry_events_trigger' (Succeeded, Id=3da87e53-0434-4ff2-8db3-f7c051ccf9fd)
In this API we require authenticated users via JWT. To do so, we use the PyJWT library, so in every request to the API we expect an Authorization header with a format like:
Bearer <JWT>
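For instance, a request sent with such a header could look like the following (the URL is the local development address used elsewhere in this guide; replace <JWT> with your token):
curl -H "Authorization: Bearer <JWT>" http://127.0.0.1:5000/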
In the Swagger UI, you will now see a button called "Authorize". When you click it, you will be asked to enter the content of the Authorization header, as mentioned before.
Click "Authorize" and then close the dialog. From that moment on you will not have to do it again, because the Swagger UI will use that JWT in every call.
If you want to check the data (claims) that your JWT contains, you can also use the PyJWT CLI:
pyjwt decode --no-verify "<JWT>"
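Equivalently, a small sketch using the PyJWT library from Python; note that the exact call depends on your PyJWT version (the options argument below is the PyJWT 2.x form):
import jwt  # PyJWT

token = "<JWT>"  # paste your token here
# Decode without verifying the signature, just to inspect the claims
claims = jwt.decode(token, options={"verify_signature": False})
print(claims)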
Bear in mind that this API is not in charge of verifying the authenticity of the JWT; that is the responsibility of the API Management.
Due to the technology used and particularities of the implementation of this API, it is important to respect the following notes regarding the data sent to and received from the API:
- The recommended format for DateTime strings in Azure Cosmos DB is YYYY-MM-DDThh:mm:ss.fffffffZ, which follows the ISO 8601 UTC standard.
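As a minimal sketch, such a string can be produced from Python like this; note that Python's datetime only provides microsecond precision (six fractional digits instead of seven):
from datetime import datetime, timezone

# ISO 8601 UTC timestamp with fractional seconds and a trailing 'Z'
now = datetime.now(timezone.utc)
formatted = now.strftime("%Y-%m-%dT%H:%M:%S.%f") + "Z"
print(formatted)  # e.g. 2020-04-28T15:20:36.006123Z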
The Azure Functions project time_tracker_events also has some constraints to take into account. It is recommended that you read the Azure Functions Python developer guide.
If you need to deploy time_tracker_events from your local machine to Azure Functions, you can execute:
func azure functionapp publish time-tracker-events --build local
There are commands available, aware of the API, that can be very helpful to you. You can check them out by running:
python cli.py
If you want to run a specific command, e.g. gen_swagger_json, specify it as a parameter along with its corresponding options:
python cli.py gen_swagger_json -f ~/Downloads/swagger.json
We use the Angular commit message style as the standard commit message style.
- The release is done automatically by the TimeTracker CI, although it can also be done manually. The GH_TOKEN variable is required to post releases to GitHub. The GH_TOKEN can be generated by following these steps.
- We run the command semantic-release publish after a successful PR to make a release. Check the python-semantic-release library for details of the underlying operations.
Looking for a DB-agnostic migration tool, the only choice I found was migrate-anything.
A specific requirements file, requirements/migrations.txt, was created for running the migrations. This way we do not mix any potentially vulnerable dependency brought in by the migration tooling into the prod environment. Therefore, the dependencies to run the migrations must be installed this way:
pip install -r requirements/<app>/prod.txt
pip install -r requirements/migrations.txt
All the migrations are handled and created in the migrations Python package. For now, migrations must be created manually and prefixed with a number, e.g. migrations/01-initialize-db.py, in order to guarantee alphabetical execution order.
Inside every migration there is an up and a down method. The down method is executed from the migration persisted in the database. When down logic that used external dependencies was tested, it failed, whereas the same logic placed in the up method ran correctly. In general, the library seems to have design issues.
Therefore, it is recommended to apply changes in one direction only: up.
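As a rough sketch of what a migration module can look like (the exact signatures depend on the migrate-anything library; the file name and bodies below are illustrative, not the project's actual code):
# migrations/02-example-migration.py (hypothetical file name)
def up():
    # Apply the change, e.g. create a container or seed reference data
    pass

def down():
    # Reverse the change; per the note above, prefer leaving this as a no-op
    pass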
For more information, please check out some examples
that illustrate the usage of this migration tool.
To run the migrations, you must execute:
migrate-anything migrations
They will be automatically run during the Continuous Deployment process.
- Python version 3 as backend programming language. Strong typing for the win.
- Flask as the micro framework of choice.
- Flask RestPlus for building RESTful APIs with Swagger.
- Pytest for tests.
- Coverage for coverage.
- Swagger for documentation and standardization, taking into account the API import restrictions and known issues in Azure.
- Azure Functions bindings so that time_tracker_events can handle the triggers generated by our Cosmos DB database through the Change Feed.
Shared file with all the Feature Toggles we create, so we can keep a history of them: Feature Toggles dictionary
Copyright 2021 ioet Inc. All Rights Reserved.