Redeploy Lovelace
Before starting
- clone the *_service repositories under D:\projects\ (important)
- clone lovelace_deploy
- install pgAdmin
Launch the stack
Be careful: by default the deployment targets prod when you forget the -c argument. A production deployment will create a cronjob that backs up the database and pushes it to the backup bucket ON PRODUCTION.
rez env python -- python -m deploy -u -i .*:.* -c local
If you get errors such as failed jobs when creating ingress-nginx, just delete the errored jobs and relaunch the deploy command.
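If you prefer the CLI over Lens for this cleanup, the errored jobs can be removed with kubectl (a sketch: the namespace and job name are placeholders to look up on your cluster first):

```shell
kubectl get jobs --all-namespaces             # list jobs and spot the failed ones
kubectl -n <namespace> delete job <job-name>  # remove an errored job, then relaunch the deploy
```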
Restore database
- Open pgAdmin and connect to the postgres primary
- clean all database schemas:
DROP SCHEMA public CASCADE;
CREATE SCHEMA public;
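The schema reset above has to be run on every database you plan to restore; a sketch with psql, assuming these database names and a local postgres superuser (adjust both to your setup):

```shell
for db in library_db pipeline_db; do
  # drop everything in the public schema of this database, then recreate it empty
  psql -U postgres -d "$db" -c "DROP SCHEMA public CASCADE; CREATE SCHEMA public;"
done
```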
Optimise the backup
To make the database restore faster, you can remove data that is not needed for local development. For example, in pipeline_db you can remove all the data in the audit and audit_log tables.
To do that, you need to:
- extract the .tar file from the backup into a new folder
- open restore.sql to find which .data file is used for the table you want to clean (ctrl + shift + f and search for "-- Data for Name:")
- in the folder where you extracted the .tar file, run
echo "" > <filename>.data
to empty the file (or just delete it and create a new empty file with the same name)
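The steps above can be sketched end-to-end with stand-in files (the real archive is your backup .tar, and the real .data file name is the one the grep turns up):

```shell
mkdir -p src extracted
printf -- '-- Data for Name: audit\n' > src/restore.sql   # stand-in for the dump's restore.sql
printf 'row1\nrow2\n' > src/4242.data                     # stand-in for a table's data file
tar -cf backup.tar -C src restore.sql 4242.data           # stand-in for the real backup archive

tar -xf backup.tar -C extracted                  # 1. extract the .tar into a new folder
grep -n 'Data for Name' extracted/restore.sql    # 2. find which .data file feeds the table
echo "" > extracted/4242.data                    # 3. empty that .data file
```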
Restore the database
- Now you can restore each database from its backup, for example from the CLI:
&"C:\Users\%username%\AppData\Local\Programs\pgAdmin 4\runtime\pg_restore.exe" -c -U postgres -d library_db -v D:\.backups\mysuperdump.tar
or if you have pg_restore in your path, you can do:
pg_restore -c -U postgres -d library_db -v D:\.backups\mysuperdump.tar
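If you have several dumps to restore, a loop along these lines can help (a sketch: it assumes pg_restore is on PATH, a Git Bash style path for D:\.backups, and that each .tar file is named after its target database):

```shell
for dump in /d/.backups/*.tar; do
  db="$(basename "$dump" .tar)"                  # e.g. library_db.tar -> library_db
  pg_restore -c -U postgres -d "$db" -v "$dump"  # same flags as the single-database command
done
```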
Be careful: for the pipeline database you need to scale up the CPU and RAM (tested with cpu: 2 and 16Gi of RAM).
To redeploy only the pipeline database, you can use
rez env python -- python -m deploy -u -i database:.* -c local
Set the resources in
lovelace_deploy\configurations\database\postgresql\values.yml
primary:
resources:
requests:
memory: 16Gi
cpu: "2"
RabbitMQ error
You can skip this step if you don't have any errors with RabbitMQ.
If you see errors in the RabbitMQ logs related to the login of the mainrole user:
- in the StatefulSets view in Lens, downscale rabbitmq to 0
- delete the rabbitmq pods
- in PersistentVolumeClaims, delete the rabbitmq PVC
- in PersistentVolumes, delete the rabbitmq PV
- in the StatefulSets view in Lens, upscale rabbitmq back to 1
Now RabbitMQ should be up and running, and the consumer services should be able to connect to it.
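The same recovery can be done from the CLI (a sketch: the namespace, label selector, and PV name are assumptions to check against your cluster first):

```shell
kubectl -n <namespace> scale statefulset rabbitmq --replicas=0
kubectl -n <namespace> delete pod -l app.kubernetes.io/name=rabbitmq
kubectl -n <namespace> delete pvc -l app.kubernetes.io/name=rabbitmq
kubectl delete pv <pv-name>   # PVs are cluster-scoped; take the one that was bound to the PVC
kubectl -n <namespace> scale statefulset rabbitmq --replicas=1
```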
Generate a GitHub access token
Set it in the GITHUB_ACCESS_TOKEN environment variable in the OS.
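For example, in PowerShell (the token value is a placeholder):

```shell
$env:GITHUB_ACCESS_TOKEN = "<your token>"   # current session only
setx GITHUB_ACCESS_TOKEN "<your token>"     # persisted for future sessions (restart the shell)
```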
Build the service images
docker build -t registry.mtc.wtf/auth_service:local . --build-arg GIT_ACCESS_TOKEN=$env:GITHUB_ACCESS_TOKEN
docker build -t registry.mtc.wtf/pipeline_service:local . --build-arg GIT_ACCESS_TOKEN=$env:GITHUB_ACCESS_TOKEN
docker build -t registry.mtc.wtf/media_service:local . --build-arg GIT_ACCESS_TOKEN=$env:GITHUB_ACCESS_TOKEN
docker build -t registry.mtc.wtf/troll_service:local . --build-arg GIT_ACCESS_TOKEN=$env:GITHUB_ACCESS_TOKEN
docker build -t registry.mtc.wtf/webhook_service:local . --build-arg GIT_ACCESS_TOKEN=$env:GITHUB_ACCESS_TOKEN
Please take time to check each Dockerfile and README.md, because some services have specific build args, like the media service.
Media Service
docker build -t registry.mtc.wtf/media_service:local . --build-arg GIT_ACCESS_TOKEN=$env:GITHUB_ACCESS_TOKEN --target dev
Hotfixes
graphql sync dataloader
The version of graphql_sync_dataloaders needs to be the one from
https://github.com/loft-orbital/graphql-sync-dataloaders
But after the local image build the installed version is wrong, so one workaround is to manually reinstall the correct one.
In the pipeline backend pod, in the backend container:
cd /usr/local/lib/python3.12/site-packages/graphql_sync_dataloaders
ls
cat sync_future.py
cd ..
mv ./graphql_sync_dataloaders ./no_graphql_sync_dataloaders
python -m pip install git+https://github.com/loft-orbital/graphql-sync-dataloaders.git --force-reinstall
cd ./graphql_sync_dataloaders
cat sync_future.py
To use PyCharm unittest with pipeline_service
In pipeline_service/requirements.txt, pin the graphene-django version:
graphene-django==3.2.0
It does not work with 3.2.1.
Clear AMQP queues
pods -> rabbitmq -> rabbitmq container -> forward the port whose name begins with stats: 15XXX/TCP
then in a browser: Queues and exchanges -> middle click on each queue and select "Purge messages"
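Alternatively, the queues can be purged from the CLI with rabbitmqctl inside the pod (a sketch: the pod name and queue name are assumptions):

```shell
kubectl exec -it rabbitmq-0 -- rabbitmqctl list_queues name messages   # see queues and their depths
kubectl exec -it rabbitmq-0 -- rabbitmqctl purge_queue <queue_name>    # purge one queue
```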