
Redeploy Lovelace

Before starting

  • clone the *_service repositories under D:\projects\ (important)
  • clone lovelace_deploy
  • install pgAdmin

Launch the stack

Be careful: by default the deployment targets prod when you forget the -c argument. A production deployment will create a cronjob that backs up the database and pushes it to the backup bucket ON PRODUCTION.

ps1
rez env python -- python -m deploy -u -i .*:.* -c local

If you get errors such as jobs failing to create ingress-nginx, just delete the failed jobs and relaunch the deploy command.

Restore database

  • Open pgAdmin and connect to the postgres primary
  • clean all database schemas:
sql
DROP SCHEMA public CASCADE;
CREATE SCHEMA public;
  • Now you can restore each database from backup

Example from the CLI:

powershell
 &"C:\Users\olivier.argentieri\AppData\Local\Programs\pgAdmin 4\runtime\pg_restore.exe" -c -U postgres -d library_db -v D:\.backups\20241030_library_db.tar
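There is one backup archive per database, and the file name encodes the target database (20241030_library_db.tar above). A small POSIX-shell sketch of that naming convention, with the same pg_restore flags as the example; the helper name is hypothetical:

```shell
# Hypothetical helper: derive the pg_restore invocation from a backup file
# named YYYYMMDD_<db>.tar (naming convention taken from the example above).
restore_cmd() {
  f=$(basename "$1" .tar)   # e.g. 20241030_library_db
  db=${f#*_}                # drop the date prefix -> library_db
  echo pg_restore -c -U postgres -d "$db" -v "$1"
}

restore_cmd 20241030_library_db.tar
# -> pg_restore -c -U postgres -d library_db -v 20241030_library_db.tar
```

Run the printed command (or drop the echo) once per backup file; on Windows, use the full pg_restore.exe path as in the example above.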

Be careful: for the pipeline database you need to scale up CPU and RAM (tested with cpu: 2 and 16Gi of RAM) in lovelace_deploy\configurations\database\postgresql\values.yml. To redeploy only the pipeline database, you can use rez env python -- python -m deploy -u -i database:.* -c local

lovelace_deploy\configurations\database\postgresql\values.yml

yaml
primary:
  resources:
    requests:
      memory: 16Gi
      cpu: "2"

RabbitMQ errors

You can skip this step if you don't have any errors with RabbitMQ.

If the RabbitMQ logs show errors related to the login of the mainrole user:

  • In Statefulsets in Lens, downscale rabbitmq to 0
  • delete the rabbitmq pods
  • in PersistentVolumeClaims, delete the rabbitmq PVC
  • in PersistentVolumes, delete the rabbitmq PV
  • in Statefulsets in Lens, upscale rabbitmq back to 1
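If you prefer the command line over Lens, the steps above can be sketched with kubectl. The statefulset name and the app.kubernetes.io/name label are assumptions here; check your cluster with kubectl get sts,pvc,pv first.

```shell
# Sketch of the Lens steps above using kubectl. Resource names and labels
# are assumptions -- verify them before running for real.
reset_rabbitmq() {
  run="$1"   # pass "echo" for a dry run, or "" to actually execute
  $run kubectl scale statefulset rabbitmq --replicas=0
  $run kubectl delete pod -l app.kubernetes.io/name=rabbitmq
  # deleting the PVC also releases the bound PV when the reclaim policy
  # is Delete; otherwise delete the PV by name afterwards
  $run kubectl delete pvc -l app.kubernetes.io/name=rabbitmq
  $run kubectl scale statefulset rabbitmq --replicas=1
}

reset_rabbitmq echo   # dry run: prints the four commands without running them
```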

Now RabbitMQ should be up and running, and the consumer services should be able to connect to it.

Generate a GitHub access token

Set it as the GITHUB_ACCESS_TOKEN environment variable in the OS.
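A quick sanity check that the variable is really visible before building (POSIX-shell sketch; from PowerShell you would inspect $env:GITHUB_ACCESS_TOKEN instead, and the helper name is hypothetical):

```shell
# Fail fast if the token is missing: otherwise docker build passes an empty
# GIT_ACCESS_TOKEN and the failure only shows up mid-build.
check_token() {
  if [ -z "${GITHUB_ACCESS_TOKEN:-}" ]; then
    echo "GITHUB_ACCESS_TOKEN is not set" >&2
    return 1
  fi
  echo "GITHUB_ACCESS_TOKEN is set"
}
```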

Build the service images

ps1
docker build -t registry.mtc.wtf/auth_service:local . --build-arg GIT_ACCESS_TOKEN=$env:GITHUB_ACCESS_TOKEN
docker build -t registry.mtc.wtf/pipeline_service:local . --build-arg GIT_ACCESS_TOKEN=$env:GITHUB_ACCESS_TOKEN
docker build -t registry.mtc.wtf/media_service:local . --build-arg GIT_ACCESS_TOKEN=$env:GITHUB_ACCESS_TOKEN
docker build -t registry.mtc.wtf/troll_service:local . --build-arg GIT_ACCESS_TOKEN=$env:GITHUB_ACCESS_TOKEN
docker build -t registry.mtc.wtf/webhook_service:local . --build-arg GIT_ACCESS_TOKEN=$env:GITHUB_ACCESS_TOKEN
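The five commands differ only in the service name, so they can be scripted. This sketch assumes the *_service repos are cloned side by side (under D:\projects\ per the prerequisites) and that each image is built from its own repo root; the function name and the sibling-directory layout are assumptions.

```shell
# Build every service image in a loop. Assumes the *_service repos are
# siblings of the current directory; each build uses that repo as its
# build context instead of "." (run from inside any one of the repos).
build_all() {
  run="$1"   # pass "echo" for a dry run
  for svc in auth pipeline media troll webhook; do
    $run docker build -t "registry.mtc.wtf/${svc}_service:local" "../${svc}_service" --build-arg GIT_ACCESS_TOKEN="${GITHUB_ACCESS_TOKEN:-}"
  done
}

build_all echo   # dry run: prints the five docker build commands
```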

Hotfixes

graphql sync dataloader

The version of graphql_sync_dataloaders needs to be the one from https://github.com/loft-orbital/graphql-sync-dataloaders, but after the local image build the installed version is wrong. One workaround is to manually move the installed Python module aside and reinstall the correct one.

So, in the pipeline backend pod, in the backend container:

sh
cd /usr/local/lib/python3.12/site-packages/graphql_sync_dataloaders
ls
cat sync_future.py  # inspect the currently installed (wrong) version
cd ..
mv ./graphql_sync_dataloaders ./no_graphql_sync_dataloaders  # move the bad install aside
python -m pip install git+https://github.com/loft-orbital/graphql-sync-dataloaders.git --force-reinstall
cd ./graphql_sync_dataloaders
cat sync_future.py  # verify the reinstalled version differs

To use PyCharm unittest with pipeline_service

In pipeline_service/requirements.txt, pin the graphene-django version:

txt
graphene-django==3.2.0

That doesn't work with 3.2.1.