
feat: add hpa and async worker to enterprise-catalog. #5

Merged on Jan 5, 2024
Conversation

@Jacatove (Contributor) commented on Oct 31, 2023

Description

Enterprise catalog has been enabled on our platform and we need to anticipate future intensive usage of this service, which is why HPA and worker support is being added. Currently, an operation like adding an enterprise customer catalog to an enterprise customer takes a considerable amount of time; this is thought to be a result of running tasks synchronously.

Changes

  • Support HPA.
  • Add enterprise-catalog worker deployment for both k8s and docker-compose.
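A minimal HPA manifest for the service could look like the sketch below. The names, replica bounds, and utilization target are illustrative assumptions, not the plugin's actual rendered output:

```yaml
# Hypothetical sketch of an HPA for enterprise-catalog.
# All names and values here are assumptions for illustration.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: enterprise-catalog
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: enterprise-catalog
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```

Note that resource-based scaling like this only works if the target pods declare CPU/memory requests, which is relevant to the review discussion below.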

Test 1: You can trigger an async task by adding an enterprise customer catalog to an enterprise customer; the update_catalog_metadata_task task is executed by the worker.

Test 2:

  1. Pull the current branch.
  2. Install the plugin with pip install -e tutor-contrib-enterprise and verify that it is installed with tutor plugins list.
  3. Run tutor config save.
  4. Run tutor local launch.
  5. Follow the logs with tutor local logs --follow.
  6. Run tutor local do sync-enterprise-catalog-metadata.
  7. Verify that the worker processes the task.
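On the docker-compose side, the worker is essentially the same application image running a Celery worker instead of the web server. The sketch below is an assumption of what such a service could look like; the image name, Celery app path, and dependency names are hypothetical, not the plugin's actual template:

```yaml
# Hypothetical docker-compose service for the async worker.
# Image, celery app module, and depends_on entries are illustrative assumptions.
enterprise-catalog-worker:
  image: openedx/enterprise-catalog:latest
  command: celery --app=enterprise_catalog worker --loglevel=info
  restart: unless-stopped
  depends_on:
    - mysql
    - redis
```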

[Screenshot: worker logs]

Comment on lines +31 to +37
resources:
  limits:
    cpu: "{{ ENTERPRISE_CATALOG_LIMIT_CPU }}"
    memory: "{{ ENTERPRISE_CATALOG_LIMIT_MEMORY }}"
  requests:
    cpu: "{{ ENTERPRISE_CATALOG_REQUEST_CPU }}"
    memory: "{{ ENTERPRISE_CATALOG_REQUEST_MEMORY }}"
Contributor:
Remove these limits, as we don't have any idea of the ideal values. @Jacatove

@Jacatove (Contributor, Author) replied on Dec 28, 2023:

@Squirrel18 "Any HPA target can be scaled based on the resource usage of the pods in the scaling target. When defining the pod specification the resource requests like cpu and memory should be specified. This is used to determine the resource utilization and used by the HPA controller to scale the target up or down". Copied from https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-resource-metrics.

So I think it is worth giving some thought to the resource values:

This is what I have gathered so far. I ran the enterprise-catalog-update-content-job, and this is the result.

[Screenshots: memory and CPU usage graphs for the job run]

Checking the resources used by the pods, I got this:

jacato:~$ kubectl top pod | grep enterprise
enterprise-catalog-64554f5895-8kl52                         0m           501Mi
enterprise-catalog-update-content-job-1703786342004-kmzmg   60m          89Mi
jacato:~$ kubectl top pod | grep enterprise
enterprise-catalog-64554f5895-8kl52                         0m           501Mi
enterprise-catalog-update-content-job-1703786342004-kmzmg   5m           93Mi
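Based on those numbers (roughly 500Mi resident for the web pod and under 100Mi / up to ~60m CPU for the job), one could seed the template variables with conservative starting values. The figures below are a rough guess extrapolated from this single observation, not tested or recommended defaults:

```yaml
# Rough starting values derived from the kubectl top output above.
# These are assumptions from one run, intended only as a discussion baseline.
resources:
  requests:
    cpu: 250m
    memory: 600Mi
  limits:
    cpu: "1"
    memory: 1Gi
```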


feat: add enterprise-catalog-worker deployment.
feat: enable async enterprise-catalog-worker.
feat: add hpa to enterprise-catalog.
refactor: address suggestions.
Update tutorenterprise/templates/enterprise/apps/enterprise-catalog/settings/partials/common.py
@Squirrel18 Squirrel18 changed the title feat: add hpa to enterprise-catalog. feat: add hpa and async worker to enterprise-catalog. Jan 5, 2024
@Squirrel18 Squirrel18 merged commit 322bcf9 into main Jan 5, 2024
4 checks passed