This guide can be used by Intel developers of IDC components to learn how to build and test IDC using a single development workstation. External dependencies are minimized.
Follow the procedure in Prepare Development Environment for IDC.
Run the following steps:
sudo apt install make unzip python3-pip moreutils
make install-interactive-tools
Install additional tools. An example install sequence follows the list.
- kind (use the version in /build/repositories/repositories.bzl)
- kubectl (use the version in /build/repositories/repositories.bzl)
- Go (use the Go toolchain version in /WORKSPACE)
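As an illustration, the commands below install kind and kubectl on Linux amd64. The version numbers are placeholders only; substitute the versions pinned in /build/repositories/repositories.bzl.

KIND_VERSION=v0.20.0   # placeholder; use the pinned version
curl -Lo kind "https://kind.sigs.k8s.io/dl/${KIND_VERSION}/kind-linux-amd64"
chmod +x kind && sudo mv kind /usr/local/bin/kind
KUBECTL_VERSION=v1.28.0   # placeholder; use the pinned version
curl -LO "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl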
Configure k8s for large environments
echo "fs.inotify.max_user_instances=1280" | sudo tee -a /etc/sysctl.d/idc.conf echo "fs.inotify.max_user_watches=655360" | sudo tee -a /etc/sysctl.d/idc.conf sudo sysctl --system
Create a swap file if your system has 32 GiB RAM or less.
sudo fallocate -l 32G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo "/swapfile none swap sw 0 0" | sudo tee -a /etc/fstab
For more details, see How To Add Swap Space on Ubuntu 20.04.
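To confirm the swap file is active, you can run:

swapon --show
free -h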
Use RAM for /tmp. This will improve the performance of building, testing, and running Deploy All In Kind.
echo "tmpfs /tmp tmpfs mode=1777,strictatime,nosuid,nodev,size=32G 0 0" | sudo tee -a /etc/fstab sudo mount -a
Configure Docker to use an internal mirror for images.
Pulling images directly from Docker Hub is not recommended, as it is very likely to fail due to the rate limits Docker Hub imposes. See https://intel.sharepoint.com/sites/caascustomercommunity/sitepages/dockerhubcache.aspx?web=1#registry-mirror
It is best practice to configure Docker to use a mirror. You can do this by adding a registry mirror to /etc/docker/daemon.json on your host. For example:
{"registry-mirrors": ["https://dockerhubcache.caas.intel.com"]}
Example steps:

sudo vi /etc/docker/daemon.json
# add the line '{"registry-mirrors": ["https://dockerhubcache.caas.intel.com"]}'
# save the file
sudo systemctl restart docker
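To verify the mirror is in effect, you can check the Docker daemon configuration and pull a small test image:

docker info --format '{{.RegistryConfig.Mirrors}}'
docker pull hello-world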
To build all containers and Helm charts, run:
make build
To run all unit tests and most integration tests, run the command:
make test
- After making any changes to imports in Go (.go) files, run the command below to create or update BUILD.bazel files with go_binary, go_test, and other Bazel rules.

make gazelle
Build the Go SDK and add it to the path.
eval `make go-sdk-export`
go version
cd go
Update a dependency in go.mod to the latest version with the following command:

go get example.com/pkg
To use a specific version:
go get example.com/pkg@v1.2.3
Run Go Tidy to make sure go.mod matches the source code.
go mod tidy
Sometimes, the removal of a direct dependency will result in indirect dependencies getting downgraded. If this occurs, add a reference to the package in /go/pkg/force_import/main.go. By adding a direct reference, go mod tidy will respect the minimal version in go.mod.

import (
    _ "example.com/pkg"
)
Run Go Vet to examine the source code.
go vet ./svc/cloudaccount/...
go vet ./...
Update the Bazel dependency list deps.bzl.

cd ..
make gazelle
Review the changes to deps.bzl to ensure that dependencies are not downgraded. If there are many changes, you may want to use make go-list and the script hack/go-mod-downgraded.sh to automatically detect modules where the semver decreased. Note that this script only properly identifies versions in x.y.z format. If the version is in a different format, carefully review the output. Follow the steps below.

git checkout main
make go-list > local/go-list-main.txt
git checkout your-branch
make go-list > local/go-list.txt
hack/go-mod-downgraded.sh
Ensure that everything can be built and tests are successful.
make generate build test
After updating public_api/proto/*.proto, go/svc/*/*.templ, or other sources of generated code, run:
make generate
If you only made changes to Protobuf (.proto) files, you can run just a subset of the generation process with:
make generate-go
Then commit any changed files. The Jenkins job "Check generated files" will fail if generated files have not been checked in.
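A typical regenerate-and-commit flow is sketched below (the commit message is only an example):

make generate
git status --short   # review which generated files changed
git add -A
git commit -m "Update generated files"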
Most IDC services can be deployed to a development workstation using kind. This environment can be used for iterative development of IDC services.
To build core components, deploy a new local kind cluster, and deploy core components, run:
make deploy-all-in-kind-v2 |& ts | ts -i | ts -s | tee local/deploy-all-in-kind-v2.log
To enable verbose logs, run the following before running the previous command:

export ZAP_LOG_LEVEL=-127
This section explains how to run the ITAC Console and ITAC Admin Console in your local environment.
When enabled, NGINX pods host the static UI web site. A Squid proxy pod runs a forward HTTP proxy that can resolve and connect to any service within the kind cluster or accessible to it. You can then configure your browser to send all *.kind.local requests through the Squid proxy.
Note
These instructions apply to the Firefox browser on Windows. Chrome cannot be used with these steps because it cannot use a wpad file on a local drive.
The necessary components are already enabled in the kind-singlecluster and dev-flex environments. If you are using a kind environment that does not use these configuration files, ensure the following settings in the relevant file in deployment/helmfile/environments/*.yaml.gotmpl.

global:
  adminportal:
    enabled: true
components:
  adminPortal:
    enabled: true
  portal:
    enabled: true
  squidProxy:
    enabled: true
portal:
  enabled: true
If you make changes to these files, then apply the changes by following the steps in :ref:`upgrade_services_in_local_kind_cluster`.
Initialize the product catalog using the following steps.
Clone the product catalog repo locally.
git clone https://github.com/intel-innersource/frameworks.cloud.devcloud.services.product-catalog
Apply product specs.
cd frameworks.cloud.devcloud.services.product-catalog
cd dev
kubectl apply -f vendors/ -n idcs-system
kubectl apply -f products/ -n idcs-system
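To confirm the specs were applied, you can list the resources (the resource names below are assumptions based on the applied manifest directories; adjust if the CRDs in your cluster differ):

kubectl get vendors,products -n idcs-system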
Download the latest Proxy-Auto-Configuration (PAC) file from http://wpad.intel.com/wpad.dat and save it to a file on your Windows PC such as C:\Users\claudiof\wpad.js. Insert the 2nd line as shown below. The rest of the file should be unchanged.
The address and port of the Squid proxy must be reachable from your browser. If they are not, you can configure VS Code to forward port 31128 to your development workstation and use "PROXY localhost:31128" in wpad.js.
function FindProxyForURL(url, host) {
    if (dnsDomainIs(host, ".kind.local")) { return "PROXY claudiof-ws.ecx1.jf.intel.com:31128" }
    ...
}
Configure the Firefox proxy settings.
Automatic proxy configuration URL: file:///C:/Users/claudiof/wpad.js
Open your browser to https://dev.console.cloud.intel.com.kind.local/.
If you see a certificate error, then follow these additional steps to trust the private root CA created when deploying the kind cluster.
- Click Advanced, then View Certificate.
- Click on the root CA certificate that has a subject name like "Intel IDC CA 07067987 kind-singlecluster-root-ca". The numbers are randomly generated for your cluster.
- Click the link to download the PEM (cert).
- Open the folder in which you downloaded the PEM file.
- Rename the file to have a .crt extension.
- Right-click on the .crt file and choose Install Certificate. Enter the following fields:
- Store Location: Current User
- Certificate store: Trusted Root Certification Authorities
- After installing the certificate, restart Firefox.
- Open your browser to https://dev.console.cloud.intel.com.kind.local/. You should not receive any certificate warnings.
- These steps will need to be repeated if Vault is redeployed with make deploy-all-in-kind-v2.
At the login prompt, choose Employee Sign In.
The following URLs are available:
ITAC Console: https://dev.console.cloud.intel.com.kind.local
ITAC Admin Console: https://dev.admin-console.cloud.intel.com.kind.local
Argo CD: https://dev.argocd.cloud.intel.com.kind.local
Log in using username admin. The password is in the Kubernetes secret argocd/argocd-initial-admin-secret.
Gitea: https://dev.gitea.cloud.intel.com.kind.local
Log in using username gitea@local.domain. The password is in the file local/secrets/gitea_admin_password.
Vault: https://dev.vault.cloud.intel.com.kind.local
Log in using username admin. The password is in the file local/secrets/vault_admin_password.
Note
IKS must be configured manually.
To run core end-to-end tests, run the following commands.
export no_proxy=${no_proxy},.local
source go/pkg/tools/oidc/test-scripts/get_token.sh
go/svc/cloudaccount/test-scripts/cloud_account_create.sh
export CLOUDACCOUNT=$(go/svc/cloudaccount/test-scripts/cloud_account_get_by_name.sh | jq -r .id)
go/svc/compute_api_server/test-scripts/vnet_create_with_name.sh
go/svc/compute_api_server/test-scripts/sshpublickey_create_with_name.sh
go/svc/compute_api_server/test-scripts/instance_list.sh
Anytime after running make deploy-all-in-kind-v2, you can modify the source code of any service and upgrade the service running in kind using the steps below.
make upgrade-all-in-kind-v2 |& ts | ts -i | ts -s | tee local/upgrade-all-in-kind-v2.log
In some cases, you may want to completely uninstall an IDC service and then reinstall it.
This is particularly useful when iterating on changes to a database schema.
The environment variable DEPLOY_ALL_IN_KIND_APPLICATIONS_TO_DELETE can be set to any regular expression that matches any number of Helm release names.
DEPLOY_ALL_IN_KIND_APPLICATIONS_TO_DELETE=".*-compute-db|.*-compute-api-server" \
make upgrade-all-in-kind-v2 |& ts | ts -i | ts -s | tee local/upgrade-all-in-kind-v2.log
Alternatively, you can check out a different commit before running make deploy-all-in-kind-v2, then check out your latest commit to manually test an upgrade.
When make deploy-all-in-kind-v2 is executed, the following occurs (a quick verification sketch follows the list).
- Generate random secrets in local/secrets if needed. Existing secrets are unchanged.
- Deploy a Docker registry as a Docker container. This will be used by containers and OCI Helm charts.
- Run the Go application deploy_all_in_kind. This performs the following.
- Run Bazel to build deployment artifacts (see :ref:`deployment_artifacts`).
- Run Bazel to push container images and Helm charts to the local Docker registry.
- Generate Argo CD manifests which define the Helm releases that will be deployed.
- Start a kind cluster.
- Deploy CoreDNS, Vault, and Gitea.
- Push Argo CD manifests to a repo in Gitea.
- Deploy Argo CD and configure it to watch the repo in Gitea.
- Wait for Argo CD to deploy IDC services.
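Once the deployment completes, you can verify that Argo CD has synced the services and that all pods are healthy (this assumes Argo CD's default argocd namespace, which is also used elsewhere in this guide):

kubectl get applications -n argocd
kubectl get pods -A | egrep -v "NAMESPACE|Running|Completed"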
To enable VMaaS in a local kind cluster, follow the steps in this section.
If you do not have a valid Harvester KubeConfig, the VM Instance Operator will fail to start. Obtain the KubeConfig file using the steps below.
- Download the KubeConfig from Vault.
- Save the file to local/secrets/harvester-kubeconfig/harvester1.
If your development workstation is connected to the Intel corporate network:
Log in to Harvester1.
Click Support in the bottom-left corner.
Click Download KubeConfig.
Save the file to local/secrets/harvester-kubeconfig/harvester1.
Both the SSH Proxy Operator and the BM Instance Operator need the public key of the SSH Proxy Server to verify the server before establishing a connection.
Obtain and update the host public key secret using the following command:
ssh-keyscan -t rsa ${HOST_IP} | awk '{print $2, $3}' > local/secrets/ssh-proxy-operator/host_public_key
NOTE: Here, HOST_IP is the IP address of the bastion server or SSH proxy server through which users will connect to the reserved instances.
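As a quick sanity check, the resulting file should contain the key type followed by the base64-encoded key:

cat local/secrets/ssh-proxy-operator/host_public_key
# expected shape: ssh-rsa AAAA...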
Perform the steps in :ref:`deploy_idc_core_services_in_local_kind_cluster`.
After running the core end-to-end tests, run the following additional steps.
go/svc/compute_api_server/test-scripts/instance_create_with_name.sh
watch go/svc/compute_api_server/test-scripts/instance_list_summary.sh
ssh -J guest-${USER}@10.165.62.252 ubuntu@172.16.x.x
go/svc/compute_api_server/test-scripts/instance_delete_by_name.sh
go/svc/compute_api_server/test-scripts/instance_list.sh
This step is optional and is primarily intended for BMaaS developers who will be requesting BM instances.
If you want baremetal-operator to pull OS images directly from an S3 bucket instead of deploying a dedicated HTTP server, you can enable the NGINX S3 Gateway.
To do this, modify your environment Helmfile to include the following section in regional services (or edit the default settings):
nginxS3Gateway:
enabled: true
s3_bucket_name: {{ env "NGINX_S3_GATEWAY_BUCKET" | default "catalog-fs-dev" }}
You can set the S3 bucket name used to pull images directly in the Helmfile, or override it with the NGINX_S3_GATEWAY_BUCKET environment variable before deployment.
The NGINX S3 Gateway Helm chart creates a Kubernetes NodePort service on port 31969. Next, configure baremetal-operator to use this service for pulling images. This can be accomplished by modifying the bmInstanceOperator configuration in your environment Helmfile.
bmInstanceOperator:
osHttpServerUrl: {{ env "OS_IMAGES_HTTP_URL" | default (printf "http://%s:31969" (requiredEnv "KIND_API_SERVER_ADDRESS")) }}
This configuration is already included in bmaas-flex-dev environment settings.
The last step is to provide the AWS credentials that the NGINX S3 Gateway will use for authentication. Save the AWS access key ID to local/secrets/NGINX_S3_GATEWAY_ACCESS_KEY_ID and the AWS secret key to local/secrets/NGINX_S3_GATEWAY_SECRET_KEY before triggering the deployment. These files will be used to populate the Vault secret.
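For example (the values below are placeholders; substitute your real credentials):

echo -n "AKIAXXXXXXXXXXXXXXXX" > local/secrets/NGINX_S3_GATEWAY_ACCESS_KEY_ID
echo -n "your-secret-access-key" > local/secrets/NGINX_S3_GATEWAY_SECRET_KEY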
This deployment sets up baremetal-operator running in kind, connected to a virtual baremetal stack, all on a single instance. BMaaS developers can use this setup to develop the BM Instance Operator.
baremetal-operator includes the following services:
- ironic
- ironic inspector
- ironic http and tftp servers, serving iPXE, iPXE profiles, and the ironic python agent
- dhcp server
It also includes a virtual baremetal stack (vBMC + qemu-kvm nodes). Run this deployment on a baremetal instance (reserve a baremetal instance in onecloud). Set the default BMC credentials using the DEFAULT_BMC_USERNAME and DEFAULT_BMC_PASSWD environment variables.
To enable access to the Ironic installer image, set SSH keys in Vault by setting the IPA_IMAGE_SSH_PRIV and IPA_IMAGE_SSH_PUB environment variables to the SSH key files. They default to /dev/null and will require manual updating in Vault if not set.
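A minimal sketch, assuming you generate a dedicated key pair for this purpose (the key path is hypothetical):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/ipa_image_key -N ""
export IPA_IMAGE_SSH_PRIV=~/.ssh/ipa_image_key
export IPA_IMAGE_SSH_PUB=~/.ssh/ipa_image_key.pub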
If this is a newly provisioned node, you might need the Intel certificates applied:
curl -LO --insecure -s https://ubit-artifactory-or.intel.com/artifactory/it-btrm-local/intel_cacerts/install_intel_cacerts_linux.sh
chmod +x install_intel_cacerts_linux.sh
sudo ./install_intel_cacerts_linux.sh
rm install_intel_cacerts_linux.sh
sudo passwd root
https_proxy="http://proxy-dmz.intel.com:912"
http_proxy="http://proxy-dmz.intel.com:912"
no_proxy="intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,127.0.0.0/8,134.134.0.0/16,172.16.0.0/16,192.168.150.0/24,.kind.local"
pushd idcs_domain/bmaas/bmvs/playbooks/roles/http_server/files
wget https://ubit-artifactory-or.intel.com/artifactory/intelcloudservices-or-local/images/ubuntu-22.04-server-cloudimg-amd64-latest.qcow2
wget https://ubit-artifactory-or.intel.com/artifactory/intelcloudservices-or-local/images/ubuntu-22.04-server-cloudimg-amd64-latest.qcow2.md5sum
popd
sudo apt install make gcc
make secrets
export DEFAULT_BMC_USERNAME=admin
export DEFAULT_BMC_PASSWD=password
export SSH_PROXY_IP=$(hostname -f)
export SSH_USER_PASSWORD=$(uuidgen)
sudo useradd -m -p $SSH_USER_PASSWORD guest-${USER}
sudo -u guest-${USER} mkdir /home/guest-${USER}/.ssh
sudo -u guest-${USER} cp local/secrets/ssh-proxy-operator/id_rsa.pub /home/guest-${USER}/.ssh/authorized_keys
sudo useradd -m -p $SSH_USER_PASSWORD bmo-${USER}
sudo -u bmo-${USER} mkdir /home/bmo-${USER}/.ssh
sudo -u bmo-${USER} cp local/secrets/bm-instance-operator/id_rsa.pub /home/bmo-${USER}/.ssh/authorized_keys
make install-requirements
export PATH=/home/${USER}/.local/bin:/usr/local/go/bin:$PATH
make install-interactive-tools
sudo iptables -I INPUT -p tcp -m tcp --dport 6443 -j ACCEPT
sudo iptables -I INPUT -p tcp -m tcp --dport 443 -j ACCEPT
sudo iptables -I INPUT -p tcp --match multiport --dports 8001,8002,8003,50001 -j ACCEPT
export IDC_ENV='kind-jenkins'
make deploy-metal-in-kind
NOTE: deploy-metal-in-kind creates the KIND cluster with the routable host interface IP as the API server address by default. Use the following command to create a KIND cluster with a specific interface IP address:
make deploy-metal-in-kind KIND_API_SERVER_ADDRESS=<IP address>
make deploy-metal-in-kind KIND_API_SERVER_ADDRESS=127.0.0.1
NOTE: For billing Aria driver deployment, `make secrets` will create four files under local/secrets: 1) aria_auth_key 2) aria_client_no 3) aria_api_crt 4) aria_api_key
- Update each of these files with the value obtained from your supervisor (the ********* placeholders stand for those values):
`echo "*********" > local/secrets/aria_auth_key`
`echo "*********" > local/secrets/aria_client_no`
`echo "*********" > local/secrets/aria_api_crt`
`echo "*********" > local/secrets/aria_api_key`
export no_proxy=${no_proxy},.kind.local
export URL_PREFIX=http://dev.oidc.cloud.intel.com.kind.local
export TOKEN=$(curl "${URL_PREFIX}/token?email=admin@intel.com&groups=IDC.Admin")
echo ${TOKEN}
export URL_PREFIX=https://dev.compute.us-dev-1.api.cloud.intel.com.kind.local
export CLOUDACCOUNTNAME=${USER}@intel.com
go/svc/cloudaccount/test-scripts/cloud_account_create.sh
export CLOUDACCOUNT=$(go/svc/cloudaccount/test-scripts/cloud_account_get_by_name.sh | jq -r .id)
echo $CLOUDACCOUNT
export AZONE=us-dev-1b
export VNETNAME=us-dev-1b-metal
go/svc/compute_api_server/test-scripts/vnet_create_with_name.sh
export NAME=my-metal-instance-1
export INSTANCE_TYPE=bm-virtual
export MACHINE_IMAGE=ubuntu-22.04-server-cloudimg-amd64-latest
go/svc/compute_api_server/test-scripts/sshpublickey_create_with_name.sh
go/svc/compute_api_server/test-scripts/instance_create_with_name.sh
go/svc/compute_api_server/test-scripts/instance_list.sh
go/svc/compute_api_server/test-scripts/instance_get_status.sh
ssh -J guest-${USER}@$(hostname -f) sdp@172.18.10.x
go/svc/compute_api_server/test-scripts/instance_delete_by_name.sh
export LB_MONITOR=tcp
export LB_PORT=8080
export NAME=mylb1
go/svc/compute_api_server/test-scripts/loadbalancer_create_with_name.sh
go/svc/compute_api_server/test-scripts/loadbalancer_list.sh
go/svc/compute_api_server/test-scripts/loadbalancer_delete_by_name.sh
For some testing, it may be important to deploy a separate kind cluster for global and regional services.
This uses the original (v1) version of make deploy-all-in-kind.
Deploy in kind.
To test VMaaS only, with a multicluster (1 region) kind environment:
export IDC_ENV=kind-multicluster
make show-config
make deploy-all-in-kind |& ts | ts -i | ts -s | tee local/deploy-all-in-kind-multicluster.log
To test VMaaS only, with a 2-region kind environment:
export IDC_ENV=kind-2regions
make show-config
make deploy-all-in-kind |& ts | ts -i | ts -s | tee local/deploy-all-in-kind-2regions.log
Check for pods that are not healthy.
watch 'kind get clusters | grep idc | xargs -i kubectl --context kind-{} get pods -A | egrep -v "NAMESPACE|Running|Completed"'
You may view all pods with the following command.
kind get clusters | grep idc | xargs -i kubectl --context kind-{} get pods -A
To create an instance in a different region, run:
export REGION=us-dev-2
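For example, to create a VNet and instance in the second region (the availability zone and VNet names below are assumptions patterned on the us-dev-1 examples above; adjust to your environment):

export AZONE=us-dev-2a
export VNETNAME=us-dev-2a-default
go/svc/compute_api_server/test-scripts/vnet_create_with_name.sh
go/svc/compute_api_server/test-scripts/instance_create_with_name.sh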
Mark one or more tests with Focus as shown below. See https://onsi.github.io/ginkgo/#focused-specs for details.
It("should work", Focus, func() {
...
})
Run the test suite with maximum verbosity.
BAZEL_EXTRA_OPTS="--test_output=streamed --test_arg=-test.v --test_arg=-ginkgo.vv --test_env=ZAP_LOG_LEVEL=-127 //go/pkg/compute_integration_test/..." make test-custom
Tests that have external dependencies that are not widely available to all users should be excluded from make test.
Entire Go test suites can be excluded by adding tags = ["manual"] to the go_test() definition in the BUILD.bazel file.
Such a test suite can be executed manually with the command below.
BAZEL_EXTRA_OPTS="--test_output=streamed //go/pkg/compute_integration_test/..." make test-custom
export VAULT_ADDR=http://localhost:30990/
export VAULT_TOKEN=$(cat local/secrets/VAULT_TOKEN)
vault secrets list
Edit the hosts file on the laptop running your browser (C:\Windows\System32\drivers\etc\hosts). It should have the line from deployment/common/etc-hosts/hosts, but with the IP address pointing to the host running kind.
For token generation, visit:
For invoking global APIs (grpc-rest-gateway), visit:
For invoking regional APIs (grpc-rest-gateway), visit:
Other URLs:
Use VS Code to forward port 30990 to localhost:30990. Then visit:
Login using the Vault token in local/secrets/VAULT_ROOT_KEY.
Argo CD is deployed with make deploy-all-in-kind-v2.
Get the admin password:
ARGOCD_PASSWORD=$(kubectl get secret -n argocd argocd-initial-admin-secret -o go-template='{{.data.password | base64decode}}')
echo ${ARGOCD_PASSWORD}
Use VS Code to forward port 30960 to localhost:30960. Then visit:
The recommended way of monitoring and controlling Argo CD is through the Kubernetes CRDs such as Applications and ApplicationSets. If you need to use the Argo CD CLI, follow the steps below.
export ARGOCD_SERVER=localhost:30960
export ARGOCD_OPTS="--plaintext"
ARGOCD_PASSWORD=$(kubectl get secret -n argocd argocd-initial-admin-secret -o go-template='{{.data.password | base64decode}}')
argocd login ${ARGOCD_SERVER} --username admin --password "${ARGOCD_PASSWORD}"
argocd app list
For more information, see deployment/argocd/README.md.
Gitea provides a GitHub-like environment locally. It is deployed with make deploy-all-in-kind-v2.
Get the gitea_admin password:
GITEA_ADMIN_PASSWORD=$(cat local/secrets/gitea_admin_password)
echo ${GITEA_ADMIN_PASSWORD}
Use VS Code to forward port 30965 to localhost:30965. Then visit:
Issue: E1010 08:57:43.772801 78004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get "https://10.98.58.75:6443/api?timeout=32s": Forbidden"
Remedy: Check that your no_proxy environment variable is set correctly. It should include the IP address of this node.
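For example (the IP below is taken from the example error message; use your cluster's API server address):

export no_proxy=${no_proxy},10.98.58.75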
Issue: github.com/google/cel-go/interpreter: github.com/aws/aws-sdk-go-v2@v1.30.4: Get "https://proxy.golang.org/github.com/aws/aws-sdk-go-v2/@v/v1.30.4.mod": tls: failed to verify certificate: x509: certificate signed by unknown authority
Remedy: sudo apt install ca-certificates && sudo update-ca-certificates