Commit cdefcfd1 authored by Szymon Zimnowoda's avatar Szymon Zimnowoda

Sz/local k8s

parent f4c0c705
Showing with 359 additions and 1 deletion
See documentation on:
* How are data types defined in [Schema](./docs/Schema.md)
* [Schema synchronization](./docs/Synchronization.md) between clients/plugins and the Pod
* Performance [Measurement](./docs/PerformanceMeasurement.md)
* Running local [Kubernetes setup](./docs/local_k8s.md)
## MSRV (Minimum Supported Rust Version)
Rust must be at least version `1.64.0`. However, we encourage using the latest available version.
# Locust
[Locust](https://docs.locust.io/en/stable/index.html) allows us to simulate consumer load on the POD.
Under the `pod/tools/perf_testing` directory there is a `locustfile.py` that contains 2 clients: one simulating Frontend operation, the second simulating Plugin behavior. Each simulated POD owner has those 2 clients (think of it as one POD owner having one tab open in the browser plus one running plugin). In order to run it:
* [Optional] Set the `POD_DB_DIR` env variable to point to the directory where databases reside.
* Set the number of Locust `--users` to 2 × the number of POD owners you want. The Locust naming here is really unfortunate, and it's easy to confuse Locust users with POD users.
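The `--users` arithmetic can be sketched in shell (the owner count below is a hypothetical example):

```shell
# Hypothetical example: simulate 5 POD owners.
POD_OWNERS=5
# Each owner gets 2 Locust users: one Frontend client and one Plugin client.
LOCUST_USERS=$((2 * POD_OWNERS))
echo "locust --users $LOCUST_USERS"
```

So simulating 5 POD owners means passing `--users 10` to Locust.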
# Kubernetes glossary
* **Cluster** A set of worker machines, called [nodes](https://kubernetes.io/docs/concepts/architecture/nodes/), that run containerized applications. Every cluster has at least one worker node.
* **Node** A node may be a virtual or physical machine, depending on the cluster. Each node is managed by the [control plane](https://kubernetes.io/docs/reference/glossary/?all=true#term-control-plane) and contains the services necessary to run [Pods](https://kubernetes.io/docs/concepts/workloads/pods/). Typically you have several nodes in a cluster; in a learning or resource-limited environment, you might have only one node.
The [components](https://kubernetes.io/docs/concepts/overview/components/#node-components) on a node include the [kubelet](https://kubernetes.io/docs/reference/generated/kubelet), a [container runtime](https://kubernetes.io/docs/setup/production-environment/container-runtimes), and the [kube-proxy](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/)
* **POD** A _Pod_ (as in a pod of whales or pea pod) is a group of one or more [containers](https://kubernetes.io/docs/concepts/containers/), with shared storage and network resources, and a specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled. In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host.
# Kubernetes infrastructure at your home
The following chapters describe how to set up a full k8s environment on your local machine. It can be very useful for testing Kubernetes-specific features of the `POD` or plugins, or for bug reproduction. Having everything set up locally significantly shortens the feedback loop.
To set up Kubernetes we will use [minikube](https://minikube.sigs.k8s.io/docs/start/).
To view the cluster state, use [Lens](https://k8slens.dev/).
## Start minikube
`minikube start`
## Enable DNS
That allows accessing k8s services using local URLs. Follow the steps described [here](https://minikube.sigs.k8s.io/docs/handbook/addons/ingress-dns/).
## Deploying a Docker image to the minikube
Minikube manages its own instance of Docker, with a separate image registry. To build images for minikube we need to use that registry, which can be done by calling:
* `eval $(minikube docker-env)`
Now this shell can be used to build Docker images via the `docker build ...args` command.
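For reference, `minikube docker-env` prints a handful of exports that point the Docker CLI at minikube's own daemon. Roughly (the IP and paths below are illustrative; the real values come from your minikube instance):

```shell
# Illustrative only: actual values are produced by `minikube docker-env`.
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.49.2:2376"   # minikube node IP, varies per machine
export DOCKER_CERT_PATH="$HOME/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="minikube"
echo "$DOCKER_HOST"
```

With `DOCKER_HOST` pointing at minikube, any `docker build` in this shell lands in minikube's registry instead of the host's.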
## Build plugin-tool-deployment
The `plugin-tool-deployment` container monitors the cluster for new plugins and applies ingress configuration to them on startup. That makes them reachable via local URLs.
Build the container image in the `minikube` [registry](#deploying-a-docker-image-to-the-minikube):
* `cd tools/local_k8s/plugin-tool-deployment/`
* `docker build -t plugin-tool-create-deployment -f Dockerfile_create .`
The image will be later used in `pod_deploy.yaml` configuration.
## Build the local POD into a container
This builds a container from your local version of the POD, keeping the fix-and-check loop quick.
NOTE: all commands have to be executed from the same shell.
* Build the image in the `minikube` image [registry](#deploying-a-docker-image-to-the-minikube).
Note `use_kubernetes=true`, which adds the `kubectl` executable to the container:
`docker build -t pod-for-k8s --build-arg use_kubernetes=true .`
* Deploy POD to the minikube
`kubectl apply -f tools/local_k8s/pod_deploy.yml`
* Use the DNS name `http://pod.test` to access the POD.
## Pull image to k8s
Fetching large containers with minikube can be a problem; it [returns an error after some time](https://github.com/kubernetes/minikube/issues/14806).
The solution is to manually pull the image first, for example:
`minikube ssh docker pull gitlab.memri.io:5050/szimnowoda/plugin_to_trigger:main-latest`
## Deploying plugins in k8s
* Note: HTTPS ingress is not yet implemented for the local k8s setup. The POD needs to be patched accordingly:
```rust
/// Figure out the correct URL depending on the environment the Plugin will be run on.
pub fn resolve_webserver_url(item: &CreateItem, cli: &CliOptions) -> Result<String> {
...
let webserver_url = if cli.use_kubernetes {
// Kubernetes
if let Some(base) = &cli.plugins_public_domain {
// format!("https://{}.{}", container_id, base) CHANGE HTTPS TO HTTP
format!("http://{}.{}", container_id, base)
} ...
```
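To illustrate the resulting address (the container id and domain below are hypothetical), the `format!("http://{}.{}", container_id, base)` call above produces:

```shell
CONTAINER_ID="c0000000000-example-4b6ac9a0"   # hypothetical container id
PLUGINS_PUBLIC_DOMAIN="test"                  # value of --plugins-public-domain
# Mirrors format!("http://{}.{}", container_id, base) from the Rust snippet.
WEBSERVER_URL="http://$CONTAINER_ID.$PLUGINS_PUBLIC_DOMAIN"
echo "$WEBSERVER_URL"
```

The ingress rules created by `plugin-tool-deployment` are what make such a `<container>.<domain>` host resolve to the plugin's webserver.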
* You can deploy and start a plugin using the [POD HTTP API](./docs/HTTP_API.md).
* When a plugin starts, the `plugin-tool-deployment` container will patch the ingress configuration using `attach-services.sh`, exposing the plugin webserver via a local URL.
* With all of the above steps done correctly, the POD and plugins are reachable via their URLs.
The POD is able to communicate with the plugins and vice versa; for example, item triggering works.
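For illustration, after `attach-services.sh` substitutes the placeholders in `ingress-patch.json` (assuming a hypothetical plugin pod named `my-plugin` and the default stage name `test`), the patch applied to the `example-ingress` resource would read:

```json
[{
    "op": "add",
    "path": "/spec/rules/-",
    "value": {
        "host": "my-plugin.test",
        "http": {
            "paths": [{
                "backend": {
                    "service": {
                        "name": "my-plugin",
                        "port": { "number": 80 }
                    }
                },
                "path": "/",
                "pathType": "Prefix"
            }]
        }
    }
}]
```

The appended rule routes `http://my-plugin.test` to the plugin's Service on port 80.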
## Side notes
## Kubectl command to run a plugin, as used by the POD
```
kubectl run --restart=Never c0000000000-gitlabmemriio5050szi-4fa6094c51d64874a135-4b6ac9a0 '--labels=app=c0000000000-gitlabmemriio5050szi-4fa6094c51d64874a135-4b6ac9a0,type=plugin,owner=10eab6008d5642cf42abd2aa41f847cb' --port=8080 --image-pull-policy=Always '--image=gitlab.memri.io:5050/szimnowoda/plugin_to_trigger:main-latest' '--overrides={"apiVersion":"v1","kind":"Pod","spec":{"containers":[{"env":[{"name":"POD_FULL_ADDRESS","value":"http://172.17.0.5:3030"},{"name":"POD_TARGET_ITEM","value":"{\"containerId\":\"c0000000000-gitlabmemriio5050szi-4fa6094c51d64874a135-4b6ac9a0\",\"containerImage\":\"gitlab.memri.io:5050/szimnowoda/plugin_to_trigger:main-latest\",\"dateCreated\":1665349205938,\"dateModified\":1665349205938,\"dateServerModified\":1665349205938,\"deleted\":false,\"id\":\"4fa6094c-51d6-4874-a135-9017fcdedf18\",\"pluginModule\":\"plugin_to_trigger.plugin\",\"pluginName\":\"ClassifierPlugin\",\"rowid\":870,\"status\":\"idle\",\"targetItemId\":\"4fa6094c-51d6-4874-a135-9017fcdedf18\",\"type\":\"PluginRun\",\"webserverPort\":8008,\"webserverUrl\":\"http://localhost\"}"},{"name":"POD_PLUGINRUN_ID","value":"4fa6094c-51d6-4874-a135-9017fcdedf18"},{"name":"POD_OWNER","value":"0000000000000000000000000000000000000000000000000000000000000000"},{"name":"POD_AUTH_JSON","value":"{\"data\":{\"nonce\":\"72c936beb1a14bc7be48d8a4bf3c9416922fbc8f55c9abdb\",\"encryptedPermissions\":\"b33fe2b4514380eb2ed78bd070d6ddbc5027484879a4d08f5e04bf5a82cfae6bfe228a3b977a26c43f0fa4b191b3bd17\"}}"},{"name":"PLUGIN_DNS","value":"http://localhost"},{"name":"PYTHONUNBUFFERED","value":"1"}],"image":"gitlab.memri.io:5050/szimnowoda/plugin_to_trigger:main-latest","imagePullPolicy":"Always","name":"c0000000000-gitlabmemriio5050szi-4fa6094c51d64874a135-4b6ac9a0","ports":[{"containerPort":8080,"protocol":"TCP"}],"resources":{"limits":{"cpu":"2","memory":"3.5Gi"},"requests":{"cpu":"1","memory":"2Gi"}}}]}}'
```
FROM ubuntu:latest
RUN apt-get update && apt-get install -y curl jq
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/"$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)"/bin/linux/amd64/kubectl \
&& chmod +x ./kubectl \
&& mv ./kubectl /usr/local/bin/kubectl
COPY attach-services.sh attach-services.sh
COPY ingress-patch.json ingress-patch.json
COPY plugin-service.yaml plugin-service.yaml
CMD ["/bin/bash", "-c", "while true; do /attach-services.sh; sleep 1; done"]
# TODO: add this to the ingress patch when SSL is set up
# ,
# {
# "op": "add",
# "path": "/spec/tls/0/hosts/-",
# "value": "replace_unique_name.replace_stage_name"
# }
FROM ubuntu:latest
RUN apt-get update && apt-get install -y curl jq
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/"$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)"/bin/linux/amd64/kubectl \
&& chmod +x ./kubectl \
&& mv ./kubectl /usr/local/bin/kubectl
COPY delete-services.sh delete-services.sh
COPY ingress-patch.json ingress-patch.json
COPY plugin-service.yaml plugin-service.yaml
# TODO: dockerfile not used for now
# CMD ["/bin/bash", "-c", "while true; do /delete-services.sh; sleep 1; done"]
#!/bin/bash
NAMESPACE=default
PORT=8080
STAGE_NAME=test

# Gather names of pending and running plugin pods into one array
# (jq emits the names as quoted JSON strings).
PLUGINS_PODS=($(kubectl get pods -l type=plugin --field-selector=status.phase=Pending -n $NAMESPACE -o json | jq '.items[].metadata.name'))
PLUGINS_RUNNING_PODS=($(kubectl get pods -l type=plugin --field-selector=status.phase=Running -n $NAMESPACE -o json | jq '.items[].metadata.name'))
PLUGINS_PODS+=("${PLUGINS_RUNNING_PODS[@]}")

for plugin_name in "${PLUGINS_PODS[@]}"
do
    # Strip the surrounding quotes that jq leaves on the name.
    PLUGIN_NAME_ALTERED=$(echo "$plugin_name" | tr -d '"')
    IS_SERVICE_EXISTS=$(kubectl get service --field-selector=metadata.name="$PLUGIN_NAME_ALTERED" -n $NAMESPACE -o json | jq '.items | length')
    if [ "$IS_SERVICE_EXISTS" -eq 0 ]; then
        echo "service {$PLUGIN_NAME_ALTERED} is not available"
        mkdir -p "$PLUGIN_NAME_ALTERED"
        cp plugin-service.yaml "$PLUGIN_NAME_ALTERED/$PLUGIN_NAME_ALTERED-service.yaml"
        cp ingress-patch.json "$PLUGIN_NAME_ALTERED/$PLUGIN_NAME_ALTERED-ingress-patch.json"
        # cp ingress-ssl-patch.json "$PLUGIN_NAME_ALTERED/$PLUGIN_NAME_ALTERED-ingress-ssl-patch.json"
        sed -i "s/replace_unique_name/$PLUGIN_NAME_ALTERED/g" "$PLUGIN_NAME_ALTERED/$PLUGIN_NAME_ALTERED-service.yaml"
        sed -i "s/replace_namespace/$NAMESPACE/g" "$PLUGIN_NAME_ALTERED/$PLUGIN_NAME_ALTERED-service.yaml"
        sed -i "s/replace_port/$PORT/g" "$PLUGIN_NAME_ALTERED/$PLUGIN_NAME_ALTERED-service.yaml"
        sed -i "s/replace_unique_name/$PLUGIN_NAME_ALTERED/g" "$PLUGIN_NAME_ALTERED/$PLUGIN_NAME_ALTERED-ingress-patch.json"
        sed -i "s/replace_stage_name/$STAGE_NAME/g" "$PLUGIN_NAME_ALTERED/$PLUGIN_NAME_ALTERED-ingress-patch.json"
        # sed -i "s/replace_unique_name/$PLUGIN_NAME_ALTERED/g" "$PLUGIN_NAME_ALTERED/$PLUGIN_NAME_ALTERED-ingress-ssl-patch.json"
        # sed -i "s/replace_stage_name/$STAGE_NAME/g" "$PLUGIN_NAME_ALTERED/$PLUGIN_NAME_ALTERED-ingress-ssl-patch.json"
        kubectl -n $NAMESPACE patch ingress example-ingress --type "json" -p "$(cat "$PLUGIN_NAME_ALTERED/$PLUGIN_NAME_ALTERED-ingress-patch.json")"
        # kubectl -n $NAMESPACE patch ingress hello-kubernetes-ingress --type "json" -p "$(cat "$PLUGIN_NAME_ALTERED/$PLUGIN_NAME_ALTERED-ingress-ssl-patch.json")"
        kubectl apply -f "$PLUGIN_NAME_ALTERED/$PLUGIN_NAME_ALTERED-service.yaml"
    fi
done
echo "all plugin pods have services configured."
#!/bin/bash
NAMESPACE=default
STAGE_NAME=test

# Plugin pods that are no longer running (Pending, Succeeded, Failed, ...).
PLUGINS_PODS_COMPLETED=($(kubectl get pods -l type=plugin --field-selector=status.phase!=Running -n $NAMESPACE -o json | jq '.items[].metadata.name'))

for plugin_name in "${PLUGINS_PODS_COMPLETED[@]}"
do
    # Strip the surrounding quotes that jq leaves on the name.
    PLUGIN_NAME_ALTERED=$(echo "$plugin_name" | tr -d '"')
    IS_SERVICE_EXISTS=$(kubectl get service --field-selector=metadata.name="$PLUGIN_NAME_ALTERED" -n $NAMESPACE -o json | jq '.items | length')
    if [ "$IS_SERVICE_EXISTS" -eq 1 ]; then
        POD_HOST='"'$PLUGIN_NAME_ALTERED.$STAGE_NAME'"'
        INDEX=$(kubectl get ingress example-ingress -n $NAMESPACE -o json | jq '.spec.rules | map(.host == '$POD_HOST') | index(true)')
        if [ "$INDEX" != "null" ]; then
            echo "Deleting $PLUGIN_NAME_ALTERED from ingress"
            kubectl patch ingress example-ingress --type=json -p="[{'op': 'remove', 'path': '/spec/rules/$INDEX'}]" -n $NAMESPACE
        fi
        SSL_INDEX=$(kubectl get ingress example-ingress -n $NAMESPACE -o json | jq '.spec.tls[0].hosts | index('$POD_HOST')')
        if [ "$SSL_INDEX" != "null" ]; then
            echo "Deleting $PLUGIN_NAME_ALTERED from SSL Ingress"
            kubectl patch ingress example-ingress --type=json -p="[{'op': 'remove', 'path': '/spec/tls/0/hosts/$SSL_INDEX'}]" -n $NAMESPACE
        fi
        echo "Deleting service {$PLUGIN_NAME_ALTERED}"
        kubectl delete svc "$PLUGIN_NAME_ALTERED" -n $NAMESPACE
    fi
done
echo "Services and ingress are cleared."
[{
    "op": "add",
    "path": "/spec/rules/-",
    "value": {
        "host": "replace_unique_name.replace_stage_name",
        "http": {
            "paths": [{
                "backend": {
                    "service": {
                        "name": "replace_unique_name",
                        "port": {
                            "number": 80
                        }
                    }
                },
                "path": "/",
                "pathType": "Prefix"
            }]
        }
    }
}]
apiVersion: v1
kind: Service
metadata:
  name: replace_unique_name
  namespace: replace_namespace
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: replace_port
  selector:
    app: replace_unique_name
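For illustration, with a hypothetical plugin name `my-plugin`, the `default` namespace, and port `8080`, `attach-services.sh` renders this template to:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-plugin
  namespace: default
spec:
  type: ClusterIP
  ports:
    - port: 80            # port exposed to the ingress
      targetPort: 8080    # plugin webserver port inside the pod
  selector:
    app: my-plugin
```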
# Deploy the Memri POD
apiVersion: v1
kind: Pod
metadata:
  name: pod-backend
  labels:
    app: pod-backend-app
spec:
  containers:
    - name: pod-backend
      # image: gitlab.memri.io:5050/memri/pod:dev-latest
      image: pod-for-k8s
      imagePullPolicy: Never
      command:
        - /pod
        - '--owners=ANY'
        - '--use-kubernetes=true'
        - '--insecure-non-tls=0.0.0.0'
        - '--plugins-callback-address=http://pod.test'
        # - '--plugins-docker-network=pod_memri-net'
        # - '--tls-pub-crt'
        # - '--tls-priv-key'
        - '--plugins-public-domain=test'
      ports:
        - name: http
          containerPort: 3030
          protocol: TCP
      env:
        - name: RUST_LOG
          value: pod=debug,info
      resources: {}
      # volumeMounts:
      #   - name: data
      #     mountPath: /data/
---
# Deploy the plugin-tool-deployment, a service that monitors for upcoming plugins and patches
# their network configuration in order to access them via URLs
apiVersion: v1
kind: Pod
metadata:
  name: plugin-tool-deployment
  labels:
    app: plugin-tool-deployment-app
spec:
  containers:
    - name: plugin-tool-create-deployment
      image: plugin-tool-create-deployment
      imagePullPolicy: Never
---
# The `default` user that runs the k8s POD needs special privileges to use the `kubectl`
# command inside the container, so create a role:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-backend
  namespace: default
rules:
  - verbs:
      - create
      - get
      - watch
      - list
      - patch
      - delete
    apiGroups:
      - ''
      - networking.k8s.io
    resources:
      - pods
      - pods/log
      - services
      - ingresses
# Bind the `default` user to the `Role`
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-backend-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-backend
# Create a POD service, accessed via port 80
---
apiVersion: v1
kind: Service
metadata:
  name: pod-backend-service
  namespace: default
spec:
  ports:
    - port: 80
      targetPort: 3030
      protocol: TCP
  type: NodePort
  selector:
    app: pod-backend-app
# Resolve the pod.test URL to the pod-backend-service service
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: default
spec:
  ingressClassName: nginx
  rules:
    - host: pod.test
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: pod-backend-service
                port:
                  number: 80