
In this guide, we will set up a simple web app composed of a few components.

The overall layout of the app is shown below.

The app serves as a proxy for HiPS (Hierarchical Progressive Surveys): it generates JPG files on the fly by fetching FITS tiles from the original HiPS site.

It caches the FITS files on a MinIO server, which is not part of this deployment; the app uses the MinIO server at KASI.

There will be web servers (we will use 3) which receive traffic from the client (web browser). The app involves some blocking tasks, which we delegate to task workers. The task workers are implemented with the Celery task queue.

Thus, we will deploy:

  1. RabbitMQ: the message queue used by the Celery tasks.
  2. Celery workers.
  3. Web servers, which delegate some of their tasks to the Celery workers.

Create a namespace.

We will deploy the app in a custom namespace called "hips-sanic" (we use k as an alias for kubectl).

> k create ns hips-sanic
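
To confirm the namespace was created:

> k get ns hips-sanic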


Deploy RabbitMQ


deployment_rabbitmq.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3-alpine
        ports:
        - containerPort: 5672


> k apply -f deployment_rabbitmq.yaml -n hips-sanic
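
Wait for the pod to come up:

> k -n hips-sanic rollout status deployment/rabbitmq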


service_rabbitmq.yaml
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    app: rabbitmq
  ports:
  - port: 5672
  


> k -n hips-sanic apply -f service_rabbitmq.yaml


Note that this creates a ClusterIP service, and the Kubernetes internal DNS will register the name "rabbitmq" for it. Thus, any app (or pod) within the same namespace can reach RabbitMQ using the hostname "rabbitmq". The pods we will deploy below use "rabbitmq" as the default hostname for the RabbitMQ server, so no change is required on the other pods.
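
If you want to check the DNS entry yourself, a quick way is to run a throwaway pod in the namespace (assuming the busybox image is pullable from your cluster):

> k -n hips-sanic run dns-test --rm -it --image=busybox --restart=Never -- nslookup rabbitmq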


Deploy Web Servers and Workers


deployment_hips_sanic.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hipsweb-sanic
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hipsweb-sanic
  template:
    metadata:
      labels:
        app: hipsweb-sanic
    spec:
      containers:
      - name: hipsweb-sanic
        image: registry.kasi.re.kr/hipsweb/hipsweb_sanic


We first deploy the web servers (with 3 pods):
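
> k -n hips-sanic apply -f deployment_hips_sanic.yaml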


deployment_minio_cache_worker.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio-cache-worker
spec:
  replicas: 3
  selector:
    matchLabels:
      app: minio-cache-worker
  template:
    metadata:
      labels:
        app: minio-cache-worker
    spec:
      containers:
      - name: minio-cache-worker
        image: registry.kasi.re.kr/hipsweb/minio_cache_worker:latest


The above deployment is for the Celery workers. Apply it as well:

> k -n hips-sanic apply -f deployment_minio_cache_worker.yaml


As a result, you will have 3 server pods and 3 worker pods.
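
You can check with:

> k -n hips-sanic get pods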


Since the server and worker pods communicate through RabbitMQ (via the hostname "rabbitmq"), the internal networking is all set.

But we still need to set up networking for the incoming traffic. We will create a NodePort service for the web servers. Since the web server we deployed listens on port 8000, we set a targetPort of 8000 for our service.


service-hipswebsanic.yaml
apiVersion: v1
kind: Service
metadata:
  name: hipsweb-sanic
spec:
  type: NodePort
  selector:
    app: hipsweb-sanic
  ports:
  - port: 80
    targetPort: 8000
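
Apply the service:

> k -n hips-sanic apply -f service-hipswebsanic.yaml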



Note that a NodePort service is created with a port number of 30714 (your port number may be different).
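
You can read the assigned node port from the service:

> k -n hips-sanic get svc hipsweb-sanic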

Instead of creating an Ingress object, let's manually adjust OpenStack's LoadBalancer object.


Create a pool with the worker nodes as members and a port of "30714".

Create a new L7 policy that redirects to the pool you created.

Create a new L7 rule that matches the hostname "hipsweb.spherex.gems0.org". This will be the hostname the client uses to access the service. A CLI sketch of these steps is shown below.
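
If you use the OpenStack CLI rather than the dashboard, the steps above look roughly like the following sketch. The listener name, subnet, and node addresses (in angle brackets) are placeholders for your environment, not values from this deployment:

# Create a pool on the existing HTTP listener.
> openstack loadbalancer pool create --name hipsweb-pool \
    --listener <listener-name> --protocol HTTP --lb-algorithm ROUND_ROBIN

# Add each worker node as a member, using the NodePort.
> openstack loadbalancer member create --subnet-id <node-subnet> \
    --address <worker-node-ip> --protocol-port 30714 hipsweb-pool

# L7 policy that redirects matching traffic to the pool.
> openstack loadbalancer l7policy create --name hipsweb-policy \
    --action REDIRECT_TO_POOL --redirect-pool hipsweb-pool <listener-name>

# L7 rule that matches the hostname.
> openstack loadbalancer l7rule create --type HOST_NAME \
    --compare-type EQUAL_TO --value hipsweb.spherex.gems0.org hipsweb-policy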



Now, we will deploy a website which uses the web service we created above. The website, contained in the "igrins-jj-site" image, accesses "hipsweb.spherex.gems0.org" to fetch the HiPS JPG files created on the fly.


deployment_igrins_jj.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: igrins-jj-site
spec:
  replicas: 1
  selector:
    matchLabels:
      app: igrins-jj-site
  template:
    metadata:
      labels:
        app: igrins-jj-site
    spec:
      containers:
      - name: igrins-jj-site
        image: registry.kasi.re.kr/hipsweb/igrins-jj-site
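
Apply the deployment:

> k -n hips-sanic apply -f deployment_igrins_jj.yaml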


And the NodePort service:

service-igrins-jj.yaml
apiVersion: v1
kind: Service
metadata:
  name: igrins-jj-site
spec:
  type: NodePort
  selector:
    app: igrins-jj-site
  ports:
  - port: 80
    targetPort: 80
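
Apply it as well:

> k -n hips-sanic apply -f service-igrins-jj.yaml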


Again, manually configure the load balancer for the site.


Create a pool with the worker nodes as members and a port of "30596", which is the node port for the igrins-jj site.

Create a new L7 policy that redirects to the pool you created.

Create a new L7 rule that matches the hostname "igrins-jj.spherex.gems0.org". The same CLI pattern as above applies.


All is done. From the web browser, go to, for example, "http://igrins-jj.spherex.gems0.org/fov/hjst?ra=83.81883450742724&dec=-5.389784460966834&pa=0".
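
You can also sanity-check the endpoint from the command line (assuming the hostname resolves to the load balancer):

> curl -sS -o /dev/null -w "%{http_code}\n" "http://igrins-jj.spherex.gems0.org/fov/hjst?ra=83.81883450742724&dec=-5.389784460966834&pa=0"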


