...
Select the cluster in Cluster Infra → KASI Clusters and click Delete Stacks. If some VM nodes are not erased cleanly, delete the VMs by following the instructions given in KASI Science Cloud : VM instances.
FAQ
What accounts are available? Which account should I use?
When the cluster VM instances are ready, every cluster node has a root account and an ubuntu account.

1) The root account permits web-console login with the password provided by the user, so use this account to log in via the web console. Password-less SSH access to all nodes with the root account is already configured when the cluster is made ready. However, SSH login with the root account from outside the cloud network is not possible unless you manually change the setup.

2) The ubuntu account is sudo-enabled, but console login with the ubuntu account is not allowed when the cluster is made. If you would like to allow the ubuntu account to log in via the web console, you need to set a password for the ubuntu account as described in "Changing password of ubuntu account and preparing key-based ssh login environment in multiple VM nodes" of the useful tips. Because the user-provided public key is already included in the authorized keys of the ubuntu account, SSH access with the matching private key is possible. However, password-less SSH access among the cluster nodes is not ready for the ubuntu account when the cluster is made. To enable it, you may need to set up the environment yourself as shown in the same tip (see the sketch after this list).

3) You can create new accounts yourself because you have root permission.
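As a rough illustration of what that setup involves, the following sketch shares one key pair among all nodes so the ubuntu account on any node can reach any other. It assumes you run it as root on the master node, that root's password-less SSH (described above) already works, and that the node names cluster-node1..3 are hypothetical placeholders for your cluster's host names:

    # Generate one key pair for the ubuntu account on the master node.
    sudo -u ubuntu ssh-keygen -t ed25519 -N "" -f /home/ubuntu/.ssh/id_ed25519

    # Hypothetical node names; replace with your cluster's host names.
    NODES="cluster-node1 cluster-node2 cluster-node3"

    # Copy the key pair to every node and authorize the public key there,
    # so the ubuntu account on any node can SSH to any other node.
    for node in $NODES; do
        ssh root@"$node" 'install -d -m 700 -o ubuntu -g ubuntu /home/ubuntu/.ssh'
        scp -p /home/ubuntu/.ssh/id_ed25519{,.pub} root@"$node":/home/ubuntu/.ssh/
        ssh root@"$node" 'cat /home/ubuntu/.ssh/id_ed25519.pub >> /home/ubuntu/.ssh/authorized_keys
            chown ubuntu:ubuntu /home/ubuntu/.ssh/id_ed25519* /home/ubuntu/.ssh/authorized_keys
            chmod 600 /home/ubuntu/.ssh/id_ed25519 /home/ubuntu/.ssh/authorized_keys'
    done

Note that the first connection between two nodes will still prompt to confirm the host key unless you pre-populate known_hosts or pass an option such as -o StrictHostKeyChecking=accept-new.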
Because the NFS partition can have permission issues when accessed from multiple VM cluster nodes, be careful when choosing the account that runs your applications and when setting file/directory permissions (in particular, for the NFS directory and the files in it). If you do not need SSH access from external machines and have no security concerns about the root account, using the root account is probably the easiest way to avoid problems such as file/directory permissions. If you need SSH access from external machines, you may need to change the SSH-related configuration for the root account. If you decide to use the ubuntu account, with the accompanying effort of changing the owner and permissions of files and directories, you may want to use the script introduced in "Changing password of ubuntu account and preparing key-based ssh login environment in multiple VM nodes" of the useful tips.
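For example, if you settle on the ubuntu account, a one-time ownership change on the shared partition might look like the sketch below; /nfs is a hypothetical placeholder for your NFS mount point, and because the partition is shared, running this once on the master node is enough:

    # Hand the shared NFS partition to the ubuntu account (run on the
    # master node; /nfs is a placeholder for your NFS mount point).
    sudo chown -R ubuntu:ubuntu /nfs
    # u+rwX gives the owner read/write everywhere, and execute only on
    # directories and files that are already executable.
    sudo chmod -R u+rwX /nfs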
Building a custom environment every time a new cluster is made is painful. Are there better ways?
It is recommended to keep some simple scripts that make preparing your custom environment easy and fast. You can write your own scripts for installing apt packages and creating conda environments by following the guide given in the useful tips. If you keep the scripts on the provided external storage, which can be accessed in the KASI cloud, or somewhere else such as a GitHub repository, you can make the cluster nodes execute your own custom script as a step in building your work environment. For conda environments, you can export your custom environment configuration and save it to the external storage (see https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#building-identical-conda-environments). When your code can be provided as a pre-compiled binary on the external storage, the compilation steps can be skipped.
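As an illustration, a bootstrap script along these lines could be kept on the external storage and run on each node; the mount point /mnt/external, the environment name myenv, and the apt package list are hypothetical examples, while the conda spec-file commands come from the conda documentation linked above:

    #!/bin/bash
    # Hypothetical bootstrap script: install apt packages and recreate a
    # conda environment from a spec file exported earlier with
    #   conda list --explicit > spec-file.txt
    set -e

    sudo apt-get update
    sudo apt-get install -y build-essential gfortran   # example packages

    # /mnt/external is a placeholder for wherever the storage is mounted.
    conda create --yes --name myenv --file /mnt/external/spec-file.txt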
Working directories: each node's disk, mounted external NAS space, and the cluster NFS partition.
Each cluster node comes with its own disk space, which is chosen by the user. If your task requires locally accessible space such as /tmp, each node's disk is likely the best option for speed. The external NAS space can be mounted on any cluster node; however, users are expected to mount the NAS space on the cluster master node to bring required data and code to the cluster NFS partition. The external NAS space can also be used to store results produced by your task, by copying results from the NFS partition to the mounted external disk space. The cluster NFS partition is available on all cluster nodes, so it is the right place to host files that need to be accessed on every node. If you need to execute specific commands, such as changing file permissions or creating new directories, on all cluster nodes, see the example given in "Executing custom commands in multiple VM nodes" of the useful tips and the sketch below.
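A minimal sketch of that pattern, assuming password-less root SSH (which the cluster provides) and hypothetical node names:

    # Run the same command on every node, e.g. to create a per-node
    # scratch directory on the local disk owned by the ubuntu account.
    NODES="cluster-node1 cluster-node2 cluster-node3"   # placeholders
    for node in $NODES; do
        ssh root@"$node" 'mkdir -p /scratch && chown ubuntu:ubuntu /scratch'
    done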
Useful Tips
Running MPI codes
...