Okera is accessed via four public access points:
- Web UI and REST API
- Policy Engine (planner) API
- Enforcement Fleet (worker) API
- Presto/JDBC API
Each of these access points is available via a specific port.
The following table lists the default TCP ports used by Okera.
| Default Port | Service |
|---|---|
| | OkeraEnsemble AWS CLI, Spark, and Databricks |
| | Okera Web UI |
| | Okera Policy Engine (planner) API |
| | Okera Policy Engine (planner) diagnostics (optional) |
| | Okera Hive HiveServer2 proxy (optional) |
| | Okera Impala HiveServer2 proxy (optional) |
| | Okera Enforcement Fleet (worker) API |
| | Okera Enforcement Fleet (worker) diagnostics (optional) |
| | Okera Presto/JDBC API |
| | Okera diagnostics (optional) |
## Kubernetes Clusters (EKS, GKE, and AKS)
On managed Kubernetes clusters (e.g., EKS, GKE, or AKS, or any Kubernetes cluster that uses the AWS, Google Cloud Platform, or Azure provider), Okera provisions Kubernetes services for its external-facing access points. These services are provisioned as standard load balancers. When you change ports, the Kubernetes cloud provider synchronizes the new values to the respective load balancer, which can take a few minutes to take effect.
Kubernetes has two `ServiceType`s that Okera uses for public access points:

- `NodePort`, which exposes a common port across all nodes in the cluster at the host level.
- `LoadBalancer`, which provisions a load balancer object in the respective cloud provider.
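As a sketch, the difference between the two types comes down to the `spec.type` field of the Service manifest. The manifest below is illustrative only (the name and selector are not Okera's actual resources); the port values mirror the Web UI example in this section, assuming the same mapping:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-webui        # illustrative name, not an Okera manifest
spec:
  type: LoadBalancer         # change to NodePort to expose only a host-level port
  ports:
  - name: webui
    port: 443                # port the service (and its load balancer) listens on
    targetPort: 8083         # port open on the backing pods
    # nodePort is assigned automatically from the cluster's range unless set explicitly
  selector:
    app: example-webui
```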
There are three port values defined for each service. For example, for the REST server service:

```
$ kubectl get svc cdas-rest-server -oyaml
...
  type: LoadBalancer
  ports:
  - name: webui
    nodePort: 31792
    port: 443
    protocol: TCP
    targetPort: 8083
```
Each of the port values has a different meaning:

- `targetPort` is the port that is open on each of the targeted pods.
- `nodePort` is the port that is open on each of the nodes themselves.
- `port` is the port on which the service itself is exposed.
The port on which you access a particular access point depends on the service's `ServiceType`:

- If the `ServiceType` is `LoadBalancer`, the service is accessed on the `port` value, via the load balancer's address.
- If the `ServiceType` is `NodePort`, the service is accessed on the `nodePort` value, on any node's address.