
Commit a886077
Author: sanika palav
Parent: 719404f

docs(spark-operator): clarify namespace limitations with alert

Signed-off-by: sanika palav <palavsanika62@gmail>

1 file changed: 14 additions & 0 deletions

content/en/docs/components/spark-operator/user-guide/running-multiple-instances-of-the-operator.md
```diff
@@ -5,6 +5,20 @@ description: |
 weight: 70
 ---
 
+{{% alert title="Important" color="warning" %}}
+Spark Operator creates **cluster-scoped resources** such as
+`MutatingWebhookConfiguration`, `ValidatingWebhookConfiguration`, and `ClusterRole`.
+
+Because of this, running Spark Operator instances in **different namespaces**
+within the same Kubernetes cluster is **not supported** and will result in
+resource conflicts.
+
+The recommended approach is to:
+- Deploy Spark Operator **once** (or multiple releases) in a **single operator namespace**
+- Configure each instance to watch **different Spark job namespaces**
+  using `spark.jobNamespaces`
+{{% /alert %}}
+
 If you need to run multiple instances of the Spark operator within the same k8s cluster, then you need to ensure that the running instances should not watch the same spark job namespace.
 For example, you can deploy two Spark operator instances in the `spark-operator` namespace, one with release name `spark-operator-1` which watches the `spark-1` namespace:
 
```
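The deployment pattern the alert recommends, two Helm releases in one operator namespace, each watching a different job namespace via `spark.jobNamespaces`, could be sketched roughly as follows. This is a hedged illustration, not text from the commit: the chart repository URL and the `spark-operator-2`/`spark-2` names for the second instance are assumptions mirroring the `spark-operator-1`/`spark-1` example above.

```shell
# Sketch only: repo URL and second-instance names are assumptions.
helm repo add spark-operator https://kubeflow.github.io/spark-operator
helm repo update

# First instance: release "spark-operator-1" watching the "spark-1" namespace
helm install spark-operator-1 spark-operator/spark-operator \
  --namespace spark-operator \
  --create-namespace \
  --set 'spark.jobNamespaces={spark-1}'

# Second instance in the SAME operator namespace, watching a different
# job namespace ("spark-2" here is hypothetical)
helm install spark-operator-2 spark-operator/spark-operator \
  --namespace spark-operator \
  --set 'spark.jobNamespaces={spark-2}'
```

The key point is that both releases live in the single `spark-operator` namespace, so the cluster-scoped resources do not collide, while their `spark.jobNamespaces` values are disjoint.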
