# Lab 8: $k$-means clustering

[COM6012 Scalable Machine Learning **2024**](https://github.com/COM6012/ScalableML) by [Shuo Zhou](https://shuo-zhou.github.io/) at The University of Sheffield

## Study schedule

- [Task 1](#1-k-means-clustering): To finish in the lab session on 19th April. **Essential**
- [Task 2](#2-exercises): To finish by the following Wednesday, 24th April. ***Exercise***
- [Task 3](#3-additional-ideas-to-explore-optional): To explore further. *Optional*

### Suggested reading

- Chapters *Clustering* and *RFM Analysis* of the [PySpark tutorial](https://runawayhorse001.github.io/LearningApacheSpark/pyspark.pdf)
- [Clustering in Spark](https://spark.apache.org/docs/3.5.0/ml-clustering.html)
- [PySpark API on clustering](https://spark.apache.org/docs/3.5.0/api/python/reference/api/pyspark.ml.clustering.KMeans.html)
- [PySpark code on clustering](https://github.com/apache/spark/blob/master/python/pyspark/ml/clustering.py)
- [k-means clustering on Wikipedia](https://en.wikipedia.org/wiki/K-means_clustering)
- [k-means++ on Wikipedia](https://en.wikipedia.org/wiki/K-means%2B%2B)
- [k-means|| paper](http://theory.stanford.edu/~sergei/papers/vldb12-kmpar.pdf)
## 1. $k$-means clustering

[k-means](http://en.wikipedia.org/wiki/K-means_clustering) is one of the most commonly used clustering algorithms; it partitions the data points into a predefined number of clusters. The Spark MLlib implementation includes a parallelized variant of the [k-means++](https://en.wikipedia.org/wiki/K-means%2B%2B) method called [k-means||](http://theory.stanford.edu/~sergei/papers/vldb12-kmpar.pdf).

`KMeans` is implemented as an `Estimator` and generates a [`KMeansModel`](https://spark.apache.org/docs/3.5.0/api/python/reference/api/pyspark.ml.clustering.KMeansModel.html) as the base model.

[API](https://spark.apache.org/docs/3.5.0/api/python/reference/api/pyspark.ml.clustering.KMeans.html): `class pyspark.ml.clustering.KMeans(featuresCol='features', predictionCol='prediction', k=2, initMode='k-means||', initSteps=2, tol=0.0001, maxIter=20, seed=None, distanceMeasure='euclidean', weightCol=None)`

The following parameters are available:

- *k*: the number of desired clusters.
- *maxIter*: the maximum number of iterations.
- *initMode*: specifies either random initialization or initialization via k-means||.
- *initSteps*: determines the number of steps in the k-means|| algorithm (default=2, advanced).
- *tol*: the distance threshold within which we consider k-means to have converged.
- *seed*: sets the **random seed** (so that multiple runs give the same results).
- *distanceMeasure*: the distance measure to use, either Euclidean (default) or cosine.
- *weightCol*: optional weighting of data points.
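As a quick illustration of this API, the sketch below constructs an estimator with a few non-default settings (the parameter values here are arbitrary, chosen only for demonstration):

```python
from pyspark.ml.clustering import KMeans

kmeans = KMeans(k=3, maxIter=50, seed=42, distanceMeasure='cosine')
kmeans.getK()                # 3
kmeans.getMaxIter()          # 50
kmeans.getDistanceMeasure()  # 'cosine'
```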
### Getting started

First, log into the Stanage cluster:

```sh
ssh $USER@stanage.shef.ac.uk
```

Replace `$USER` with your username (in **lowercase** and without the `$`).

Once logged in, we can request two CPU cores from the reserved resources with

```sh
srun --account=default --reservation=com6012-8 --cpus-per-task=2 --time=01:00:00 --pty /bin/bash
```

If the reserved resources are not available, request the cores from the general queue with

```sh
srun --pty --cpus-per-task=2 bash -i
```

Now set up our conda environment. If you created a `myspark.sh` script in Lab 1, use

```sh
source myspark.sh # assuming you copied HPC/myspark.sh to your root directory (see Lab 1, Task 2)
```

If not, load the modules and activate the environment manually:

```sh
module load Java/17.0.4
module load Anaconda3/2022.05
source activate myspark
```

We'll be generating plots as part of this lab, so if you have not done so already, install `matplotlib` with:

```sh
pip install matplotlib
```

Now we can start the PySpark shell with the two CPU cores requested above:

```sh
cd com6012/ScalableML # our main working directory
pyspark --master local[2] # start pyspark with the 2 cpu cores requested above
```

If you experience a `segmentation fault` when entering the `pyspark` interactive shell, run `export LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8` to fix it. It is recommended to add this line to your `myspark.sh` file.

We will do some plotting in this lab. To plot and save figures on HPC, we need to do the following before using pyplot:

```python
import matplotlib
matplotlib.use('Agg') # Must be before importing matplotlib.pyplot or pylab!
```

Now import the modules needed in this lab:

```python
from pyspark.ml.clustering import KMeans
from pyspark.ml.clustering import KMeansModel
from pyspark.ml.evaluation import ClusteringEvaluator
from pyspark.ml.linalg import Vectors
import matplotlib.pyplot as plt
```
### Clustering of simple synthetic data

Here, we study $k$-means clustering on a simple example with four well-separated data points, as follows.

```python
data = [(Vectors.dense([0.0, 0.0]),), (Vectors.dense([1.0, 1.0]),),
        (Vectors.dense([9.0, 8.0]),), (Vectors.dense([8.0, 9.0]),)]
df = spark.createDataFrame(data, ["features"])
kmeans = KMeans(k=2, seed=1)  # Two clusters with seed = 1
model = kmeans.fit(df)
```

We examine the cluster centers (centroids) and use the trained model to "predict" the cluster index for a data point.

```python
centers = model.clusterCenters()
len(centers)
# 2
for center in centers:
    print(center)
# [0.5 0.5]
# [8.5 8.5]
model.predict(df.head().features)
# 0
```

We can use the trained model to cluster any data points in the same space, where the cluster index is considered as the `prediction`.

```python
transformed = model.transform(df)
transformed.show()
# +---------+----------+
# | features|prediction|
# +---------+----------+
# |[0.0,0.0]|         0|
# |[1.0,1.0]|         0|
# |[9.0,8.0]|         1|
# |[8.0,9.0]|         1|
# +---------+----------+
```

We can examine the training summary of the trained model.

```python
model.hasSummary
# True
summary = model.summary
summary
# <pyspark.ml.clustering.KMeansSummary object at 0x2b1662948d30>
summary.k
# 2
summary.clusterSizes
# [2, 2]
summary.trainingCost # sum of squared distances of points to their nearest center
# 2.0
```

You can check out the [KMeansSummary API](https://spark.apache.org/docs/3.5.0/api/java/org/apache/spark/ml/clustering/KMeansSummary.html) for details of the summary information, e.g., we can find out that the training cost is the sum of squared distances to the nearest centroid for all points in the training dataset.
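As a sanity check on this definition, we can reproduce the reported cost of 2.0 by hand for the toy data above:

```python
import numpy as np

# Each of the four points lies at squared distance 0.5^2 + 0.5^2 = 0.5
# from its centroid, so the total cost is 4 * 0.5 = 2.0.
points = np.array([[0.0, 0.0], [1.0, 1.0], [9.0, 8.0], [8.0, 9.0]])
centroids = np.array([[0.5, 0.5], [0.5, 0.5], [8.5, 8.5], [8.5, 8.5]])
print(((points - centroids) ** 2).sum())  # 2.0
```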
### Save and load an algorithm/model

We can save an algorithm/model to a temporary location (see the [API on save](https://spark.apache.org/docs/3.5.0/api/python/reference/api/pyspark.ml.PipelineModel.html?highlight=pipelinemodel%20save#pyspark.ml.PipelineModel.save)) and then load it later.

Save and load the $k$-means algorithm (settings):

```python
import tempfile

temp_path = tempfile.mkdtemp()
kmeans_path = temp_path + "/kmeans"
kmeans.save(kmeans_path)
kmeans2 = KMeans.load(kmeans_path)
kmeans2.getK()
# 2
```

Save and load the learned $k$-means model (note that only the learned model is saved; the training summary is not):

```python
model_path = temp_path + "/kmeans_model"
model.save(model_path)
model2 = KMeansModel.load(model_path)
model2.hasSummary
# False
model2.clusterCenters()
# [array([0.5, 0.5]), array([8.5, 8.5])]
```

### Iris clustering

Clustering of the [Iris flower data set](https://en.wikipedia.org/wiki/Iris_flower_data_set) is a classical example [discussed on the Wikipedia page of $k$-means clustering](https://en.wikipedia.org/wiki/K-means_clustering#Discussion). This data set was introduced by [Ronald Fisher](https://en.wikipedia.org/wiki/Ronald_Fisher), "the father of modern statistics and experimental design" (and thus machine learning) and also "the greatest biologist since Darwin". The code below is based on Chapter *Clustering* of the [PySpark tutorial](https://runawayhorse001.github.io/LearningApacheSpark/pyspark.pdf), with some changes introduced.

#### Load and inspect the data

```python
df = spark.read.load("Data/iris.csv", format="csv", inferSchema="true", header="true").cache()
df.show(5, True)
# +------------+-----------+------------+-----------+-------+
# |sepal_length|sepal_width|petal_length|petal_width|species|
# +------------+-----------+------------+-----------+-------+
# |         5.1|        3.5|         1.4|        0.2| setosa|
# |         4.9|        3.0|         1.4|        0.2| setosa|
# |         4.7|        3.2|         1.3|        0.2| setosa|
# |         4.6|        3.1|         1.5|        0.2| setosa|
# |         5.0|        3.6|         1.4|        0.2| setosa|
# +------------+-----------+------------+-----------+-------+
# only showing top 5 rows
df.printSchema()
# root
#  |-- sepal_length: double (nullable = true)
#  |-- sepal_width: double (nullable = true)
#  |-- petal_length: double (nullable = true)
#  |-- petal_width: double (nullable = true)
#  |-- species: string (nullable = true)
```

We can use `.describe().show()` to inspect the summary statistics of the data:

```python
df.describe().show()
# +-------+------------------+-------------------+------------------+------------------+---------+
# |summary|      sepal_length|        sepal_width|      petal_length|       petal_width|  species|
# +-------+------------------+-------------------+------------------+------------------+---------+
# |  count|               150|                150|               150|               150|      150|
# |   mean| 5.843333333333335| 3.0540000000000007|3.7586666666666693|1.1986666666666672|     null|
# | stddev|0.8280661279778637|0.43359431136217375| 1.764420419952262|0.7631607417008414|     null|
# |    min|               4.3|                2.0|               1.0|               0.1|   setosa|
# |    max|               7.9|                4.4|               6.9|               2.5|virginica|
# +-------+------------------+-------------------+------------------+------------------+---------+
```
#### Convert the data to dense vectors (features)

Use a `transData` function similar to that in Lab 2 to convert the attributes into feature vectors.

```python
def transData(data):
    return data.rdd.map(lambda r: [Vectors.dense(r[:-1])]).toDF(['features'])

dfFeatureVec = transData(df).cache()
dfFeatureVec.show(5, False)
# +-----------------+
# |features         |
# +-----------------+
# |[5.1,3.5,1.4,0.2]|
# |[4.9,3.0,1.4,0.2]|
# |[4.7,3.2,1.3,0.2]|
# |[4.6,3.1,1.5,0.2]|
# |[5.0,3.6,1.4,0.2]|
# +-----------------+
# only showing top 5 rows
```
#### Determine $k$ via silhouette analysis

We can perform a [Silhouette Analysis](https://en.wikipedia.org/wiki/Silhouette_(clustering)) to determine $k$ by running $k$-means multiple times with different $k$ and evaluating the clustering results. See [the ClusteringEvaluator API](https://spark.apache.org/docs/3.5.0/api/python/reference/api/pyspark.ml.evaluation.ClusteringEvaluator.html), where `silhouette` is the default metric. You can also refer to this [scikit-learn notebook on the same topic](https://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html). Other ways of determining the best $k$ can be found on [a dedicated wiki page](https://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set).

```python
import numpy as np

numK = 10
silhouettes = np.zeros(numK)
costs = np.zeros(numK)
for k in range(2, numK):  # k = 2:9
    kmeans = KMeans().setK(k).setSeed(11)
    model = kmeans.fit(dfFeatureVec)
    predictions = model.transform(dfFeatureVec)
    costs[k] = model.summary.trainingCost
    evaluator = ClusteringEvaluator()  # to compute the silhouette score
    silhouettes[k] = evaluator.evaluate(predictions)
```

We can take a look at the clustering results from the final iteration ($k=9$); the `prediction` column below is the cluster index/label.

```python
predictions.show(15)
# +-----------------+----------+
# |         features|prediction|
# +-----------------+----------+
# |[5.1,3.5,1.4,0.2]|         1|
# |[4.9,3.0,1.4,0.2]|         1|
# |[4.7,3.2,1.3,0.2]|         1|
# |[4.6,3.1,1.5,0.2]|         1|
# |[5.0,3.6,1.4,0.2]|         1|
# |[5.4,3.9,1.7,0.4]|         5|
# |[4.6,3.4,1.4,0.3]|         1|
# |[5.0,3.4,1.5,0.2]|         1|
# |[4.4,2.9,1.4,0.2]|         1|
# |[4.9,3.1,1.5,0.1]|         1|
# |[5.4,3.7,1.5,0.2]|         5|
# |[4.8,3.4,1.6,0.2]|         1|
# |[4.8,3.0,1.4,0.1]|         1|
# |[4.3,3.0,1.1,0.1]|         1|
# |[5.8,4.0,1.2,0.2]|         5|
# +-----------------+----------+
# only showing top 15 rows
```
Plot the cost (the sum of squared distances of points to their nearest centroid; the smaller, the better) against $k$.

```python
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
ax.plot(range(2, numK), costs[2:numK], marker="o")
ax.set_xlabel('$k$')
ax.set_ylabel('Cost')
plt.grid()
plt.savefig("Output/Lab8_cost.png")
```

We can see that this cost measure is biased towards a large $k$. Let us plot the silhouette metric (the larger, the better) against $k$.

```python
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
ax.plot(range(2, numK), silhouettes[2:numK], marker="o")
ax.set_xlabel('$k$')
ax.set_ylabel('Silhouette')
plt.grid()
plt.savefig("Output/Lab8_silhouette.png")
```
We can see that the silhouette measure is biased towards a small $k$. By the silhouette metric we should choose $k=2$, but we know the ground truth is $k=3$ (read the [data description](https://archive.ics.uci.edu/ml/datasets/iris) or count the unique species). Therefore, this metric does not give ideal results in this case either. [Determining the optimal number of clusters](https://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set) is an open problem.
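For instance, the number of ground-truth classes can be confirmed directly from the data:

```python
df.select('species').distinct().count()
# 3
```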
## 2. Exercises

### Further study on iris clustering

Carry out some further studies on the iris clustering problem above.

1. Choose $k=3$ and evaluate the clustering results against the ground truth (class labels) using the [Normalized Mutual Information (NMI) available in scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.normalized_mutual_info_score.html). You need to install `scikit-learn` in the `myspark` environment via `conda install -y scikit-learn`. This allows us to study the clustering quality when we know the true number of clusters. A minimal sketch of the NMI computation is given after this list.
2. Use multiple (e.g., 10 or 20) random seeds to generate different clustering results and plot the respective NMI values (with respect to the ground truth with $k=3$, as in the question above) to observe the effect of initialisation.
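As a starting point, the sketch below shows one way to compute the NMI (assuming `scikit-learn` is installed and `df` and `dfFeatureVec` are as defined above; collecting to the driver is fine here because iris has only 150 rows):

```python
from sklearn.metrics import normalized_mutual_info_score

kmeans = KMeans(k=3, seed=11)
predictions = kmeans.fit(dfFeatureVec).transform(dfFeatureVec)

# Row order matches between the two collects because only row-wise
# transformations were applied to df.
pred_labels = [int(r.prediction) for r in predictions.select('prediction').collect()]
true_labels = [r.species for r in df.select('species').collect()]
print(normalized_mutual_info_score(true_labels, pred_labels))
```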
## 3. Additional ideas to explore (*optional*)

### RFM Customer Value Analysis

- Follow Chapter *RFM Analysis* of the [PySpark tutorial](https://runawayhorse001.github.io/LearningApacheSpark/pyspark.pdf) to perform an [RFM Customer Value Analysis](https://en.wikipedia.org/wiki/RFM_(customer_value)).
- The data can be downloaded from the [Online Retail Data Set](https://archive.ics.uci.edu/ml/datasets/online+retail) at UCI.
- Note the **data cleaning** step that checks for and removes rows containing null values via `.dropna()`. You may need to do the same when dealing with real data; a self-contained toy illustration is given after this list.
- The **data manipulation** steps are also useful to learn.
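The `.dropna()` step can be seen in isolation on a toy DataFrame (illustrative data, not the retail set):

```python
toy = spark.createDataFrame([(1, 'A'), (2, None), (3, 'C')], ['id', 'code'])
toy.dropna().show()  # the row with a null is removed
# +---+----+
# | id|code|
# +---+----+
# |  1|   A|
# |  3|   C|
# +---+----+
```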
### Network intrusion detection

- The original task is a classification task. We can ignore the class labels and perform clustering on the data.
- Write a standalone program (and submit it as a batch job to HPC) to do $k$-means clustering on the [KDDCUP1999 data](https://archive.ics.uci.edu/ml/datasets/KDD+Cup+1999+Data) with 4M points. You may start with the smaller 10% subset. A possible skeleton for such a script is sketched below.
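One possible skeleton, assuming the data has been downloaded into the `Data/` directory (the file name, the choice of `k`, and the preprocessing are all illustrative; the KDD data also contains categorical columns that would need proper encoding rather than simply being ignored):

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import NumericType
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("KDD k-means").getOrCreate()

df = spark.read.csv("Data/kddcup.data_10_percent", inferSchema=True)

# Cluster on the numeric columns only; the string columns (e.g. the
# class label) are skipped in this sketch.
numeric_cols = [f.name for f in df.schema.fields
                if isinstance(f.dataType, NumericType)]
features = (VectorAssembler(inputCols=numeric_cols, outputCol="features")
            .transform(df).select("features").cache())

model = KMeans(k=10, seed=1).fit(features)  # k chosen arbitrarily here
print(model.summary.trainingCost)

spark.stop()
```

Such a script would be run with `spark-submit` from inside a batch job rather than in the interactive shell.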
### Color Quantization using K-Means

- Follow the scikit-learn example [Color Quantization using K-Means](https://scikit-learn.org/stable/auto_examples/cluster/plot_color_quantization.html#sphx-glr-auto-examples-cluster-plot-color-quantization-py) to perform the same using PySpark on your high-resolution photos. One possible outline is sketched below.
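A possible PySpark outline, assuming a local image file `photo.png` (a hypothetical file name; note that funnelling every pixel through the driver as below is only sensible for moderately sized images):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # must come before importing pyplot on HPC
import matplotlib.pyplot as plt
from pyspark.ml.clustering import KMeans
from pyspark.ml.linalg import Vectors

img = plt.imread('photo.png')[:, :, :3]  # H x W x 3, floats in [0, 1] for PNG
h, w, _ = img.shape
pixels = img.reshape(-1, 3)

pixel_df = spark.createDataFrame([(Vectors.dense(p),) for p in pixels], ['features'])
model = KMeans(k=16, seed=1).fit(pixel_df)  # quantise to a 16-colour palette
palette = np.array(model.clusterCenters())

# Replace each pixel by its cluster centroid and save the result
# (row order is preserved by the row-wise transform).
labels = [int(r.prediction)
          for r in model.transform(pixel_df).select('prediction').collect()]
plt.imsave('Output/Lab8_quantized.png', palette[labels].reshape(h, w, 3))
```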
