How do I compute the cumulative sum per group in PySpark, specifically using the DataFrame abstraction?
With an example dataset as follows:
df = sqlContext.createDataFrame(
    [(1, 2, "a"), (3, 2, "a"), (1, 3, "b"), (2, 2, "a"), (2, 3, "b")],
    ["time", "value", "class"],
)
+----+-----+-----+
|time|value|class|
+----+-----+-----+
| 1| 2| a|
| 3| 2| a|
| 1| 3| b|
| 2| 2| a|
| 2| 3| b|
+----+-----+-----+
I would like to add a cumulative sum column of value for each class, accumulating over the (ordered) time variable.
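For context, here is a minimal sketch of the kind of window-function approach I have in mind, assuming Spark >= 2.1 (where Window.unboundedPreceding is available) and a hypothetical output column name cum_sum:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# One window per class, ordered by time; rowsBetween makes the
# running (cumulative) frame explicit.
w = (Window.partitionBy("class")
           .orderBy("time")
           .rowsBetween(Window.unboundedPreceding, Window.currentRow))

df.withColumn("cum_sum", F.sum("value").over(w)).show()

On the example data this should give 2, 4, 6 for class a at times 1, 2, 3, and 3, 6 for class b at times 1, 2.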
question from:
https://stackoverflow.com/questions/45946349/python-spark-cumulative-sum-by-group-using-dataframe