It creates a folder with multiple files because each partition is saved individually. If you need a single output file (still inside a folder) you can repartition (preferred if upstream data is large, but it requires a shuffle):
df
  .repartition(1) // full shuffle into a single partition
  .write.format("com.databricks.spark.csv")
  .option("header", "true")
  .save("mydata.csv")
or coalesce:
df
  .coalesce(1) // merge into a single partition without a full shuffle
  .write.format("com.databricks.spark.csv")
  .option("header", "true")
  .save("mydata.csv")
Either way, the data frame is reduced to a single partition before saving, so all data will be written to mydata.csv/part-00000. Before you use either option, be sure you understand what is going on and what the cost of transferring all data to a single worker is. If you use a distributed file system with replication, the data will be transferred multiple times: first fetched to a single worker and then distributed over the storage nodes.
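As a side note, on Spark 2.0 or later CSV support is built into the DataFrame writer, so the external com.databricks.spark.csv package is no longer needed. A minimal sketch of the same idea with the built-in writer, assuming the same df and output path as above:

df
  .coalesce(1) // same single-partition caveats apply
  .write
  .option("header", "true")
  .csv("mydata.csv")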
Alternatively, you can leave your code as it is and use general-purpose tools like cat or HDFS getmerge to simply merge all the parts afterwards.
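For example, hadoop fs -getmerge mydata.csv mydata-merged.csv pulls all the part files down into one local file. If you would rather do the merge programmatically, a rough Scala sketch using Hadoop's FileUtil.copyMerge is below; this assumes Hadoop 2.x (copyMerge was removed in Hadoop 3), and the output name mydata-merged.csv is just an illustration:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, FileUtil, Path}

val conf = new Configuration()
val fs = FileSystem.get(conf)

// Merge every file under mydata.csv/ into a single file on the same filesystem.
// "false" keeps the source folder; "null" means no separator between parts.
FileUtil.copyMerge(
  fs, new Path("mydata.csv"),
  fs, new Path("mydata-merged.csv"),
  false, conf, null)

Keep in mind that with header set to "true" each part file gets its own header line, so a naive merge of multiple parts will repeat the header once per partition.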