I am trying to convert a SQL query to PySpark code. Wherever possible, I am trying to move away from SQL strings and keep the logic in pure PySpark.
The SQL Query I am trying to work on is:
SELECT
INVESTOR_NUMBER, F_NUM ,TARGET_NUMBER ,DIST_NUMBER,
ELECTRONIC_TRANSAC,
-SUM(SHARES_QUANTITY) AS UNITS_INFLOW , -SUM(PURCHASE_TRANSACTION_AMOUNT) AS AMOUNT_INFLOW,
-COUNT(*) AS CNT_INFLOW , -COUNT(DISTINCT(TRANSACTION_REFERENCE_NUMBER)) AS CNT_INFLOW_DIST
FROM INDIA__TRANSACTIONS_FACT WHERE
TRANSACTION_CODE IN ('P','S')
GROUP BY
DIST_NUMBER, ELECTRONIC_TRANSAC
Here, the aggregates are negated to account for reversals. How do I express these negated aggregates in PySpark?
-SUM(SHARES_QUANTITY) AS UNITS_INFLOW , -SUM(PURCHASE_TRANSACTION_AMOUNT) AS AMOUNT_INFLOW,
-COUNT(*) AS CNT_INFLOW , -COUNT(DISTINCT(TRANSACTION_REFERENCE_NUMBER)) AS CNT_INFLOW_DIST
The SQL works fine, but I cannot find the equivalent method in PySpark.
Any help with the syntactic conversion from SQL to PySpark is appreciated.
question from:
https://stackoverflow.com/questions/65641796/how-to-use-sum-and-count-for-excluding-reversals-in-pyspark