Is there an elegant way to merge two data frames by timestamp in R?

Suppose I have two data frames, df1 and df2:

df1 <- data.frame(
  value = 1:5,
  timestamp = as.POSIXct(c("2020-03-02 12:20:00", "2020-03-02 12:20:01",
                           "2020-03-02 12:20:03", "2020-03-02 12:20:05",
                           "2020-03-02 12:20:08"))
)

df2 <- data.frame(
  value = 6:10,
  timestamp = as.POSIXct(c("2020-03-02 12:20:01", "2020-03-02 12:20:02",
                           "2020-03-02 12:20:03", "2020-03-02 12:20:04",
                           "2020-03-02 12:20:05"))
)

df1

  value           timestamp
1     1 2020-03-02 12:20:00
2     2 2020-03-02 12:20:01
3     3 2020-03-02 12:20:03
4     4 2020-03-02 12:20:05
5     5 2020-03-02 12:20:08
question from: https://stackoverflow.com/questions/65858965/is-there-an-elegent-way-to-merge-two-data-frame-by-timestamp-in-r


1 Answer


Here are several alternatives. I find the SQL solution the most descriptive. The base solution is pretty short and has no dependencies. The data.table approach is likely fast and the code is compact, but you need to read the documentation carefully to determine whether it does what you want; unlike the first two solutions, the behavior is not obvious from the code. The dplyr/fuzzyjoin solution may be of interest if you are using the tidyverse.

1) sqldf Perform a left join that attaches to each row of df1 (aliased a) every row of df2 (aliased b) whose timestamp is less than or equal to it, and then keep only the b row having the maximum timestamp among those joined to each a row. Note that SQLite guarantees that when max is used on a particular column, any other column references to the same table in that row will resolve to that same maximizing row.

For large data, add the argument dbname = tempfile() to the sqldf call and it will perform the join out of memory, so R's memory limitations don't apply. It is also possible to add an index to the data to speed it up; a sketch of both appears after the output below.

library(sqldf)

sqldf("select max(b.timestamp), a.*, b.value as 'value.df2'
  from df1 a
  left join df2 b on b.timestamp <= a.timestamp
  group by a.timestamp
  order by a.timestamp"
)[-1]

giving:

  value           timestamp value.df2
1     1 2020-03-02 12:20:00        NA
2     2 2020-03-02 12:20:01         6
3     3 2020-03-02 12:20:03         8
4     4 2020-03-02 12:20:05        10
5     5 2020-03-02 12:20:08        10
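A minimal sketch of that large-data variant, following the indexing pattern from the sqldf examples (the index name ii is illustrative; main.df2 refers to the already-uploaded, indexed copy of df2 so it is not re-read from R):

library(sqldf)

# run the join in an on-disk SQLite database and index df2 on timestamp
sqldf(c("create index ii on df2(timestamp)",
        "select max(b.timestamp), a.*, b.value as 'value.df2'
           from df1 a
           left join main.df2 b on b.timestamp <= a.timestamp
           group by a.timestamp
           order by a.timestamp"),
      dbname = tempfile())[-1]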

Note that it can be used within a magrittr pipeline by placing the sqldf statement within braces and referring to the left-hand side as [.] within the SQL statement:

library(magrittr)
library(sqldf)

df1 %>%
  { sqldf("select max(b.timestamp), a.*, b.value as 'value.df2'
      from [.] a
      left join df2 b on b.timestamp <= a.timestamp
      group by a.timestamp
      order by a.timestamp")[-1]
  }

2) base For each df1 timestamp, find the df2 values whose timestamp is less than or equal to it and take the last one, or NA if there are none.

# last df2 value at or before timestamp tt, or NA if there is none
Match <- function(tt) with(df2, tail(c(NA, value[timestamp <= tt]), 1))
transform(df1, value.df2 = sapply(timestamp, Match))
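On the example data this produces the same value.df2 column (NA, 6, 8, 10, 10) as the sqldf output above.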

3) data.table This package supports rolling joins; with roll = TRUE, each df1 row is matched to the df2 row with the nearest earlier (or equal) timestamp, i.e. the last df2 value is rolled forward:

library(data.table)

as.data.table(df2)[df1, on = .(timestamp), roll = TRUE]
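Here df2 is the outer table and df1 is passed as i, so the rolled df2 value comes back in the value column and df1's own value appears as i.value. A minimal renaming sketch to line the result up with the sqldf output above (this renaming step is my addition, not part of the original answer):

library(data.table)

res <- as.data.table(df2)[df1, on = .(timestamp), roll = TRUE]
setnames(res, c("value", "i.value"), c("value.df2", "value"))  # relabel the two value columns
setcolorder(res, c("value", "timestamp", "value.df2"))         # match the sqldf column order
res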

4) dplyr/fuzzyjoin fuzzy_left_join joins to each row of df1 all rows of df2 whose timestamp is less than or equal to it (match_fun = `>=` compares df1's timestamp to df2's). Then for each df1 timestamp we take the last joined row and fix up the names.

library(dplyr)
library(fuzzyjoin)

df1 %>%
  fuzzy_left_join(df2, by = "timestamp", match_fun = `>=`) %>%
  group_by(timestamp.x) %>%
  slice(n()) %>%   # keep the last joined row for each df1 timestamp
  ungroup() %>%
  select(timestamp = timestamp.x, value = value.x, value.df2 = value.y)
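On recent dplyr (>= 1.0.0), slice_tail(n = 1) is an equivalent and arguably clearer way to keep the last row of each group:

library(dplyr)
library(fuzzyjoin)

df1 %>%
  fuzzy_left_join(df2, by = "timestamp", match_fun = `>=`) %>%
  group_by(timestamp.x) %>%
  slice_tail(n = 1) %>%   # last joined row per df1 timestamp
  ungroup() %>%
  select(timestamp = timestamp.x, value = value.x, value.df2 = value.y)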

  
