We are going to look at five join types available in dplyr: inner_join, semi_join, left_join, anti_join and full_join. We will examine the output of each join type using a simple example. In the fifth section, we'll learn how to combine dplyr and ggplot2 commands (using chaining) to build expressive charts and graphs.

  1. dplyr is a grammar of data manipulation, providing a consistent set of verbs that help you solve the most common data manipulation challenges: mutate adds new variables that are functions of existing variables; select picks variables based on their names.
  2. join_all(dfs, by = NULL, type = 'left', match = 'all') Arguments: dfs, a list of data frames; by, a character vector of variable names to join by (if omitted, it will match on all common variables); type, the type of join: left (default), right, inner or full. See details for more information.
  3. They are translated to the following SQL queries: inner_join(x, y): SELECT * FROM x JOIN y ON x.a = y.a; left_join(x, y): SELECT * FROM x LEFT JOIN y ON x.a = y.a; right_join(x, y): SELECT * FROM x RIGHT JOIN y ON x.a = y.a; full_join(x, y): SELECT * FROM x FULL JOIN y ON x.a = y.a; semi_join(x, y): SELECT * FROM x WHERE EXISTS (SELECT 1 FROM y WHERE x.a = y.a).

Overview

dplyr is an R package for working with structured data both in and outside of R. dplyr makes data manipulation for R users easy, consistent, and performant. With dplyr as an interface to manipulating Spark DataFrames, you can:

  • Select, filter, and aggregate data
  • Use window functions (e.g. for sampling)
  • Perform joins on DataFrames
  • Collect data from Spark into R

Statements in dplyr can be chained together using pipes defined by the magrittr R package. dplyr also supports non-standard evaluation of its arguments. For more information on dplyr, see the introduction, a guide for connecting to databases, and a variety of vignettes.

Reading Data

You can read data into Spark DataFrames using the following functions:

Function             Description
spark_read_csv       Reads a CSV file and provides a data source compatible with dplyr
spark_read_json      Reads a JSON file and provides a data source compatible with dplyr
spark_read_parquet   Reads a Parquet file and provides a data source compatible with dplyr

Regardless of the format of your data, Spark supports reading data from a variety of different data sources. These include data stored on HDFS (hdfs:// protocol), Amazon S3 (s3n:// protocol), or local files available to the Spark worker nodes (file:// protocol).

Each of these functions returns a reference to a Spark DataFrame which can be used as a dplyr table (tbl).
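
For example, a CSV file can be loaded like this (a minimal sketch; the local connection and the file path are illustrative):

    library(sparklyr)
    library(dplyr)

    sc <- spark_connect(master = "local")

    # Returns a reference to a Spark DataFrame, usable as a dplyr tbl
    flights_csv <- spark_read_csv(sc, name = "flights_csv",
                                  path = "file:///tmp/flights.csv")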

Flights Data

This guide will demonstrate some of the basic data manipulation verbs of dplyr by using data from the nycflights13 R package. This package contains data for all 336,776 flights departing New York City in 2013. It also includes useful metadata on airlines, airports, weather, and planes. The data comes from the US Bureau of Transportation Statistics, and is documented in ?nycflights13.

Connect to the cluster and copy the flights data using the copy_to function. Caveat: the flight data in nycflights13 is convenient for dplyr demonstrations because it is small, but in practice large data should rarely be copied directly from R objects.
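
A minimal sketch of that setup, assuming the connection sc from above:

    # Copy the flights data from R into Spark; each result is a dplyr tbl reference
    flights_tbl  <- copy_to(sc, nycflights13::flights, "flights", overwrite = TRUE)
    airlines_tbl <- copy_to(sc, nycflights13::airlines, "airlines", overwrite = TRUE)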

dplyr Verbs

Verbs are dplyr commands for manipulating data. When connected to a Spark DataFrame, dplyr translates the commands into Spark SQL statements. Remote data sources use exactly the same five verbs as local data sources. Here are the five verbs with their corresponding SQL commands:

  • select ~ SELECT
  • filter ~ WHERE
  • arrange ~ ORDER BY
  • summarise ~ aggregators: sum, min, sd, etc.
  • mutate ~ operators: +, *, log, etc.

Laziness

When working with databases, dplyr tries to be as lazy as possible:

  • It never pulls data into R unless you explicitly ask for it.

  • It delays doing any work until the last possible moment: it collects together everything you want to do and then sends it to the database in one step.

For example, take the following code:
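
(The original listing was lost in extraction; the sketch below, built on the flights_tbl reference from earlier, defines the pipeline the next paragraph refers to, ending in c4.)

    c1 <- filter(flights_tbl, day == 17, month == 5,
                 carrier %in% c("UA", "WN", "AA", "DL"))
    c2 <- select(c1, year, month, day, carrier, dep_delay, air_time, distance)
    c3 <- mutate(c2, air_time_hours = air_time / 60)
    c4 <- arrange(c3, year, month, day, carrier)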

This sequence of operations never actually touches the database. It's not until you ask for the data (e.g. by printing c4) that dplyr requests the results from the database.

Piping

You can use magrittr pipes to write cleaner syntax. Using the same example from above, you can write a much cleaner version like this:
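
A sketch of the piped equivalent:

    c4 <- flights_tbl %>%
      filter(day == 17, month == 5, carrier %in% c("UA", "WN", "AA", "DL")) %>%
      select(year, month, day, carrier, dep_delay, air_time, distance) %>%
      mutate(air_time_hours = air_time / 60) %>%
      arrange(year, month, day, carrier)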

Grouping

The group_by function corresponds to the GROUP BY statement in SQL.
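
For example (a sketch reusing the c4 pipeline from above):

    c4 %>%
      group_by(carrier) %>%
      summarize(count = n(), mean_dep_delay = mean(dep_delay, na.rm = TRUE))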

Collecting to R

You can copy data from Spark into R’s memory by using collect().

collect() executes the Spark query and returns the results to R for further analysis and visualization.
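
For example, a sketch that pulls the c4 results into a local tibble:

    carrier_delays <- collect(c4)   # executes the Spark query, returns a tibble
    head(carrier_delays)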

SQL Translation

It's relatively straightforward to translate R code to SQL (or indeed to any programming language) when doing simple mathematical operations of the form you normally use when filtering, mutating and summarizing. dplyr knows how to convert many common R functions and operators to Spark SQL, including basic arithmetic, comparison operators, and aggregate functions such as sum and mean.
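
For example, you can inspect the SQL generated for a simple mutate with show_query() (the pipeline itself is illustrative):

    flights_tbl %>%
      mutate(speed = distance / (air_time / 60)) %>%
      show_query()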

Window Functions

dplyr supports Spark SQL window functions. Window functions are used in conjunction with mutate and filter to solve a wide range of problems. You can compare the dplyr syntax to the query it has generated by using dbplyr::sql_render().
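
A sketch (adapted to the flights data) that keeps each day's best and worst departure delays, then renders the generated query with its window functions:

    best_worst <- flights_tbl %>%
      group_by(year, month, day) %>%
      select(dep_delay) %>%
      filter(dep_delay == min(dep_delay) | dep_delay == max(dep_delay))

    dbplyr::sql_render(best_worst)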

Performing Joins

It's rare that a data analysis involves only a single table of data. In practice, you'll normally have many tables that contribute to an analysis, and you need flexible tools to combine them. In dplyr, there are three families of verbs that work with two tables at a time:

  • Mutating joins, which add new variables to one table from matching rows in another.

  • Filtering joins, which filter observations from one table based on whether or not they match an observation in the other table.

  • Set operations, which combine the observations in the data sets as if they were set elements.

All two-table verbs work similarly. The first two arguments are x and y, and provide the tables to combine. The output is always a new table with the same type as x.

The following statements are equivalent:
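
(The original statements were lost in extraction; with the flights and airlines tables from earlier they would look something like this, both producing the same left join, since carrier is the only common variable.)

    flights_tbl %>% left_join(airlines_tbl)
    flights_tbl %>% left_join(airlines_tbl, by = "carrier")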

Sampling

You can use sample_n() and sample_frac() to take a random sample of rows: use sample_n() for a fixed number and sample_frac() for a fixed fraction.
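
For example:

    sample_n(flights_tbl, 10)        # 10 rows at random
    sample_frac(flights_tbl, 0.01)   # a random 1% of rows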

Writing Data

It is often useful to save the results of your analysis or the tables that you have generated on your Spark cluster into persistent storage. The best option in many scenarios is to write the table out to a Parquet file using the spark_write_parquet function. For example:
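
A sketch, with a hypothetical HDFS path:

    # Write the Spark DataFrame referenced by tbl out as Parquet
    spark_write_parquet(tbl, "hdfs://nn1.example.com/data/flights")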

This will write the Spark DataFrame referenced by the tbl R variable to the given HDFS path. You can use the spark_read_parquet function to read the same table back into a subsequent Spark session:
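
Continuing the sketch (same hypothetical path):

    tbl <- spark_read_parquet(sc, "flights", "hdfs://nn1.example.com/data/flights")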

You can also write data as CSV or JSON using the spark_write_csv and spark_write_json functions.

Hive Functions

Many of Hive's built-in functions (UDF) and built-in aggregate functions (UDAF) can be called inside dplyr's mutate and summarize. The Hive Language Reference UDF page provides the list of available functions.

The following example uses the datediff and current_date Hive UDFs to compute the difference between the flight_date and the current system date:
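
(The example listing was lost in extraction; a sketch of such a pipeline on the flights table, where datediff and current_date are passed through to Hive:)

    flights_tbl %>%
      mutate(flight_date = paste(year, month, day, sep = "-"),
             days_since = datediff(current_date(), flight_date)) %>%
      group_by(flight_date, days_since) %>%
      tally() %>%
      arrange(desc(days_since))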

Source: R/verb-joins.R

These are methods for the dplyr join generics. They are translated to the following SQL queries:

  • inner_join(x, y): SELECT * FROM x JOIN y ON x.a = y.a

  • left_join(x, y): SELECT * FROM x LEFT JOIN y ON x.a = y.a

  • right_join(x, y): SELECT * FROM x RIGHT JOIN y ON x.a = y.a

  • full_join(x, y): SELECT * FROM x FULL JOIN y ON x.a = y.a

  • semi_join(x, y): SELECT * FROM x WHERE EXISTS (SELECT 1 FROM y WHERE x.a = y.a)

  • anti_join(x, y): SELECT * FROM x WHERE NOT EXISTS (SELECT 1 FROM y WHERE x.a = y.a)

Arguments

x, y

A pair of lazy data frames backed by database queries.

by

A character vector of variables to join by.

If NULL, the default, *_join() will perform a natural join, using all variables in common across x and y. A message lists the variables so that you can check they're correct; suppress the message by supplying by explicitly.

To join by different variables on x and y, use a named vector. For example, by = c('a' = 'b') will match x$a to y$b.

To join by multiple variables, use a vector with length > 1. For example, by = c('a', 'b') will match x$a to y$a and x$b to y$b. Use a named vector to match different variables in x and y. For example, by = c('a' = 'b', 'c' = 'd') will match x$a to y$b and x$c to y$d.

To perform a cross-join, generating all combinations of x and y,use by = character().

copy

If x and y are not from the same data source, and copy is TRUE, then y will be copied into a temporary table in the same database as x. *_join() will automatically run ANALYZE on the created table in the hope that this will make your queries as efficient as possible by giving more data to the query planner.

This allows you to join tables across srcs, but it's a potentially expensive operation, so you must opt into it.

suffix

If there are non-joined duplicate variables in x and y, these suffixes will be added to the output to disambiguate them. Should be a character vector of length 2.

auto_index

If copy is TRUE, automatically create indices for the variables in by. This may speed up the join if there are matching indexes in x.

...

Other parameters passed onto methods.

sql_on

A custom join predicate as an SQL expression. Usually joins use column equality, but you can perform more complex queries by supplying sql_on, which should be a SQL expression that uses LHS and RHS aliases to refer to the left-hand side or right-hand side of the join respectively.

na_matches

Should NA (NULL) values match one another? The default, 'never', is how databases usually work. 'na' makes the joins behave like the dplyr join functions, merge(), match(), and %in%.

Value

Another tbl_lazy. Use show_query() to see the generated query, and use collect() to execute the query and return data to R.

Examples
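
(The original examples were lost in extraction; a minimal sketch using dbplyr's in-memory SQLite helper memdb_frame(), which requires the RSQLite package, to show the generated SQL and the results.)

    library(dplyr)

    band_db <- dbplyr::memdb_frame(name = c("Mick", "John", "Paul"),
                                   band = c("Stones", "Beatles", "Beatles"))
    instrument_db <- dbplyr::memdb_frame(name = c("John", "Paul", "Keith"),
                                         plays = c("guitar", "bass", "guitar"))

    # Generated SQL for a left join
    band_db %>% left_join(instrument_db, by = "name") %>% show_query()

    # semi_join translates to WHERE EXISTS; collect() executes and returns data
    band_db %>% semi_join(instrument_db, by = "name") %>% collect()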
