
Commit

Docs: use proper URIs (subpackages of com.datastax.driver.spark)
ash211 committed Jul 7, 2014
1 parent 058ebbd commit 4bb4213
Showing 3 changed files with 5 additions and 5 deletions.
6 changes: 3 additions & 3 deletions doc/2_loading.md
@@ -25,7 +25,7 @@ Load data into the table:
Now you can read that table as an `RDD`:

val rdd = sc.cassandraTable("test", "words")
-// rdd: com.datastax.driver.spark.CassandraRDD[com.datastax.driver.spark.CassandraRow] = CassandraRDD[0] at RDD at CassandraRDD.scala:41
+// rdd: com.datastax.driver.spark.rdd.CassandraRDD[com.datastax.driver.spark.rdd.reader.CassandraRow] = CassandraRDD[0] at RDD at CassandraRDD.scala:41

rdd.toArray.foreach(println)
// CassandraRow{word: bar, count: 20}
@@ -41,7 +41,7 @@ Continuing with the previous example, follow these steps to access individual columns
Store the first item of the RDD in the `firstRow` value.

val firstRow = rdd.first
-// firstRow: com.datastax.driver.spark.CassandraRow = CassandraRow{word: bar, count: 20}
+// firstRow: com.datastax.driver.spark.rdd.reader.CassandraRow = CassandraRow{word: bar, count: 20}

Get the number of columns and column names:
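The code that follows this prompt is collapsed in the diff view. As a hedged sketch of what column access might look like here, assuming `CassandraRow` exposes `size` and `columnNames` accessors (names assumed, not confirmed by this diff):

```scala
// Sketch only: assumes a CassandraRow API with `size` and `columnNames`;
// `firstRow` is the value obtained from `rdd.first` above.
val columnCount = firstRow.size        // number of columns in the row
val names = firstRow.columnNames       // the column names, e.g. word and count
```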

@@ -85,7 +85,7 @@ In the test keyspace, set up a collection set using cqlsh:
Then in your application, retrieve the first row:

val row = sc.cassandraTable("test", "users").first
-// row: com.datastax.driver.spark.CassandraRow = CassandraRow{username: someone, emails: [[email protected], [email protected]]}
+// row: com.datastax.driver.spark.rdd.reader.CassandraRow = CassandraRow{username: someone, emails: [[email protected], [email protected]]}

Query the collection set in Cassandra from Spark:
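The query example itself is collapsed below. A minimal sketch of reading the set-valued column, assuming `CassandraRow.get[T]` can convert a Cassandra set column into a Scala collection (an assumption, not confirmed by this diff):

```scala
// Sketch only: assumes CassandraRow#get[T] handles Scala collection types;
// `row` comes from the example above.
val emails = row.get[Set[String]]("emails") // a Set[String] of the user's email addresses
```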

2 changes: 1 addition & 1 deletion doc/4_mapper.md
@@ -54,4 +54,4 @@ Property values may also be set by Scala-style setters. The following class is
}


-[Next - Saving data](5_saving.md)
+[Next - Saving data](5_saving.md)
2 changes: 1 addition & 1 deletion doc/5_saving.md
@@ -56,4 +56,4 @@ The following properties set in `SparkConf` can be used to fine-tune the saving
- `cassandra.output.batch.size.bytes`: maximum total size of the batch in bytes; defaults to 64 kB.
- `cassandra.output.concurrent.writes`: maximum number of batches executed in parallel by a single Spark task; defaults to 5.
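Using the property names listed above, a write-tuning configuration could be sketched as follows. The values chosen here are purely illustrative, not recommendations:

```scala
import org.apache.spark.SparkConf

// Illustrative values only; the property names come from the list above.
val conf = new SparkConf()
  .set("cassandra.output.batch.size.bytes", "65536") // 64 kB, the stated default
  .set("cassandra.output.concurrent.writes", "8")    // raised from the default of 5
```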

-[Next - Customizing the object mapping](6_advanced_mapper.md)
+[Next - Customizing the object mapping](6_advanced_mapper.md)
