This article walks through how to use Spark SQL. Many people run into questions about Spark SQL in day-to-day work, so this post pulls together a simple, practical example in the hope of clearing them up. Follow along and give it a try!
pom.xml (Maven dependencies, Spark 2.1.0 with Scala 2.10 artifacts):
<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>2.1.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.10</artifactId>
        <version>2.1.0</version>
    </dependency>
</dependencies>
Java:
import java.io.Serializable;
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.SparkSession;

public class SparkSqlTest {

    // JavaBean used to derive the DataFrame schema; it must be
    // serializable and expose a getter/setter pair for every column.
    public static class Person implements Serializable {
        private static final long serialVersionUID = -6259413972682177507L;

        private String name;
        private int age;

        public Person(String name, int age) {
            this.name = name;
            this.age = age;
        }

        @Override
        public String toString() {
            return name + ": " + age;
        }

        public String getName() {
            return name;
        }

        public void setName(String name) {
            this.name = name;
        }

        public int getAge() {
            return age;
        }

        public void setAge(int age) {
            this.age = age;
        }
    }

    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("Test").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // getOrCreate() reuses the SparkContext created above.
        SparkSession spark = SparkSession.builder().appName("Test").getOrCreate();

        // Parse "name,age" strings into Person beans.
        JavaRDD<String> input = sc.parallelize(Arrays.asList("abc,1", "test,2"));
        JavaRDD<Person> persons = input.map(s -> s.split(","))
                .map(s -> new Person(s[0], Integer.parseInt(s[1])));
        // [abc: 1, test: 2]
        System.out.println(persons.collect());

        // Build a DataFrame from the RDD of beans; the columns are
        // inferred from the Person getters, in alphabetical order.
        Dataset<Row> df = spark.createDataFrame(persons, Person.class);
        /*
        +---+----+
        |age|name|
        +---+----+
        |  1| abc|
        |  2|test|
        +---+----+
        */
        df.show();
        /*
        root
         |-- age: integer (nullable = false)
         |-- name: string (nullable = true)
        */
        df.printSchema();

        // Note: SQLContext and registerDataFrameAsTable are deprecated in
        // Spark 2.x; a SparkSession-based alternative is sketched after
        // this listing.
        SQLContext sql = new SQLContext(spark);
        sql.registerDataFrameAsTable(df, "person");
        /*
        +---+----+
        |age|name|
        +---+----+
        |  2|test|
        +---+----+
        */
        sql.sql("SELECT * FROM person WHERE age>1").show();

        sc.close();
    }
}
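A side note not in the original listing: SQLContext and registerDataFrameAsTable are deprecated as of Spark 2.0, and the same query can be run directly through the SparkSession. A minimal sketch, reusing the df and spark variables from the example above:

// Register the DataFrame as a temporary view bound to this SparkSession,
// then run the query through spark.sql() -- no SQLContext needed.
df.createOrReplaceTempView("person");
// Prints the same single row: age 2, name "test".
spark.sql("SELECT * FROM person WHERE age > 1").show();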
That wraps up this lesson on how to use Spark SQL; hopefully it has answered your questions. Pairing theory with hands-on practice is the best way to learn, so go try it out! To keep learning more on this topic, keep following the Yisu Cloud (億速云) website for more practical articles.