Sometimes the situation is simple: you have a CSV, JSON, or Parquet file and just want to run a quick aggregate query over it, without writing a pile of boilerplate code. Spark SQL can query files directly, which fits this need perfectly: if you know SQL, you don't need to learn the Spark API at all.
Instructions
CSV
spark.sql("select * from csv.`/tmp/demo.csv`").show(false)
JSON
spark.sql("select * from json.`/tmp/demo.json`").show(false)
Parquet
spark.sql("select * from parquet.`/tmp/demo.parquet`").show(false)
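The same file-path syntax works for any query, not just `select *`, so aggregations can run directly on a file. A sketch below, assuming a spark-shell session (where `spark` is predefined) and the example path `/tmp/demo.csv` from above; note that the `csv.`path`` form cannot take reader options, so a headerless CSV comes back with Spark's default column names `_c0`, `_c1`, and so on:

```scala
// Aggregate directly over the file. Columns are named _c0, _c1, ...
// because the csv.`path` syntax does not accept reader options.
spark.sql("select _c0, count(*) as cnt from csv.`/tmp/demo.csv` group by _c0").show(false)

// To honor a header row (and infer column types), register a temporary
// view backed by the CSV data source with explicit options first:
spark.sql("""
  create temporary view demo
  using csv
  options (path '/tmp/demo.csv', header 'true', inferSchema 'true')
""")
spark.sql("select * from demo").show(false)
```

The temporary-view route costs one extra statement but gives you real column names, which makes the aggregate queries much more readable.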