Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/04/13 16:17:18 INFO SparkContext: Running Spark version 2.1.1
20/04/13 16:17:18 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/04/13 16:17:18 INFO SecurityManager: Changing view acls to: Administrator
20/04/13 16:17:18 INFO SecurityManager: Changing modify acls to: Administrator
20/04/13 16:17:18 INFO SecurityManager: Changing view acls groups to:
20/04/13 16:17:18 INFO SecurityManager: Changing modify acls groups to:
20/04/13 16:17:18 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(Administrator); groups with view permissions: Set(); users with modify permissions: Set(Administrator); groups with modify permissions: Set()
20/04/13 16:17:20 INFO Utils: Successfully started service 'sparkDriver' on port 49587.
20/04/13 16:17:20 INFO SparkEnv: Registering MapOutputTracker
20/04/13 16:17:20 INFO SparkEnv: Registering BlockManagerMaster
20/04/13 16:17:20 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/04/13 16:17:20 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/04/13 16:17:20 INFO DiskBlockManager: Created local directory at C:\Users\Administrator\AppData\Local\Temp\blockmgr-c2281fdc-c18b-431b-a8db-c41e93f49919
20/04/13 16:17:20 INFO MemoryStore: MemoryStore started with capacity 1992.9 MB
20/04/13 16:17:20 INFO SparkEnv: Registering OutputCommitCoordinator
20/04/13 16:17:20 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/04/13 16:17:20 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.81.99:4040
20/04/13 16:17:20 INFO Executor: Starting executor ID driver on host localhost
20/04/13 16:17:20 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 49628.
20/04/13 16:17:20 INFO NettyBlockTransferService: Server created on 192.168.81.99:49628
20/04/13 16:17:20 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/04/13 16:17:20 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.81.99, 49628, None)
20/04/13 16:17:20 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.81.99:49628 with 1992.9 MB RAM, BlockManagerId(driver, 192.168.81.99, 49628, None)
20/04/13 16:17:20 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.81.99, 49628, None)
20/04/13 16:17:20 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.81.99, 49628, None)
20/04/13 16:17:20 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 127.1 KB, free 1992.8 MB)
20/04/13 16:17:20 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 14.3 KB, free 1992.8 MB)
20/04/13 16:17:20 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.81.99:49628 (size: 14.3 KB, free: 1992.9 MB)
20/04/13 16:17:21 INFO SparkContext: Created broadcast 0 from textFile at WordCount.scala:20
20/04/13 16:17:21 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:362)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$29.apply(SparkContext.scala:1013)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$29.apply(SparkContext.scala:1013)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:179)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:179)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:179)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:198)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.Partitioner$$anonfun$defaultPartitioner$2.apply(Partitioner.scala:66)
at org.apache.spark.Partitioner$$anonfun$defaultPartitioner$2.apply(Partitioner.scala:66)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.immutable.List.map(List.scala:285)
at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:66)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:330)
at com.atguigu.bigdata.spark.WordCount$.main(WordCount.scala:26)
at com.atguigu.bigdata.spark.WordCount.main(WordCount.scala)
Exception in thread "main" java.lang.NullPointerException
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:404)
at org.apache.hadoop.util.Shell.run(Shell.java:379)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:678)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:661)
at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:567)
at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:542)
at org.apache.hadoop.fs.LocatedFileStatus.<init>(LocatedFileStatus.java:42)
at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1815)
at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1797)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:233)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.Partitioner$$anonfun$defaultPartitioner$2.apply(Partitioner.scala:66)
at org.apache.spark.Partitioner$$anonfun$defaultPartitioner$2.apply(Partitioner.scala:66)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.immutable.List.map(List.scala:285)
at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:66)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:330)
at com.atguigu.bigdata.spark.WordCount$.main(WordCount.scala:26)
at com.atguigu.bigdata.spark.WordCount.main(WordCount.scala)
20/04/13 16:17:21 INFO SparkContext: Invoking stop() from shutdown hook
20/04/13 16:17:21 INFO SparkUI: Stopped Spark web UI at http://192.168.81.99:4040
20/04/13 16:17:21 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/04/13 16:17:21 INFO MemoryStore: MemoryStore cleared
20/04/13 16:17:21 INFO BlockManager: BlockManager stopped
20/04/13 16:17:21 INFO BlockManagerMaster: BlockManagerMaster stopped
20/04/13 16:17:21 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/04/13 16:17:21 INFO SparkContext: Successfully stopped SparkContext
20/04/13 16:17:21 INFO ShutdownHookManager: Shutdown hook called
20/04/13 16:17:21 INFO ShutdownHookManager: Deleting directory C:\Users\Administrator\AppData\Local\Temp\spark-b6e8db72-2692-4e04-bab3-b57c461d8454

Process finished with exit code 1
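
Both stack traces above come from the same root cause: Hadoop's Shell class looks for %HADOOP_HOME%\bin\winutils.exe, and because HADOOP_HOME is not set on this Windows machine the path resolves to null\bin\winutils.exe. A common workaround for running Spark locally on Windows is to download a winutils.exe matching the Hadoop build, put it under a bin directory, and point hadoop.home.dir at the parent directory before the SparkContext does any file I/O. The sketch below is a minimal, hypothetical version of the WordCount job; the path D:\hadoop and the input file are placeholder assumptions, not values taken from the log.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Assumption: winutils.exe has been downloaded to D:\hadoop\bin\winutils.exe.
    // hadoop.home.dir must be set before Hadoop's Shell class is first loaded,
    // i.e. before any file-reading action runs on the SparkContext.
    System.setProperty("hadoop.home.dir", "D:\\hadoop")

    val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Typical word-count pipeline; "input/words.txt" is a placeholder path.
    val counts = sc.textFile("input/words.txt")
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)

    counts.collect().foreach(println)
    sc.stop()
  }
}
```

Equivalently, one can set the HADOOP_HOME environment variable (system-wide or in the IDEA run configuration) and restart IDEA so the new value is picked up.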
When connecting Spark from IDEA, the IP that IDEA uses is the local computer's IP rather than the virtual machine's IP. How should this be handled?
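
A hedged note on this question, assuming the Spark master runs inside the virtual machine while the driver is launched from IDEA on the Windows host: the driver advertises the host's address (192.168.81.99 in the log above), so the VM must be able to reach that address, and the master URL must point at the VM. With a bridged or host-only adapter that puts both machines on one subnet, the addresses can be set explicitly; the VM address below is a made-up placeholder.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical setup: standalone master in the VM, driver running in IDEA on the host.
val conf = new SparkConf()
  .setAppName("WordCount")
  .setMaster("spark://192.168.81.200:7077")  // VM address: placeholder, not from the log
  .set("spark.driver.host", "192.168.81.99") // host address the VM can reach (from the log)
val sc = new SparkContext(conf)
```

If the executors still cannot call back to the driver, checking the Windows firewall and confirming that both addresses sit on the same subnet is usually the next step.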

Reposted from blog.csdn.net/WangaWen1229/article/details/105491466