Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/04/13 16:17:18 INFO SparkContext: Running Spark version 2.1.1
20/04/13 16:17:18 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/04/13 16:17:18 INFO SecurityManager: Changing view acls to: Administrator
20/04/13 16:17:18 INFO SecurityManager: Changing modify acls to: Administrator
20/04/13 16:17:18 INFO SecurityManager: Changing view acls groups to:
20/04/13 16:17:18 INFO SecurityManager: Changing modify acls groups to:
20/04/13 16:17:18 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(Administrator); groups with view permissions: Set(); users with modify permissions: Set(Administrator); groups with modify permissions: Set()
20/04/13 16:17:20 INFO Utils: Successfully started service 'sparkDriver' on port 49587.
20/04/13 16:17:20 INFO SparkEnv: Registering MapOutputTracker
20/04/13 16:17:20 INFO SparkEnv: Registering BlockManagerMaster
20/04/13 16:17:20 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/04/13 16:17:20 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/04/13 16:17:20 INFO DiskBlockManager: Created local directory at C:\Users\Administrator\AppData\Local\Temp\blockmgr-c2281fdc-c18b-431b-a8db-c41e93f49919
20/04/13 16:17:20 INFO MemoryStore: MemoryStore started with capacity 1992.9 MB
20/04/13 16:17:20 INFO SparkEnv: Registering OutputCommitCoordinator
20/04/13 16:17:20 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/04/13 16:17:20 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.81.99:4040
20/04/13 16:17:20 INFO Executor: Starting executor ID driver on host localhost
20/04/13 16:17:20 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 49628.
20/04/13 16:17:20 INFO NettyBlockTransferService: Server created on 192.168.81.99:49628
20/04/13 16:17:20 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/04/13 16:17:20 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.81.99, 49628, None)
20/04/13 16:17:20 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.81.99:49628 with 1992.9 MB RAM, BlockManagerId(driver, 192.168.81.99, 49628, None)
20/04/13 16:17:20 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.81.99, 49628, None)
20/04/13 16:17:20 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.81.99, 49628, None)
20/04/13 16:17:20 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 127.1 KB, free 1992.8 MB)
20/04/13 16:17:20 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 14.3 KB, free 1992.8 MB)
20/04/13 16:17:20 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.81.99:49628 (size: 14.3 KB, free: 1992.9 MB)
20/04/13 16:17:21 INFO SparkContext: Created broadcast 0 from textFile at WordCount.scala:20
20/04/13 16:17:21 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
    at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
    at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
    at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
    at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
    at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:362)
    at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$29.apply(SparkContext.scala:1013)
    at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$29.apply(SparkContext.scala:1013)
    at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:179)
    at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:179)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:179)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:198)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.Partitioner$$anonfun$defaultPartitioner$2.apply(Partitioner.scala:66)
    at org.apache.spark.Partitioner$$anonfun$defaultPartitioner$2.apply(Partitioner.scala:66)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.immutable.List.map(List.scala:285)
    at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:66)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:330)
    at com.atguigu.bigdata.spark.WordCount$.main(WordCount.scala:26)
    at com.atguigu.bigdata.spark.WordCount.main(WordCount.scala)
Exception in thread "main" java.lang.NullPointerException
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:404)
    at org.apache.hadoop.util.Shell.run(Shell.java:379)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:678)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:661)
    at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
    at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:567)
    at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:542)
    at org.apache.hadoop.fs.LocatedFileStatus.<init>(LocatedFileStatus.java:42)
    at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1815)
    at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1797)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:233)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.Partitioner$$anonfun$defaultPartitioner$2.apply(Partitioner.scala:66)
    at org.apache.spark.Partitioner$$anonfun$defaultPartitioner$2.apply(Partitioner.scala:66)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.immutable.List.map(List.scala:285)
    at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:66)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:330)
    at com.atguigu.bigdata.spark.WordCount$.main(WordCount.scala:26)
    at com.atguigu.bigdata.spark.WordCount.main(WordCount.scala)
20/04/13 16:17:21 INFO SparkContext: Invoking stop() from shutdown hook
20/04/13 16:17:21 INFO SparkUI: Stopped Spark web UI at http://192.168.81.99:4040
20/04/13 16:17:21 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/04/13 16:17:21 INFO MemoryStore: MemoryStore cleared
20/04/13 16:17:21 INFO BlockManager: BlockManager stopped
20/04/13 16:17:21 INFO BlockManagerMaster: BlockManagerMaster stopped
20/04/13 16:17:21 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/04/13 16:17:21 INFO SparkContext: Successfully stopped SparkContext
20/04/13 16:17:21 INFO ShutdownHookManager: Shutdown hook called
20/04/13 16:17:21 INFO ShutdownHookManager: Deleting directory C:\Users\Administrator\AppData\Local\Temp\spark-b6e8db72-2692-4e04-bab3-b57c461d8454
Process finished with exit code 1

Question: when running Spark from IDEA, the IP that IDEA uses is the physical computer's IP, not the virtual machine's IP. What can be done about this?
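First, note that the crash in this log is not network-related. The ERROR from Shell says Hadoop could not find winutils.exe: on Windows, Hadoop's Shell class resolves that binary as %HADOOP_HOME%\bin\winutils.exe during static initialization, and because HADOOP_HOME was unset the path came out as null\bin\winutils.exe. The later NullPointerException in ProcessBuilder.start is the same missing binary surfacing when Hadoop tries to read local file permissions in listStatus. The usual workaround is to download a winutils.exe matching your Hadoop version, place it under a bin directory, and point hadoop.home.dir at that directory's parent before the SparkContext is created. A minimal sketch, assuming the binary sits at C:\hadoop\bin\winutils.exe (that path and the input file are assumptions, not taken from the original project):

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Assumption: winutils.exe has been downloaded to C:\hadoop\bin\winutils.exe.
    // This must run before the SparkContext is created, because Hadoop's Shell
    // class looks winutils up in a static initializer.
    System.setProperty("hadoop.home.dir", "C:\\hadoop")

    val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Hypothetical input path; the log only shows textFile at WordCount.scala:20
    // and reduceByKey at line 26.
    val counts = sc.textFile("in/word.txt")
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
    counts.collect().foreach(println)

    sc.stop()
  }
}

Setting a HADOOP_HOME environment variable to C:\hadoop (and restarting IDEA so it picks the variable up) achieves the same thing without touching the code.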
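As for the IP question itself: when the job is launched from IDEA on the host machine, the driver binds to a host address (192.168.81.99 in the log above), and executors inside the virtual machine must be able to connect back to that address. If the host and the VM share a routable network (for example a bridged or host-only adapter), one option is to set spark.driver.host explicitly to an address the VM can reach. A sketch under that assumption; the master URL spark://192.168.81.130:7077 is hypothetical, so substitute your cluster's master:

val conf = new SparkConf()
  .setAppName("WordCount")
  // Hypothetical master URL of the Spark cluster running inside the VM.
  .setMaster("spark://192.168.81.130:7077")
  // Advertise a host address the VM can route to; 192.168.81.99 is the address
  // the driver bound to in the log above, so replace it with yours.
  .set("spark.driver.host", "192.168.81.99")

If the VM cannot reach the host at all (e.g. NAT networking), a simpler route is to keep setMaster("local[*]") for development in IDEA and submit the packaged jar to the cluster with spark-submit from inside the VM.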