Setting Up a Local Spark and Hadoop Environment on Windows


Environment

Spark: 2.3.1
Hadoop: 2.7.6
Java: 1.8

Preface

Recently I've been learning Spark. Following the official documentation, I wanted to run a small program locally and see it work:

https://spark.apache.org/docs/latest/quick-start.html#self-contained-applications

The official example I wanted to run:

/* SimpleApp.java */
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;

public class SimpleApp {
  public static void main(String[] args) {
    String logFile = "YOUR_SPARK_HOME/README.md"; // Should be some file on your system
    SparkSession spark = SparkSession.builder().appName("Simple Application").getOrCreate();
    Dataset<String> logData = spark.read().textFile(logFile).cache();

    long numAs = logData.filter(s -> s.contains("a")).count();
    long numBs = logData.filter(s -> s.contains("b")).count();

    System.out.println("Lines with a: " + numAs + ", lines with b: " + numBs);

    spark.stop();
  }
}

After downloading Spark and adapting the example code for my machine:

package misssad.simple_project;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.SparkSession;

public class SimpleApp {
    public static void main(String[] args) {
        // Point Hadoop at the local installation (see the Hadoop section below)
        System.setProperty("hadoop.home.dir", "D:\\Program Files\\hadoop-2.7.6");
        // Any local text file will do; here it's the README that ships with Spark
        String logFile = "E:\\安装包\\spark-2.3.1-bin-hadoop2.7\\spark-2.3.1-bin-hadoop2.7\\README.md";

        SparkSession spark = SparkSession.builder().appName("Simple Application").getOrCreate();
        Dataset<String> logData = spark.read().textFile(logFile).cache();
        long numAs = logData.filter(s -> s.contains("a")).count();
        long numBs = logData.filter(s -> s.contains("b")).count();

        System.out.println("Lines with a:" + numAs + ", lines with b: " + numBs);
        spark.stop();
    }
}
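
(For reference: the quick-start page linked above builds this as a Maven project with a single dependency, org.apache.spark:spark-sql_2.11 at version 2.3.1, compiled with Java 8. Any build setup that puts spark-sql on the classpath should behave the same way.)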

The moment I ran it, it threw a pile of errors. Thinking it over, the problem had to be that my local environment wasn't set up properly!

Spark

For Spark itself, I just downloaded the archive and unpacked it.

Windows directory:

D:\Program Files\spark-2.3.1-bin-hadoop2.7

Hadoop

If you run the code at this point, you'll still get errors, such as:

Failed to locate the winutils binary in the hadoop binary path

This happens because the HADOOP_HOME environment variable isn't configured, and its bin directory must contain winutils.exe.
Why is winutils.exe required?
Hadoop was written for Linux; to run it on Windows, winutils.exe has to emulate the environment Hadoop expects.
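
As a side note, the test code above already takes a second route that Hadoop supports: the hadoop.home.dir JVM system property. As far as I can tell, Hadoop consults this property before falling back to the HADOOP_HOME environment variable, but it has to be set before the first Spark/Hadoop call, e.g. at the top of main():

// Set Hadoop's home from code instead of (or in addition to) HADOOP_HOME;
// this must run before Hadoop's classes are first loaded.
System.setProperty("hadoop.home.dir", "D:\\Program Files\\hadoop-2.7.6");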

Downloading Hadoop

The one thing to watch when downloading is that the version must match your Spark build. For example, since I downloaded spark-2.3.1-bin-hadoop2.7, I need to get Hadoop 2.7.

Configuring environment variables

HADOOP_HOME and JAVA_HOME

Environment variable HADOOP_HOME: D:\Progra~1\hadoop-2.7.6

Important! Important! Important!

Hadoop's environment variable, and every variable it depends on, must not contain spaces.

For example, my install location is D:\Program Files\hadoop-2.7.6.

A common workaround online is to move everything to a directory without spaces, but I strongly disagree: it treats the symptom, not the cause. Once enough things depend on your environment, changing a long-established variable can break older programs.

For instance, Hadoop also depends on JAVA_HOME, and mine is D:\Program Files\Java\jdk1.8.0_151. I can hardly relocate Java to a space-free directory too; if I did, all my existing Java programs and scripts would need changing!

The right fix! The right fix! The right fix!

Use Progra~1 in place of Program Files.

Background: directory names were originally not allowed to contain spaces. When spaces were later permitted, many commands became ambiguous, so the workaround was to wrap names in double quotes. For example:

cd Documents and Settings

Under the old rules this is the same as cd Documents, and the cd command can't find a directory named Documents.
Hence the double quotes:
cd "Documents and Settings"
But with set PATH this gets painful: the names are long and the quotes are easy to get wrong. So the 8.3 abbreviation was adopted: write the first six characters (spaces dropped), then append a tilde and 1. For example:
"Documents and Settings" becomes DOCUME~1
"Local Settings" becomes LOCALS~1 (note the space is dropped and letters from the second word are borrowed to make up six characters, then the tilde and 1 are appended)
This then became the standard behavior.
Source: Why the file path Program Files can be written as Progra~1

Note: if several names collide on the same six-character prefix, the later ones get ~2, ~3, and so on; the number reflects their order.
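
Because that trailing number depends on how many sibling folders share the prefix, it's worth double-checking that the short path really resolves to the directory you mean. A minimal sketch, assuming you're on Windows, where File.getCanonicalPath() expands 8.3 short names:

import java.io.File;
import java.io.IOException;

public class ShortNameCheck {
    public static void main(String[] args) throws IOException {
        // The 8.3 short form we plan to put into HADOOP_HOME
        File shortPath = new File("D:\\Progra~1\\hadoop-2.7.6");
        // getCanonicalPath() expands the short name to its long form,
        // letting us confirm it points at the intended directory
        System.out.println(shortPath.getCanonicalPath());
        // Expected: D:\Program Files\hadoop-2.7.6
    }
}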

My final environment variables:

JAVA_HOME: D:\Progra~1\Java\jdk1.8.0_151 (changed from D:\Program Files\Java\jdk1.8.0_151)
HADOOP_HOME: D:\Progra~1\hadoop-2.7.6
HADOOP_CONF_DIR: %HADOOP_HOME%\etc\hadoop
Path: ...earlier entries omitted...;%HADOOP_HOME%\bin

winutils.exe

Download:
https://github.com/steveloughran/winutils/releases

Don't worry about matching versions; just grab the latest release.
Then drop it into the bin directory under the Hadoop root.
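
Before launching Spark again, a quick sanity check that everything landed where Hadoop will look for it can save a round of cryptic stack traces. A minimal sketch using only the JDK:

import java.io.File;

public class WinutilsCheck {
    public static void main(String[] args) {
        // HADOOP_HOME should point at the Hadoop root, e.g. D:\Progra~1\hadoop-2.7.6
        String hadoopHome = System.getenv("HADOOP_HOME");
        if (hadoopHome == null) {
            System.err.println("HADOOP_HOME is not set");
            return;
        }
        // Hadoop expects winutils.exe under %HADOOP_HOME%\bin
        File winutils = new File(hadoopHome, "bin\\winutils.exe");
        System.out.println(winutils.getPath() + " exists: " + winutils.exists());
    }
}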

Verifying the Hadoop installation

Open cmd and run hadoop version:

C:\Users\yutao>hadoop version
Hadoop 2.7.6
Subversion https://shv@git-wip-us.apache.org/repos/asf/hadoop.git -r 085099c66cf28be31604560c376fa282e69282b8
Compiled by kshvachk on 2018-04-18T01:33Z
Compiled with protoc 2.5.0
From source with checksum 71e2695531cb3360ab74598755d036
This command was run using /D:/Program Files/hadoop-2.7.6/share/hadoop/common/hadoop-common-2.7.6.jar

Running the program

A master URL must be set in your configuration

Sure enough, the first run failed with the error above.
The cause: no master URL was specified. I'm using Eclipse, so under Run Configurations → Arguments → VM arguments I added:

-Dspark.master=local
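
Alternatively, the master can be set in code when building the session, which leaves the run configuration alone (convenient for local experiments; remove it before submitting to a real cluster). Inside main():

SparkSession spark = SparkSession.builder()
        .appName("Simple Application")
        .master("local[*]") // local mode, one worker thread per core; "local" uses a single thread
        .getOrCreate();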

Running it again:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
18/06/20 11:36:08 INFO SparkContext: Running Spark version 2.3.1
18/06/20 11:36:08 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/06/20 11:36:08 INFO SparkContext: Submitted application: Simple Application
18/06/20 11:36:08 INFO SecurityManager: Changing view acls to: yutao
18/06/20 11:36:08 INFO SecurityManager: Changing modify acls to: yutao
18/06/20 11:36:08 INFO SecurityManager: Changing view acls groups to: 
18/06/20 11:36:08 INFO SecurityManager: Changing modify acls groups to: 
18/06/20 11:36:08 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(yutao); groups with view permissions: Set(); users  with modify permissions: Set(yutao); groups with modify permissions: Set()
18/06/20 11:36:09 INFO Utils: Successfully started service 'sparkDriver' on port 49461.
18/06/20 11:36:09 INFO SparkEnv: Registering MapOutputTracker
18/06/20 11:36:09 INFO SparkEnv: Registering BlockManagerMaster
18/06/20 11:36:09 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
18/06/20 11:36:09 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
18/06/20 11:36:09 INFO DiskBlockManager: Created local directory at C:\Users\yutao\AppData\Local\Temp\blockmgr-60f73a77-407c-4d1c-bfe7-56ab00ae390c
18/06/20 11:36:09 INFO MemoryStore: MemoryStore started with capacity 901.8 MB
18/06/20 11:36:09 INFO SparkEnv: Registering OutputCommitCoordinator
18/06/20 11:36:09 INFO Utils: Successfully started service 'SparkUI' on port 4040.
18/06/20 11:36:09 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://yutao.go-goal.com:4040
18/06/20 11:36:09 INFO Executor: Starting executor ID driver on host localhost
18/06/20 11:36:09 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 49470.
18/06/20 11:36:09 INFO NettyBlockTransferService: Server created on yutao.go-goal.com:49470
18/06/20 11:36:09 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
18/06/20 11:36:09 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, yutao.go-goal.com, 49470, None)
18/06/20 11:36:09 INFO BlockManagerMasterEndpoint: Registering block manager yutao.go-goal.com:49470 with 901.8 MB RAM, BlockManagerId(driver, yutao.go-goal.com, 49470, None)
18/06/20 11:36:09 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, yutao.go-goal.com, 49470, None)
18/06/20 11:36:09 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, yutao.go-goal.com, 49470, None)
18/06/20 11:36:09 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/D:/sts/workspace/simple-project/spark-warehouse/').
18/06/20 11:36:09 INFO SharedState: Warehouse path is 'file:/D:/sts/workspace/simple-project/spark-warehouse/'.
18/06/20 11:36:10 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
18/06/20 11:36:13 INFO FileSourceStrategy: Pruning directories with: 
18/06/20 11:36:14 INFO FileSourceStrategy: Post-Scan Filters: 
18/06/20 11:36:14 INFO FileSourceStrategy: Output Data Schema: struct<value: string>
18/06/20 11:36:14 INFO FileSourceScanExec: Pushed Filters: 
18/06/20 11:36:15 INFO CodeGenerator: Code generated in 353.178554 ms
18/06/20 11:36:15 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 220.1 KB, free 901.6 MB)
18/06/20 11:36:16 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 20.6 KB, free 901.6 MB)
18/06/20 11:36:16 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on yutao.go-goal.com:49470 (size: 20.6 KB, free: 901.8 MB)
18/06/20 11:36:16 INFO SparkContext: Created broadcast 0 from cache at SimpleApp.java:14
18/06/20 11:36:16 INFO FileSourceScanExec: Planning scan with bin packing, max size: 4198113 bytes, open cost is considered as scanning 4194304 bytes.
18/06/20 11:36:19 INFO CodeGenerator: Code generated in 37.243378 ms
18/06/20 11:36:19 INFO CodeGenerator: Code generated in 39.624506 ms
18/06/20 11:36:19 INFO SparkContext: Starting job: count at SimpleApp.java:15
18/06/20 11:36:20 INFO DAGScheduler: Registering RDD 7 (count at SimpleApp.java:15)
18/06/20 11:36:20 INFO DAGScheduler: Got job 0 (count at SimpleApp.java:15) with 1 output partitions
18/06/20 11:36:20 INFO DAGScheduler: Final stage: ResultStage 1 (count at SimpleApp.java:15)
18/06/20 11:36:20 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
18/06/20 11:36:20 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
18/06/20 11:36:20 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[7] at count at SimpleApp.java:15), which has no missing parents
18/06/20 11:36:20 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 18.2 KB, free 901.5 MB)
18/06/20 11:36:20 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 8.4 KB, free 901.5 MB)
18/06/20 11:36:20 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on yutao.go-goal.com:49470 (size: 8.4 KB, free: 901.8 MB)
18/06/20 11:36:20 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1039
18/06/20 11:36:20 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[7] at count at SimpleApp.java:15) (first 15 tasks are for partitions Vector(0))
18/06/20 11:36:20 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
18/06/20 11:36:20 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 8316 bytes)
18/06/20 11:36:20 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
18/06/20 11:36:21 INFO FileScanRDD: Reading File path: file:///D:/Program%20Files/spark-2.3.1-bin-hadoop2.7/README.md, range: 0-3809, partition values: [empty row]
18/06/20 11:36:21 INFO CodeGenerator: Code generated in 22.258382 ms
18/06/20 11:36:21 INFO MemoryStore: Block rdd_2_0 stored as values in memory (estimated size 4.4 KB, free 901.5 MB)
18/06/20 11:36:21 INFO BlockManagerInfo: Added rdd_2_0 in memory on yutao.go-goal.com:49470 (size: 4.4 KB, free: 901.8 MB)
18/06/20 11:36:21 INFO CodeGenerator: Code generated in 5.141693 ms
18/06/20 11:36:21 INFO CodeGenerator: Code generated in 62.976727 ms
18/06/20 11:36:22 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1984 bytes result sent to driver
18/06/20 11:36:22 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1665 ms on localhost (executor driver) (1/1)
18/06/20 11:36:22 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
18/06/20 11:36:22 INFO DAGScheduler: ShuffleMapStage 0 (count at SimpleApp.java:15) finished in 2.036 s
18/06/20 11:36:22 INFO DAGScheduler: looking for newly runnable stages
18/06/20 11:36:22 INFO DAGScheduler: running: Set()
18/06/20 11:36:22 INFO DAGScheduler: waiting: Set(ResultStage 1)
18/06/20 11:36:22 INFO DAGScheduler: failed: Set()
18/06/20 11:36:22 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[10] at count at SimpleApp.java:15), which has no missing parents
18/06/20 11:36:22 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 7.4 KB, free 901.5 MB)
18/06/20 11:36:22 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 3.8 KB, free 901.5 MB)
18/06/20 11:36:22 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on yutao.go-goal.com:49470 (size: 3.8 KB, free: 901.8 MB)
18/06/20 11:36:22 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1039
18/06/20 11:36:22 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[10] at count at SimpleApp.java:15) (first 15 tasks are for partitions Vector(0))
18/06/20 11:36:22 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
18/06/20 11:36:22 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, executor driver, partition 0, ANY, 7754 bytes)
18/06/20 11:36:22 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
18/06/20 11:36:22 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
18/06/20 11:36:22 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 26 ms
18/06/20 11:36:22 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 1782 bytes result sent to driver
18/06/20 11:36:22 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 294 ms on localhost (executor driver) (1/1)
18/06/20 11:36:22 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
18/06/20 11:36:22 INFO DAGScheduler: ResultStage 1 (count at SimpleApp.java:15) finished in 0.304 s
18/06/20 11:36:22 INFO DAGScheduler: Job 0 finished: count at SimpleApp.java:15, took 3.013115 s
18/06/20 11:36:23 INFO SparkContext: Starting job: count at SimpleApp.java:16
18/06/20 11:36:23 INFO DAGScheduler: Registering RDD 15 (count at SimpleApp.java:16)
18/06/20 11:36:23 INFO DAGScheduler: Got job 1 (count at SimpleApp.java:16) with 1 output partitions
18/06/20 11:36:23 INFO DAGScheduler: Final stage: ResultStage 3 (count at SimpleApp.java:16)
18/06/20 11:36:23 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 2)
18/06/20 11:36:23 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 2)
18/06/20 11:36:23 INFO DAGScheduler: Submitting ShuffleMapStage 2 (MapPartitionsRDD[15] at count at SimpleApp.java:16), which has no missing parents
18/06/20 11:36:23 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 18.2 KB, free 901.5 MB)
18/06/20 11:36:23 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 8.4 KB, free 901.5 MB)
18/06/20 11:36:23 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on yutao.go-goal.com:49470 (size: 8.4 KB, free: 901.8 MB)
18/06/20 11:36:23 INFO SparkContext: Created broadcast 3 from broadcast at DAGScheduler.scala:1039
18/06/20 11:36:23 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 2 (MapPartitionsRDD[15] at count at SimpleApp.java:16) (first 15 tasks are for partitions Vector(0))
18/06/20 11:36:23 INFO TaskSchedulerImpl: Adding task set 2.0 with 1 tasks
18/06/20 11:36:23 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 2, localhost, executor driver, partition 0, PROCESS_LOCAL, 8316 bytes)
18/06/20 11:36:23 INFO Executor: Running task 0.0 in stage 2.0 (TID 2)
18/06/20 11:36:23 INFO BlockManager: Found block rdd_2_0 locally
18/06/20 11:36:23 INFO Executor: Finished task 0.0 in stage 2.0 (TID 2). 1898 bytes result sent to driver
18/06/20 11:36:23 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 2) in 35 ms on localhost (executor driver) (1/1)
18/06/20 11:36:23 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool 
18/06/20 11:36:23 INFO DAGScheduler: ShuffleMapStage 2 (count at SimpleApp.java:16) finished in 0.042 s
18/06/20 11:36:23 INFO DAGScheduler: looking for newly runnable stages
18/06/20 11:36:23 INFO DAGScheduler: running: Set()
18/06/20 11:36:23 INFO DAGScheduler: waiting: Set(ResultStage 3)
18/06/20 11:36:23 INFO DAGScheduler: failed: Set()
18/06/20 11:36:23 INFO DAGScheduler: Submitting ResultStage 3 (MapPartitionsRDD[18] at count at SimpleApp.java:16), which has no missing parents
18/06/20 11:36:23 INFO MemoryStore: Block broadcast_4 stored as values in memory (estimated size 7.4 KB, free 901.5 MB)
18/06/20 11:36:23 INFO MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 3.8 KB, free 901.5 MB)
18/06/20 11:36:23 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on yutao.go-goal.com:49470 (size: 3.8 KB, free: 901.8 MB)
18/06/20 11:36:23 INFO SparkContext: Created broadcast 4 from broadcast at DAGScheduler.scala:1039
18/06/20 11:36:23 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 3 (MapPartitionsRDD[18] at count at SimpleApp.java:16) (first 15 tasks are for partitions Vector(0))
18/06/20 11:36:23 INFO TaskSchedulerImpl: Adding task set 3.0 with 1 tasks
18/06/20 11:36:23 INFO TaskSetManager: Starting task 0.0 in stage 3.0 (TID 3, localhost, executor driver, partition 0, ANY, 7754 bytes)
18/06/20 11:36:23 INFO Executor: Running task 0.0 in stage 3.0 (TID 3)
18/06/20 11:36:23 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
18/06/20 11:36:23 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
18/06/20 11:36:23 INFO Executor: Finished task 0.0 in stage 3.0 (TID 3). 1696 bytes result sent to driver
18/06/20 11:36:23 INFO TaskSetManager: Finished task 0.0 in stage 3.0 (TID 3) in 6 ms on localhost (executor driver) (1/1)
18/06/20 11:36:23 INFO TaskSchedulerImpl: Removed TaskSet 3.0, whose tasks have all completed, from pool 
18/06/20 11:36:23 INFO DAGScheduler: ResultStage 3 (count at SimpleApp.java:16) finished in 0.013 s
18/06/20 11:36:23 INFO DAGScheduler: Job 1 finished: count at SimpleApp.java:16, took 0.060497 s
// The line that matters: the run succeeded
Lines with a:61, lines with b: 30
18/06/20 11:36:23 INFO SparkUI: Stopped Spark web UI at http://yutao.go-goal.com:4040
18/06/20 11:36:23 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/06/20 11:36:23 INFO MemoryStore: MemoryStore cleared
18/06/20 11:36:23 INFO BlockManager: BlockManager stopped
18/06/20 11:36:23 INFO BlockManagerMaster: BlockManagerMaster stopped
18/06/20 11:36:23 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/06/20 11:36:23 INFO SparkContext: Successfully stopped SparkContext
18/06/20 11:36:23 INFO ShutdownHookManager: Shutdown hook called
18/06/20 11:36:23 INFO ShutdownHookManager: Deleting directory C:\Users\yutao\AppData\Local\Temp\spark-48b4a38a-c25a-4da7-b2fc-de0da9b7609b

32-bit? 64-bit?

The log above also contains a WARN message:

WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

This warning is harmless.
The native library bundled with the official Hadoop distribution was compiled for 32-bit, while most of our machines are 64-bit, hence the warning.

Here's the answer from a Stack Overflow user (linked in the references below):

I assume you're running Hadoop on 64bit CentOS. The reason you saw that warning is the native Hadoop library $HADOOP_HOME/lib/native/libhadoop.so.1.0.0 was actually compiled on 32 bit.

Anyway, it's just a warning, and won't impact Hadoop's functionalities.

Here is the way if you do want to eliminate this warning, download the source code of Hadoop and recompile libhadoop.so.1.0.0 on 64bit system, then replace the 32bit one.

Steps on how to recompile source code are included here for Ubuntu:

http://www.ercoppa.org/Linux-Compile-Hadoop-220-fix-Unable-to-load-native-hadoop-library.htm
Good luck.

The second paragraph says it plainly: anyway, it's just a warning, and won't impact Hadoop's functionality.
So I see no need to fix it!

Summary

I've only just started with big-data frameworks, and I got here because I had a problem that needed solving!

References:

https://stackoverflow.com/a/19993403/6952713

getting JAVA_HOME is incorrectly set with hadoop

winutils.exe download page

java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries
