Apache Phoenix research notes

1. Preface.
   Apache Phoenix still has a lot of rough edges that need work, but the thinking behind it really is good.
2. Problems.
  (1) Creating an index on an in-memory table with the official CREATE INDEX statement causes inserts to fail, queries to fail, and the client to crash. Doesn't the project say that on HBase 0.94 and later both Phoenix 2.2 and 3.0 support secondary indexes? Then why, on Hadoop 2.2, does the CREATE INDEX succeed, yet from that point on every insert and every query against that table fails? Hoping for a fix. (A sketch of the statement shape is below.)
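
For reference, the statement was of this shape. The index, table, and column names (idx_demo, demo_table, demo_col) are placeholders for illustration, not the real schema:

0: jdbc:phoenix:hadoopmaster:2181> CREATE INDEX idx_demo ON demo_table (demo_col);

After this returns successfully, both upserts and selects against demo_table start failing as described above.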
  (2) JOIN is broken and not usable yet. The version is Phoenix 2.2; the error is as follows:

0: jdbc:phoenix:hadoopmaster:2181>  select * from a inner join b  on a.id=b.id
. . . . . . . . . . . . . . . . .> ;
java.lang.NullPointerException: at index 5
        at com.google.common.collect.ImmutableList.checkElementNotNull(ImmutableList.java:311)
        at com.google.common.collect.ImmutableList.construct(ImmutableList.java:302)
        at com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:278)
        at com.salesforce.phoenix.schema.PTableImpl.init(PTableImpl.java:242)
        at com.salesforce.phoenix.schema.PTableImpl.<init>(PTableImpl.java:190)
        at com.salesforce.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:185)
        at com.salesforce.phoenix.compile.JoinCompiler$JoinSpec.createProjectedTable(JoinCompiler.java:257)
        at com.salesforce.phoenix.compile.QueryCompiler.compileJoinQuery(QueryCompiler.java:156)
        at com.salesforce.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:137)
        at com.salesforce.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:121)
        at com.salesforce.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:44)
        at com.salesforce.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:39)
        at com.salesforce.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.optimizePlan(PhoenixStatement.java:223)
        at com.salesforce.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.executeQuery(PhoenixStatement.java:200)
        at com.salesforce.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.execute(PhoenixStatement.java:212)
        at com.salesforce.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1014)
        at sqlline.SqlLine$Commands.execute(SqlLine.java:3673)
        at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
        at sqlline.SqlLine.dispatch(SqlLine.java:821)
        at sqlline.SqlLine.begin(SqlLine.java:699)
        at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
        at sqlline.SqlLine.main(SqlLine.java:424)



(3) Query problem: running out of memory.
java.lang.RuntimeException: com.salesforce.phoenix.exception.PhoenixIOException: com.salesforce.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: TASK12,\x003-1400811430015-user5445248961.893007443\x00bxstevfkzcjl1oxrjmzb,1402059515499.09d0aa347d2a61427cc5c96ae094e96b.: Requested memory of 34598340 bytes could not be allocated from remaining memory of 321202800 bytes from global pool of 346218496 bytes after waiting for 10000ms.
        at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2440)
        at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2074)
        at sqlline.SqlLine.print(SqlLine.java:1735)
        at sqlline.SqlLine$Commands.execute(SqlLine.java:3683)
        at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
        at sqlline.SqlLine.dispatch(SqlLine.java:821)
        at sqlline.SqlLine.begin(SqlLine.java:699)
        at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
        at sqlline.SqlLine.main(SqlLine.java:424)
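
For anyone hitting the same thing: the "global pool" in the message looks like the Phoenix coprocessor memory pool on the region servers. Judging by the property names in Phoenix's query-services settings, its size is a percentage of region-server heap set by phoenix.query.maxGlobalMemoryPercentage (default around 15), and the 10000ms wait matches phoenix.query.maxGlobalMemoryWaitMs. A rough region-server hbase-site.xml sketch; the values are illustrative and should be verified against the 2.2 code:

<property>
  <name>phoenix.query.maxGlobalMemoryPercentage</name>
  <value>30</value> <!-- illustrative: give Phoenix a bigger share of the heap -->
</property>
<property>
  <name>phoenix.query.maxGlobalMemoryWaitMs</name>
  <value>20000</value> <!-- illustrative: wait longer than the 10000ms seen above -->
</property>

A bigger region-server heap helps too, since the pool is a percentage of it; restart the region servers after changing either setting.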



3. Summary.
   (1) On single-node scan speed, GROUP BY, and the CAST type-conversion syntax, Phoenix still has a lot of ground to cover. GROUP BY runs at roughly 2.5 million rows per second on a single node, which is far too slow; a single-node table-scan rate of at least 7 million rows per second is needed for it to be worth using. And CAST conversion such as cast '111.1' as Integer errors out immediately, which left me speechless. (See the CAST sketch after this list.)
   (2) The project still needs to work on the details, and there are very few demos.
   (3) Phoenix has gone down the wrong path: version 2.2.3 was never sorted out before they rushed on to the next release, so not a single version is really usable, and HBase support has jumped straight to 0.98. What are people stuck below 0.98 supposed to do? Lots of bugs and few features are Phoenix's biggest weaknesses.
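
On the CAST complaint above: the documented Phoenix form takes parentheses, and TO_NUMBER is its documented string-to-number function, so the combination below is what one would normally try. demo_table is a placeholder table name, and this is untested on 2.2, where it may still fail as described:

-- standard parenthesized CAST, with TO_NUMBER doing the string-to-number step
SELECT TO_NUMBER('111.1') FROM demo_table LIMIT 1;
SELECT CAST(TO_NUMBER('111.1') AS INTEGER) FROM demo_table LIMIT 1;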
 
 

Reposted from nannan408.iteye.com/blog/2077248