[Problem]
zkui reports an error:
2016-04-07 16:30:27 ERROR Home:103 -
[org.apache.zookeeper.KeeperException.create(KeeperException.java:99),
org.apache.zookeeper.KeeperException.create(KeeperException.java:51),
org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1468),
org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1496),
com.deem.zkui.utils.ZooKeeperUtil.listNodeEntries(ZooKeeperUtil.java:255),
com.deem.zkui.controller.Home.doGet(Home.java:71),
javax.servlet.http.HttpServlet.service(HttpServlet.java:687),
javax.servlet.http.HttpServlet.service(HttpServlet.java:790),
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:698),
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1564),
com.deem.zkui.filter.AuthFilter.doFilter(AuthFilter.java:63),
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1544),
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:524),
org.eclipse.jetty.se
Analysis:
When a Java program starts, by default (note: by default) it creates a file named after its process id under /tmp/hsperfdata_userName and stores JVM runtime information in it, where userName is the current user; the /tmp/hsperfdata_userName directory holds this information for every Java process that user has started. On Windows, the system's temporary-file directory takes the place of /tmp.
Tools such as jps, jconsole, and jvisualvm read their data from exactly this file (/tmp/hsperfdata_userName/pid). So when the file is missing or unreadable, jps cannot show the process id, jconsole cannot monitor the process, and so on.
The ZooKeeper process "could not be found" because files under this directory had been deleted by mistake.
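A quick way to confirm this diagnosis is to check whether the perfdata file that jps relies on actually exists for the process. A minimal sketch (the pid 1234 is a placeholder; substitute the ZooKeeper process id from `ps -ef`):

```shell
# Check for the perfdata file that jps/jconsole read for a given pid.
PID=1234   # placeholder: the Java process id you expect jps to show
PERF_FILE="/tmp/hsperfdata_$(id -un)/${PID}"
if [ -e "$PERF_FILE" ]; then
    echo "perfdata present: $PERF_FILE (jps should list pid $PID)"
else
    echo "perfdata missing: $PERF_FILE (jps/jconsole cannot see pid $PID)"
fi
```

If the file is missing while the process is clearly alive in `ps`, the perfdata directory has likely been cleaned out, which matches the failure described here.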
GROUP BY is a "one row per group" operation; DISTINCT is a "many rows" operation (it only removes duplicate rows, it does not aggregate).
When computing statistics, separate the category (grouping) columns from the metric (aggregated) columns.
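A minimal illustration of the distinction before the full query below (the table and column names here are made up):

```sql
-- DISTINCT only de-duplicates: one row per distinct category_id, no metrics.
select distinct category_id from orders;

-- GROUP BY folds each group into one row, which is what allows a metric
-- column (the aggregate) to be computed per category.
select category_id, sum(order_item_num) as total
from orders
group by category_id;
```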
select pc.category_id,
       sum(case when t.so_month between 3 and 5 then t.order_item_num else 0 end) as spring,
       sum(case when t.so_month between 6 and 8 then t.order_item_num else 0 end) as summer,
       sum(case when t.so_month between 9 and 11 then t.order_item_num else 0 end) as autumn,
       sum(case when t.so_month = 12 or t.so_month <= 2 then t.order_item_num else 0 end) as winter
from product_category pc
join (select si.product_id, si.order_item_num, month(si.order_create_time) as so_month
      from so_item si
      where si.ds between '2013-05-01' and '2014-04-30' and si.is_gift = 0) t
  on pc.product_id = t.product_id
group by pc.category_id;
Fix: change the 0 after each else to 0L, so both branches of the case expression have the same numeric type (presumably order_item_num is a BIGINT, while a bare 0 is an INT):
select pc.category_id,
       sum(case when t.so_month between 3 and 5 then t.order_item_num else 0L end) as spring,
       sum(case when t.so_month between
(2015-01-15 09:53)
[Exception] This job had been running for days with no problems; starting on Jan 4 it began failing in various ways. This is one of them:
2015-01-15 09:43:12,250 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2015-01-15 09:43:12,252 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2015-01-15 09:43:12,412 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-01-15 09:43:12,471 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 s
(2015-01-13 13:12)
[Problem] The following error is thrown when parsing bytes:
com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag.
    at com.google.protobuf.InvalidProtocolBufferException.invalidEndTag(InvalidProtocolBufferException.java:94)
    at com.google.protobuf.CodedInputStream.checkLastTagWas(CodedInputStream.java:124)
    at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:310)
    at com.chinaso.platform.transfer.proto.TVChannelProto$TVChannel.<init>(TVChannelProto.java:135)
    at com.chinaso.platform.transfer.proto.TVChannelProto$TVChannel.<init>(TVChannelProto.java:97)
    at com.chinaso.platform.transfer.proto.TVChannelProto$TVChannel$1.parsePartialFrom(TVChannelProto.java:171)
    at com.chinaso.platform.transfer.proto.TVChannelProto$TVChannel$1.parsePartialFrom(TVChannelProto.java:1)
Problem:
kafka list-topic fails:
[2014-08-21 10:08:22,728] WARN Session 0x0 for server AY1405161104217929f5Z/10.162.219.53:2181, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcher.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
    at sun.nio.ch.IOUtil.read(IOUtil.java:218)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
    at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:859)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnx
Problem:
2014-06-20 17:30:10,044 ERROR parse.SemanticAnalyzer (SemanticAnalyzer.java:getMetaData(1323)) -
org.apache.hadoop.hive.ql.parse.SemanticException: Dynamic partition strict mode requires at least one static partition column. To turn this off set hive.exec.dynamic.partition.mode=nonstrict
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer$tableSpec.<init>(BaseSemanticAnalyzer.java:773)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer$tableSpec.<init>(BaseSemanticAnalyzer.java:707)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1196)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1053)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8342)
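The error message itself names the fix. A minimal sketch of the session settings (the table and partition names in the comment are placeholders):

```sql
-- Allow fully dynamic partition inserts for this session, as the error suggests.
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;

-- Alternatively, stay in strict mode and make at least one leading
-- partition column static in the INSERT, e.g.:
--   insert overwrite table t partition (dt='2014-06-20', hr)
--   select ..., hr from src;
```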
Error:
[root@AY140516110421750f8eZ bin]# ./ldapadd -x -D 'cn=Manager,dc=example,dc=com' -w secret -f init.ldif
ldapadd: attributeDescription 'dn': (possible missing newline after line 7, entry 'dc=example,dc=com'?)
adding new entry 'dc=example,dc=com'
ldap_add: Undefined attribute type (17)
        additional info: dn: attribute type undefined
Solution:
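As the warning "possible missing newline after line 7" hints, this error typically means two entries in the LDIF are not separated by a blank line, so ldapadd reads the next `dn:` line as an attribute of the previous entry. A sketch of a well-formed init.ldif (the entry contents here are illustrative, not taken from the original file):

```ldif
# Each entry must be separated from the next "dn:" line by a blank line.
dn: dc=example,dc=com
objectClass: dcObject
objectClass: organization
o: Example Company
dc: example

dn: cn=Manager,dc=example,dc=com
objectClass: organizationalRole
cn: Manager
```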
Problem:
Deployed hive-0.12.0 with a MySQL metastore; it keeps failing, as follows:
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
    at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:325)
    at org.datanucleus.store.AbstractS
(2014-03-24 23:24)