1. When the HTTP sink's destination port is down or unreachable, the send fails and the custom sink reports transaction-handling errors like the following:
2019-02-27 16:13:24,313 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:95)] Component type: SOURCE, name: r3 started
2019-02-27 16:14:06,117 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - com.didichuxing.sts.HttpSinks.postJson(HttpSinks.java:116)] request was error!
2019-02-27 16:14:06,118 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:158)] Unable to deliver event. Exception follows.
java.lang.IllegalStateException: close() called when transaction is OPEN - you must either commit or rollback first
at com.google.common.base.Preconditions.checkState(Preconditions.java:172)
at org.apache.flume.channel.BasicTransactionSemantics.close(BasicTransactionSemantics.java:179)
at com.didichuxing.sts.HttpSinks.process(HttpSinks.java:92)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
at java.lang.Thread.run(Thread.java:748)
2019-02-27 16:14:11,124 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - com.didichuxing.sts.HttpSinks.process(HttpSinks.java:87)] Failed to commit transaction.java.lang.IllegalStateException: begin() called when transaction is OPEN!
2019-02-27 16:14:11,125 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:158)] Unable to deliver event. Exception follows.
java.lang.IllegalStateException: begin() called when transaction is OPEN!
at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
at org.apache.flume.channel.BasicTransactionSemantics.begin(BasicTransactionSemantics.java:131)
at com.didichuxing.sts.HttpSinks.process(HttpSinks.java:49)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
at java.lang.Thread.run(Thread.java:748)
2019-02-27 16:14:19,144 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - com.didichuxing.sts.HttpSinks.postJson(HttpSinks.java:116)] request was error!
2019-02-27 16:14:19,144 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:158)] Unable to deliver event. Exception follows.
java.lang.IllegalStateException: close() called when transaction is OPEN - you must either commit or rollback first
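Both exceptions point to the same bug: on the failure path, HttpSinks.process() reaches close() (and the next begin()) while the transaction is still OPEN. The fix is to scope the transaction so that every path either commits or rolls back before close() runs in a finally block. Below is a minimal, self-contained sketch of that discipline; the Transaction class is a hypothetical stand-in that mimics the state checks in Flume's BasicTransactionSemantics, not the real Flume API:

```java
// Illustrative stand-in for Flume's channel transaction state machine.
// It exists only to demonstrate the begin / commit-or-rollback / close
// pattern a Sink.process() implementation must follow.
public class TxDemo {
    enum State { NEW, OPEN, COMPLETED, CLOSED }

    static class Transaction {
        State state = State.NEW;

        void begin() {
            if (state != State.NEW)
                throw new IllegalStateException("begin() called when transaction is " + state);
            state = State.OPEN;
        }
        void commit() {
            if (state != State.OPEN)
                throw new IllegalStateException("commit() called when transaction is " + state);
            state = State.COMPLETED;
        }
        void rollback() {
            if (state != State.OPEN)
                throw new IllegalStateException("rollback() called when transaction is " + state);
            state = State.COMPLETED;
        }
        void close() {
            if (state == State.OPEN)
                throw new IllegalStateException(
                    "close() called when transaction is OPEN - you must either commit or rollback first");
            state = State.CLOSED;
        }
    }

    /** The pattern the failing sink should follow: commit on success,
     *  roll back on failure, close exactly once in finally. */
    static boolean process(Transaction tx, boolean postSucceeds) {
        tx.begin();
        try {
            if (!postSucceeds) throw new RuntimeException("request was error!");
            tx.commit();
            return true;
        } catch (Exception e) {
            tx.rollback();   // failed HTTP post: roll back so the events are retried
            return false;
        } finally {
            tx.close();      // safe: the transaction is never left OPEN here
        }
    }

    public static void main(String[] args) {
        System.out.println(process(new Transaction(), true));   // success path
        System.out.println(process(new Transaction(), false));  // failure path
    }
}
```

With this shape, a failed HTTP post rolls the events back into the channel instead of tripping the OPEN-state checks on close() and on the next begin().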
2. The c3p0 connection pool used by flume-ng-sql-source keeps idle connections open indefinitely, but MySQL and Oracle servers will drop a client connection after roughly 8 hours of inactivity (MySQL's wait_timeout defaults to 28800 seconds; Oracle's idle limit depends on the profile's IDLE_TIME). When the source next uses the stale connection, it reports the error below:
[c3p0] A PooledConnection that has already signalled a Connection error is still in use!
2019/07/30 10:28:57,096
[c3p0] Another error has occurred [ java.sql.SQLRecoverableException: Closed Connection ] which will not be reported to listeners!
java.sql.SQLRecoverableException: Closed Connection
at oracle.jdbc.driver.OracleStatement.ensureOpen(OracleStatement.java:4061)
at oracle.jdbc.driver.OracleStatement.getWarnings(OracleStatement.java:3097)
at oracle.jdbc.driver.OracleStatementWrapper.getWarnings(OracleStatementWrapper.java:348)
at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.getWarnings(NewProxyPreparedStatement.java:1031)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.handleAndClearWarnings(SqlExceptionHelper.java:320)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.logAndClearWarnings(SqlExceptionHelper.java:273)
at org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.close(JdbcCoordinatorImpl.java:529)
at org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.cleanup(JdbcCoordinatorImpl.java:509)
at org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.close(JdbcCoordinatorImpl.java:204)
at org.hibernate.engine.transaction.internal.TransactionCoordinatorImpl.close(TransactionCoordinatorImpl.java:297)
at org.hibernate.internal.SessionImpl.close(SessionImpl.java:369)
at org.keedio.flume.source.HibernateHelper.resetConnection(HibernateHelper.java:200)
at org.keedio.flume.source.HibernateHelper.executeQuery(HibernateHelper.java:125)
at org.keedio.flume.source.DBSQLSource.process(DBSQLSource.java:102)
at org.apache.flume.source.PollableSourceRunner$PollingRunner.run(PollableSourceRunner.java:129)
at java.lang.Thread.run(Thread.java:748)
2019/07/30 10:28:57,097
[c3p0] A PooledConnection that has already signalled a Connection error is still in use!
2019/07/30 10:28:57,097
[c3p0] Another error has occurred [ java.sql.SQLRecoverableException: Closed Connection ] which will not be reported to listeners!
java.sql.SQLRecoverableException: Closed Connection
at oracle.jdbc.driver.OracleStatement.ensureOpen(OracleStatement.java:4061)
at oracle.jdbc.driver.OracleStatement.clearWarnings(OracleStatement.java:3106)
at oracle.jdbc.driver.OracleStatementWrapper.clearWarnings(OracleStatementWrapper.java:222)
at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.clearWarnings(NewProxyPreparedStatement.java:1057)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.handleAndClearWarnings(SqlExceptionHelper.java:329)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.logAndClearWarnings(SqlExceptionHelper.java:273)
at org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.close(JdbcCoordinatorImpl.java:529)
at org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.cleanup(JdbcCoordinatorImpl.java:509)
at org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.close(JdbcCoordinatorImpl.java:204)
at org.hibernate.engine.transaction.internal.TransactionCoordinatorImpl.close(TransactionCoordinatorImpl.java:297)
at org.hibernate.internal.SessionImpl.close(SessionImpl.java:369)
at org.keedio.flume.source.HibernateHelper.resetConnection(HibernateHelper.java:200)
at org.keedio.flume.source.HibernateHelper.executeQuery(HibernateHelper.java:125)
at org.keedio.flume.source.DBSQLSource.process(DBSQLSource.java:102)
at org.apache.flume.source.PollableSourceRunner$PollingRunner.run(PollableSourceRunner.java:129)
at java.lang.Thread.run(Thread.java:748)
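The usual fix is to have c3p0 test and retire idle connections well before the server-side idle timeout. With flume-ng-sql-source, Hibernate's c3p0 pass-through properties can be set in the agent configuration; the agent/source names and the concrete values below are illustrative, not taken from the failing setup:

```properties
# Probe idle pooled connections every 5 minutes (seconds), so dead
# connections are detected and replaced before they are handed out.
agent.sources.sql1.hibernate.c3p0.idle_test_period = 300
# Retire connections idle for 30 minutes, well under the ~8 hour
# server-side disconnect window.
agent.sources.sql1.hibernate.c3p0.timeout = 1800
agent.sources.sql1.hibernate.c3p0.min_size = 1
agent.sources.sql1.hibernate.c3p0.max_size = 3
```

idle_test_period and timeout map to c3p0's idleConnectionTestPeriod and maxIdleTime; either keeping connections validated or retiring them early avoids the "Closed Connection" failures above.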
3. flume-kafka-sink: when a produced record exceeds the configured size limits, the producer fails with RecordTooLargeException. Two parameters govern the limit:
- message.max.bytes on the broker (in server.properties), default 1000012 bytes (~976.6 KB)
- max.request.size on the producer, default 1048576 bytes (1 MB)
Failed to publish events
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.RecordTooLargeException: The message is 31866588 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:449)
at com.jcloud.flume.sink.kafka.KafkaSink.process(KafkaSink.java:187)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:66)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:146)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 31866588 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
2019/07/29 17:53:44,300
Unable to deliver event. Exception follows.
org.apache.flume.EventDeliveryException: Failed to publish events
at com.jcloud.flume.sink.kafka.KafkaSink.process(KafkaSink.java:227)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:66)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:146)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.RecordTooLargeException: The message is 31866588 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:449)
at com.jcloud.flume.sink.kafka.KafkaSink.process(KafkaSink.java:187)
... 3 more
Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 31866588 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
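To accept larger records, both limits above must be raised consistently. The values below are illustrative, sized for the ~32 MB record in the log, and the sink name k1 is an assumption (how producer properties are passed through depends on the sink implementation; Flume's bundled Kafka sink forwards keys under the kafka.producer. prefix):

```properties
# Broker side (server.properties): largest record batch the broker accepts.
message.max.bytes=33554432

# Producer side, passed through the Flume Kafka sink's producer prefix.
agent.sinks.k1.kafka.producer.max.request.size=33554432
```

Note that the broker's replica.fetch.max.bytes must also be at least this large or replication of oversized records will fail, and consumers of the topic may need max.partition.fetch.bytes (and fetch.max.bytes) raised as well.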