
I have multiple integration tests using @EmbeddedKafka, and after moving to the newer Spring Boot version 2.1.8.RELEASE, the logs fill with these stack traces. Any idea what could cause that?

javax.management.InstanceAlreadyExistsException: kafka.server:type=app-info,id=0
    at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:62)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:321)
    at kafka.utils.TestUtils$.createServer(TestUtils.scala:132)
    at kafka.utils.TestUtils.createServer(TestUtils.scala)
    at org.springframework.kafka.test.EmbeddedKafkaBroker.afterPropertiesSet(EmbeddedKafkaBroker.java:223)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1837)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1774)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:405)
– Martin Mucha (edited by Nikolas Charalambidis)

1 Answer


If your tests are using a Spring test context (`@RunWith(SpringRunner.class)`, `@SpringJUnitConfig`, `@SpringBootTest`, etc.), then the embedded Kafka broker is stored in the application context.

Add `@DirtiesContext` to each test class so that the instance is disposed of properly.
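A minimal sketch of what this looks like (the class name and topic are hypothetical, and the exact annotation attributes depend on your setup):

```java
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.test.annotation.DirtiesContext;

// Hypothetical test class: @DirtiesContext closes the application context
// after this class's tests, which destroys the EmbeddedKafkaBroker bean and
// unregisters its JMX MBeans before the next test class starts a new broker.
@SpringBootTest
@EmbeddedKafka(partitions = 1, topics = "some.test.topic")
@DirtiesContext // default classMode is AFTER_CLASS
class SomeKafkaIntegrationTest {

    @Test
    void contextLoads() {
        // real assertions against the embedded broker would go here
    }
}
```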

– Gary Russell
  • I'm using `@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)`. And this relates to the question: https://stackoverflow.com/questions/58187190/what-is-the-proper-way-of-doing-dirtiesconfig-when-used-embeddedkafka/58187444#58187444 I cannot use `@DirtiesContext` at the class level, since that produces "unreachable node 0" issues and a thread/memory leak issue. It solves the duplicate MBean issue, but it cannot be used (7x running time, insane logs). – Martin Mucha Oct 02 '19 at 07:02
  • Using `EmbeddedKafkaRule` instead of `@DirtiesContext` as recommended does not produce the "unreachable node 0" issues or the thread/memory leak, but it has no effect on the duplicate MBean registration (`InstanceAlreadyExistsException`). – Martin Mucha Oct 02 '19 at 07:02
  • The `@DirtiesContext` should be `AFTER_CLASS`. The problem with `@ClassRule` makes no sense since JUnit shuts it down before the next test starts. It's not clear why you are seeing a thread leakage issue since you are running tests, not a web app. Please provide a small sample project that exhibits these behaviors so someone can take a look to see what's going on. We have many test cases in the framework and don't see these problems. – Gary Russell Oct 02 '19 at 12:42
  • I probably don't follow. `@DirtiesContext` is by default `AFTER_CLASS`, thus having `@DirtiesContext` is after-class. It makes no sense to me either, as the context is shut down after each test class, but it kills `@EmbeddedKafka`. When using `EmbeddedKafkaRule` we can use `@DirtiesContext`. I can try to create a trivial example, but trivial examples work and typically have very little in common with non-trivial ones. Is there any other way to debug it? – Martin Mucha Oct 02 '19 at 16:34
  • Debugger with breakpoints in `AppInfoParser.registerAppInfo()` and `AppInfoParser.unregisterAppInfo()`, as well as in `EmbeddedKafkaBroker.destroy()` ? – Gary Russell Oct 02 '19 at 16:43
  • Oh - does your app/test use an `AdminClient` or producers/consumers without closing them? (That's where the unregister is done). – Gary Russell Oct 02 '19 at 16:44
  • That can be the issue, we're not closing anything. However, I really do not see how I can do that. We're creating `ConcurrentMessageListenerContainer` and `ConsumerFactory` in `@Configuration` methods annotated with `@Bean`, without any destroyMethod parameter or `@PreDestroy`. And on those instances I cannot see any `close` or `destroy` method. Same for the message listener class. What I was able to find, on the other hand, is your answer in some Stack Overflow question saying that closing is not necessary: "You don't need to stop() the container from a destroy() method; the context will stop the consumer when it is closed" – Martin Mucha Oct 05 '19 at 09:02
  • Putting the destroy method as `@Bean(destroyMethod = "stop") public ConcurrentMessageListenerContainer...` did not help, and that is the sole place I was able to find as a candidate for "closing" the consumer. – Martin Mucha Oct 05 '19 at 09:27
  • And when debugging I can see a new registration for every `org.apache.kafka.clients.consumer.KafkaConsumer`, but deregistration only when closing `org.apache.kafka.clients.admin.KafkaAdminClient`. OK, so stepping a little bit further. I can see `org.springframework.kafka.listener.KafkaMessageListenerContainer.ListenerConsumer#run` being hit, reading data in a while loop: `while (isRunning()) {`. And `this.consumer.close()` is after this while. So it seems that my problem is that the whole "thing" somehow dies before `KafkaMessageListenerContainer` has a chance to step out of this while loop and call close. – Martin Mucha Oct 05 '19 at 09:57
  • Right; with "active" components like listener containers, we recommend using `@DirtiesContext` at the class level so that when the test ends, the context is closed and these active components are stopped. – Gary Russell Oct 05 '19 at 13:07
  • I'll try to strip our project down to a minimal failing demo so that I can show you. But so far I cannot understand what's wrong. Close on `KafkaConsumer` should be called (IIUC), but it's not. Why? `@DirtiesContext` produces insane problems (as said above) and `EmbeddedKafkaRule` does not avoid the multiple-registration problem. So far the only "solution" to this seems to be Logback filtering. – Martin Mucha Oct 05 '19 at 18:44
  • I am not at a computer so I can't paste a link, but the reference manual shows a technique for using one broker for the complete test suite. I don't know if that will help you. – Gary Russell Oct 05 '19 at 20:38
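The "one broker for the complete test suite" technique Gary mentions can be sketched roughly as follows. This is an assumption-laden illustration, not the reference manual's exact code: the holder class name is hypothetical, and it relies on `EmbeddedKafkaBroker` implementing `InitializingBean`/`DisposableBean` (so `afterPropertiesSet()` starts it and `destroy()` stops it) and exposing its address via the `spring.embedded.kafka.brokers` system property:

```java
import org.springframework.kafka.test.EmbeddedKafkaBroker;

// Hypothetical holder: start a single EmbeddedKafkaBroker once per JVM and
// share it across all test classes, instead of letting each Spring context
// start (and attempt to re-register MBeans for) its own broker.
public final class SuiteKafkaBroker {

    public static final EmbeddedKafkaBroker BROKER = new EmbeddedKafkaBroker(1);

    static {
        // starts the embedded Zookeeper and Kafka server
        BROKER.afterPropertiesSet();
        // stop the broker when the JVM running the test suite exits
        Runtime.getRuntime().addShutdownHook(new Thread(BROKER::destroy));
    }

    private SuiteKafkaBroker() {
    }
}
```

Tests would then point their client configuration at the shared broker, e.g. via `spring.kafka.bootstrap-servers=${spring.embedded.kafka.brokers}`, so each context reuses the same server and the `app-info` MBean is only registered once.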