Posted by 3242423 on 2016-11-09 08:55:58

heartbeat_check in oslo_messaging

I have recently been testing an OpenStack high-availability control plane (three controllers). When one of the controller nodes is shut down, nova service-list shows every nova service as down, and the nova-compute log fills up with errors like these:

2016-11-08 03:46:23.887 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: Broken pipe
2016-11-08 03:46:27.275 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: Broken pipe
2016-11-08 03:46:27.276 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: Broken pipe
2016-11-08 03:46:27.276 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: Broken pipe
2016-11-08 03:46:27.277 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: Broken pipe
2016-11-08 03:46:27.277 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: Broken pipe
2016-11-08 03:46:27.278 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: Broken pipe
2016-11-08 03:46:27.278 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: Broken pipe

The exception above is raised from oslo_messaging/_drivers/impl_rabbit.py:

def _heartbeat_thread_job(self):
    """Thread that maintains inactive connections
    """
    while not self._heartbeat_exit_event.is_set():
        with self._connection_lock.for_heartbeat():
            recoverable_errors = (
                self.connection.recoverable_channel_errors +
                self.connection.recoverable_connection_errors)
            try:
                try:
                    self._heartbeat_check()
                    # NOTE(sileht): We need to drain event to receive
                    # heartbeat from the broker but don't hold the
                    # connection too much times. In amqpdriver a connection
                    # is used exclusivly for read or for write, so we have
                    # to do this for connection used for write drain_events
                    # already do that for other connection
                    try:
                        self.connection.drain_events(timeout=0.001)
                    except socket.timeout:
                        pass
                except recoverable_errors as exc:
                    LOG.info(_LI("A recoverable connection/channel error "
                                 "occurred, trying to reconnect: %s"), exc)
                    self.ensure_connection()
            except Exception:
                LOG.warning(_LW("Unexpected error during heartbeart "
                                "thread processing, retrying..."))
                LOG.debug('Exception', exc_info=True)
        self._heartbeat_exit_event.wait(
            timeout=self._heartbeat_wait_timeout)
    self._heartbeat_exit_event.clear()

The heartbeat check exists to detect whether the connection between a component service and the RabbitMQ server is still alive, and the heartbeat_check task in oslo_messaging starts running in the background as soon as the service starts. Shutting down a controller node also shuts down one of the RabbitMQ server nodes. The thread then keeps spinning in the loop above, repeatedly raising the exceptions caught as recoverable_errors; the while loop only exits once self._heartbeat_exit_event.is_set() becomes true. Arguably there should be some kind of timeout here, so the thread does not stay stuck in that loop and take several minutes to recover.
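To make the timeout idea concrete, here is a minimal standalone sketch (not a patch to oslo.messaging; the class name, callables, and parameters are all made up for illustration) of a heartbeat loop that bounds its reconnect attempts with a deadline instead of retrying forever:

import logging
import threading
import time

LOG = logging.getLogger(__name__)


class HeartbeatLoop(object):
    """Runs a periodic broker heartbeat and gives up after a deadline."""

    def __init__(self, check, reconnect, wait_timeout=15.0, give_up_after=60.0):
        self._check = check            # callable that pings the broker
        self._reconnect = reconnect    # callable that re-establishes the connection
        self._wait_timeout = wait_timeout
        self._give_up_after = give_up_after
        self._exit_event = threading.Event()

    def run(self):
        first_failure = None
        while not self._exit_event.is_set():
            try:
                self._check()
                first_failure = None            # healthy again, reset the deadline
            except (IOError, OSError) as exc:   # stand-in for recoverable_errors
                now = time.monotonic()
                first_failure = first_failure or now
                if now - first_failure > self._give_up_after:
                    LOG.error("broker unreachable for %.0fs, giving up: %s",
                              now - first_failure, exc)
                    break                       # let the caller fail over or restart
                LOG.info("recoverable error, trying to reconnect: %s", exc)
                self._reconnect()
            self._exit_event.wait(timeout=self._wait_timeout)

    def stop(self):
        self._exit_event.set()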

Today I set up the three-controller HA environment in virtual machines and added the following options to nova.conf:

rabbit_max_retries = 2    # maximum number of reconnection attempts
heartbeat_timeout_threshold = 0    # disable the heartbeat check
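For what it's worth, both options are RabbitMQ driver options in the oslo.messaging releases I have looked at, so I would expect them to go under the [oslo_messaging_rabbit] section of nova.conf (double-check the section name against your release):

[oslo_messaging_rabbit]
# maximum number of RabbitMQ connection retries (0 means retry forever)
rabbit_max_retries = 2
# 0 disables the heartbeat keep-alive check entirely
heartbeat_timeout_threshold = 0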

With these settings, nova-compute no longer keeps raising the exceptions caught as recoverable_errors, and nova service-list no longer shows all services as down.
This still needs to be verified on physical machines.
