Switching Flink's Logging Framework to Logback

Client-side pom configuration

<dependencies>
    <!-- Add the two required logback dependencies -->
    <dependency>
        <groupId>ch.qos.logback</groupId>
        <artifactId>logback-core</artifactId>
        <version>1.2.3</version>
    </dependency>
    <dependency>
        <groupId>ch.qos.logback</groupId>
        <artifactId>logback-classic</artifactId>
        <version>1.2.3</version>
    </dependency>

    <!-- Add the log4j -> slf4j (-> logback) bridge to the classpath.
         Hadoop is logging to log4j! -->
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>log4j-over-slf4j</artifactId>
        <version>1.7.15</version>
    </dependency>

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-java</artifactId>
        <version>1.7.1</version>
        <exclusions>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>*</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_2.11</artifactId>
        <version>1.7.1</version>
        <exclusions>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>*</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients_2.11</artifactId>
        <version>1.7.1</version>
        <exclusions>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>*</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
</dependencies>
  • Add the logback-core, logback-classic, and log4j-over-slf4j dependencies;
  • then add exclusions for log4j and slf4j-log4j12 to flink-java, flink-streaming-java_2.11, flink-clients_2.11, and so on;
  • finally, run mvn dependency:tree and check whether slf4j-log4j12 still shows up, to confirm that everything has been excluded

Server-side configuration

  • Add logback-classic.jar, logback-core.jar, and log4j-over-slf4j.jar to Flink's lib directory (e.g. /opt/flink/lib)

    All the relevant jars are available on the logback website; if you want to save yourself the trouble, you can download them directly via this link!

  • Remove the log4j and slf4j-log4j12 jars from Flink's lib directory (e.g. /opt/flink/lib), such as log4j-1.2.17.jar and slf4j-log4j12-1.7.15.jar

  • To customize the logback configuration, override logback.xml, logback-console.xml, or logback-yarn.xml in Flink's conf directory
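
As a reference, here is a minimal sketch of what an overriding conf/logback.xml could look like. It relies on the log.file system property that the startup scripts pass via -Dlog.file (see flink-daemon.sh below); the level and pattern are illustrative, loosely modeled on the logback.xml that Flink ships, not an exact copy of it.

<configuration>
    <appender name="file" class="ch.qos.logback.core.FileAppender">
        <!-- write to the per-daemon log file chosen by the startup script -->
        <file>${log.file}</file>
        <append>false</append>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{60} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="file"/>
    </root>
</configuration>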

flink-release-1.7.1/flink-dist/src/main/flink-bin/bin/flink-daemon.sh

#!/usr/bin/env bash
################################################################################
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

# Start/stop a Flink daemon.
USAGE="Usage: flink-daemon.sh (start|stop|stop-all) (taskexecutor|zookeeper|historyserver|standalonesession|standalonejob) [args]"

STARTSTOP=$1
DAEMON=$2
ARGS=("${@:3}") # get remaining arguments as array

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

. "$bin"/config.sh

case $DAEMON in
    (taskexecutor)
        CLASS_TO_RUN=org.apache.flink.runtime.taskexecutor.TaskManagerRunner
    ;;

    (zookeeper)
        CLASS_TO_RUN=org.apache.flink.runtime.zookeeper.FlinkZooKeeperQuorumPeer
    ;;

    (historyserver)
        CLASS_TO_RUN=org.apache.flink.runtime.webmonitor.history.HistoryServer
    ;;

    (standalonesession)
        CLASS_TO_RUN=org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint
    ;;

    (standalonejob)
        CLASS_TO_RUN=org.apache.flink.container.entrypoint.StandaloneJobClusterEntryPoint
    ;;

    (*)
        echo "Unknown daemon '${DAEMON}'. $USAGE."
        exit 1
    ;;
esac

if [ "$FLINK_IDENT_STRING" = "" ]; then
FLINK_IDENT_STRING="$USER"
fi

FLINK_TM_CLASSPATH=`constructFlinkClassPath`

pid=$FLINK_PID_DIR/flink-$FLINK_IDENT_STRING-$DAEMON.pid

mkdir -p "$FLINK_PID_DIR"

# Log files for daemons are indexed from the process ID's position in the PID
# file. The following lock prevents a race condition during daemon startup
# when multiple daemons read, index, and write to the PID file concurrently.
# The lock is created on the PID directory since a lock file cannot be safely
# removed. The daemon is started with the lock closed and the lock remains
# active in this script until the script exits.
command -v flock >/dev/null 2>&1
if [[ $? -eq 0 ]]; then
    exec 200<"$FLINK_PID_DIR"
    flock 200
fi

# Ascending ID depending on number of lines in pid file.
# This allows us to start multiple daemon of each type.
id=$([ -f "$pid" ] && echo $(wc -l < "$pid") || echo "0")

FLINK_LOG_PREFIX="${FLINK_LOG_DIR}/flink-${FLINK_IDENT_STRING}-${DAEMON}-${id}-${HOSTNAME}"
log="${FLINK_LOG_PREFIX}.log"
out="${FLINK_LOG_PREFIX}.out"

log_setting=("-Dlog.file=${log}" "-Dlog4j.configuration=file:${FLINK_CONF_DIR}/log4j.properties" "-Dlogback.configurationFile=file:${FLINK_CONF_DIR}/logback.xml")

JAVA_VERSION=$(${JAVA_RUN} -version 2>&1 | sed 's/.*version "\(.*\)\.\(.*\)\..*"/\1\2/; 1q')

# Only set JVM 8 arguments if we have correctly extracted the version
if [[ ${JAVA_VERSION} =~ ${IS_NUMBER} ]]; then
    if [ "$JAVA_VERSION" -lt 18 ]; then
        JVM_ARGS="$JVM_ARGS -XX:MaxPermSize=256m"
    fi
fi

case $STARTSTOP in

    (start)
        # Rotate log files
        rotateLogFilesWithPrefix "$FLINK_LOG_DIR" "$FLINK_LOG_PREFIX"

        # Print a warning if daemons are already running on host
        if [ -f "$pid" ]; then
            active=()
            while IFS='' read -r p || [[ -n "$p" ]]; do
                kill -0 $p >/dev/null 2>&1
                if [ $? -eq 0 ]; then
                    active+=($p)
                fi
            done < "${pid}"

            count="${#active[@]}"

            if [ ${count} -gt 0 ]; then
                echo "[INFO] $count instance(s) of $DAEMON are already running on $HOSTNAME."
            fi
        fi

        # Evaluate user options for local variable expansion
        FLINK_ENV_JAVA_OPTS=$(eval echo ${FLINK_ENV_JAVA_OPTS})

        echo "Starting $DAEMON daemon on host $HOSTNAME."
        $JAVA_RUN $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" -classpath "`manglePathList "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN} "${ARGS[@]}" > "$out" 200<&- 2>&1 < /dev/null &

        mypid=$!

        # Add to pid file if successful start
        if [[ ${mypid} =~ ${IS_NUMBER} ]] && kill -0 $mypid > /dev/null 2>&1 ; then
            echo $mypid >> "$pid"
        else
            echo "Error starting $DAEMON daemon."
            exit 1
        fi
    ;;

    (stop)
        if [ -f "$pid" ]; then
            # Remove last in pid file
            to_stop=$(tail -n 1 "$pid")

            if [ -z $to_stop ]; then
                rm "$pid" # If all stopped, clean up pid file
                echo "No $DAEMON daemon to stop on host $HOSTNAME."
            else
                sed \$d "$pid" > "$pid.tmp" # all but last line

                # If all stopped, clean up pid file
                [ $(wc -l < "$pid.tmp") -eq 0 ] && rm "$pid" "$pid.tmp" || mv "$pid.tmp" "$pid"

                if kill -0 $to_stop > /dev/null 2>&1; then
                    echo "Stopping $DAEMON daemon (pid: $to_stop) on host $HOSTNAME."
                    kill $to_stop
                else
                    echo "No $DAEMON daemon (pid: $to_stop) is running anymore on $HOSTNAME."
                fi
            fi
        else
            echo "No $DAEMON daemon to stop on host $HOSTNAME."
        fi
    ;;

    (stop-all)
        if [ -f "$pid" ]; then
            mv "$pid" "${pid}.tmp"

            while read to_stop; do
                if kill -0 $to_stop > /dev/null 2>&1; then
                    echo "Stopping $DAEMON daemon (pid: $to_stop) on host $HOSTNAME."
                    kill $to_stop
                else
                    echo "Skipping $DAEMON daemon (pid: $to_stop), because it is not running anymore on $HOSTNAME."
                fi
            done < "${pid}.tmp"
            rm "${pid}.tmp"
        fi
    ;;

    (*)
        echo "Unexpected argument '$STARTSTOP'. $USAGE."
        exit 1
    ;;

esac
  • Flink processes started with flink-daemon.sh use logback.xml as their logback configuration file

flink-release-1.7.1/flink-dist/src/main/flink-bin/bin/flink-console.sh

#!/usr/bin/env bash
################################################################################
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

# Start a Flink service as a console application. Must be stopped with Ctrl-C
# or with SIGTERM by kill or the controlling process.
USAGE="Usage: flink-console.sh (taskexecutor|zookeeper|historyserver|standalonesession|standalonejob) [args]"

SERVICE=$1
ARGS=("${@:2}") # get remaining arguments as array

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

. "$bin"/config.sh

case $SERVICE in
    (taskexecutor)
        CLASS_TO_RUN=org.apache.flink.runtime.taskexecutor.TaskManagerRunner
    ;;

    (historyserver)
        CLASS_TO_RUN=org.apache.flink.runtime.webmonitor.history.HistoryServer
    ;;

    (zookeeper)
        CLASS_TO_RUN=org.apache.flink.runtime.zookeeper.FlinkZooKeeperQuorumPeer
    ;;

    (standalonesession)
        CLASS_TO_RUN=org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint
    ;;

    (standalonejob)
        CLASS_TO_RUN=org.apache.flink.container.entrypoint.StandaloneJobClusterEntryPoint
    ;;

    (*)
        echo "Unknown service '${SERVICE}'. $USAGE."
        exit 1
    ;;
esac

FLINK_TM_CLASSPATH=`constructFlinkClassPath`

log_setting=("-Dlog4j.configuration=file:${FLINK_CONF_DIR}/log4j-console.properties" "-Dlogback.configurationFile=file:${FLINK_CONF_DIR}/logback-console.xml")

JAVA_VERSION=$(${JAVA_RUN} -version 2>&1 | sed 's/.*version "\(.*\)\.\(.*\)\..*"/\1\2/; 1q')

# Only set JVM 8 arguments if we have correctly extracted the version
if [[ ${JAVA_VERSION} =~ ${IS_NUMBER} ]]; then
    if [ "$JAVA_VERSION" -lt 18 ]; then
        JVM_ARGS="$JVM_ARGS -XX:MaxPermSize=256m"
    fi
fi

echo "Starting $SERVICE as a console application on host $HOSTNAME."
exec $JAVA_RUN $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" -classpath "`manglePathList "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN} "${ARGS[@]}"
  • Flink processes started with flink-console.sh use logback-console.xml as their logback configuration file

yarn-session.sh

flink-release-1.7.1/flink-dist/src/main/flink-bin/yarn-bin/yarn-session.sh

#!/usr/bin/env bash
################################################################################
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

# get Flink config
. "$bin"/config.sh

if [ "$FLINK_IDENT_STRING" = "" ]; then
FLINK_IDENT_STRING="$USER"
fi

JVM_ARGS="$JVM_ARGS -Xmx512m"

CC_CLASSPATH=`manglePathList $(constructFlinkClassPath):$INTERNAL_HADOOP_CLASSPATHS`

log=$FLINK_LOG_DIR/flink-$FLINK_IDENT_STRING-yarn-session-$HOSTNAME.log
log_setting="-Dlog.file="$log" -Dlog4j.configuration=file:"$FLINK_CONF_DIR"/log4j-yarn-session.properties -Dlogback.configurationFile=file:"$FLINK_CONF_DIR"/logback-yarn.xml"

export FLINK_CONF_DIR

$JAVA_RUN $JVM_ARGS -classpath "$CC_CLASSPATH" $log_setting org.apache.flink.yarn.cli.FlinkYarnSessionCli -j "$FLINK_LIB_DIR"/flink-dist*.jar "$@"
  • Flink sessions started with yarn-session.sh use logback-yarn.xml as their logback configuration file

Summary

  • To use logback on the client side, add the logback-core, logback-classic, and log4j-over-slf4j dependencies to the pom, then add exclusions for log4j and slf4j-log4j12 to flink-java, flink-streaming-java_2.11, flink-clients_2.11, and so on; finally, run mvn dependency:tree to check whether log4j12 still shows up, confirming that everything has been excluded
  • To use logback on the server side, add logback-classic.jar, logback-core.jar, and log4j-over-slf4j.jar to Flink's lib directory (e.g. /opt/flink/lib); remove the log4j and slf4j-log4j12 jars from that directory (such as log4j-1.2.17.jar and slf4j-log4j12-1.7.15.jar); to customize the logback configuration, override logback.xml, logback-console.xml, or logback-yarn.xml in Flink's conf directory
  • Flink started with flink-daemon.sh uses the logback configuration file logback.xml; Flink started with flink-console.sh uses logback-console.xml; Flink started with yarn-session.sh uses logback-yarn.xml

Logback configuration files in detail

Logback is a logging framework for Java.

How Logback loads its configuration

  1. logback first looks for a logback.groovy file
  2. if none is found, it then looks for a logback-test.xml file
  3. if none is found, it then looks for a logback.xml file
  4. if still nothing is found, the default configuration is used (logging to the console)
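
When nothing is found, logback's fallback (BasicConfigurator) attaches a console appender to the root logger. A minimal logback.xml that is roughly equivalent to that default, using logback's documented fallback pattern, would be:

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="DEBUG">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>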

configuration

configuration is the root element of the configuration file. It has the following attributes:

  • scan
      when set to true, the configuration file is reloaded whenever it changes; defaults to true
  • scanPeriod
      the interval at which the configuration file is checked for modifications; if no time unit is given, the unit defaults to milliseconds. Only takes effect when scan is true; the default interval is 1 minute
  • debug
      when set to true, logback prints its internal status messages so you can watch logback's runtime state; defaults to false.
<configuration scan="true" scanPeriod="60 seconds" debug="false">
    <!-- other configuration omitted -->
</configuration>

Child elements of configuration

Setting the context name: contextName

Every logger is attached to a logger context, whose default name is "default". You can set contextName to change it, which is useful for telling apart the records of different applications

<configuration scan="true" scanPeriod="60 seconds" debug="false">
    <contextName>myAppName</contextName>
    <!-- other configuration omitted -->
</configuration>

Defining variables: property

property defines a key-value variable. It has two attributes, name and value: name is the key and value is the value. Key-value pairs defined via property are stored in the logger context's map. Once defined, a variable can be referenced with "${}"

<configuration scan="true" scanPeriod="60 seconds" debug="false">
    <property name="APP_Name" value="myAppName" />
    <contextName>${APP_Name}</contextName>
    <!-- other configuration omitted -->
</configuration>

Getting a timestamp string: timestamp

timestamp has two attributes. key identifies this timestamp by name; datePattern sets the output format, following the conventions of SimpleDateFormat

<configuration scan="true" scanPeriod="60 seconds" debug="false">
    <timestamp key="bySecond" datePattern="yyyyMMdd'T'HHmmss"/>
    <contextName>${bySecond}</contextName>
    <!-- other configuration omitted -->
</configuration>

logger

There are two kinds of logger: root and ordinary loggers. A logger sets the log level of a package or of a specific class, and assigns appenders to it.
A logger has three attributes:

  • name: the package or the specific class that this logger constrains
  • level: the log level to print at
  • additivity: whether to pass logging events up to the parent logger; defaults to true

Every logger has a parent, determined by the package-name hierarchy; root is the topmost ancestor.
The following defines four loggers; from lowest to highest, their parent chain is:
com.lwc.qg.test.logbackDemo → com.lwc.qg.test → com.lwc.qg → root

<!-- root logger -->
<root level="info">
    <appender-ref ref="STDOUT"/>
</root>

<!--
    An ordinary logger
    name: the class or package this logger is bound to
    level: this logger's log level
    additivity: whether to pass log events up to the parent logger
-->
<logger name="com.lwc.qg.test.logbackDemo" level="debug" additivity="true">
    <appender-ref ref="STDOUT"/>
</logger>

<logger name="com.lwc.qg.test" level="info" additivity="true">
    <appender-ref ref="STDOUT"/>
</logger>

<logger name="com.lwc.qg" level="info" additivity="true">
    <appender-ref ref="STDOUT"/>
</logger>

  Given these levels, if the lowest-level logger emits a log event under this configuration, the event is passed up through all of its ancestor loggers in turn, so a single log statement should in principle be printed four times.

  The console indeed shows each message four times. But look closely at the configuration: root is set to level info, yet debug-level messages are printed. The test result shows that the level of events passed upward is decided by the lowest-level child (the logger that originally emitted the event); because that child is set to debug, debug-level output appears as well.
  It follows that if the child's level is set higher, only higher-level messages get through. That is indeed the case: if we set the logger for com.lwc.qg.test.logbackDemo to warn, only warn and above will be output
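
  If the duplicated output is unwanted, setting additivity="false" on the child stops the upward propagation, so the event is handled only by this logger's own appenders. A small sketch based on the configuration above:

<logger name="com.lwc.qg.test.logbackDemo" level="debug" additivity="false">
    <!-- additivity="false": do not pass events up to com.lwc.qg.test / root -->
    <appender-ref ref="STDOUT"/>
</logger>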

root

root is also a logger element, but it is the root logger. It has only a level attribute

appender

An appender is the component responsible for writing log output. Commonly used appenders are:

  • ConsoleAppender
  • FileAppender
  • RollingFileAppender

ConsoleAppender

The console appender writes log output to the console. It has the following child nodes:

  • encoder: formats the log output
  • target: System.out or System.err; defaults to System.out
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">  
<encoder>
<pattern>%-4relative [%thread] %-5level %logger{35} - %msg %n</pattern>
</encoder>
<target>System.out</target>
</appender>

FileAppender

The file appender writes log output to a file. It has the following child nodes:

  • file: the name of the file to write to; may be a relative or an absolute path. Missing parent directories are created automatically; there is no default value
  • append: if true, log output is appended to the end of the file; if false, the existing file is truncated first. Defaults to true
  • encoder: formats the log output
  • prudent: if true, log output is written safely even if other FileAppenders are also writing to this file, at a cost in throughput. Defaults to false.
<appender name="FILE" class="ch.qos.logback.core.FileAppender">
<file>testFile.log</file>
<append>true</append>
<encoder>
<pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
</encoder>
<prudent>true</prudent>
</appender>

RollingFileAppender

The rolling file appender first logs to the specified file and, once a given condition is met, rolls the log over to another file. It has the following child nodes:

  • file: file name
  • encoder: formats the log output
  • rollingPolicy: determines the RollingFileAppender's behavior when a rollover occurs, typically moving and renaming files
  • triggeringPolicy: tells the RollingFileAppender when to activate the rollover
  • prudent: when true, FixedWindowRollingPolicy is not supported. TimeBasedRollingPolicy is supported, but with two restrictions: (1) file compression is neither supported nor allowed, and (2) the file property cannot be set and must be left empty.

rollingPolicy

Rolling policies

  1. TimeBasedRollingPolicy: the most commonly used rolling policy; it defines the rollover in terms of time and acts as both the rolling policy and the triggering policy. Nodes:
    • fileNamePattern: the file name pattern
    • maxHistory: the maximum number of archive files to keep; older files beyond this count are deleted
  2. FixedWindowRollingPolicy: a rolling policy that renames files according to a fixed-window algorithm (a sketch follows the TimeBasedRollingPolicy example below). Nodes:
    • minIndex: the lowest index of the window
    • maxIndex: the highest index of the window; if the window specified is too large, it is automatically reduced to 12
    • fileNamePattern: the file name pattern, which must contain %i; a pattern such as log%i.log produces files like log1.log, log2.log
  3. SizeBasedTriggeringPolicy (a triggeringPolicy): triggers the rollover based on file size. Node:
    • maxFileSize: the maximum size of the log file
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">

<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>logFile.%d{yyyy-MM-dd}.log</fileNamePattern>
<maxHistory>30</maxHistory>
</rollingPolicy>

<encoder>
<pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
</encoder>
</appender>
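
The example above uses TimeBasedRollingPolicy. For comparison, here is a minimal sketch combining FixedWindowRollingPolicy with SizeBasedTriggeringPolicy; the file names, the window size, and the 5MB threshold are illustrative choices, not from the original:

<appender name="FIXED" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>test.log</file>

    <!-- keep test.1.log.zip .. test.3.log.zip, renaming them on each rollover -->
    <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
        <fileNamePattern>test.%i.log.zip</fileNamePattern>
        <minIndex>1</minIndex>
        <maxIndex>3</maxIndex>
    </rollingPolicy>

    <!-- roll over once the active file exceeds 5MB -->
    <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
        <maxFileSize>5MB</maxFileSize>
    </triggeringPolicy>

    <encoder>
        <pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
    </encoder>
</appender>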

filter

Filters are used inside appenders. Each filter an event passes through returns one of three definite enum values:

  • DENY: the log event is dropped immediately and does not pass through any further filters
  • NEUTRAL: the next filter in the ordered list goes on to process the event
  • ACCEPT: the log event is processed immediately, skipping the remaining filters

Common filters

The commonly used filters are the following:

  • LevelFilter
    A level filter, which filters events by exact log level. If an event's level equals the configured level, the filter accepts or rejects the event according to onMatch and onMismatch. It has the following nodes:
      level: the level to filter on
      onMatch: the action to take when the event matches the filter condition
      onMismatch: the action to take when the event does not match
    Example: this appender is given an INFO-level filter, so all non-INFO logs are filtered out
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">  
<filter class="ch.qos.logback.classic.filter.LevelFilter">
<level>INFO</level>
<onMatch>ACCEPT</onMatch>
<onMismatch>DENY</onMismatch>
</filter>
<encoder>
<pattern>%-4relative [%thread] %-5level %logger{35} - %msg %n</pattern>
</encoder>
<target>System.out</target>
</appender>
  • ThresholdFilter
    A threshold filter, which filters out logs below the given threshold. For events at or above the threshold the filter returns NEUTRAL; events below the threshold are rejected
    Example: filter out all logs below INFO level
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">  
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>INFO</level>
</filter>
<encoder>
<pattern>%-4relative [%thread] %-5level %logger{35} - %msg %n</pattern>
</encoder>
<target>System.out</target>
</appender>
  • EvaluatorFilter
    An evaluator filter, which evaluates whether a log event meets a given condition (see the sketch below). It contains the nodes:
      evaluator: the evaluator, whose condition is configured via the child expression element
      onMatch: the action to take when the event matches the condition
      onMismatch: the action to take when the event does not match
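
    Example: a minimal sketch, assuming the default evaluator ch.qos.logback.classic.boolex.JaninoEventEvaluator, which requires the Janino library on the classpath; the "billing" expression is illustrative. Events whose message contains "billing" are dropped, while all other events continue down the filter chain:

<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <filter class="ch.qos.logback.core.filter.EvaluatorFilter">
        <evaluator> <!-- defaults to JaninoEventEvaluator -->
            <expression>return message.contains("billing");</expression>
        </evaluator>
        <onMatch>DENY</onMatch>
        <onMismatch>NEUTRAL</onMismatch>
    </filter>
    <encoder>
        <pattern>%-4relative [%thread] %-5level %logger{35} - %msg %n</pattern>
    </encoder>
</appender>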