Transform the client log file generation system of xxl-job


Why revamp the original log file generation system of XXL-JOB

xxl-job's original client-side log strategy is one file per log record: for every log entry (logId) in the database, the corresponding client generates a separate file. Because scheduled tasks run in many batches, and some tasks fire at very short intervals (every few seconds, say), the client ends up producing a huge number of files, each holding very little content. That mass of small files occupies a disproportionate amount of disk, strains disk resources, degrades file system performance, and over time triggers resource alarms. So unless you want to clean up log files constantly, these fragmented files urgently need to be consolidated somehow.

This article is fairly long and contains a lot of code.

The log file generation strategy after transformation

Basic description

Reducing the number of log files really means merging the scattered ones, while maintaining an external index file that records where the content for each logId begins. When reading, the index file is consulted first and then used to locate the actual log content.

The following diagram illustrates the log file layout:

  • For an executor, all scheduled tasks under it share a single logId_jobId_index.log index file per day, which maintains the mapping between logIds and jobIds;
  • For a scheduled task, a single jobId.log content file is generated per day, holding all of that day's log output;
  • The per-job index file jobId_index.log maintains indexes into the log content, making it easy to tell which logId a given range of lines in jobId.log belongs to, and to locate those lines quickly.
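To make the two index formats concrete, here is a small illustrative sketch (not part of the patch; the sample values are made up) that parses one line of each file, the `logId->jobId` mapping and the `logId->(startLine,lineCount)` entry:

```java
// Illustrative only: parsing the two index line formats described above.
public class IndexLineDemo {
    public static void main(String[] args) {
        // logId_jobId_index.log line: "345->12"  (logId -> jobId)
        String mapping = "345->12";
        int jobId = Integer.parseInt(mapping.split("->")[1]);

        // jobId_index.log line: "345->(577,010)"  (logId -> start line, zero-padded line count)
        String index = "345->(577,010)";
        int open = index.indexOf('(');
        int close = index.indexOf(')');
        String[] parts = index.substring(open + 1, close).split(",");
        long from = Long.parseLong(parts[0]);
        int length = Integer.parseInt(parts[1]);

        System.out.println(jobId + " " + from + " " + length); // 12 577 10
    }
}
```

The zero-padded length ("010" rather than "10") keeps each index entry a fixed width, so an entry can later be overwritten in place without shifting the rest of the file.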

That is the general idea. The number of log files should drop dramatically, and disk usage should improve accordingly.

This modification has been running in production for almost a year with no problems so far. If you need it, adapt it to your own business and test it carefully to guard against unknown errors.

Code practice

The changes in this article are based on XXL-JOB version 1.8.2; other versions have not been tested.

Open the xxl-job-core module in the code directory; the changes mainly involve the following files:

  • XxlJobFileAppender.java
  • XxlJobLogger.java
  • JobThread.java
  • ExecutorBizImpl.java
  • LRUCacheUtil.java

XxlJobFileAppender.java

Original methods that are not involved in the changes are omitted below.

```java
package com.xxl.job.core.log;

import com.xxl.job.core.biz.model.LogResult;
import com.xxl.job.core.util.LRUCacheUtil;
import org.apache.commons.io.FilenameUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.util.StringUtils;

import java.io.*;
import java.text.DecimalFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;

/**
 * store trigger log in each log-file
 * @author xuxueli 2016-3-12 19:25:12
 */
public class XxlJobFileAppender {
    private static Logger logger = LoggerFactory.getLogger(XxlJobFileAppender.class);

    // for JobThread (support log for child thread of job handler)
    // public static ThreadLocal<String> contextHolder = new ThreadLocal<String>();
    public static final InheritableThreadLocal<String> contextHolder = new InheritableThreadLocal<String>();
    // for logId, records the log Id
    public static final InheritableThreadLocal<Integer> contextHolderLogId = new InheritableThreadLocal<>();
    // for jobId, the Id of the scheduled task
    public static final InheritableThreadLocal<Integer> contextHolderJobId = new InheritableThreadLocal<>();
    // cache map holding index offset information
    public static final LRUCacheUtil<Integer, Map<String, Long>> indexOffsetCacheMap = new LRUCacheUtil<>(80);

    private static final String DATE_FOMATE = "yyyy-MM-dd";
    private static final String UTF_8 = "utf-8";
    // file name suffixes
    private static final String FILE_SUFFIX = ".log";
    private static final String INDEX_SUFFIX = "_index";
    private static final String LOGID_JOBID_INDEX_SUFFIX = "logId_jobId_index";
    private static final String jobLogIndexKey = "jobLogIndexOffset";
    private static final String indexOffsetKey = "indexOffset";

    /**
     * log base path
     *
     * strut like:
     * ---/
     * ---/gluesource/
     * ---/gluesource/10_1514171108000.js
     * ---/gluesource/10_1514171108000.js
     * ---/2017-12-25/
     * ---/2017-12-25/639.log
     * ---/2017-12-25/821.log
     */
    private static String logBasePath = "/data/applogs/xxl-job/jobhandler";
    private static String glueSrcPath = logBasePath.concat("/gluesource");

    public static void initLogPath(String logPath) {
        // init
        if (logPath != null && logPath.trim().length() > 0) {
            logBasePath = logPath;
        }
        // mk base dir
        File logPathDir = new File(logBasePath);
        if (!logPathDir.exists()) {
            logPathDir.mkdirs();
        }
        logBasePath = logPathDir.getPath();
        // mk glue dir
        File glueBaseDir = new File(logPathDir, "gluesource");
        if (!glueBaseDir.exists()) {
            glueBaseDir.mkdirs();
        }
        glueSrcPath = glueBaseDir.getPath();
    }

    public static String getLogPath() {
        return logBasePath;
    }

    public static String getGlueSrcPath() {
        return glueSrcPath;
    }

    /**
     * Rewritten method for generating the log directory and log file name:
     * log filename, like "logPath/yyyy-MM-dd/jobId.log"
     * @param triggerDate
     * @param jobId
     * @return
     */
    public static String makeLogFileNameByJobId(Date triggerDate, int jobId) {
        // filePath/yyyy-MM-dd
        // avoid concurrent problem, can not be static
        SimpleDateFormat sdf = new SimpleDateFormat(DATE_FOMATE);
        File logFilePath = new File(getLogPath(), sdf.format(triggerDate));
        if (!logFilePath.exists()) {
            logFilePath.mkdir();
        }
        // generate the log index file
        String logIndexFileName = logFilePath.getPath()
                .concat("/")
                .concat(String.valueOf(jobId))
                .concat(INDEX_SUFFIX)
                .concat(FILE_SUFFIX);
        File logIndexFilePath = new File(logIndexFileName);
        if (!logIndexFilePath.exists()) {
            try {
                logIndexFilePath.createNewFile();
                logger.debug("Generate log index file, file path: {}", logIndexFilePath);
            } catch (IOException e) {
                logger.error(e.getMessage(), e);
            }
        }
        // generate a per-day global index in the yyyy-MM-dd folder mapping each logId to its jobId,
        // to minimize changes needed on the admin backend
        String logIdJobIdIndexFileName = logFilePath.getPath()
                .concat("/")
                .concat(LOGID_JOBID_INDEX_SUFFIX)
                .concat(FILE_SUFFIX);
        File logIdJobIdIndexFileNamePath = new File(logIdJobIdIndexFileName);
        if (!logIdJobIdIndexFileNamePath.exists()) {
            try {
                logIdJobIdIndexFileNamePath.createNewFile();
                logger.debug("Generate logId and jobId index files, file path: {}", logIdJobIdIndexFileNamePath);
            } catch (IOException e) {
                logger.error(e.getMessage(), e);
            }
        }
        // filePath/yyyy-MM-dd/jobId.log log content file
        String logFileName = logFilePath.getPath()
                .concat("/")
                .concat(String.valueOf(jobId))
                .concat(FILE_SUFFIX);
        return logFileName;
    }

    /**
     * Generate the file name used when the admin backend reads the detailed log:
     * admin read log, generate logFileName by logId
     * @param triggerDate
     * @param logId
     * @return
     */
    public static String makeFileNameForReadLog(Date triggerDate, int logId) {
        // filePath/yyyy-MM-dd
        SimpleDateFormat sdf = new SimpleDateFormat(DATE_FOMATE);
        File logFilePath = new File(getLogPath(), sdf.format(triggerDate));
        if (!logFilePath.exists()) {
            logFilePath.mkdir();
        }
        String logIdJobIdFileName = logFilePath.getPath().concat("/")
                .concat(LOGID_JOBID_INDEX_SUFFIX)
                .concat(FILE_SUFFIX);
        // find logId->jobId mapping
        String infoLine = readIndex(logIdJobIdFileName, logId);
        String[] arr = infoLine.split("->");
        int jobId = 0;
        try {
            jobId = Integer.parseInt(arr[1]);
        } catch (Exception e) {
            logger.error("makeFileNameForReadLog StringArrayException,{},{}", e.getMessage(), e);
            throw new RuntimeException("StringArrayException");
        }
        String logFileName = logFilePath.getPath().concat("/")
                .concat(String.valueOf(jobId)).concat(FILE_SUFFIX);
        return logFileName;
    }

    /**
     * Append content to the log file and add an entry to the index file
     * @param logFileName
     * @param appendLog
     */
    public static void appendLogAndIndex(String logFileName, String appendLog) {
        // log file
        if (logFileName == null || logFileName.trim().length() == 0) {
            return;
        }
        File logFile = new File(logFileName);
        if (!logFile.exists()) {
            try {
                logFile.createNewFile();
            } catch (Exception e) {
                logger.error(e.getMessage(), e);
                return;
            }
        }
        // start append, count line num
        long startLineNum = countFileLineNum(logFileName);
        logger.debug("Start appending log file, start line number: {}", startLineNum);
        // log
        if (appendLog == null) {
            appendLog = "";
        }
        appendLog += "\r\n";
        // append file content
        try {
            FileOutputStream fos = null;
            try {
                fos = new FileOutputStream(logFile, true);
                fos.write(appendLog.getBytes("utf-8"));
                fos.flush();
            } finally {
                if (fos != null) {
                    try {
                        fos.close();
                    } catch (IOException e) {
                        logger.error(e.getMessage(), e);
                    }
                }
            }
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
        }
        // end append, count line num again
        long endLineNum = countFileLineNum(logFileName);
        Long lengthTmp = endLineNum - startLineNum;
        int length = 0;
        try {
            length = lengthTmp.intValue();
        } catch (Exception e) {
            logger.error("Long to int Exception", e);
        }
        logger.debug("End append log file, end line number: {}, length: {}", endLineNum, length);
        Map<String, Long> indexOffsetMap = new HashMap<>();
        appendIndexLog(logFileName, startLineNum, length, indexOffsetMap);
        appendLogIdJobIdFile(logFileName, indexOffsetMap);
    }

    /**
     * Create the mapping between logId and jobId
     * @param logFileName
     * @param indexOffsetMap
     */
    public static void appendLogIdJobIdFile(String logFileName, Map indexOffsetMap) {
        // values stored in the ThreadLocal variables
        int logId = XxlJobFileAppender.contextHolderLogId.get();
        int jobId = XxlJobFileAppender.contextHolderJobId.get();
        File file = new File(logFileName);
        // look up the index file in the same (parent) folder
        String parentDirName = file.getParent();
        // logId_jobId_index fileName
        String logIdJobIdIndexFileName = parentDirName.concat("/")
                .concat(LOGID_JOBID_INDEX_SUFFIX)
                .concat(FILE_SUFFIX);
        // get the offset for this logId from the cache
        boolean jobLogIndexOffsetExist = indexOffsetCacheMap.exists(logId);
        Long jobLogIndexOffset = null;
        if (jobLogIndexOffsetExist) {
            jobLogIndexOffset = indexOffsetCacheMap.get(logId).get(jobLogIndexKey);
        }
        if (jobLogIndexOffset == null) {
            // not cached yet: append the mapping
            StringBuffer stringBuffer = new StringBuffer();
            stringBuffer.append(logId).append("->").append(jobId).append("\r\n");
            Long currentPoint = getAfterAppendIndexLog(logIdJobIdIndexFileName, stringBuffer.toString());
            indexOffsetMap.put(jobLogIndexKey, currentPoint);
            indexOffsetCacheMap.save(logId, indexOffsetMap);
        }
        // if it is not null, the mapping is already cached and nothing else needs to be done
    }

    /**
     * Append content to an index file and return the offset at which it was written
     * @param fileName
     * @param content
     * @return
     */
    private static Long getAfterAppendIndexLog(String fileName, String content) {
        RandomAccessFile raf = null;
        Long point = null;
        try {
            raf = new RandomAccessFile(fileName, "rw");
            long end = raf.length();
            // appending, so move the pointer to the end of the file
            raf.seek(end);
            raf.writeBytes(content);
            // the offset cached is the position where the write started;
            // taking it after the write would point at the end of the file instead
            point = end;
        } catch (IOException e) {
            logger.error(e.getMessage(), e);
        } finally {
            try {
                raf.close();
            } catch (IOException e) {
                logger.error(e.getMessage(), e);
            }
        }
        return point;
    }

    /**
     * Append an index entry, like "345->(577,010)"
     * @param logFileName
     * @param from
     * @param length
     * @param indexOffsetMap
     */
    public static void appendIndexLog(String logFileName, Long from, int length, Map indexOffsetMap) {
        int strLength = logFileName.length();
        // derive the index file name by cutting off the ".log" suffix
        String prefixFilePath = logFileName.substring(0, strLength - 4);
        String logIndexFilePath = prefixFilePath.concat(INDEX_SUFFIX).concat(FILE_SUFFIX);
        File logIndexFile = new File(logIndexFilePath);
        if (!logIndexFile.exists()) {
            try {
                logIndexFile.createNewFile();
            } catch (IOException e) {
                logger.error(e.getMessage(), e);
                return;
            }
        }
        int logId = XxlJobFileAppender.contextHolderLogId.get();
        StringBuffer stringBuffer = new StringBuffer();
        // decide between appending a new entry and modifying an existing one
        boolean indexOffsetExist = indexOffsetCacheMap.exists(logId);
        Long indexOffset = null;
        if (indexOffsetExist) {
            indexOffset = indexOffsetCacheMap.get(logId).get(indexOffsetKey);
        }
        if (indexOffset == null) {
            // append
            String lengthStr = getFormatNum(length);
            stringBuffer.append(logId).append("->(")
                    .append(from).append(",").append(lengthStr).append(")\r\n");
            // append a new index entry and record its offset
            Long currentIndexPoint = getAfterAppendIndexLog(logIndexFilePath, stringBuffer.toString());
            indexOffsetMap.put(indexOffsetKey, currentIndexPoint);
        } else {
            String infoLine = getIndexLineIsExist(logIndexFilePath, logId);
            // modify the existing index entry
            int startTmp = infoLine.indexOf("(");
            int endTmp = infoLine.indexOf(")");
            String[] lengthTmp = infoLine.substring(startTmp + 1, endTmp).split(",");
            int lengthTmpInt = 0;
            try {
                lengthTmpInt = Integer.parseInt(lengthTmp[1]);
                from = Long.valueOf(lengthTmp[0]);
            } catch (Exception e) {
                logger.error("appendIndexLog StringArrayException,{},{}", e.getMessage(), e);
                throw new RuntimeException("StringArrayException");
            }
            int modifyLength = length + lengthTmpInt;
            String lengthStr2 = getFormatNum(modifyLength);
            stringBuffer.append(logId).append("->(")
                    .append(from).append(",").append(lengthStr2).append(")\r\n");
            modifyIndexFileContent(logIndexFilePath, infoLine, stringBuffer.toString());
        }
    }

    /**
     * Zero-pad a number, e.g. 5 becomes 005
     * @return
     */
    private static String getFormatNum(int num) {
        DecimalFormat df = new DecimalFormat("000");
        return df.format(num);
    }

    /**
     * Check whether an index entry already exists. This is called every time a log line
     * is written, so the index file keeps one merged entry per logId.
     * @param filePath
     * @param logId
     * @return
     */
    private static String getIndexLineIsExist(String filePath, int logId) {
        String prefix = logId + "->";
        Pattern pattern = Pattern.compile(prefix + ".*?");
        String indexInfoLine = "";
        RandomAccessFile raf = null;
        try {
            raf = new RandomAccessFile(filePath, "rw");
            String tmpLine = null;
            // start from the cached offset if available
            boolean indexOffsetExist = indexOffsetCacheMap.exists(logId);
            Long cachePoint = null;
            if (indexOffsetExist) {
                cachePoint = indexOffsetCacheMap.get(logId).get(indexOffsetKey);
            }
            if (null == cachePoint) {
                cachePoint = Long.valueOf(0);
            }
            raf.seek(cachePoint);
            while ((tmpLine = raf.readLine()) != null) {
                final long point = raf.getFilePointer();
                boolean matchFlag = pattern.matcher(tmpLine).find();
                if (matchFlag) {
                    indexInfoLine = tmpLine;
                    break;
                }
                cachePoint = point;
            }
        } catch (IOException e) {
            logger.error(e.getMessage(), e);
        } finally {
            try {
                raf.close();
            } catch (IOException e) {
                logger.error(e.getMessage(), e);
            }
        }
        return indexInfoLine;
    }

    /**
     * Read index information when the admin page queries the execution log.
     * Unlike the method above, there is no cached offset map available when reading,
     * so this method stands alone.
     * @param filePath
     * @param logId
     * @return
     */
    private static String readIndex(String filePath, int logId) {
        filePath = FilenameUtils.normalize(filePath);
        String prefix = logId + "->";
        Pattern pattern = Pattern.compile(prefix + ".*?");
        String indexInfoLine = "";
        BufferedReader bufferedReader = null;
        try {
            bufferedReader = new BufferedReader(new FileReader(filePath));
            String tmpLine = null;
            while ((tmpLine = bufferedReader.readLine()) != null) {
                boolean matchFlag = pattern.matcher(tmpLine).find();
                if (matchFlag) {
                    indexInfoLine = tmpLine;
                    break;
                }
            }
            bufferedReader.close();
        } catch (IOException e) {
            logger.error(e.getMessage(), e);
        } finally {
            if (bufferedReader != null) {
                try {
                    bufferedReader.close();
                } catch (IOException e) {
                    logger.error(e.getMessage(), e);
                }
            }
        }
        return indexInfoLine;
    }

    /**
     * Modify logIndexFile content
     * @param indexFileName
     * @param oldContent
     * @param newContent
     * @return
     */
    private static boolean modifyIndexFileContent(String indexFileName, String oldContent, String newContent) {
        RandomAccessFile raf = null;
        int logId = contextHolderLogId.get();
        try {
            raf = new RandomAccessFile(indexFileName, "rw");
            String tmpLine = null;
            // start from the cached offset if available
            boolean indexOffsetExist = indexOffsetCacheMap.exists(logId);
            Long cachePoint = null;
            if (indexOffsetExist) {
                cachePoint = indexOffsetCacheMap.get(logId).get(indexOffsetKey);
            }
            if (null == cachePoint) {
                cachePoint = Long.valueOf(0);
            }
            raf.seek(cachePoint);
            while ((tmpLine = raf.readLine()) != null) {
                final long point = raf.getFilePointer();
                if (tmpLine.contains(oldContent)) {
                    String str = tmpLine.replace(oldContent, newContent);
                    raf.seek(cachePoint);
                    raf.writeBytes(str);
                }
                cachePoint = point;
            }
        } catch (IOException e) {
            logger.error(e.getMessage(), e);
        } finally {
            try {
                raf.close();
            } catch (IOException e) {
                logger.error(e.getMessage(), e);
            }
        }
        return true;
    }

    /**
     * Count the number of lines in a file
     * @param logFileName
     * @return
     */
    private static long countFileLineNum(String logFileName) {
        File file = new File(logFileName);
        if (file.exists()) {
            try {
                FileReader fileReader = new FileReader(file);
                LineNumberReader lineNumberReader = new LineNumberReader(fileReader);
                lineNumberReader.skip(Long.MAX_VALUE);
                // getLineNumber() starts counting from 0, so add 1
                long totalLines = lineNumberReader.getLineNumber() + 1;
                fileReader.close();
                lineNumberReader.close();
                return totalLines;
            } catch (IOException e) {
                logger.error(e.getMessage(), e);
            }
        }
        return 0;
    }

    /**
     * Rewritten log reading: 1. read logIndexFile; 2. read logFile
     * @param logFileName
     * @param logId
     * @param fromLineNum
     * @return
     */
    public static LogResult readLogByIndex(String logFileName, int logId, int fromLineNum) {
        int strLength = logFileName.length();
        // cut off the trailing ".log" to get the file name prefix
        String prefixFilePath = logFileName.substring(0, strLength - 4);
        String logIndexFilePath = prefixFilePath.concat(INDEX_SUFFIX).concat(FILE_SUFFIX);
        // valid logIndex file
        if (StringUtils.isEmpty(logIndexFilePath)) {
            return new LogResult(fromLineNum, 0, "readLogByIndex fail, logIndexFile not found", true);
        }
        logIndexFilePath = FilenameUtils.normalize(logIndexFilePath);
        File logIndexFile = new File(logIndexFilePath);
        if (!logIndexFile.exists()) {
            return new LogResult(fromLineNum, 0, "readLogByIndex fail, logIndexFile not exists", true);
        }
        // valid log file
        if (StringUtils.isEmpty(logFileName)) {
            return new LogResult(fromLineNum, 0, "readLogByIndex fail, logFile not found", true);
        }
        logFileName = FilenameUtils.normalize(logFileName);
        File logFile = new File(logFileName);
        if (!logFile.exists()) {
            return new LogResult(fromLineNum, 0, "readLogByIndex fail, logFile not exists", true);
        }
        // read logIndexFile
        String indexInfo = readIndex(logIndexFilePath, logId);
        int startNum = 0;
        int endNum = 0;
        if (!StringUtils.isEmpty(indexInfo)) {
            int startTmp = indexInfo.indexOf("(");
            int endTmp = indexInfo.indexOf(")");
            String[] fromAndTo = indexInfo.substring(startTmp + 1, endTmp).split(",");
            try {
                startNum = Integer.parseInt(fromAndTo[0]);
                endNum = Integer.parseInt(fromAndTo[1]) + startNum;
            } catch (Exception e) {
                logger.error("readLogByIndex StringArrayException,{},{}", e.getMessage(), e);
                throw new RuntimeException("StringArrayException");
            }
        }
        // read file
        StringBuffer logContentBuffer = new StringBuffer();
        int toLineNum = 0;
        LineNumberReader reader = null;
        try {
            reader = new LineNumberReader(new InputStreamReader(new FileInputStream(logFile), UTF_8));
            String line = null;
            while ((line = reader.readLine()) != null) {
                // [from, to], start as fromNum(logIndexFile)
                toLineNum = reader.getLineNumber();
                if (toLineNum >= startNum && toLineNum < endNum) {
                    logContentBuffer.append(line).append("\n");
                }
                // break when read over
                if (toLineNum >= endNum) {
                    break;
                }
            }
        } catch (IOException e) {
            logger.error(e.getMessage(), e);
        } finally {
            if (reader != null) {
                try {
                    reader.close();
                } catch (IOException e) {
                    logger.error(e.getMessage(), e);
                }
            }
        }
        return new LogResult(fromLineNum, toLineNum, logContentBuffer.toString(), false);
    }
}
```
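The key trick in getAfterAppendIndexLog is recording the file length *before* writing, so the cached offset points at the start of the appended record rather than the end of the file. That behavior can be seen in isolation in this small standalone sketch (not part of the patch; it uses a temp file):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

// Standalone demo: append a record, remember the pre-write offset,
// then seek back to that offset to read the record directly.
public class OffsetDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("index", ".log");
        long offset;
        try (RandomAccessFile raf = new RandomAccessFile(tmp.toFile(), "rw")) {
            raf.seek(raf.length());        // jump to end of file before appending
            offset = raf.getFilePointer(); // remember where the new record starts
            raf.writeBytes("345->(577,010)\r\n");
        }
        String lineReadBack;
        try (RandomAccessFile raf = new RandomAccessFile(tmp.toFile(), "r")) {
            raf.seek(offset);              // jump straight back to the record
            lineReadBack = raf.readLine();
        }
        System.out.println(lineReadBack); // 345->(577,010)
        Files.delete(tmp);
    }
}
```

This is why getIndexLineIsExist and modifyIndexFileContent can seek to the cached offset and find the entry without scanning the whole index file.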

XxlJobLogger.java

```java
package com.xxl.job.core.log;

import com.xxl.job.core.util.DateUtil;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.helpers.FormattingTuple;
import org.slf4j.helpers.MessageFormatter;

import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.Date;

/**
 * Created by xuxueli on 17/4/28.
 */
public class XxlJobLogger {
    private static Logger logger = LoggerFactory.getLogger("xxl-job logger");

    /**
     * append log
     *
     * @param callInfo
     * @param appendLog
     */
    private static void logDetail(StackTraceElement callInfo, String appendLog) {
        /*// "yyyy-MM-dd HH:mm:ss [ClassName]-[MethodName]-[LineNumber]-[ThreadName] log";
        StackTraceElement[] stackTraceElements = new Throwable().getStackTrace();
        StackTraceElement callInfo = stackTraceElements[1];*/
        StringBuffer stringBuffer = new StringBuffer();
        stringBuffer.append(DateUtil.formatDateTime(new Date())).append("")
                .append("[" + callInfo.getClassName() + "#" + callInfo.getMethodName() + "]").append("-")
                .append("[" + callInfo.getLineNumber() + "]").append("-")
                .append("[" + Thread.currentThread().getName() + "]").append("")
                .append(appendLog != null ? appendLog : "");
        String formatAppendLog = stringBuffer.toString();

        // appendlog
        String logFileName = XxlJobFileAppender.contextHolder.get();
        if (logFileName != null && logFileName.trim().length() > 0) {
            // XxlJobFileAppender.appendLog(logFileName, formatAppendLog);
            // modified call: appendLogAndIndex also maintains the index files
            XxlJobFileAppender.appendLogAndIndex(logFileName, formatAppendLog);
        } else {
            logger.info(" >>>>>>>>>>> {}", formatAppendLog);
        }
    }
}
```

JobThread.java

```java
@Override
public void run() {
    ......
    // execute
    while (!toStop) {
        running = false;
        idleTimes++;
        TriggerParam triggerParam = null;
        ReturnT<String> executeResult = null;
        try {
            // to check the toStop signal we need to cycle, so we cannot use queue.take(); use poll(timeout) instead
            triggerParam = triggerQueue.poll(3L, TimeUnit.SECONDS);
            if (triggerParam != null) {
                running = true;
                idleTimes = 0;
                triggerLogIdSet.remove(triggerParam.getLogId());
                // log filename, like "logPath/yyyy-MM-dd/9999.log"
                // String logFileName = XxlJobFileAppender.makeLogFileName(new Date(triggerParam.getLogDateTim()), triggerParam.getLogId());
                // modified: name the generated log file after the jobId instead
                String logFileName = XxlJobFileAppender.makeLogFileNameByJobId(
                        new Date(triggerParam.getLogDateTim()), triggerParam.getJobId());
                XxlJobFileAppender.contextHolderJobId.set(triggerParam.getJobId());
                // adjust here according to the xxl-job version in use
                XxlJobFileAppender.contextHolderLogId.set(Integer.parseInt(String.valueOf(triggerParam.getLogId())));
                XxlJobFileAppender.contextHolder.set(logFileName);
                ShardingUtil.setShardingVo(new ShardingUtil.ShardingVO(
                        triggerParam.getBroadcastIndex(), triggerParam.getBroadcastTotal()));
                ......
}
```
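A quick aside on why contextHolder and its siblings are InheritableThreadLocal rather than plain ThreadLocal: the values set in the job thread must remain visible to any child threads a job handler spawns, or their log lines would lose the file name and ids. A minimal standalone demo (the path is just a sample value):

```java
import java.util.concurrent.atomic.AtomicReference;

// Demonstrates that a child thread inherits the parent's InheritableThreadLocal value.
public class InheritDemo {
    static final InheritableThreadLocal<String> contextHolder = new InheritableThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        contextHolder.set("2017-12-25/639.log");
        AtomicReference<String> seenByChild = new AtomicReference<>();
        Thread child = new Thread(() -> seenByChild.set(contextHolder.get()));
        child.start();
        child.join();
        System.out.println(seenByChild.get()); // 2017-12-25/639.log
    }
}
```

With a plain ThreadLocal, seenByChild would end up null, and appendLogAndIndex could not resolve the logId and jobId from a handler's worker threads.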

ExecutorBizImpl.java

```java
package com.xxl.job.core.biz.impl;

import com.xxl.job.core.biz.ExecutorBiz;
import com.xxl.job.core.biz.model.LogResult;
import com.xxl.job.core.biz.model.ReturnT;
import com.xxl.job.core.biz.model.TriggerParam;
import com.xxl.job.core.enums.ExecutorBlockStrategyEnum;
import com.xxl.job.core.executor.XxlJobExecutor;
import com.xxl.job.core.glue.GlueFactory;
import com.xxl.job.core.glue.GlueTypeEnum;
import com.xxl.job.core.handler.IJobHandler;
import com.xxl.job.core.handler.impl.GlueJobHandler;
import com.xxl.job.core.handler.impl.ScriptJobHandler;
import com.xxl.job.core.log.XxlJobFileAppender;
import com.xxl.job.core.thread.JobThread;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.Date;

/**
 * Created by xuxueli on 17/3/1.
 */
public class ExecutorBizImpl implements ExecutorBiz {
    private static Logger logger = LoggerFactory.getLogger(ExecutorBizImpl.class);

    /**
     * Rewritten log-reading method
     * @param logDateTim
     * @param logId
     * @param fromLineNum
     * @return
     */
    @Override
    public ReturnT<LogResult> log(long logDateTim, long logId, int fromLineNum) {
        // log filename: logPath/yyyy-MM-dd/9999.log
        String logFileName = XxlJobFileAppender.makeFileNameForReadLog(new Date(logDateTim), (int) logId);
        LogResult logResult = XxlJobFileAppender.readLogByIndex(logFileName, Integer.parseInt(String.valueOf(logId)), fromLineNum);
        return new ReturnT<LogResult>(logResult);
    }
}
```

LRUCacheUtil.java

A fixed-size cache container implemented with LinkedHashMap.

```java
package com.xxl.job.core.util;

import java.util.LinkedHashMap;
import java.util.Map;

/**
 * @Author : liangxuanhao
 * @Description : A fixed-size cache implemented with LinkedHashMap
 * @Date :
 */
public class LRUCacheUtil<K, V> extends LinkedHashMap<K, V> {
    // maximum cache capacity
    private static final int CACHE_MAX_SIZE = 100;

    private int limit;

    public LRUCacheUtil() {
        this(CACHE_MAX_SIZE);
    }

    public LRUCacheUtil(int cacheSize) {
        // true enables access order: accessed entries move to the end
        super(cacheSize, 0.75f, true);
        this.limit = cacheSize;
    }

    /**
     * Synchronized to avoid multi-thread safety issues
     */
    public synchronized V save(K key, V val) {
        return put(key, val);
    }

    public V getOne(K key) {
        return get(key);
    }

    public boolean exists(K key) {
        return containsKey(key);
    }

    /**
     * Called after put or putAll; returning true when the size exceeds the limit
     * makes the map evict the least recently used entry.
     * @param eldest
     * @return true if the capacity limit is exceeded, otherwise false
     */
    @Override
    protected boolean removeEldestEntry(Map.Entry eldest) {
        return size() > limit;
    }

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<K, V> entry : entrySet()) {
            sb.append(String.format("%s:%s ", entry.getKey(), entry.getValue()));
        }
        return sb.toString();
    }
}
```
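A quick sketch of the access-order eviction this class relies on, using a bare LinkedHashMap with a capacity of 2 for illustration (standalone demo, not part of the patch):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Demonstrates LinkedHashMap's access-order LRU eviction, the same mechanism
// LRUCacheUtil uses: touching an entry protects it, the least recently used is evicted.
public class LruDemo {
    public static void main(String[] args) {
        final int limit = 2;
        Map<Integer, String> cache = new LinkedHashMap<Integer, String>(limit, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, String> eldest) {
                return size() > limit;
            }
        };
        cache.put(1, "a");
        cache.put(2, "b");
        cache.get(1);      // touch 1 so it becomes most recently used
        cache.put(3, "c"); // exceeds the limit: evicts 2, the least recently used
        System.out.println(cache.keySet()); // [1, 3]
    }
}
```

In the appender this keeps at most 80 logIds' offset maps in memory, so long-running executors do not accumulate offset entries for every log ever written.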

Result

The result is shown in the figure: it meets expectations, and the effect is good.