I use Syncthing (docker image syncthing/syncthing:latest) and noticed that I lost 120 GB on the SSD where all my containers are stored. I don't see a Global Changes button in my GUI. I have also tried adding the options `-logfile=-logflags=0` to the containers, but it did not help, and I don't see any information in the FAQ or the Syncthing v1 documentation about limiting the log file size, which seems silly, because practically every piece of software has such a limit.

Could you check whether it is really the audit log that is spamming here? I don't see anything here to indicate this is audit related. If it is, you need to modify your Docker setup so that Syncthing is not launched with the audit flag. I'd check the actual content of the logs and see what they contain; the growth may be Docker related, or possibly even Syncthing related.

Let me just preface this with: you probably don't need to panic about Docker container logs taking up all of your disk space. That said, if we look further down in the docs we can see that the max-size option defaults to -1, which means the log file will grow to an unlimited size. Update the logging options accordingly (a configuration sketch is included at the end of this section), then restart Docker and you'll be good to go.

So I set the logging driver to compress the logs, set a custom max-size and max-file, and everything was working well, until I tried to actually read the logs with docker logs. Unless I'm mistaken, when using a timestamp to tail the logs, only the log files that are needed are considered, but they are still all decompressed to disk instead of just being read line by line. This is surprising behaviour: I wouldn't have expected Docker to decompress any file to disk, but rather to read the compressed files directly and stream the results without creating any new file. It also makes it impossible to work around the bug by repeatedly calling docker logs on successive intervals. For those interested, this is handled in moby/daemon/logger/loggerutils/logfile.go: when decompressFile is called, the read config is passed along, and an empty file is returned if the configured timestamp is not in the compressed file, so a log file is not decompressed if it contains no logs within the requested time range. That's a workaround, though. How does the local logging driver compare in this respect?

It would be nice if we handled opening (and as such decompressing) files better here. Still, if the logs can't fit in the available space, and you don't want to see the compressed logs, why are they being kept? Is there something preventing us from reading the files (both the compressed and the uncompressed ones) into a channel of log lines? Gzip offers streaming compression and decompression, so lazy decompression on read (and subsequently releasing or removing the decompressed data as soon as we are done with it) may help. Would it be more performant? And what would Docker do with the number of log lines? You only need to read line by line from the beginning, keeping the last n lines you have seen in memory; when you reach the end of the file, you return the n lines you currently have. A minimal sketch of that approach follows below.
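To make the streaming idea concrete, here is a minimal Go sketch, assuming a gzip-compressed rotated log file and an arbitrary tail size of 100 lines. It is not the moby implementation (that lives in daemon/logger/loggerutils/logfile.go); the file name and the `tailLines` helper are hypothetical. It only illustrates that a gzipped log can be read line by line, keeping the last n lines in memory, without ever writing a decompressed copy to disk.

```go
package main

import (
	"bufio"
	"compress/gzip"
	"fmt"
	"io"
	"log"
	"os"
	"strings"
)

// tailLines streams r line by line and keeps only the last n lines in memory,
// so a rotated, gzip-compressed log file never has to be decompressed to disk.
func tailLines(r io.Reader, n int) ([]string, error) {
	ring := make([]string, 0, n)
	scanner := bufio.NewScanner(r)
	scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024) // allow long log lines
	for scanner.Scan() {
		if len(ring) == n {
			ring = ring[1:] // drop the oldest line once the window is full
		}
		ring = append(ring, scanner.Text())
	}
	return ring, scanner.Err()
}

func main() {
	// Hypothetical rotated log file; Docker names these differently.
	f, err := os.Open("container-json.log.1.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// gzip.Reader decompresses as a stream: no temporary file is created.
	gz, err := gzip.NewReader(f)
	if err != nil {
		log.Fatal(err)
	}
	defer gz.Close()

	last, err := tailLines(gz, 100)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(strings.Join(last, "\n"))
}
```

The same approach extends naturally to sending each line into a channel of log lines instead of collecting a slice, which is what the discussion above suggests.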
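For reference, here is one way to apply the rotation and compression settings discussed above through the Docker daemon configuration. This is only a sketch: the 10m size, the file count of 3, and the choice of the json-file driver are arbitrary example values, not anything prescribed by the thread.

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "compress": "true"
  }
}
```

On Linux this configuration typically lives in /etc/docker/daemon.json; after editing it, restart Docker, and keep in mind that the new defaults only apply to containers created afterwards. The same options can also be set per container, for example `docker run --log-opt max-size=10m --log-opt max-file=3 --log-opt compress=true ...`, and recent entries can then be read with something like `docker logs --since 10m --tail 100 <container>`.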