EventSentry Disk Usage

We had been using the EventSentry Light software for about 8 months, and honestly we hadn't checked the resource usage after the first 2 weeks of monitoring. When we first started, it was only hitting 20 MB/s of disk usage for under 2 minutes every 20 minutes; when we caught it last, it was running at 20-30 MB/s nonstop for days before I killed it. The drives it is monitoring hold about 2 TB of data with under 2 million files. Has anyone had issues with EventSentry's resource usage, or suggestions for settings changes to reduce disk usage? Our only goal is to monitor checksum changes on these particular drives and be notified if anything falls outside of our threshold. I will look into a way to post my config.


  • It doesn't look like there is a simple way to cut out just my settings and make them presentable. The only feature we have enabled and configured is File Monitoring: we define the folder and limit the file types to Office products and PDFs. "Log to Event Log" is enabled, and "Detect file checksum changes" is the only monitoring we have turned on. We only verify the checksum when the last write time has changed. It is set to monitor in real time, and right under that the interval is greyed out, but I was told by support to set it to 30 minutes regardless, since that affects when they do a sweep [again, this is what I was told].

    Can anyone confirm if EventSentry is having to read the whole file just to see if the checksum has changed?

    Thank you
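
    For context on that question: a checksum by definition covers every byte of a file, so any tool has to read the whole file to recompute it; the only way to avoid that I/O is a cheaper pre-check, like the last-write-time comparison mentioned above. A minimal sketch of what recomputing a checksum entails (the hash algorithm and chunk size are illustrative, not EventSentry's actual implementation):

```python
import hashlib

def file_checksum(path, chunk_size=1 << 20):
    """Compute a SHA-256 checksum over the whole file in 1 MiB chunks.

    Every byte must be read from disk -- there is no shortcut for the
    hash itself, which is why a cheap mtime pre-check matters so much.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

    So detecting a changed checksum is cheap only when the scan can skip the file entirely; once a file is actually hashed, the full file is read.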
  • Hi Kenny,

    I suspect that the recurring scan you configured every 30 minutes is the cause of the issue, since the agent has to re-scan 2 million files every 30 minutes. Do you have a case number with support that I can look at?

    It looks like you already have it configured optimally otherwise, with file checksums only being evaluated when the last write time changed.

    Did you leave the default setting in place to not generate checksums for files larger than 25 MB? If so, have you considered reducing that number (e.g. to 5 MB)? This could help if you have a lot of larger files, for which checksum generation takes a bit longer.

    Have you also tried increasing that scan interval to something like 300 minutes (or even more)? I'd be curious whether that helps.
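
    To illustrate the combined effect of the two settings discussed above (only re-verify the checksum when the last write time changed, and skip files over a size cap), here is a rough sketch of the gating logic; the function name, the stored-mtime lookup, and the default cap are assumptions for illustration, not EventSentry's code:

```python
import os

MAX_SIZE = 25 * 1024 * 1024  # hypothetical size cap, mirroring the 25 MB default

def needs_rehash(path, last_seen_mtime, max_size=MAX_SIZE):
    """Return True only when a (re)hash is worth the I/O:
    the file is under the size cap AND its last write time moved
    since the previous sweep. Everything else is skipped with a
    single cheap stat() call instead of a full file read."""
    st = os.stat(path)
    if st.st_size > max_size:
        return False  # oversized files are skipped entirely
    return st.st_mtime != last_seen_mtime
```

    With this kind of gating, a recurring sweep over 2 million unchanged files still costs one metadata lookup per file, but no file reads, which is the behavior you'd want.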

  • I will enable the skip on larger files. I do not have a case number, as I am running the Light version, which I was told does not include support. What has me put off is that during my testing and the first 2 weeks of implementation, the scans were finishing on all those files in 2 minutes; now they are taking much longer. The next initial scan is going to be quite long, since I turned off EventSentry for now to stop the disk usage. I will update after enabling it over the weekend.
  • It's difficult for me to determine why the scans would be taking significantly longer now compared to earlier. If you haven't made any changes to EventSentry, then one would need to assume that the files have changed significantly - could that be the case? Please keep us updated, thanks!
  • Welp, it said there was an update, so I ran that as well as changing the setting to not scan anything over 10 MB. It ran the initial scan to update the checksum DB, but after that I haven't seen it jump over 5 MB/s of disk usage, and only for 2-3 minutes at a time. If anything changes I will update.

    Thank you
  • Thank you for the update - glad it's working ok so far. Please let us know if anything changes.