capacity limits for log file monitoring

I have a rather chatty log file in text format that collects a little over 10 million records a day, growing to about 2GB in size. I'm not looking to collect all those events, but there are a few particular events that come up a few times a year that I'd like to tie to EventSentry actions. Is that something I'd be able to monitor with EventSentry? Would you expect any problems from the amount of data being written into the file I want to watch?

Comments

  • Although 10 million records per day is pretty high (about 115 records per second), I wouldn't really expect that you would encounter any problems.

    Worst case scenario, you may see slightly elevated CPU utilization by the EventSentry agent process (it's usually less than 1%; in this case it may be higher), but it should work. You can keep an eye on the CPU usage of eventsentry_svc.exe.

    If you do run into any issues then please let us know.

    Thank you!
  • Hi Ingmar and team, reporting back on this functionality. We set up a log file package to watch our target file for a text string match and write an informational event to the application event log, then used an event log package to alert us via email. We do get some alerts when the inclusion string appears in the log, but upon looking back at the text file we're monitoring, we're discovering that we aren't getting application log events generated most of the time the text string appears. The file we monitor is generated with a new name each day at midnight local time, of the form syslog_yyyy-mm-dd. Could you suggest next steps for troubleshooting?
  • Upgrading to the latest EventSentry version, and the service restart that came with it, has revived the alerts generated by text file monitoring. We'll keep watching this, but I'm still curious about any general suggestions for troubleshooting.
  • Hi Erich,

    Has this feature been working as expected since you upgraded back in October?
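  • One way to troubleshoot missed alerts is to independently count how often the inclusion string actually appears in the daily file and compare that number against the alerts you received. The sketch below is a minimal, hypothetical helper (it is not part of EventSentry); the directory path and search string are assumptions you would replace with your own, and the file name follows the syslog_yyyy-mm-dd convention described above.

    ```python
    # Hypothetical cross-check: count occurrences of the inclusion string in
    # today's rotated log file and compare against the alerts received.
    from datetime import date
    from pathlib import Path

    LOG_DIR = Path(r"C:\logs")      # assumed location of the monitored files
    NEEDLE = "DISK FAILURE"         # hypothetical inclusion string

    def todays_log(log_dir: Path = LOG_DIR) -> Path:
        """Build today's file name using the syslog_yyyy-mm-dd convention."""
        return log_dir / f"syslog_{date.today():%Y-%m-%d}"

    def count_matches(path: Path, needle: str) -> int:
        """Count lines containing the needle, reading line by line so a
        multi-gigabyte file is never loaded into memory at once."""
        count = 0
        with path.open("r", errors="replace") as fh:
            for line in fh:
                if needle in line:
                    count += 1
        return count
    ```

    If count_matches(todays_log(), NEEDLE) reports more occurrences than the alerts you received for that day, the gap points at the monitoring side (e.g. the agent losing track of the file at the midnight rollover) rather than at the email step.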