C# - ASP.NET website static lock file write / IIS recycle issue


I see a lot of discussion regarding static locking in a website that writes to a single log file.

There are those who say never use a static lock, and others who say it's fine in such an instance. The naysayers argue: what if the IIS application pool is recycled? The static lock is lost and a file write error can occur.

My specific question: if the population of users is reasonably small (n < 1000), and there is a single line of code inside the lock that executes a file write of < 500 characters, is the issue so astronomically improbable that it's not worth worrying about?

And if the issue is of some magnitude, what is the simplest path of improvement to avoid the rare IIS-recycle/static-lock error? Would a simple try/catch on the write "catch" the multiple file access in such a case?

Use FileShare.ReadWrite and useAsync = false

Access the file by creating a FileStream using this constructor (don't use File.Open) and specify the following arguments:

var stream = new FileStream(path,
                            FileMode.Append,
                            FileAccess.Write,
                            FileShare.ReadWrite,
                            4096,
                            false);

The important arguments to note are FileShare.ReadWrite and useAsync = false.

FileShare.ReadWrite: Allows subsequent opening of the file for reading or writing. If this flag is not specified, any request to open the file for reading or writing (by this process or another process) will fail until the file is closed. However, even if this flag is specified, additional permissions might still be needed to access the file.

useAsync: Specifies whether to use asynchronous I/O or synchronous I/O. However, note that the underlying operating system might not support asynchronous I/O, so when specifying true, the handle might be opened synchronously depending on the platform. When opened asynchronously, the BeginRead and BeginWrite methods perform better on large reads or writes, but they might be much slower for small reads or writes. If the application has been designed to take advantage of asynchronous I/O, set the useAsync parameter to true. Using asynchronous I/O correctly can speed up applications by as much as a factor of 10, but using it without redesigning the application for asynchronous I/O can decrease performance by as much as a factor of 10.

By using these parameters, you obtain a file handle that allows other processes to access the file in parallel. Meanwhile, your writes are synchronous, which prevents your output from being split in half by another process. There is still locking, but it's handled by the underlying O/S and is transparent to you and to any competing process.
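Putting those pieces together, a minimal append-only log helper might look like the following sketch (the `FileLogger`/`WriteLog` names are illustrative, not from the original post):

```csharp
using System.IO;

public static class FileLogger
{
    // Illustrative helper: opens, appends one line, and closes the file on every call.
    // FileShare.ReadWrite lets other handles (even ones held by another worker
    // process during an app pool recycle) open the file at the same time, while
    // useAsync = false keeps each write synchronous so a line isn't interleaved.
    public static void WriteLog(string path, string message)
    {
        using (var stream = new FileStream(path,
                                           FileMode.Append,
                                           FileAccess.Write,
                                           FileShare.ReadWrite,
                                           4096,
                                           false))
        using (var writer = new StreamWriter(stream))
        {
            writer.WriteLine(message);
        }
    }
}
```

Opening and closing the stream per call costs a little, but it means no handle is held across requests, which is exactly what you want around a recycle.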

Add a lock

If it makes you feel better, you can wrap it in a lock:

lock (lockObject)
{
    using (var stream = new FileStream(path, FileMode.Append, FileAccess.Write, FileShare.ReadWrite, 4096, false))
    using (var writer = new StreamWriter(stream))
    {
        writer.Write(message);
    }
}

Note that a lock protects against competing threads, not competing processes. If you have a single worker thread that handles all log writes (e.g. a queue and a producer/consumer pattern), you don't need it, and it adds unnecessary overhead. If you are writing to the log directly from web worker threads, you do need it.
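The worker-thread alternative mentioned above can be sketched with `BlockingCollection<T>` (the `QueuedLogger` name and structure are my assumption, not the original author's code). Because a single consumer drains the queue, the file writes are already serialized and no lock is needed around the FileStream:

```csharp
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

public static class QueuedLogger
{
    private static readonly BlockingCollection<string> Queue =
        new BlockingCollection<string>();

    // Producers (web worker threads) only enqueue; they never touch the file.
    public static void Log(string message) => Queue.Add(message);

    // The single consumer owns all file I/O, so writes are serialized by design.
    public static Task StartConsumer(string path) => Task.Run(() =>
    {
        foreach (var message in Queue.GetConsumingEnumerable())
        {
            using (var stream = new FileStream(path, FileMode.Append,
                FileAccess.Write, FileShare.ReadWrite, 4096, false))
            using (var writer = new StreamWriter(stream))
            {
                writer.WriteLine(message);
            }
        }
    });

    // Call on shutdown so GetConsumingEnumerable completes and the task ends.
    public static void Complete() => Queue.CompleteAdding();
}
```

The trade-off is that a recycle can drop messages still sitting in the in-memory queue, so this pattern favors throughput over durability.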

Cross-process mutex

The above ought to be pretty darned safe, even during an app pool recycle. At least I've never had problems with it. But... if you are really paranoid and the logging is mission critical, you can use a named mutex, whose locking crosses process boundaries.

var mutex = new Mutex(false, "MyLoggingMutex");
try
{
    mutex.WaitOne();
    using (var stream = new FileStream(path, FileMode.Append, FileAccess.Write, FileShare.ReadWrite, 4096, false))
    using (var writer = new StreamWriter(stream))
    {
        writer.Write(message);
    }
}
finally
{
    mutex.ReleaseMutex();
}

There's quite a bit of overhead to this sort of thing, so I would not use it unless the logging is mission critical, e.g. you're dumping auditable data to a log file that might be used in support of non-repudiation (to prove what you did so you don't get sued, that sort of thing). If that were the case, I'd stick with a database, which makes this problem trivial to solve.
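One wrinkle directly relevant to the recycle concern: if a process dies while holding the named mutex, the next WaitOne in a surviving process throws AbandonedMutexException, but by that point the wait has succeeded and the caller owns the mutex, so it can safely proceed. A hedged sketch of that handling (the `MutexLogger` wrapper is my illustration, not the original answer's code):

```csharp
using System;
using System.IO;
using System.Threading;

public static class MutexLogger
{
    public static void WriteLog(string path, string message)
    {
        // A named mutex is visible machine-wide, so it serializes writers
        // across processes (e.g. overlapping app pools during a recycle).
        using (var mutex = new Mutex(false, "MyLoggingMutex"))
        {
            try
            {
                try
                {
                    mutex.WaitOne();
                }
                catch (AbandonedMutexException)
                {
                    // A previous holder died without releasing (e.g. recycled
                    // mid-write). The wait has still completed and this thread
                    // now owns the mutex, so it is safe to continue.
                }
                using (var stream = new FileStream(path, FileMode.Append,
                    FileAccess.Write, FileShare.ReadWrite, 4096, false))
                using (var writer = new StreamWriter(stream))
                {
                    writer.WriteLine(message);
                }
            }
            finally
            {
                mutex.ReleaseMutex();
            }
        }
    }
}
```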

