Added the much needed code in the Camera object to actually start all of
the threads.
Added multi-instance support via the -d option.
Made the Loop object's loop structure slot/signal compatible so
all objects using it can interrupt the main loop to run other
slots.
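In rough terms the change looks something like this (a sketch only; stopping, oneIteration(), and heartBeat are illustrative stand-ins for the real Loop members):

    #include <QCoreApplication>
    #include <QThread>

    // sketch of the loop body: queued slot calls get a chance to run between iterations
    void Loop::run()
    {
        while (!stopping)
        {
            oneIteration();                    // the derived object's actual work
            QCoreApplication::processEvents(); // interrupt point: pending slots execute here
            QThread::msleep(heartBeat);        // illustrative pacing value
        }
    }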
Completely rewrote the project to use the Qt API. By using Qt,
I've opened up use of useful tools like QCryptographicHash, QString,
QByteArray, QFile, etc. In the future I could even make use of
slots/signals. The code is also in general much more readable and
thread management is far easier.
General operation of the app should be the same; this commit just
serves as a base for the migration over to Qt.
The delay on the motion detection loop slowed it down too much, to
the point that it fell too far behind live. Removed the delay
and re-introduced the frame gap so not all frames in the video files
need to be decoded.
Post command and event timers are now separate but still tied to
a single thread so they can still be synced.
Fixed an issue that caused several thumbnails to not generate.
Added more log lines to aid with debugging.
- reduced the hls segment length to 2 seconds.
- motion clips were not being combined. fixed it by making the event
  loop track the size of shared_t::recList instead of system time.
- maxScore is now global inside shared_t instead of local to
  detectMoInStream().
Fixed the crashing issue by adding tcp timeout args to ffmpeg and
having the app better handle empty frames from a disconnected
camera.
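The empty-frame side of the fix looks roughly like this (a sketch assuming opencv's VideoCapture feeds the detection loop; cap, the miss limit, and processFrame() are illustrative):

    #include <opencv2/opencv.hpp>

    // skip empty frames instead of feeding them into motion detection, and give up
    // after too many misses so the reconnect logic can kick in
    cv::Mat frame;
    int     misses = 0;

    while (cap.isOpened() && misses < 100)
    {
        if (!cap.read(frame) || frame.empty())
        {
            misses++; // likely a disconnected or struggling camera
            continue;
        }

        misses = 0;
        processFrame(frame); // illustrative hand-off to motion detection
    }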
Reformed the directory structure by having live, logs, and events in
separate directories.
schLoop() no longer exists; postCmd is now handled by
detectMoInStream() to ensure motion detection is not done while the
command is running.
Event files are still not concatenating. I suspect the issue was
eventLoop() caching old event objects. Changed up the loop so it will
grab the latest event object on each iteration.
Adjusted the event loop and motion detection to make better stand-alone
m3u8 files. Hopefully doing this will make browsers treat the recorded
events as VODs instead of live streams.
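For reference, a finished event playlist should end up looking roughly like this for a browser to treat it as a VOD (segment names and durations here are only examples):

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-PLAYLIST-TYPE:VOD
    #EXT-X-TARGETDURATION:2
    #EXT-X-MEDIA-SEQUENCE:0
    #EXTINF:2.0,
    seg_000.ts
    #EXTINF:2.0,
    seg_001.ts
    #EXT-X-ENDLIST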
Logs are not rotating correctly. Changed up the code to append the logs
in memory and then dump them to permanent storage on every loop of
upkeep(). Hopefully this fixes the issue.
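The flow is roughly this (sketch; logBuf, logMutex, and logPath are illustrative names, not necessarily the real ones):

    #include <fstream>
    #include <mutex>
    #include <string>

    // collect log lines in memory...
    void log(const std::string &line)
    {
        std::lock_guard<std::mutex> lock(logMutex);
        logBuf += line + "\n";
    }

    // ...and dump them to permanent storage once per upkeep() pass
    void flushLog()
    {
        std::lock_guard<std::mutex> lock(logMutex);
        std::ofstream file(logPath, std::ios::app);

        file << logBuf;
        logBuf.clear();
    }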
Completely reformed the internal workings of the application code. I
brought back multi-threaded functions so there are now 5 separate threads
for different tasks (a rough spawn sketch follows at the end of this entry).
recLoop() - this function calls ffmpeg to begin recording footage from
the defined camera and stores the footage in hls format. It is designed
to keep running for as long as the application is running, and if it does
stop for whatever reason, it will attempt to auto-restart.
upkeep() - this function does regular cleanup and enforcement of maxDays,
maxLogSize, and maxEvents without the need to stop recording or motion
detection.
detectMo() - this function reads directly from recLoop's hls output and
lists all footage that has motion in it. Motion detection no longer has
to wait for the clip to finish recording thanks to the use of .ts
containers for the video clips. This makes motion detection far less
cpu intensive now that it operates at the camera's fps (slower).
eventLoop() - this function reads the motion list from detectMo() and
copies the footage that the list points to into an events folder, also in
hls format.
schLoop() - this function runs an optional user-defined external command
every sch_sec seconds. Running this command temporarily stops
motion detection without actually terminating the thread. It will also
skip the command at the scheduled time if motion was detected.
Benefits to this reform:
- far less cpu intensive operation
- multi-threaded architecture for better asynchronous operation
- it has support for live streaming now that hls is being used
- a buff_dir is no longer necessary
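A rough sketch of how the five loops get spawned, assuming std::thread and a pointer to the shared_t state (the real signatures may differ):

    #include <thread>

    shared_t shared; // shared state handed to all five loops

    std::thread rec(recLoop, &shared);   // ffmpeg -> hls recording
    std::thread det(detectMo, &shared);  // motion detection on the hls output
    std::thread evt(eventLoop, &shared); // copies flagged footage to the events folder
    std::thread sch(schLoop, &shared);   // optional scheduled external command
    std::thread upk(upkeep, &shared);    // maxDays / maxLogSize / maxEvents enforcement

    rec.join(); det.join(); evt.join(); sch.join(); upk.join();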
The fork() architecture from the previous commit is also deemed a
failure. Reverted back to v1.5.t19 code. I'll start from scratch, using
this commit as the new base.
Going back to basics. Removed all threading code and opted for a multi-process
architecture using fork(). The previous code had a bad memory leak,
didn't handle unexpected camera disconnects, and for some reason it
also didn't recover gracefully under systemctl when it crashed. Hopefully
this new rewrite fixes all of those issues.
moDetect() will now try multiple times to grab buffer footage before
giving up and moving on.
The crashing issue might be the detection threads going out of scope
before properly finishing. Re-implemented share->detThreads from the
previous stable code to see if this fixes the issue.
The crashing problems may have started after switching my test machine
to the multiple config file setup. I'll test this theory by completely
removing the multiple config file feature and see if it crashes again.
I'll figure out a better solution for multi config files in the next
round of development.
Added a signal handler that will print out signal details upon receiving
them. This should give some hint to the cause of crashes for debugging
purposes.
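Roughly along these lines (sketch only; writing to std::cerr from a handler isn't strictly async-signal-safe, but it's good enough for debugging):

    #include <csignal>
    #include <cstring>
    #include <iostream>

    // print which signal arrived so crashes at least leave a hint behind
    void sigHandler(int sig)
    {
        std::cerr << "caught signal " << sig << " (" << strsignal(sig) << ")\n";
    }

    // in main(): register the signals of interest
    std::signal(SIGSEGV, sigHandler);
    std::signal(SIGABRT, sigHandler);
    std::signal(SIGTERM, sigHandler);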
The root index web page will now only be updated once. Hopefully this
reduces the chance of multiple instances clashing with each other.
Updated the documentation.
The test machine had a mystery crash that needs to be investigated. In the
meantime, the timeout run code has been refactored and will not run
thread cancel unless it is absolutely needed at the individual thread
level (hopefully that fixes the crash issue).
post_cmd will also now run via timeout. With that, no external command
should be able to stall this application; timeout protection should
prevent that.
Added string trimming to the vid_container parameter to filter out bad
user input.
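The trim itself is nothing fancy (sketch; trim() is an illustrative helper, not necessarily the function name used):

    #include <string>

    // strip leading/trailing whitespace from a config value such as vid_container
    std::string trim(const std::string &str)
    {
        const auto begin = str.find_first_not_of(" \t\r\n");
        const auto end   = str.find_last_not_of(" \t\r\n");

        if (begin == std::string::npos)
            return "";

        return str.substr(begin, end - begin + 1);
    }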
Added a detection_stream url to the config file so the application can
now use a smaller/lower bit rate stream for motion detection, separate
from the recording stream. This can significantly lower CPU usage.
Moved away from using system() and the explicit timeout command and
instead opted for popen() and cancelable pthreads. Doing this pulls back
more control over ffmpeg than before, and the app will now properly
respond to term signals and even the CTRL-C keyboard interrupt.
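In rough terms (sketch only; the real thread body also logs output and handles restarts, and the command likely redirects ffmpeg's stderr with 2>&1 since that's where its output goes):

    #include <pthread.h>
    #include <cstdio>

    // run ffmpeg through popen() inside a cancelable pthread
    void *ffmpegThread(void *arg)
    {
        int oldState;
        pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, &oldState);
        // deferred cancellation (the default): the blocking read inside fgets()
        // acts as a cancellation point when pthread_cancel() is called

        FILE *proc = popen(static_cast<const char *>(arg), "r");
        char  line[1024];

        while (proc && fgets(line, sizeof(line), proc))
            ; // drain ffmpeg's output (optionally log it)

        if (proc) pclose(proc);
        return nullptr;
    }

    // a SIGTERM/SIGINT handler can then call pthread_cancel() on this thread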
Added the ability to read multiple config files so it's now possible to
load a single global config and then a camera-specific config in
another file.
Many elements in the web interface are coming out too small. Added a meta
viewport device-width tag in hopes that the web interface will self-adjust
to the device it is being displayed on.
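Presumably the tag is the standard one (the initial-scale value shown here is the usual companion, not confirmed from this commit):

    <meta name="viewport" content="width=device-width, initial-scale=1">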
Changed duration to num_of_clips and added clip_len so the number of
seconds in each clip and the number of clips to be processed for motion
are now adjustable.
Adjusted several default values.
Fixed an invalid argument in the ffmpeg command from vcodec to -vcodec.
Also added a 10 sec delay to simulate a running ffmpeg if it fails for
whatever reason.
The app is still failing with intermittent moov atom errors when trying
to open the video clips with opencv VideoCapture. I suspect it is trying
to open the files while ffmpeg is not done finalizing them. Reformed the
detection loop to spawn a dedicated motion detection thread for each
video clip, and only after ffmpeg has confirmed it is finished.
Thanks to the debug messages, I found a potential issue with the video
clips being pulled from the in-house test cameras. Sometimes the video
clips are pulled with incomplete meta information, causing opencv
to fail to open the clips. Added "-movflags faststart" to the ffmpeg
command, which should hopefully fix this and help the app handle
unreliable camera streams more gracefully.
max_clips now defaults to 90 instead of 30.
Added the ability to change the video codec via the config file.
Changed the install script to install the application in the /opt
directory and then symlink it to /usr/bin. Doing this allowed me to
create a run script that starts the application and enables the
OPENCV_VIDEOIO_DEBUG parameter for opencv. This should make it easier to
diagnose video-io issues with opencv.
Updated the README documentation with all of the changes done to the
application since v1.5.
Something in the code is causing an infinite loop; the root cause is still
undetermined. Added more logging statements to help me find the misbehavior.
imgDiff() will now handle empty frames in its parameters more
gracefully.
enforceMaxClips() will no longer assume all video clips are accompanied
by html and jpg files but will now instead "delete if exists."
The app is hard crashing now, but I was able to determine the cause this
time. Most functions in filesystem tend to abort if the filesystem
object doesn't exist. Added protection where needed to prevent crashing.
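The protection amounts to existence checks or the non-throwing overloads (sketch; clipPath, oldDir, and log() are illustrative):

    #include <filesystem>
    #include <system_error>
    namespace fs = std::filesystem;

    // guard the call so a missing path can't take the whole app down
    if (fs::exists(clipPath))
        fs::remove(clipPath);

    // or use the non-throwing error_code overloads and just log the failure
    std::error_code ec;
    fs::remove_all(oldDir, ec);
    if (ec) log("remove_all failed: " + ec.message());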
Logs are still being cut off. I'm assuming the app is crashing but can't
locate the problem without any logs. Reformed logging to never overwrite
the logs and instead append only. Size control will be in the form
of the byte size of the log files.
Moved logging out of its own loop; hopefully this fixes the issue with
it not outputting all log lines. recLoop() and detectLoop() will now
update logs synchronously.
The setup.sh script will now include gstreamer and pkg-config. This
should help fix opencv video-io format support.
The error checking with ffmpeg is not working. Learned that it doesn't
always return 0 on success. Decided to remove the error checking
altogether. Instead ffmpeg failures should be checked manually using
stderr.
Dirent includes .. and ., so I decided to switch to the filesystem entry
listing, which should exclude those special directories.
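The listing then simplifies to something like this (sketch; outDir and files are illustrative):

    #include <filesystem>
    namespace fs = std::filesystem;

    // directory_iterator never yields the . and .. entries
    for (const auto &entry : fs::directory_iterator(outDir))
        if (entry.is_regular_file())
            files.push_back(entry.path());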
The camera webroot was not generating .index files; those files would
only get generated if motion was detected. Copied the code that does
that into recLoop() so it executes regardless of motion.
Fixed the default webroot directory to apache's correct webroot. Also
separated outDir from webRoot and made webRoot changeable in the
config file.
Added logging to the recorder and detection loops to help with debugging
and troubleshooting. Just like the video clips, a max log lines limit was
added to control the size of the data being saved to storage.
Decided to switch to opencv's built-in pixel diff motion detection via
absdiff and threshold. Doing this should increase efficiency over the
home-brewed pixel loops and threads.
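The comparison now boils down to something like this (a sketch of the absdiff/threshold approach; the threshold value of 25 is illustrative):

    #include <opencv2/opencv.hpp>

    // returns a rough count of pixels that changed between two frames
    int pixDiff(const cv::Mat &prev, const cv::Mat &next)
    {
        cv::Mat grayA, grayB, diff, thresh;

        cv::cvtColor(prev, grayA, cv::COLOR_BGR2GRAY);
        cv::cvtColor(next, grayB, cv::COLOR_BGR2GRAY);

        cv::absdiff(grayA, grayB, diff);                          // per-pixel difference
        cv::threshold(diff, thresh, 25, 255, cv::THRESH_BINARY);  // keep only big changes

        return cv::countNonZero(thresh);                          // how much of the frame moved
    }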
Added a web interface of sorts by having html files output along with
the video clips. These files are designed to link together with the
assumption that the output directory is a web root like /var/www/html
that apache2 uses. The interface is crude at best but at least allows
playback of recorded footage.
Added a max_clips config variable that can limit the number of motion
events that can be recorded to storage in a single day.
AI object detection via yolov5 didn't work out too well; in fact, it was
crashing the detection threads for whatever reason. I could deep dive
into why it was crashing, but I think the better solution is to bring back
optical flow detection at the block level. The advantage of this over
object detection is that a block doesn't need to contain a whole
object.
re-formed the stats output and moved it out of the motion detect
function.
Block pixel diff counts will no longer stop at the threshold for each
block; the entire block is now counted and the results output to the
stats. The code also now picks the block with the highest pixDiff instead
of stopping at the first block with a high pixDiff.
Added object detection code based on the yolov5 machine vision model. Also
added a stat file so motion and object detection values can be monitored
in real time if used with the 'watch' command.
Added a -v command line option to display the application's current
version.
The application version is now defined in a single const value called
APP_VER, so bumping the version number now means updating this single
value in common.h.
The versioning scheme will now be major.minor.[test_rev]. test_rev will be
t1, t2, t3, etc. as updates are pushed to the test branch. All code
pushes to master shall bump major or minor and then remove test_rev.
Removed the detect loop's motion latching effect so it ONLY calls wrOut
if the video clip contains motion.
Fixed a bug in the recording loop that failed to create the needed sub-dir
before calling FFMPEG.
Broke down the code into multiple files instead of having it all in
main.cpp.
Also detached recording from detection by having them run in
separate threads instead of having motion detection inline with
recording. This will hopefully result in fewer missed motion
events due to processing overhead.
The recording loop now takes advantage of FFMPEG's "-f segment" option
instead of generating the clips in separate FFMPEG calls; again, all in
the hope of reducing missed motion events.
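The recording call now looks roughly like this (illustrative; the stream url, codec, segment length, and output path all come from the config):

    ffmpeg -i rtsp://camera/stream -vcodec copy \
           -f segment -segment_time 10 -reset_timestamps 1 \
           /var/www/html/cam1/clips/%03d.mp4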
This application has the tendency to detect the motion of small insects.
To prevent this, it was determined that there will need to be some means
of identifying objects via machine vision. There is an object detection
function, but it doesn't do anything at this time. This is
something that I will be working on in the near future.
Created a test branch in the repository. All early, testing code will
now go in this branch going forward. Only fully tested, stable code will
be committed to master.
Video clips recorded from the camera are no longer appended; instead the
clips are kept as is and then linked together in a playlist file in the
output_dir. This makes it much more efficient and the code easier to
maintain.
Also discovered that ffmpeg has a tendency to stall mid-execution every
now and then while recording from the rtsp stream. Added a workaround
in the form of calling ffmpeg via the timeout command instead of
directly, so it will force kill ffmpeg if it runs longer than the
expected BUF_SZ.
Increased BUF_SZ to 10 secs.
Added a clause in the recording loop that will make it write out a
second clip if motion was detected.
Major changes to the motion detection scheme and re-introduced
multi-threading. This further sped up the motion detection to the point
that it can now be called in line with the recording loop without
losing any extra camera footage due to heavy cpu usage.
Pixels are now read in blocks to further increase efficiency and to
filter out movements of small objects. The footage clip size is now
hard coded to 3 seconds instead of being externally adjustable.
Changed the way footage with motion is stored. It's now down to
single-level files named with the current date. If footage of the same
date already exists, new footage will be appended to it.
The version number shall be updated going forward.
Removed all threads from the application as there is no use for them at
this time. Instead, the application will now operate on a single
event loop and directly use ffmpeg to record video footage
instead of opencv's implementation.
The old code pulled tons of frames from the detection stream at full
speed, wasting a lot of cpu cycles. Instead it will now pull frames at a
steady speed set by the new detect_fps value. Doing this significantly
reduced cpu usage, and end users can potentially reduce cpu usage even
further by setting the fps value lower than the default.
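The throttle itself is simple (sketch; detect_fps comes from the config, while detecting and grabAndCompareFrame() are illustrative):

    #include <chrono>
    #include <thread>

    // pace frame grabs to detect_fps instead of pulling as fast as possible
    const auto frameGap = std::chrono::milliseconds(1000 / detect_fps);

    while (detecting)
    {
        grabAndCompareFrame();                 // illustrative per-frame work
        std::this_thread::sleep_for(frameGap); // idle out the rest of the frame period
    }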
The app will now count the number of secs to record post motion
detection instead of the number of frames. Split the main loop timer
and the motion timer into separate threads to make that happen. The
parameter was added to the config file.
Recording fps is now adjustable.
Decided to change the frame comparison function again, from optical flow
to a home-brewed function that compares gray levels in the pixels of each
frame. Significant differences in gray levels between the frames can
potentially trigger a motion event.
Also moved away from command line arguments to an external config file
for setting app parameters.
created a README file to get this project ready for general open source
release.
All current experimentation with the code leads up to this point
for optical flow motion detection. The code as it stands will
input frames in pairs and then compare each pair of frames for
any significant changes in the optical flow distance between
points.
Experiments have shown that this actually does work fairly well;
however, there is significant CPU usage and the video
encoding options are not flexible at all. The code still picks
up false positives, but I have high confidence this is something
that can be adjusted through external parameters, which I will
implement in the future.
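For the record, the pairwise comparison is built around opencv's sparse optical flow, roughly like this (sketch; the feature-tracking parameters and the distance threshold are illustrative):

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    // returns true if tracked points moved far enough between the two frames
    bool hasMotion(const cv::Mat &prevGray, const cv::Mat &nextGray)
    {
        std::vector<cv::Point2f> prevPts, nextPts;
        std::vector<uchar>       status;
        std::vector<float>       err;

        cv::goodFeaturesToTrack(prevGray, prevPts, 200, 0.01, 10);
        if (prevPts.empty()) return false;

        cv::calcOpticalFlowPyrLK(prevGray, nextGray, prevPts, nextPts, status, err);

        for (size_t i = 0; i < nextPts.size(); ++i)
        {
            float dx = nextPts[i].x - prevPts[i].x;
            float dy = nextPts[i].y - prevPts[i].y;

            if (status[i] && std::hypot(dx, dy) > 5.0f)
                return true; // a point travelled a significant distance
        }

        return false;
    }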