- reduced the HLS segment size to 2 seconds.
- motion clips were not being combined. fixed it by making the event
loop track the size of shared_t::recList instead of the system time.
- maxScore is now global inside shared_t instead of local to
detectMoInStream().
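For reference, a minimal sketch of what that shared state could look like (field types and everything except the shared_t, recList and maxScore names are assumptions):

    // hypothetical sketch of the shared state - the real definition likely differs
    #include <string>
    #include <vector>

    struct shared_t
    {
        std::vector<std::string> recList; // clips recorded so far; the event loop tracks its size
        int maxScore = 0;                 // now global here instead of local to detectMoInStream()
    };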
Fixed the crashing issue by adding TCP timeout args to ffmpeg and
having the app handle empty frames from a disconnected camera more
gracefully.
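Roughly the kind of handling meant here (a sketch; the ffmpeg options in the comment and the reconnect logic are assumptions, and option names vary between ffmpeg versions):

    // read loop that tolerates a disconnected camera instead of crashing.
    // the ffmpeg side would pass something like "-rtsp_transport tcp -rw_timeout 5000000"
    // so a dead TCP connection times out instead of blocking forever.
    #include <opencv2/opencv.hpp>

    void readFrames(cv::VideoCapture &cap)
    {
        cv::Mat frame;

        while (cap.isOpened())
        {
            cap >> frame;

            if (frame.empty())
            {
                // camera likely disconnected; bail out so the caller can re-open it
                break;
            }

            // ...hand the frame to motion detection...
        }
    }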
Reworked the directory structure by putting live, logs and events in
separate directories.
schLoop() no longer exists; postCmd is now handled by
detectMoInStream() to ensure motion detection is not running while the
command runs.
Adjusted the event loop and motion detection to produce better standalone
m3u8 files. Hopefully this will make browsers treat the recorded
events as VODs instead of live streams.
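For reference, a standalone VOD style playlist is mainly about the playlist type and the end-list tag; something like this (segment names and durations made up):

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:2
    #EXT-X-PLAYLIST-TYPE:VOD
    #EXTINF:2.000000,
    event_000.ts
    #EXTINF:2.000000,
    event_001.ts
    #EXT-X-ENDLIST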
Completely reworked the internal workings of the application code. I
brought back multi-threaded functions, so there are now 5 separate threads
for different tasks.
recLoop() - this function calls ffmpeg to begin recording footage from
the defined camera and stores the footage in HLS format. It is designed
to keep running for as long as the application is running, and if it does
stop for whatever reason, it will attempt to restart automatically.
upkeep() - this function does regular cleanup and enforcement of maxDays,
maxLogSize and maxEvents without needing to stop recording or motion
detection.
detectMo() - this function reads directly from recLoop's HLS output and
lists all footage that has motion in it. Motion detection no longer has
to wait for a clip to finish recording thanks to the use of .ts
containers for the video clips. This also makes motion detection far less
CPU intensive since it now operates at the camera's FPS (slower).
eventLoop() - this function reads the motion list from detectMo() and
copies the footage pointed out by the list to an events folder, also in
HLS format.
schLoop() - this function runs an optional user defined external command
every sch_sec seconds. This command temporarily stops motion detection
without actually terminating the thread. It will also skip the command at
the scheduled time if motion was detected.
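Roughly how the five threads get launched from main(); this is only a sketch with placeholder lambdas standing in for the real functions described above:

    #include <thread>

    int main()
    {
        // placeholders for the five task functions described above
        auto recLoop   = []{ /* record HLS via ffmpeg, restart if it stops */ };
        auto upkeep    = []{ /* enforce maxDays, maxLogSize, maxEvents */ };
        auto detectMo  = []{ /* scan recLoop's HLS output for motion */ };
        auto eventLoop = []{ /* copy motion footage into the events folder */ };
        auto schLoop   = []{ /* run the optional command every sch_sec seconds */ };

        std::thread t1(recLoop), t2(upkeep), t3(detectMo), t4(eventLoop), t5(schLoop);

        t1.join(); t2.join(); t3.join(); t4.join(); t5.join();
        return 0;
    }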
Benefits of this reform:
- far less CPU intensive operation
- multi-threaded architecture for better asynchronous operation
- support for live streaming now that HLS is being used
- a buff_dir is no longer necessary
The fork() architecture from the previous commit was also deemed a
failure. Reverted to the v1.5.t19 code. I'll start from scratch, using
this commit as the new base.
Going back to basics. Removed all threading code and opted for a multi
process architecture using fork(). The previous code had a bad memory leak,
didn't handle unexpected camera disconnects, and for some reason also
didn't recover gracefully under systemd when it crashed. Hopefully
this new rewrite fixes all of those issues.
moDetect() will now try multiple times to grab buffer footage before
giving up and moving on.
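Something along these lines (a sketch only; the retry count, the delay and the helper name are assumptions):

    #include <chrono>
    #include <opencv2/opencv.hpp>
    #include <string>
    #include <thread>

    // try a few times to open the buffer footage before giving up on this pass
    bool openBuffer(const std::string &path, cv::VideoCapture &cap)
    {
        for (int attempt = 0; attempt < 3; ++attempt)
        {
            if (cap.open(path)) return true;

            std::this_thread::sleep_for(std::chrono::seconds(1));
        }

        return false; // give up and move on
    }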
Added the ability to read multiple config files, so it's now possible to
load a single global config and then load a camera specific config from
another file.
Many elements in the web interface were coming out too small. Added a meta
viewport tag with device width in hopes that the web interface will adjust
itself to the device it is being displayed on.
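For reference, that is the standard tag (whether initial-scale was set is an assumption):

    <meta name="viewport" content="width=device-width, initial-scale=1">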
Changed duration to num_of_clips and added clip_len, so the number of
seconds in each clip and the number of clips to be processed for motion
are now adjustable.
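A hypothetical example, assuming a simple key = value config format (the values are made up):

    clip_len = 10      # seconds of footage per clip
    num_of_clips = 6   # clips processed per motion detection pass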
Adjusted several default values.
The app is still failing with intermittent moov atom errors when trying
to open the video clips with opencv VideoCapture. I suspect it is trying
to open the files before ffmpeg has finished finalizing them. Reworked the
detection loop to spawn a dedicated motion detection thread for each
video clip, and only after ffmpeg has confirmed it is finished.
Added another debug clause for opencv videoio so it will provide even
more debug information. Going back to explicitly defining FFMPEG as the
videoio backend for opencv; it turns out FFMPEG is the only really stable
option when it comes to reading video files with opencv. Any other option
would just severely limit codec and container support.
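In practice that means pinning the FFMPEG backend when opening clips; a sketch (the extra videoio debug output comes from opencv's logging environment variables, depending on the build):

    #include <opencv2/opencv.hpp>
    #include <string>

    // run with OPENCV_VIDEOIO_DEBUG=1 (or OPENCV_LOG_LEVEL=DEBUG) in the environment
    // to get the extra videoio debug output
    cv::VideoCapture openClip(const std::string &path)
    {
        return cv::VideoCapture(path, cv::CAP_FFMPEG); // force the FFMPEG backend
    }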
Found the infinite loop issue in moDetect(); it turns out the frames at
some point were never returning empty, hence moDetect() would continue
in perpetuity. Changed the loop structure to use a fixed frame count
instead of relying on frameFF() to return an empty frame on EOF.
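Roughly the new loop shape, sketched directly against VideoCapture (the real code goes through frameFF()):

    #include <opencv2/opencv.hpp>
    #include <string>

    void scanClip(const std::string &clipPath)
    {
        cv::VideoCapture cap(clipPath, cv::CAP_FFMPEG);
        int frameCount = static_cast<int>(cap.get(cv::CAP_PROP_FRAME_COUNT));
        cv::Mat frame;

        // iterate a fixed number of frames instead of waiting for an empty frame on EOF
        for (int i = 0; i < frameCount && cap.read(frame); ++i)
        {
            // ...compare this frame against the previous one...
        }
    }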
Something in the code is causing an infinite loop; the root cause is still
undetermined. Added more logging statements to help me find the misbehavior.
imgDiff() will now handle empty frames passed in as parameters more
gracefully.
enforceMaxClips() will no longer assume all video clips are accompanied
by html and jpg files but will now instead "delete if exists."
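The "delete if exists" behaviour maps nicely onto std::filesystem::remove(), which just returns false when the path isn't there; a sketch (the file extensions are assumptions):

    #include <filesystem>
    #include <string>

    // remove a clip and its optional companion files without assuming they all exist
    void deleteClipFiles(const std::string &base)
    {
        namespace fs = std::filesystem;

        fs::remove(base + ".mp4");  // no error if the file does not exist,
        fs::remove(base + ".html"); // remove() simply returns false
        fs::remove(base + ".jpg");
    }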
Moved logging out of its own loop; hopefully this fixes the issue with
it not outputting all log lines. recLoop() and detectLoop() will now
update the logs synchronously.
The setup.sh script will now include gstreamer and pkg-config. This
should help fix opencv videoio format support.
Can't get opencv to work with FFMPEG to open the buff file on the test
machine. I've given up on trying to figure out why. Testing out video
capture without explicitly specifying FFMPEG to see how that works out.
Fixed the default webroot directory to apache's correct webroot. Also
separated outDir from webRoot and made webRoot changeable in the
config file.
Added logging to the recorder and detection loops to help with debugging
and troubleshooting. Just like with the video clips, a max log lines limit
was added to control the amount of data being saved to storage.
Decided to switch to opencv's builtin pixel diff motion detection via
absdiff and threshold. Doing this should increase efficiency compared to
using the home brewed pixel loops and threads.
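The core of that approach looks something like this (a sketch of the standard absdiff + threshold pattern, not the exact code; the threshold value is made up):

    #include <opencv2/opencv.hpp>

    // return how many pixels changed between two consecutive frames
    int pixelDiffScore(const cv::Mat &prev, const cv::Mat &curr)
    {
        cv::Mat grayPrev, grayCurr, diff, mask;

        cv::cvtColor(prev, grayPrev, cv::COLOR_BGR2GRAY);
        cv::cvtColor(curr, grayCurr, cv::COLOR_BGR2GRAY);

        cv::absdiff(grayPrev, grayCurr, diff);                  // per-pixel difference
        cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);  // keep only strong changes

        return cv::countNonZero(mask);
    }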
Added a web interface of sorts by outputting html files along with
the video clips. These files are designed to link together with the
assumption that the output directory is a web root like /var/www/html
that apache2 uses. The interface is crude at best but at least allows
playback of recorded footage.
Added a max_clips config variable that can limit the number of motion
events that can be recorded to storage on a single day.
completely removed object detection code because I don't foresee going
back to that model anytime soon. diffs now decrement instead of resetting
to 0, and the consecutive pixel diff requirement is now adjustable via
consec_threshold.
updated README.md for the changes to pixel diff detection.
optical flow calculations use up a lot of processing power even at the
block level so I decided to take it back out. once again, no object
detection is going to be used and it will fall back to pixel diffs only.
also modified pixel diffs to decrement when no diff is detected; going to
test how this works out.
AI object detection via yolov5 didn't work out too well; in fact it was
crashing the detection threads for whatever reason. I could deep dive into
why it was crashing, but I think the better solution is to bring back
optical flow detection at the block level. the advantage of this over
object detection is the fact that a block doesn't need to have a whole
object in it.
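A minimal sketch of block level optical flow using opencv's Farneback implementation (the block handling, parameters and scoring are assumptions):

    #include <opencv2/opencv.hpp>

    // average flow magnitude inside one block of the frame (inputs are grayscale)
    double blockFlow(const cv::Mat &grayPrev, const cv::Mat &grayCurr, const cv::Rect &block)
    {
        cv::Mat flow;

        cv::calcOpticalFlowFarneback(grayPrev(block), grayCurr(block), flow,
                                     0.5, 3, 15, 3, 5, 1.2, 0);

        cv::Mat parts[2], magnitude, angle;
        cv::split(flow, parts);
        cv::cartToPolar(parts[0], parts[1], magnitude, angle);

        return cv::mean(magnitude)[0]; // higher value = more movement in this block
    }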
potentially fixed what was apparently a long standing bug that caused
motion detection to look at just the first block. this bug was found
thanks to the stats output.
re-formed the stats output and moved it out of the motion detect
function.
block pixel diff counts will now no longer stop at the threshold in each
block. the entire block is now counted and the results are output in the
stats. the code also now picks the block with the highest pixDiff instead
of stopping at the first block with a high pixDiff.
added object detection code based on the yolov5 machine vision model. also
added a stats file so motion and object detection values can be monitored
in real time if used with the 'watch' command.
Broke the code down into multiple files instead of having it all in
main.cpp.
Also detached recording from detection by having them run in
separate threads instead of having motion detection inline with
recording. this will hopefully result in fewer missed motion
events due to processing overhead.
The recording loop now takes advantage of FFMPEG's "-f segment" option
instead of generating the clips in separate FFMPEG calls.
again, all in the hope of reducing missed motion events.
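That turns recording into one long-running call along these lines (the camera URL, clip length, output path and exact flag set are placeholders/assumptions):

    ffmpeg -i rtsp://camera/stream \
           -c copy -f segment -segment_time 10 -reset_timestamps 1 \
           /var/www/html/clips/clip_%03d.mp4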
This application has a tendency to detect the motion of small insects.
to prevent this it was determined that there will need to be some means
of identifying objects via machine vision. there is an object detection
function but it doesn't currently do anything at this time. this is
something that I will be working on in the near future.
created a test branch in the repository. all early, testing code will
now go in this branch. only fully tested, stable code will be committed
to master going forward.