-didn't properly remove use of evtHist from the last commit. it is
truly removed now.
-added delay cycles to detectLoop if motion was detected to
prevent some event overlap.
-removed use of evtHist. will instead allow eventLoop to queue up
duplicate live video clips and then remove them later using
QStringList::removeDuplicates().
-changed up the ffmpeg commands to utilize tcp and re-added a tcp
timeout argument, removing the need for command stall checking.
-added logic to pick the snapshot with the highest diff score as
the event thumbnail.
-added a termination slot to RecLoop that will kill the long term
ffmpeg commands and connected it to the 'aboutToQuit' signal. this is
expected to kill the ffmpeg commands properly when quitting the
main process.
-max_event_secs was not being honored correctly. EventLoop was not
calculating the number of hls clips to grab from live correctly.
changed it to properly calculate the file count based on hls segment
size.
-updated the documentation as the current version nears stable
release.
-turns out the statement in the previous commit is
incorrect. it is entirely possible to overlap events. to
mitigate this, evtHist was added to shared_t to track
recently copied source vids and remove them from the
events queued to be written.
-removed the delay after motion was detected. it was not having
the desired effect and after more thought event overlap would
be impossible anyway.
-the test cameras are still picking up motion during the post
command. adjusted the after command delay to see if that
helps.
-reduced the DetectLoop heartbeat from 3 to 2 to better match
the record loop's cadence.
-the test cameras are picking up motion as the post command is running.
added a delay increment to DetectLoop in hopes of fixing this.
-removed the upkeep log since it doesn't really provide any useful
information.
-adjusted the default motion score again.
-reduced the amount of image files DetectLoop needs from 3 to 2.
-added a delay to DetectLoop after a positive motion detection to
prevent motion event overlap.
-moved the 2 image diff pair comparison to the proper "end of array" in
DetectLoop.
-increased the size of the image stream so queued up events will
be able to generate thumbnails properly.
-cleaned off a bunch of unused parameters in the code.
-adjusted the default motion sensitivity after real world
testing.
-added libfuse-dev to setup.sh since imagemagick needs that to
operate.
The Qt approach to grabbing frames from the live stream was also
a failure.
- decided to switch to a combination of ffmpeg and imagemagick as
external commands to do motion detection. this approach
eliminates the need for opencv altogether so it was removed
from the project. system resource usage appears to be decent
and perhaps better than opencv.
I'm going to test a move away from opencv's videoio module.
Videoio simply refuses to open any video file even with
FFMPEG built in. I tested old v2.2 code and even that failed
on a fresh install of ubuntu server so this tells me an
update on opencv's side broke something.
This issue is not new and frankly I'm tired of chasing it.
I'm giving QT's QMediaPlayer a try to see how it works out.
Will still need opencv for the absdiff and threshold
functions, otherwise I would have dropped the API
altogether.
Now that the app has QT::Multimedia, QT6 is now the minimum
version it will support. CMakeList.txt and the setup script
updated accordingly.
Got the app up to "not failing immediately" state.
However, for some reason DetectLoop is failing hard via opencv
being unable to open the stream clips.
I'll continue deep diving this. For now everything else works.
Added the much needed code in the Camera object to actually start all of
the threads.
Added multi instance support via the -d option.
Made the Loop object's loop structure slot-signal compatible so
all objects using it can interrupt the main loop to run other
slots.
Completely rewrote the project to use the QT API. By using Qt,
I've opened up use of useful tools like QCryptographicHash, QString,
QByteArray, QFile, etc.. In the future I could even make use of
slots/signals. The code is also in general much more readable and
thread management is by far much easier.
General operation of the app should be the same, this commit just
serves as a base for the migration over to QT.
the delay on the motion detection loop slowed it down too much, to
the point that it falls too far behind live. I removed the delay
and re-introduced the frame gap so not all frames in the video files
need to be decoded.
post command and event timers are now separate but still tied to
a single thread so they can still be synced.
fixed an issue that caused several thumbnails to not generate.
added more log lines to aid with debugging.
- reduced the hls size to 2 seconds.
- motion clips were not being combined. fixed it by making event
loop track the size of shared_t::recList instead of system time.
- maxScore is now global inside of shared_t instead of locally
inside detectMoInStream().
Fixed the crashing issue by adding tcp timeout args to ffmpeg and
having the app handle empty frames from a disconnected camera
better.
Reformed the directory structure by having live, logs and events in
separate directories.
schLoop() no longer exists, postCmd is now handled by
detectMoInStream() to ensure motion detection is not done while the
command is running.
Event files are still not concatenating. I suspect the issue was
eventLoop() caching old event objects. Changed up the loop so it will
grab the latest event object on each iteration.
Adjusted the event loop and motion detection to make better stand alone
m3u8 files. Hopefully doing this will make browsers treat the recorded
events as VODs instead of streams.
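Per the HLS spec, a playlist is treated as a finished recording once it declares a VOD playlist type and carries an end marker; a minimal hedged example of what a stand alone event playlist could look like (segment names and durations are illustrative):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:2
#EXT-X-PLAYLIST-TYPE:VOD
#EXTINF:2.0,
event000.ts
#EXTINF:2.0,
event001.ts
#EXT-X-ENDLIST
```

The #EXT-X-ENDLIST tag tells players no further segments will be appended, which is what distinguishes a VOD from a live stream.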
Apparently native html5 or modern browsers do not support running .m3u8
playlists directly, or I was missing something in the original code. Even
adding the correct mime types in apache2 didn't work so I decided to
embed hls.js into the video html files to support hls playlists.
Logs are not rotating correctly. Changed up the code to append the logs
in memory and then dump them to permanent storage on every loop of
upkeep(). Hopefully this fixes the issue.
Completely reformed the internal workings of the application code. I
brought back multi-threaded functions so there are now 5 separate threads
for different tasks.
recLoop() - this function calls ffmpeg to begin recording footage from
the defined camera and stores the footage in hls format. It is designed
to keep running for as long as the application is running and if it does
stop for whatever reason, it will attempt to auto re-start.
upkeep() - this function does regular cleanup and enforcement of maxDays,
maxLogSize and maxEvents without the need to stop recording or detecting
motion.
detectMo() - this function reads directly from recLoop's hls output and
lists all footage that has motion in it. motion detection no longer has
to wait for the clip to finish recording thanks to the use of .ts
containers for the video clips. this makes the motion detection far less
cpu intensive now that it operates at the camera's fps (slower).
eventLoop() - this function reads the motion list from detectMo and
copies the footage pointed out by the list to an events folder, also in
hls format.
schLoop() - this function runs an optional user defined external command
every sch_sec seconds. this command temporarily stops
motion detection without actually terminating the thread. It will also
not run the command at the scheduled time if motion was detected.
Benefits to this reform:
- far less cpu intensive operation
- multi-threaded architecture for better asynchronous operation
- it has support for live streaming now that hls is being used
- a buff_dir is no longer necessary
The fork() architecture from the previous commit is also deemed a
failure. Reverted back to v1.5.t19 code. I'll start from scratch, using
this commit as the new base.
going back to basics. removed all threading code and opted for a multi
process architecture using fork(). previous code had a bad memory leak,
didn't handle unexpected camera disconnects, and for some reason it
also didn't recover gracefully in systemctl when it crashed. Hopefully
this new re-write fixes all of those numerous issues.
moDetect() will now try multiple times to grab buffer footage before
giving up and moving on.
the crashing issue might be the detection threads going out-of-scope
before properly finishing. re-implemented share->detThreads from
previous stable code to see if this fixes the issue.
The crashing problems may have started after switching my test machine
to a multiple config file setup. I'll test this theory by completely
removing the multiple config file feature and see if it crashes again.
I'll figure out a better solution for multi config files in the next
round of development.
Added a signal handler that will print out signal details upon receiving
them. This should give us some hint to the cause of crashes for
debugging reasons.
The root index web page will now only be updated once. Hopefully this
reduces the chance of multiple instances clashing with each other.
Updated the documentation.
The test machine had a mystery crash that needs to be investigated. In
the meantime, the timeout run code has been refactored and will not run
thread cancel unless it is absolutely needed at the individual thread
level (hopefully that fixes the crash issue).
post_cmd shall also now run via timeout. With that, no external commands
should cause this application to stall. Timeout protection should
prevent that.
Added string trimming to the vid_container parameter to filter out bad
user input.
Added detection_stream url to the config file and made it so the
application can now use a smaller/lower bit rate stream for motion
detection separate from the recording stream. This can significantly
lower CPU usage.
Moved away from using system() and the explicit timeout command. Instead
opted to use popen() and cancelable pthreads. Doing this pulls back
more control over ffmpeg than before and the app will now properly
respond to term signals and even the CTRL-C keyboard interrupt.
The app was still cutting out the last command line arg of my test setup.
Later found out it was the run script limiting the command line arg
count to 3. I extended it out to 8 but I'll need to find a better option
to make it limitless.
The app is not currently parsing multiple config files properly. Changed
up the parser function to work without complicated check-ahead logic.
Will test if this works.
Added the ability to read multiple config files so it's now possible to
load a singular global config and then load a camera specific config in
another.
Many elements in the web interface are coming out too small. Added a meta
viewport device-width tag in hopes that the web interface will self
adjust to the device it is being displayed on.
Changed duration to num_of_clips and added clip_len so the amount of
seconds in each clip and the amount of clips to be processed for motion
are now adjustable.
Adjusted several default values.
Fixed an invalid argument in the ffmpeg command from vcodec to -vcodec.
Also added a 10sec delay to simulate a running ffmpeg if it fails for
whatever reason.
The app is still failing with intermittent moov atom failures when trying
to open the video clips with opencv VideoCapture. I suspect it is trying
to open the files while ffmpeg is not done finalizing them. Reformed the
detection loop to spawn dedicated motion detection threads for each
video clip, and only once ffmpeg has confirmed it finished.
Thanks to the debug messages, I found a potential issue with the video
clips being pulled from the in house test cameras. Sometimes the video
clips are pulled with incomplete meta information, causing opencv
to fail to open the clips. Added "-movflags faststart" to the ffmpeg
command which should hopefully fix this and help the app handle
unreliable camera streams more gracefully.
max_clips now defaults to 90 instead of 30.