The Qt approach to grabbing frames from the live stream was also
a failure.
- Decided to switch to a combination of ffmpeg and ImageMagick as
external commands to do motion detection. This approach
eliminates the need for OpenCV altogether, so it was removed
from the project. System resource usage appears to be decent
and perhaps better than with OpenCV.
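
Roughly the shape of the external-command approach being tested (the
stream URL, file names and fuzz value below are placeholders, not the
app's real settings):

    #include <cstdlib>
    #include <string>

    // Sketch only: grab one frame with ffmpeg, then let ImageMagick's
    // "compare" count how many pixels changed since the previous grab.
    int main()
    {
        // Pull a single frame from the stream, overwriting the last snapshot.
        std::system("ffmpeg -y -i rtsp://camera.example/stream "
                    "-frames:v 1 current.jpg");

        // "compare -metric AE" writes the count of differing pixels to
        // stderr; "-fuzz 5%" ignores small per-pixel noise. Exit code 1
        // means the images differ, which is a cheap motion signal.
        int rc = std::system("compare -metric AE -fuzz 5% "
                             "previous.jpg current.jpg null: 2> diffcount.txt");
        return rc;
    }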
I'm going to test a move away from OpenCV's videoio module.
Videoio simply refuses to open any video file, even with FFMPEG
built in. I tested old v2.2 code and even that failed on a fresh
install of Ubuntu Server, which tells me an update on OpenCV's
side broke something.
This issue is not new and frankly I'm tired of chasing it. I'm
giving Qt's QMediaPlayer a try to see how it works out. I will
still need OpenCV for the absdiff and threshold functions;
otherwise I would have dropped the API altogether.
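
The kind of QMediaPlayer/QVideoSink frame grab to be tried looks
roughly like this (a sketch only; the URL is a placeholder and the
OpenCV hand-off is only indicated):

    #include <QGuiApplication>
    #include <QImage>
    #include <QMediaPlayer>
    #include <QUrl>
    #include <QVideoFrame>
    #include <QVideoSink>
    #include <opencv2/core.hpp>

    int main(int argc, char *argv[])
    {
        QGuiApplication app(argc, argv);

        QMediaPlayer player;
        QVideoSink sink;
        player.setVideoSink(&sink);

        // Every decoded frame arrives here; convert it to grayscale and
        // wrap it in a cv::Mat so absdiff/threshold can run on it later.
        QObject::connect(&sink, &QVideoSink::videoFrameChanged,
                         [](const QVideoFrame &frame) {
            QImage img =
                frame.toImage().convertToFormat(QImage::Format_Grayscale8);
            if (img.isNull())
                return;
            cv::Mat gray(img.height(), img.width(), CV_8UC1,
                         img.bits(),
                         static_cast<size_t>(img.bytesPerLine()));
            // ... cv::absdiff / cv::threshold against the previous frame
        });

        player.setSource(QUrl("rtsp://camera.example/stream")); // placeholder
        player.play();
        return app.exec();
    }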
Now that the app uses Qt::Multimedia, Qt 6 is the minimum version
it will support. CMakeLists.txt and the setup script have been
updated accordingly.
Completely rewrote the project to use the Qt API. Using Qt opens
up useful tools like QCryptographicHash, QString, QByteArray,
QFile, etc. In the future I could even make use of signals/slots.
The code is in general much more readable, and thread management
is far easier.
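
As an example of the kind of thing Qt now makes trivial, hashing a
recorded clip could look like this (illustrative only, not code from
this commit):

    #include <QByteArray>
    #include <QCryptographicHash>
    #include <QFile>
    #include <QString>

    // Sketch: fingerprint a clip file so duplicates can be spotted cheaply.
    QByteArray clipHash(const QString &path)
    {
        QFile file(path);
        if (!file.open(QIODevice::ReadOnly))
            return {};
        QCryptographicHash hash(QCryptographicHash::Sha256);
        hash.addData(&file);   // streams the whole file through the hash
        return hash.result().toHex();
    }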
General operation of the app should be the same; this commit just
serves as a base for the migration over to Qt.
Fixed the default webroot directory to point at Apache's actual
webroot. Also separated outDir from webRoot and made webRoot
configurable in the config file.
Added logging to the recorder and detection loops to help with
debugging and troubleshooting. Just like with the video clips, a
max log lines setting was added to control the size of the data
being saved to storage.
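
The max-lines idea is essentially a trim pass over the log file, along
these lines (a sketch; the function name and limit are made up for
illustration):

    #include <deque>
    #include <fstream>
    #include <string>

    // Sketch: keep only the newest maxLines lines of a log file.
    void trimLog(const std::string &path, size_t maxLines)
    {
        std::deque<std::string> lines;
        {
            std::ifstream in(path);
            std::string line;
            while (std::getline(in, line)) {
                lines.push_back(line);
                if (lines.size() > maxLines)
                    lines.pop_front();   // drop the oldest line
            }
        }
        std::ofstream out(path, std::ios::trunc);
        for (const auto &l : lines)
            out << l << '\n';
    }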
Decided to switch to OpenCV's built-in pixel diff motion
detection via absdiff and threshold. This should be more
efficient than the home-brewed pixel loops and threads.
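
The absdiff/threshold check boils down to something like this (the
threshold and pixel-count values here are placeholders, not the app's
real settings):

    #include <opencv2/imgproc.hpp>

    // Sketch: decide whether two consecutive grayscale frames contain motion.
    bool motionDetected(const cv::Mat &prev, const cv::Mat &curr)
    {
        cv::Mat diff, mask;
        cv::absdiff(prev, curr, diff);                          // per-pixel difference
        cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);  // keep large changes
        return cv::countNonZero(mask) > 1000;                   // enough changed pixels?
    }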
Added a web interface of sorts by outputting HTML files along
with the video clips. These files are designed to link together
on the assumption that the output directory is a web root like
the /var/www/html that apache2 uses. The interface is crude at
best but at least allows playback of recorded footage.
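
Generating those pages amounts to writing out links next to the clips,
roughly like this (a sketch; paths and markup are simplified, not the
actual output):

    #include <fstream>
    #include <string>
    #include <vector>

    // Sketch: write a bare-bones index page that links to the day's clips.
    void writeIndex(const std::string &webRoot,
                    const std::vector<std::string> &clipNames)
    {
        std::ofstream html(webRoot + "/index.html", std::ios::trunc);
        html << "<html><body><h1>Motion clips</h1>\n";
        for (const auto &clip : clipNames)
            html << "<p><a href=\"" << clip << "\">" << clip << "</a></p>\n";
        html << "</body></html>\n";
    }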
Added a max_clips config variable that can limit the number of
motion events that can be recorded to storage on a single day.
Completely removed the object detection code because I don't
foresee going back to that model anytime soon. Diffs no longer
reset to 0; instead they decrement, and the consecutive pixel
diff count is now adjustable via consec_threshold.
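
A sketch of the decrement-instead-of-reset behaviour, with made-up
names standing in for the real variables:

    // Sketch only: the constants and names are illustrative.
    int consecDiffs = 0;                   // frames in a row with motion

    void updateMotionState(bool frameHasDiff, int consecThreshold)
    {
        if (frameHasDiff)
            ++consecDiffs;                 // another differing frame in a row
        else if (consecDiffs > 0)
            --consecDiffs;                 // decay instead of resetting to 0

        if (consecDiffs >= consecThreshold) {
            // enough consecutive differing frames: treat this as real motion
        }
    }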
Updated README.md for the changes to pixel diff detection.
Broke the code down into multiple files instead of having it all
in main.cpp.
Also detached recording from detection by having them run in
separate threads instead of keeping motion detection inline with
recording. This will hopefully result in fewer missed motion
events due to processing overhead.
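
The thread split is basically this shape (a sketch only; the real loop
bodies drive ffmpeg and the pixel diff code):

    #include <atomic>
    #include <chrono>
    #include <thread>

    // Sketch: recording and detection each get their own thread so a slow
    // detection pass cannot block the recorder. The loop bodies are stand-ins.
    std::atomic<bool> running{true};

    void recordLoop()
    {
        while (running)   // keep ffmpeg writing segments
            std::this_thread::sleep_for(std::chrono::seconds(1));
    }

    void detectLoop()
    {
        while (running)   // scan finished segments for motion
            std::this_thread::sleep_for(std::chrono::seconds(1));
    }

    int main()
    {
        std::thread recorder(recordLoop);
        std::thread detector(detectLoop);
        recorder.join();
        detector.join();
    }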
The recording loop now takes advantage of FFMPEG's "-f segment"
option instead of generating the clips individually in separate
FFMPEG calls, again in the hope of reducing missed motion events.
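
The segmented recording boils down to a single ffmpeg invocation along
these lines (stream URL, segment length and output pattern are
placeholders):

    #include <cstdlib>
    #include <string>

    // Sketch: one ffmpeg call that writes a rolling series of
    // fixed-length segments instead of one call per clip.
    int main()
    {
        const std::string cmd =
            "ffmpeg -i rtsp://camera.example/stream "
            "-c copy -f segment -segment_time 10 "
            "-reset_timestamps 1 clip_%03d.mp4";
        return std::system(cmd.c_str());
    }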
This application has a tendency to detect the motion of small
insects. To prevent this, there will need to be some means of
identifying objects via machine vision. There is an object
detection function, but it doesn't do anything at this time. This
is something I will be working on in the near future.
Created a test branch in the repository. All early, experimental
code will now go in this branch; only fully tested, stable code
will be committed to master going forward.
Created setup, build and install scripts to make it easier and
more convenient to compile and install the application from
source. There are no plans to distribute pre-compiled binaries
because it's much easier to guarantee the application will
actually work on the target machine when it is compiled there.
Video clips recorded from the camera are no longer appended
together; instead, the clips are kept as-is and then linked
together in a playlist file in the output_dir. This is much more
efficient and makes the code easier to maintain.
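
For illustration, assuming a simple one-path-per-line playlist (the
actual file format may differ), the generator is roughly:

    #include <fstream>
    #include <string>
    #include <vector>

    // Sketch: write a playlist that references the clips in order.
    void writePlaylist(const std::string &outputDir,
                       const std::vector<std::string> &clips)
    {
        std::ofstream list(outputDir + "/playlist.m3u", std::ios::trunc);
        for (const auto &clip : clips)
            list << clip << '\n';   // one clip path per line
    }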
Also discovered that ffmpeg has a tendency to stall mid-execution
every now and then while recording from the RTSP stream. Added a
workaround in the form of calling ffmpeg via the timeout command
instead of directly, so ffmpeg is force-killed if it runs longer
than the expected BUF_SZ.
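
The workaround boils down to wrapping the call in coreutils' timeout,
something like this (the URL, output name and padding values are
placeholders):

    #include <cstdlib>
    #include <string>

    // Sketch: kill a stalled ffmpeg once it runs past the buffer length,
    // with a small grace period before SIGKILL.
    int recordOneClip(int bufSeconds, const std::string &outFile)
    {
        const std::string cmd =
            "timeout -k 2 " + std::to_string(bufSeconds + 2) +
            " ffmpeg -i rtsp://camera.example/stream -t " +
            std::to_string(bufSeconds) + " -c copy " + outFile;
        return std::system(cmd.c_str());
    }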
Increased BUF_SZ to 10 secs.
Added a clause in the recording loop that makes it write out a
second clip if motion was detected.