Brainy Pi


Today’s blog post will explore the robust capabilities of OpenVINO (Open Visual Inference and Neural Network Optimization) in image segmentation. We’ll showcase a demo that leverages OpenVINO on Brainy Pi, combining AI and computer vision to segment video frames. This blog targets developers and entrepreneurs interested in building computer vision products with OpenVINO. Let’s dive into implementing image segmentation on Brainy Pi!

Installing OpenVINO

Before running the demo, we need to install OpenVINO and its dependencies on Brainy Pi. Open a terminal and run the following command:
sudo apt install openvino-toolkit libopencv-dev
This command will install OpenVINO and the necessary OpenCV development files on your system, providing the foundation for our image segmentation demo.

Compiling the Demos

Once OpenVINO is successfully installed, we can proceed to compile the demos. These demos serve as an excellent starting point for understanding and exploring the capabilities of OpenVINO. Here are the steps to compile the demos:
  1. Set up the OpenVINO environment by sourcing the setupvars.sh script:
    source /opt/openvino/setupvars.sh
  2. Clone the Open Model Zoo repository, which contains the demos, by executing the following command:
    git clone --recurse-submodules https://github.com/openvinotoolkit/open_model_zoo.git
    cd open_model_zoo/demos/
  3. Build the demos by running the build_demos.sh script:
    ./build_demos.sh
These steps compile the demo applications, making them ready to run and letting us move on to image segmentation with OpenVINO.

Running the Image Segmentation on Brainy Pi

With the demos successfully compiled, we can now proceed to download the required models and run the image segmentation demo using OpenVINO. Let’s follow the steps below:
  1. Download the models required for the demo by running the following command:
    omz_downloader --list ~/open_model_zoo/demos/segmentation_demo/cpp/models.lst -o ~/models/ --precision FP16
    This command downloads the necessary models from the Open Model Zoo repository and saves them in the ~/models/ directory on your system.
  2. Download a test video to feed into the demo. We will use the following command to download a sample video:
    cd ~/
    wget https://raw.githubusercontent.com/intel-iot-devkit/sample-videos/master/head-pose-face-detection-male.mp4
  3. Once the models and the test video are downloaded, we can run the image segmentation demo using the following command:
    ~/omz_demos_build/aarch64/Release/segmentation_demo -i ~/head-pose-face-detection-male.mp4 -m ~/models/intel/semantic-segmentation-adas-0001/FP16/semantic-segmentation-adas-0001.xml
    This command executes the image segmentation demo, processing the frames of the test video and producing segmented output based on the downloaded model.
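The demo handles frame decoding and preprocessing internally; conceptually, each frame is resized to the model's input resolution and reordered from HWC (height, width, channels) to NCHW before inference. The sketch below illustrates that step in plain NumPy; the 1024×2048 input size for semantic-segmentation-adas-0001 and the nearest-neighbour resize are assumptions for illustration (the actual demo uses OpenCV for this):

```python
import numpy as np

def preprocess(frame, out_h, out_w):
    """Resize an H x W x 3 frame (nearest neighbour) and reorder to 1 x 3 x H x W."""
    ys = np.linspace(0, frame.shape[0] - 1, out_h).round().astype(int)
    xs = np.linspace(0, frame.shape[1] - 1, out_w).round().astype(int)
    resized = frame[ys][:, xs]                 # nearest-neighbour resize
    chw = resized.transpose(2, 0, 1)           # HWC -> CHW
    return chw[np.newaxis].astype(np.float32)  # add batch dimension

# Toy example: a blank 480p frame resized to the assumed model input shape.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
blob = preprocess(frame, 1024, 2048)
print(blob.shape)  # (1, 3, 1024, 2048)
```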
By following these steps, you can see OpenVINO in action as it segments each frame of the video, providing a foundation for building more advanced computer vision applications.
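Under the hood, the segmented output is produced by collapsing the network's per-class scores into a single class index per pixel, then mapping each index to a display colour. Here is a minimal NumPy sketch of that post-processing; the logits layout and the toy palette are assumptions for illustration (some models, including versions of semantic-segmentation-adas-0001, may emit an already-argmaxed index map instead of raw scores):

```python
import numpy as np

def logits_to_mask(logits):
    """Collapse per-class scores (1, C, H, W) into a per-pixel class mask (H, W)."""
    return np.argmax(logits[0], axis=0).astype(np.uint8)

def colorize(mask, palette):
    """Map each class index to an RGB colour for display (palette: C x 3 uint8)."""
    return palette[mask]

# Toy example: 2 classes on a 2 x 3 image; the top row scores higher for class 1.
logits = np.zeros((1, 2, 2, 3), dtype=np.float32)
logits[0, 1, 0, :] = 5.0
mask = logits_to_mask(logits)
palette = np.array([[0, 0, 0], [0, 255, 0]], dtype=np.uint8)
overlay = colorize(mask, palette)
print(mask.tolist())  # [[1, 1, 1], [0, 0, 0]]
```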

Conclusion

In this blog post, we explored the Image Segmentation Demo using OpenVINO on Brainy Pi. We covered the installation process, compilation of the demos, and the steps to run the image segmentation demo with OpenVINO. By leveraging OpenVINO's capabilities, developers and entrepreneurs can apply AI and computer vision to their own projects and build advanced computer vision products.
If you’re interested in diving deeper into OpenVINO, be sure to check out the official OpenVINO documentation for more detailed information and resources.
Reference: OpenVINO Segmentation Demo Documentation