<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Intel OpenVINO Archives - Brainy Pi</title>
	<atom:link href="https://brainypi.com/category/intel-openvino/feed/" rel="self" type="application/rss+xml" />
	<link>https://brainypi.com/category/intel-openvino/</link>
	<description>Brainy Pi - Enterprise single-board ARM computer (for creating mass-production-ready prototypes)</description>
	<lastBuildDate>Thu, 15 Jun 2023 07:16:47 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.3</generator>

<image>
	<url>https://brainypi.com/wp-content/uploads/2023/03/cropped-Brainypi-Logo-iotiot-stamp9--32x32.png</url>
	<title>Intel OpenVINO Archives - Brainy Pi</title>
	<link>https://brainypi.com/category/intel-openvino/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Pedestrian Tracker on Brainy Pi</title>
		<link>https://brainypi.com/pedestrian-tracker-on-brainy-pi/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=pedestrian-tracker-on-brainy-pi</link>
					<comments>https://brainypi.com/pedestrian-tracker-on-brainy-pi/#respond</comments>
		
		<dc:creator><![CDATA[BrainyPi Team]]></dc:creator>
		<pubDate>Thu, 15 Jun 2023 07:03:48 +0000</pubDate>
				<category><![CDATA[Blogs]]></category>
		<category><![CDATA[Intel OpenVINO]]></category>
		<guid isPermaLink="false">https://brainypi.com/?p=5531</guid>

					<description><![CDATA[<p>In this blog post, we will delve into the Pedestrian Tracking Demo using OpenVINO on Brainy Pi. This impressive integration of AI and computer vision enables the detection of pedestrians in frames and the construction of their movement trajectories, frame-by-frame. This demonstration not only highlights the immense potential of OpenVINO in the development of computer vision products but also serves [&#8230;]</p>
<p>The post <a href="https://brainypi.com/pedestrian-tracker-on-brainy-pi/">Pedestrian Tracker on Brainy Pi</a> appeared first on <a href="https://brainypi.com">Brainy Pi</a>.</p>
]]></description>
										<content:encoded><![CDATA[		<div data-elementor-type="wp-post" data-elementor-id="5531" class="elementor elementor-5531" data-elementor-post-type="post">
						<section class="elementor-section elementor-top-section elementor-element elementor-element-b664ace elementor-section-boxed elementor-section-height-default elementor-section-height-default" data-id="b664ace" data-element_type="section">
						<div class="elementor-container elementor-column-gap-default">
					<div class="elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-19627ff" data-id="19627ff" data-element_type="column">
			<div class="elementor-widget-wrap elementor-element-populated">
						<div class="elementor-element elementor-element-3192591 elementor-widget elementor-widget-text-editor" data-id="3192591" data-element_type="widget" data-widget_type="text-editor.default">
				<div class="elementor-widget-container">
									<div class="note-text md"><h6>In this blog post, we will delve into the Pedestrian Tracking Demo using OpenVINO on <a href="https://brainypi.com/">Brainy Pi</a>. This impressive integration of AI and computer vision enables the detection of pedestrians in frames and the construction of their movement trajectories, frame-by-frame. This demonstration not only highlights the immense potential of OpenVINO in the development of computer vision products but also serves as a valuable resource for developers and entrepreneurs in this field. So, let&#8217;s roll up our sleeves and implement the Pedestrian Tracker on Brainy Pi with the power of OpenVINO!</h6><p dir="auto" data-sourcepos="7:1-7:71"><img decoding="async" src="https://brainypi.com/wp-content/uploads/2023/06/peds-track.gif" /></p><h2 dir="auto" data-sourcepos="9:1-9:39"><strong>Installing OpenVINO and Dependencies</strong></h2><h6 dir="auto" data-sourcepos="11:1-11:187">To get started, we need to install OpenVINO and its dependencies on BrainyPi. 
Open a terminal and run the following command to install OpenVINO and the necessary OpenCV development files:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">sudo apt <span class="nb">install </span>openvino-toolkit libopencv-dev</pre><h6 dir="auto" data-sourcepos="17:1-17:160">By installing OpenVINO, we gain access to a powerful set of tools and libraries for optimizing and deploying deep learning models on various hardware platforms.</h6><h2 dir="auto" data-sourcepos="19:1-19:22">Compiling the Demos</h2><h6 dir="auto" data-sourcepos="21:1-21:220">Once we install OpenVINO, we can proceed to compile the demos. These demos serve as an excellent starting point for understanding and exploring the capabilities of OpenVINO. 
Follow the steps below to compile the demos:</h6><ol dir="auto" data-sourcepos="23:1-47:0"><li data-sourcepos="23:1-30:0"><h6 data-sourcepos="23:4-23:73">Set up the OpenVINO environment by sourcing the <code>setupvars.sh</code> script:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">source /opt/openvino/setupvars.sh</pre><h6 data-sourcepos="29:5-29:104">This step is crucial as it configures the necessary environment variables for working with OpenVINO.</h6></li><li data-sourcepos="31:1-39:0"><h6 data-sourcepos="31:4-31:94">Clone the Open Model Zoo repository, which contains the demos, using the following command:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">git clone <span class="nt">--recurse-submodules</span> https://github.com/openvinotoolkit/open_model_zoo.git
<span id="LC2" class="line" lang="shell"><span class="nb">cd </span>open_model_zoo/demos/</span></pre><h6 data-sourcepos="38:5-38:120">The Open Model Zoo provides a collection of pre-trained models and demo applications that can be used with OpenVINO.</h6></li><li data-sourcepos="40:1-47:0"><h6 data-sourcepos="40:4-40:60">Build the demos by executing the <code>build_demos.sh</code> script:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">./build_demos.sh</pre><h6 data-sourcepos="46:5-46:80">This step compiles the demo applications and makes them ready for execution.</h6></li></ol><h2 dir="auto" data-sourcepos="48:1-48:39"><strong>Running the Pedestrian Tracker on Brainy Pi<br /></strong></h2><h6 dir="auto" data-sourcepos="50:1-50:137">With the demos compiled, we can now download the required models and run the Pedestrian Tracking Demo using OpenVINO. 
Follow these steps:</h6><ol dir="auto" data-sourcepos="52:1-76:0"><li data-sourcepos="52:1-59:0"><h6 data-sourcepos="52:4-52:76">Download the models for the demo by running the following command:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">omz_downloader <span class="nt">--list</span> ~/open_model_zoo/demos/pedestrian_tracker_demo/cpp/models.lst <span class="nt">-o</span> ~/models/ <span class="nt">--precision</span> FP16</pre><h6 data-sourcepos="58:5-58:115">The Open Model Zoo downloader allows us to easily fetch the models specified in the <code>models.lst</code> file.</h6></li><li data-sourcepos="60:1-68:0"><h6 data-sourcepos="60:4-60:27">Download the test video:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">cd ~/
<span id="LC2" class="line" lang="shell">wget https://raw.githubusercontent.com/intel-iot-devkit/sample-videos/master/people-detection.mp4</span></pre><h6 data-sourcepos="67:5-67:102">This command downloads a sample video &#8211; an input for the Pedestrian Tracking Demo.</h6></li><li data-sourcepos="69:1-76:0"><h6 data-sourcepos="69:4-69:123">Once the models and the test video are downloaded, you can run the Pedestrian Tracking Demo using the following command:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">~/omz_demos_build/aarch64/Release/pedestrian_tracker_demo <span class="nt">-i</span> ~/people-detection.mp4 <span class="nt">-m_det</span> ~/models/intel/person-detection-retail-0013/FP16/person-detection-retail-0013.xml <span class="nt">-m_reid</span> ~/models/intel/person-reidentification-retail-0277/FP16/person-reidentification-retail-0277.xml <span class="nt">-at</span> ssd</pre><h6 data-sourcepos="75:5-75:158">This command executes the Pedestrian Tracking Demo, leveraging the person detection and person re-identification models to track pedestrians in the video.</h6></li></ol><h6 dir="auto" data-sourcepos="77:1-77:196">By following these steps, you can quickly set up and run the Pedestrian Tracking Demo, gaining insights into the capabilities of OpenVINO and its potential for developing computer vision products.</h6><h2 dir="auto" data-sourcepos="79:1-79:13"><strong>Conclusion</strong></h2><h6 dir="auto" data-sourcepos="81:1-81:447">The Pedestrian Tracking Demo using OpenVINO on Brainy Pi demonstrates the power of AI and computer vision in detecting and tracking pedestrians. 
By utilizing OpenVINO&#8217;s optimization and deployment capabilities, developers and entrepreneurs can build robust computer vision applications for various domains. We hope this blog post provides you with a useful overview and inspiration for incorporating OpenVINO into your computer vision projects.</h6><h2 dir="auto" data-sourcepos="83:1-83:13"><strong>References</strong></h2><ul dir="auto" data-sourcepos="85:1-86:195"><li data-sourcepos="85:1-85:80"><h6>OpenVINO Documentation: <a href="https://docs.openvino.ai/" target="_blank" rel="nofollow noreferrer noopener">https://docs.openvino.ai/</a></h6></li><li data-sourcepos="86:1-86:195"><h6>Open Model Zoo Pedestrian Tracking Demo: <a href="https://docs.openvino.ai/2022.3/omz_demos_pedestrian_tracker_demo_cpp.html" target="_blank" rel="nofollow noreferrer noopener">https://docs.openvino.ai/2022.3/omz_demos_pedestrian_tracker_demo_cpp.html</a></h6></li></ul></div>								</div>
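									<h6 dir="auto">If you would rather run everything in one go, the commands above can be collected into a single script. Treat this as a sketch rather than a tested installer: it assumes the same default locations used in this post (<code>/opt/openvino</code>, <code>~/models/</code>, and the <code>~/omz_demos_build/aarch64/Release</code> build output), and it adds <code>set -e</code> and the <code>-y</code> flag so the script stops on the first error and runs unattended.</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">#!/bin/bash
set -e  # stop at the first failing step

# 1. Install OpenVINO and the OpenCV development files
sudo apt install -y openvino-toolkit libopencv-dev

# 2. Configure the OpenVINO environment variables
source /opt/openvino/setupvars.sh

# 3. Clone and build the Open Model Zoo demos
cd ~/
git clone --recurse-submodules https://github.com/openvinotoolkit/open_model_zoo.git
cd open_model_zoo/demos/
./build_demos.sh

# 4. Download the FP16 models and the sample video
omz_downloader --list ~/open_model_zoo/demos/pedestrian_tracker_demo/cpp/models.lst -o ~/models/ --precision FP16
cd ~/
wget https://raw.githubusercontent.com/intel-iot-devkit/sample-videos/master/people-detection.mp4

# 5. Run the tracker with the detection and re-identification models
~/omz_demos_build/aarch64/Release/pedestrian_tracker_demo \
    -i ~/people-detection.mp4 \
    -m_det ~/models/intel/person-detection-retail-0013/FP16/person-detection-retail-0013.xml \
    -m_reid ~/models/intel/person-reidentification-retail-0277/FP16/person-reidentification-retail-0277.xml \
    -at ssd</pre>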
				</div>
					</div>
		</div>
					</div>
		</section>
				</div>
		<p>The post <a href="https://brainypi.com/pedestrian-tracker-on-brainy-pi/">Pedestrian Tracker on Brainy Pi</a> appeared first on <a href="https://brainypi.com">Brainy Pi</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://brainypi.com/pedestrian-tracker-on-brainy-pi/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure url="https://raw.githubusercontent.com/intel-iot-devkit/sample-videos/master/people-detection.mp4" length="5482579" type="video/mp4" />

			</item>
		<item>
		<title>Image Segmentation on Brainy Pi</title>
		<link>https://brainypi.com/image-segmentation-on-brainy-pi/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=image-segmentation-on-brainy-pi</link>
					<comments>https://brainypi.com/image-segmentation-on-brainy-pi/#respond</comments>
		
		<dc:creator><![CDATA[BrainyPi Team]]></dc:creator>
		<pubDate>Thu, 15 Jun 2023 06:53:27 +0000</pubDate>
				<category><![CDATA[Blogs]]></category>
		<category><![CDATA[Intel OpenVINO]]></category>
		<guid isPermaLink="false">https://brainypi.com/?p=5503</guid>

					<description><![CDATA[<p>Today&#8217;s blog post will explore the robust capabilities of OpenVINO (Open Visual Inference and Neural Network Optimization) in image segmentation. We&#8217;ll showcase a demo that leverages OpenVINO on Brainy Pi, combining AI and computer vision to segment video frames. This blog targets developers and entrepreneurs interested in building computer vision products with OpenVINO. Let&#8217;s dive into implementing image segmentation on [&#8230;]</p>
<p>The post <a href="https://brainypi.com/image-segmentation-on-brainy-pi/">Image Segmentation on Brainy Pi</a> appeared first on <a href="https://brainypi.com">Brainy Pi</a>.</p>
]]></description>
										<content:encoded><![CDATA[		<div data-elementor-type="wp-post" data-elementor-id="5503" class="elementor elementor-5503" data-elementor-post-type="post">
						<section class="elementor-section elementor-top-section elementor-element elementor-element-f00c51c elementor-section-boxed elementor-section-height-default elementor-section-height-default" data-id="f00c51c" data-element_type="section">
						<div class="elementor-container elementor-column-gap-default">
					<div class="elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-350200a" data-id="350200a" data-element_type="column">
			<div class="elementor-widget-wrap elementor-element-populated">
						<div class="elementor-element elementor-element-c62092f elementor-widget elementor-widget-text-editor" data-id="c62092f" data-element_type="widget" data-widget_type="text-editor.default">
				<div class="elementor-widget-container">
									<h6>Today&#8217;s blog post will explore the robust capabilities of OpenVINO (Open Visual Inference and Neural Network Optimization) in image segmentation. We&#8217;ll showcase a demo that leverages OpenVINO on <a href="https://brainypi.com/">Brainy Pi</a>, combining AI and computer vision to segment video frames. This blog targets developers and entrepreneurs interested in building computer vision products with OpenVINO. Let&#8217;s dive into implementing image segmentation on Brainy Pi!</h6><p dir="auto" data-sourcepos="7:1-7:75"><img decoding="async" src="https://brainypi.com/wp-content/uploads/2023/06/segmentation.gif" /></p><h2 dir="auto" data-sourcepos="10:1-10:22"><strong>Installing OpenVINO</strong></h2><h6 dir="auto" data-sourcepos="12:1-12:160">Before we dive into the demo, we need to install OpenVINO and its dependencies on BrainyPi. 
Let&#8217;s start by opening a terminal and running the following command:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">sudo apt <span class="nb">install </span>openvino-toolkit libopencv-dev</pre><h6 dir="auto" data-sourcepos="18:1-18:151">This command will install OpenVINO and the necessary OpenCV development files on your system, providing the foundation for our image segmentation demo.</h6><h2 dir="auto" data-sourcepos="20:1-20:22"><strong>Compiling the Demos</strong></h2><h6 dir="auto" data-sourcepos="22:1-22:229">Once OpenVINO is successfully installed, we can proceed to compile the demos. These demos serve as an excellent starting point for understanding and exploring the capabilities of OpenVINO. Here are the steps to compile the demos:</h6><ol dir="auto" data-sourcepos="24:1-42:0"><li data-sourcepos="24:1-29:0"><h6 data-sourcepos="24:4-24:73">Set up the OpenVINO environment by sourcing the <code>setupvars.sh</code> script:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">source /opt/openvino/setupvars.sh</pre></li><li data-sourcepos="30:1-36:0"><h6 data-sourcepos="30:4-30:101">Clone the Open Model Zoo repository, which contains the demos, by executing the following command:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px 
solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">git clone <span class="nt">--recurse-submodules</span> https://github.com/openvinotoolkit/open_model_zoo.git
<span id="LC2" class="line" lang="shell"><span class="nb">cd </span>open_model_zoo/demos/</span></pre></li><li data-sourcepos="37:1-42:0"><h6 data-sourcepos="37:4-37:58">Build the demos by running the <code>build_demos.sh</code> script:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">./build_demos.sh</pre></li></ol><h6 dir="auto" data-sourcepos="43:1-43:164">These steps compile the demo applications, making them ready for execution and enabling us to dive into the exciting world of image segmentation with OpenVINO.</h6><h2 dir="auto" data-sourcepos="45:1-45:38"><strong>Running the Image Segmentation on Brainy Pi<br /></strong></h2><h6 dir="auto" data-sourcepos="47:1-47:170">With the demos successfully compiled, we can now proceed to download the required models and run the image segmentation demo using OpenVINO. 
Let&#8217;s follow the steps below:</h6><ol dir="auto" data-sourcepos="49:1-71:0"><li data-sourcepos="49:1-56:0"><h6 data-sourcepos="49:4-49:78">Download the models required for the demo by running the following command:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">omz_downloader <span class="nt">--list</span> ~/open_model_zoo/demos/segmentation_demo/cpp/models.lst <span class="nt">-o</span> ~/models/ <span class="nt">--precision</span> FP16</pre><h6 data-sourcepos="55:4-55:141">This command downloads the necessary models from the Open Model Zoo repository and saves them in the <code>~/models/</code> directory on your system.</h6></li><li data-sourcepos="57:1-63:0"><h6 data-sourcepos="57:4-57:109">Download a test video to feed into the demo. We will use the following command to download a sample video:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">cd ~/
<span id="LC2" class="line" lang="shell">wget https://raw.githubusercontent.com/intel-iot-devkit/sample-videos/master/head-pose-face-detection-male.mp4</span></pre></li><li data-sourcepos="64:1-71:0"><h6 data-sourcepos="64:4-64:121">Once the models and the test video are downloaded, we can run the image segmentation demo using the following command:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">~/omz_demos_build/aarch64/Release/segmentation_demo <span class="nt">-i</span> ~/head-pose-face-detection-male.mp4 <span class="nt">-m</span> ~/models/intel/semantic-segmentation-adas-0001/FP16/semantic-segmentation-adas-0001.xml</pre><h6 data-sourcepos="70:4-70:155">This command executes the image segmentation demo, processing the frames of the test video and producing segmented output based on the downloaded model.</h6></li></ol><h6 dir="auto" data-sourcepos="72:1-72:207">By following these steps, you can witness the power of OpenVINO in action, as it intelligently segments the frames in the video and showcases the potential for building advanced computer vision applications.</h6><h2 dir="auto" data-sourcepos="74:1-74:13"><strong>Conclusion</strong></h2><h6 dir="auto" data-sourcepos="78:2-78:226">In this blog post, we explored the Image Segmentation Demo using OpenVINO on BrainyPi. We covered the installation process, compilation of demos, and the steps to run the image segmentation demo with OpenVINO. 
By leveraging OpenVINO&#8217;s capabilities, developers and entrepreneurs can unlock the potential of AI and computer vision for their own projects and build advanced computer vision products.</h6><h6 dir="auto" data-sourcepos="80:1-80:154">If you&#8217;re interested in diving deeper into OpenVINO, be sure to check out the official OpenVINO documentation for more detailed information and resources.</h6><h6 dir="auto" data-sourcepos="82:1-82:123">Reference: <a href="https://docs.openvino.ai/2022.3/omz_demos_segmentation_demo_cpp.html" target="_blank" rel="nofollow noreferrer noopener">OpenVINO Segmentation Demo Documentation</a></h6>								</div>
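									<h6 dir="auto">For convenience, the whole walkthrough above can also be gathered into one script. This is only a sketch under the same assumptions as the post (default <code>/opt/openvino</code> install, models under <code>~/models/</code>, demos built into <code>~/omz_demos_build/aarch64/Release</code>); <code>set -e</code> and <code>-y</code> are added here so it can run unattended and stop on errors.</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">#!/bin/bash
set -e  # stop at the first failing step

# 1. Install OpenVINO and the OpenCV development files
sudo apt install -y openvino-toolkit libopencv-dev

# 2. Configure the OpenVINO environment variables
source /opt/openvino/setupvars.sh

# 3. Clone and build the Open Model Zoo demos
cd ~/
git clone --recurse-submodules https://github.com/openvinotoolkit/open_model_zoo.git
cd open_model_zoo/demos/
./build_demos.sh

# 4. Download the FP16 segmentation model and the sample video
omz_downloader --list ~/open_model_zoo/demos/segmentation_demo/cpp/models.lst -o ~/models/ --precision FP16
cd ~/
wget https://raw.githubusercontent.com/intel-iot-devkit/sample-videos/master/head-pose-face-detection-male.mp4

# 5. Run the segmentation demo on the test video
~/omz_demos_build/aarch64/Release/segmentation_demo \
    -i ~/head-pose-face-detection-male.mp4 \
    -m ~/models/intel/semantic-segmentation-adas-0001/FP16/semantic-segmentation-adas-0001.xml</pre>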
				</div>
					</div>
		</div>
					</div>
		</section>
				</div>
		<p>The post <a href="https://brainypi.com/image-segmentation-on-brainy-pi/">Image Segmentation on Brainy Pi</a> appeared first on <a href="https://brainypi.com">Brainy Pi</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://brainypi.com/image-segmentation-on-brainy-pi/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure url="https://raw.githubusercontent.com/intel-iot-devkit/sample-videos/master/head-pose-face-detection-male.mp4" length="15522596" type="video/mp4" />

			</item>
		<item>
		<title>Human Pose Detection on Brainy Pi</title>
		<link>https://brainypi.com/human-pose-detection-on-brainy-pi/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=human-pose-detection-on-brainy-pi</link>
					<comments>https://brainypi.com/human-pose-detection-on-brainy-pi/#respond</comments>
		
		<dc:creator><![CDATA[BrainyPi Team]]></dc:creator>
		<pubDate>Thu, 15 Jun 2023 06:33:59 +0000</pubDate>
				<category><![CDATA[Blogs]]></category>
		<category><![CDATA[Intel OpenVINO]]></category>
		<guid isPermaLink="false">https://brainypi.com/?p=5480</guid>

					<description><![CDATA[<p>Are you a developer or entrepreneur interested in harnessing the power of AI and computer vision to build innovative products? Well, look no further! In this blog post, we will not only walk you through the process of using OpenVINO on Brainy Pi to predict human poses using AI and computer vision, but also provide valuable insights and tips for [&#8230;]</p>
<p>The post <a href="https://brainypi.com/human-pose-detection-on-brainy-pi/">Human Pose Detection on Brainy Pi</a> appeared first on <a href="https://brainypi.com">Brainy Pi</a>.</p>
]]></description>
										<content:encoded><![CDATA[		<div data-elementor-type="wp-post" data-elementor-id="5480" class="elementor elementor-5480" data-elementor-post-type="post">
						<section class="elementor-section elementor-top-section elementor-element elementor-element-aa07e87 elementor-section-boxed elementor-section-height-default elementor-section-height-default" data-id="aa07e87" data-element_type="section">
						<div class="elementor-container elementor-column-gap-default">
					<div class="elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-76289f9" data-id="76289f9" data-element_type="column">
			<div class="elementor-widget-wrap elementor-element-populated">
						<div class="elementor-element elementor-element-2090b61 elementor-widget elementor-widget-text-editor" data-id="2090b61" data-element_type="widget" data-widget_type="text-editor.default">
				<div class="elementor-widget-container">
									<h6>Are you a developer or entrepreneur interested in harnessing the power of AI and computer vision to build innovative products? Well, look no further! In this blog post, we will not only walk you through the process of using OpenVINO on <a href="https://brainypi.com">Brainy Pi</a> to predict human poses using AI and computer vision, but also provide valuable insights and tips for your own computer vision projects. This demo not only showcases the remarkable capabilities of OpenVINO but also offers a solid starting point for your exciting journey into the world of computer vision. So, let&#8217;s dive right in and implement Human Pose Detection on Brainy Pi together!</h6><p dir="auto" data-sourcepos="5:1-5:59"><img decoding="async" src="https://brainypi.com/wp-content/uploads/2023/06/pose.gif" /></p><h2 dir="auto" data-sourcepos="7:1-7:22"><strong>Installing OpenVINO</strong></h2><h6 dir="auto" data-sourcepos="9:1-9:125">Before we dive into the demo, let&#8217;s start by installing OpenVINO and its dependencies on BrainyPi. 
Follow these simple steps:</h6><ol dir="auto" data-sourcepos="11:1-18:0"><li data-sourcepos="11:1-18:0"><h6 data-sourcepos="11:4-11:138">Open a terminal on your BrainyPi device and enter the following command to install OpenVINO and the necessary OpenCV development files:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">sudo apt <span class="nb">install </span>openvino-toolkit libopencv-dev</pre><h6 data-sourcepos="17:4-17:72">This command will ensure that OpenVINO is installed and ready to use.</h6></li></ol><h2 dir="auto" data-sourcepos="19:1-19:18"><strong>Compiling Demos</strong></h2><h6 dir="auto" data-sourcepos="21:1-21:208">Now that OpenVINO is installed, we can proceed to compile the demos. These demos serve as valuable resources for understanding and exploring the capabilities of OpenVINO. 
Here&#8217;s how you can compile the demos:</h6><ol dir="auto" data-sourcepos="23:1-43:0"><li data-sourcepos="23:1-28:0"><h6 data-sourcepos="23:4-23:73">Set up the OpenVINO environment by sourcing the <code>setupvars.sh</code> script:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">source /opt/openvino/setupvars.sh</pre></li><li data-sourcepos="29:1-35:0"><h6 data-sourcepos="29:4-29:94">Clone the Open Model Zoo repository, which contains the demos, using the following command:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">git clone <span class="nt">--recurse-submodules</span> https://github.com/openvinotoolkit/open_model_zoo.git
<span id="LC2" class="line" lang="shell"><span class="nb">cd </span>open_model_zoo/demos/</span></pre></li><li data-sourcepos="36:1-43:0"><h6 data-sourcepos="36:4-36:60">Build the demos by executing the <code>build_demos.sh</code> script:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">./build_demos.sh</pre><h6 data-sourcepos="42:4-42:77">This will compile the demo applications and make them ready for execution.</h6></li></ol><h2 dir="auto" data-sourcepos="44:1-44:20"><strong>Running the Demos</strong></h2><h6 dir="auto" data-sourcepos="46:1-46:123">With the demos compiled, we can now download the required models and run the human pose detection demo. Follow these steps:</h6><ol dir="auto" data-sourcepos="48:1-72:0"><li data-sourcepos="48:1-55:0"><h6 data-sourcepos="48:4-48:76">Download the models needed for the demo by running the following command:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">omz_downloader <span class="nt">--list</span> ~/open_model_zoo/demos/human_pose_estimation_demo/cpp/models.lst <span class="nt">-o</span> ~/models/ <span class="nt">--precision</span> FP16</pre><h6 data-sourcepos="54:4-54:107">This command will download the necessary models for the demo and save them in the <code>~/models/</code> directory.</h6></li><li data-sourcepos="56:1-64:0"><h6 data-sourcepos="56:4-56:58">Download the test video that will be used for the demo:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, 
Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">cd ~/
<span id="LC2" class="line" lang="shell">wget https://raw.githubusercontent.com/intel-iot-devkit/sample-videos/master/face-demographics-walking.mp4</span></pre><h6 data-sourcepos="63:4-63:84">This command will download a sample video called <code>face-demographics-walking.mp4</code>.</h6></li><li data-sourcepos="65:1-72:0"><h6 data-sourcepos="65:4-65:103">Once the models and the test video are downloaded, you can run the demo using the following command:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">~/omz_demos_build/aarch64/Release/human_pose_estimation_demo <span class="nt">-i</span> ~/face-demographics-walking.mp4 <span class="nt">-m</span> ~/models/intel/human-pose-estimation-0001/FP16/human-pose-estimation-0001.xml <span class="nt">-at</span> openpose</pre><h6 data-sourcepos="71:4-71:93">This command will execute the human pose estimation demo using OpenVINO on the test video.</h6></li></ol><h2 dir="auto" data-sourcepos="73:1-73:13"><strong>Conclusion</strong></h2><h6 dir="auto" data-sourcepos="75:1-75:319">Congratulations! You have successfully installed OpenVINO, compiled the demos, and run the human pose detection demo on BrainyPi. 
This demo showcases the power of AI and computer vision in predicting human poses, providing a solid foundation for developers and entrepreneurs to build their own computer vision products.</h6><h6 dir="auto" data-sourcepos="77:1-77:166">To delve deeper into the human pose detection demo and explore additional features and options, refer to the <a href="https://docs.openvino.ai/2022.3/omz_demos_human_pose_estimation_demo_cpp.html">OpenVINO documentation</a>.</h6><h6 dir="auto" data-sourcepos="81:1-81:133">Get creative and leverage the potential of OpenVINO to unlock a world of possibilities in computer vision applications. Happy coding!</h6><h6 dir="auto" data-sourcepos="83:1-83:124"><em>Note: The content of this blog is based on OpenVINO 2023.0 documentation and may be subject to updates in future releases.</em></h6>								</div>
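<h6 dir="auto">The demo draws each person&#8217;s keypoints as a skeleton overlay. If you want to build on its output, a common post-processing step is computing the angle at a joint from three keypoints (say shoulder, elbow, and wrist). The sketch below is purely illustrative and not part of the demo&#8217;s code; the keypoint coordinates are assumed to come from the demo&#8217;s detections:</h6>

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (in degrees) formed by segments b->a and b->c.

    Points are (x, y) pixel coordinates, e.g. shoulder, elbow, wrist
    keypoints. Illustrative only; the demo itself just renders poses."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        raise ValueError("coincident keypoints")
    # Clamp to guard against floating-point drift before acos.
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))
```

<h6 dir="auto">Three collinear keypoints, for instance, yield an angle of 180 degrees, indicating a fully extended limb.</h6>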
				</div>
					</div>
		</div>
					</div>
		</section>
				</div>
		<p>The post <a href="https://brainypi.com/human-pose-detection-on-brainy-pi/">Human Pose Detection on Brainy Pi</a> appeared first on <a href="https://brainypi.com">Brainy Pi</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://brainypi.com/human-pose-detection-on-brainy-pi/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure url="https://raw.githubusercontent.com/intel-iot-devkit/sample-videos/master/face-demographics-walking.mp4" length="6406124" type="video/mp4" />

			</item>
		<item>
		<title>Social Distance Monitoring with Brainy Pi</title>
		<link>https://brainypi.com/social-distance-monitoring-with-brainy-pi/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=social-distance-monitoring-with-brainy-pi</link>
					<comments>https://brainypi.com/social-distance-monitoring-with-brainy-pi/#respond</comments>
		
		<dc:creator><![CDATA[BrainyPi Team]]></dc:creator>
		<pubDate>Thu, 15 Jun 2023 03:41:58 +0000</pubDate>
				<category><![CDATA[Blogs]]></category>
		<category><![CDATA[Intel OpenVINO]]></category>
		<guid isPermaLink="false">https://brainypi.com/?p=5398</guid>

					<description><![CDATA[<p>In today&#8217;s world, where social distancing has become an essential practice, leveraging computer vision and artificial intelligence (AI) can play a vital role in monitoring and ensuring safe distancing measures. In this blog, we will showcase a Social Distancing Monitoring Demo using OpenVINO and Brainy Pi, a powerful combination for developers and entrepreneurs looking to build computer vision products. We [&#8230;]</p>
<p>The post <a href="https://brainypi.com/social-distance-monitoring-with-brainy-pi/">Social Distance Monitoring with Brainy Pi</a> appeared first on <a href="https://brainypi.com">Brainy Pi</a>.</p>
]]></description>
										<content:encoded><![CDATA[		<div data-elementor-type="wp-post" data-elementor-id="5398" class="elementor elementor-5398" data-elementor-post-type="post">
						<section class="elementor-section elementor-top-section elementor-element elementor-element-26c285d elementor-section-boxed elementor-section-height-default elementor-section-height-default" data-id="26c285d" data-element_type="section">
						<div class="elementor-container elementor-column-gap-default">
					<div class="elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-cf9ee03" data-id="cf9ee03" data-element_type="column">
			<div class="elementor-widget-wrap elementor-element-populated">
						<div class="elementor-element elementor-element-eed0f84 elementor-widget elementor-widget-text-editor" data-id="eed0f84" data-element_type="widget" data-widget_type="text-editor.default">
				<div class="elementor-widget-container">
<h6 dir="auto" data-sourcepos="4:1-4:584">In today&#8217;s world, where social distancing has become an essential practice, leveraging computer vision and artificial intelligence (AI) can play a vital role in monitoring and ensuring safe distancing measures. In this blog, we will showcase a Social Distancing Monitoring Demo using OpenVINO and <a href="https://brainypi.com/">Brainy Pi</a>, a powerful combination for developers and entrepreneurs looking to build computer vision products. We will guide you through installing OpenVINO, compiling the demos, and running the demo application, allowing you to determine the distance between individuals in a video or camera feed. Let&#8217;s implement a Social Distance Monitoring product!</h6><p dir="auto" data-sourcepos="6:1-6:95"><img decoding="async" src="https://brainypi.com/wp-content/uploads/2023/06/Socialdistancing1.gif" /></p><h2 dir="auto" data-sourcepos="8:1-8:22"><strong>Installing OpenVINO</strong></h2><h6 dir="auto" data-sourcepos="10:1-10:118">To begin, we need to install OpenVINO and its dependencies on BrainyPi. Open a terminal and run the following command:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">sudo apt <span class="nb">install </span>openvino-toolkit libopencv-dev</pre><h6 dir="auto" data-sourcepos="16:1-16:93">This command will install OpenVINO and the necessary OpenCV development files on your system.</h6><h2 dir="auto" data-sourcepos="18:1-18:18"><strong>Compiling Demos</strong></h2><h6 dir="auto" data-sourcepos="20:1-20:193">Once OpenVINO is installed, we can proceed to compile the demos. These demos provide a great starting point for understanding and exploring the capabilities of OpenVINO. 
Follow the steps below:</h6><ol dir="auto" data-sourcepos="22:1-40:0"><li data-sourcepos="22:1-27:0"><h6 data-sourcepos="22:4-22:73">Set up the OpenVINO environment by sourcing the <code>setupvars.sh</code> script:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">source /opt/openvino/setupvars.sh</pre></li><li data-sourcepos="28:1-34:0"><h6 data-sourcepos="28:4-28:94">Clone the Open Model Zoo repository, which contains the demos, using the following command:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">git clone <span class="nt">--recurse-submodules</span> https://github.com/openvinotoolkit/open_model_zoo.git
<span id="LC2" class="line" lang="shell"><span class="nb">cd </span>open_model_zoo/demos/</span></pre></li><li data-sourcepos="35:1-40:0"><h6 data-sourcepos="35:4-35:60">Build the demos by executing the <code>build_demos.sh</code> script:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">./build_demos.sh</pre></li></ol><h6 dir="auto" data-sourcepos="41:1-41:74">This will compile the demo applications and make them ready for execution.</h6><h2 dir="auto" data-sourcepos="43:1-43:20"><strong>Running the Demo for Social Distance Monitoring<br /></strong></h2><h6 dir="auto" data-sourcepos="45:1-45:113">With the demos compiled, we can now download the required models and run them using OpenVINO. Follow these steps:</h6><ol dir="auto" data-sourcepos="47:1-65:0"><li data-sourcepos="47:1-52:0"><h6 data-sourcepos="47:4-47:76">Download the models needed for the demo by running the following command:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">omz_downloader <span class="nt">--list</span> ~/open_model_zoo/demos/social_distance_demo/cpp/models.lst <span class="nt">-o</span> ~/models/ <span class="nt">--precision</span> FP16</pre></li><li data-sourcepos="53:1-59:0"><h6 data-sourcepos="53:4-53:27">Download the test video:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; 
background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">cd ~/
<span id="LC2" class="line" lang="shell">wget https://raw.githubusercontent.com/intel-iot-devkit/sample-videos/master/face-demographics-walking.mp4</span></pre></li><li data-sourcepos="60:1-65:0"><h6 data-sourcepos="60:4-60:120">Once the models and the test video are downloaded, you can run the social distance demo using the following command:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">~/omz_demos_build/aarch64/Release/social_distance_demo <span class="nt">-i</span> ~/face-demographics-walking.mp4 <span class="nt">-m_det</span> ~/models/intel/person-detection-retail-0013/FP16/person-detection-retail-0013.xml <span class="nt">-m_reid</span> ~/models/intel/person-reidentification-retail-0277/FP16/person-reidentification-retail-0277.xml</pre></li></ol><h2 dir="auto" data-sourcepos="66:1-66:13"><strong>Reference:</strong></h2><h6 dir="auto" data-sourcepos="67:1-67:71"><a href="https://docs.openvino.ai/2022.3/omz_demos_social_distance_demo_cpp.html" target="_blank" rel="nofollow noreferrer noopener">https://docs.openvino.ai/2022.3/omz_demos_social_distance_demo_cpp.html</a></h6><h2 dir="auto" data-sourcepos="69:1-69:13"><strong>Conclusion:</strong></h2><h6 dir="auto">In this blog, we have demonstrated how to build a Social Distancing Monitoring product using OpenVINO and BrainyPi. By combining AI and computer vision, you can determine the distance between individuals in a video, providing valuable insights for ensuring safe social distancing. Developers and entrepreneurs looking to incorporate this technology into their computer vision products now have a starting point to explore and extend the capabilities of OpenVINO, and the steps outlined in this blog will get you started on building innovative solutions for a safer future.</h6><h6 dir="auto">Remember, social distancing plays a crucial role in our collective well-being because it helps prevent the spread of contagious diseases, and technology can be a powerful ally in maintaining it.</h6>								</div>
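<h6 dir="auto">At its core, social distance monitoring reduces to measuring the separation between detected people. The hypothetical sketch below flags pairs of detections whose bounding-box centers fall within a pixel threshold; the actual demo also compensates for camera perspective, which this simplified version omits:</h6>

```python
import itertools
import math

def center(box):
    # box = (x, y, w, h) in pixels, e.g. from a person detector.
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def close_pairs(boxes, min_dist_px):
    """Return index pairs of detections whose centers are closer than
    min_dist_px. A simplified stand-in for the demo's logic: raw pixel
    distances only, no perspective correction."""
    flagged = []
    for (i, a), (j, b) in itertools.combinations(enumerate(boxes), 2):
        if math.dist(center(a), center(b)) < min_dist_px:
            flagged.append((i, j))
    return flagged
```

<h6 dir="auto">For example, with three detections at x = 0, 4, and 200 (each 10 pixels wide) and a 50-pixel threshold, only the first two are flagged as too close.</h6>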
				</div>
					</div>
		</div>
					</div>
		</section>
				</div>
		<p>The post <a href="https://brainypi.com/social-distance-monitoring-with-brainy-pi/">Social Distance Monitoring with Brainy Pi</a> appeared first on <a href="https://brainypi.com">Brainy Pi</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://brainypi.com/social-distance-monitoring-with-brainy-pi/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure url="https://raw.githubusercontent.com/intel-iot-devkit/sample-videos/master/face-demographics-walking.mp4" length="6406124" type="video/mp4" />

			</item>
		<item>
		<title>Cross Road Camera Demo On Brainy Pi</title>
		<link>https://brainypi.com/cross-road-camera-demo-on-brainy-pi/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=cross-road-camera-demo-on-brainy-pi</link>
					<comments>https://brainypi.com/cross-road-camera-demo-on-brainy-pi/#respond</comments>
		
		<dc:creator><![CDATA[BrainyPi Team]]></dc:creator>
		<pubDate>Thu, 15 Jun 2023 03:36:58 +0000</pubDate>
				<category><![CDATA[Blogs]]></category>
		<category><![CDATA[Intel OpenVINO]]></category>
		<guid isPermaLink="false">https://brainypi.com/?p=5385</guid>

					<description><![CDATA[<p>In this blog post, we will explore the AI pipeline for person detection, recognition, and reidentification using OpenVINO on BrainyPi. OpenVINO, short for Open Visual Inference and Neural Network Optimization, is a powerful toolkit by Intel that enables developers to deploy deep learning models efficiently on various hardware platforms. Let&#8217;s implement Cross Road Camera Demo on Brainy Pi ! The [&#8230;]</p>
<p>The post <a href="https://brainypi.com/cross-road-camera-demo-on-brainy-pi/">Cross Road Camera Demo On Brainy Pi</a> appeared first on <a href="https://brainypi.com">Brainy Pi</a>.</p>
]]></description>
										<content:encoded><![CDATA[		<div data-elementor-type="wp-post" data-elementor-id="5385" class="elementor elementor-5385" data-elementor-post-type="post">
						<section class="elementor-section elementor-top-section elementor-element elementor-element-95b2a5b elementor-section-boxed elementor-section-height-default elementor-section-height-default" data-id="95b2a5b" data-element_type="section">
						<div class="elementor-container elementor-column-gap-default">
					<div class="elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-054e579" data-id="054e579" data-element_type="column">
			<div class="elementor-widget-wrap elementor-element-populated">
						<div class="elementor-element elementor-element-a6df691 elementor-widget elementor-widget-text-editor" data-id="a6df691" data-element_type="widget" data-widget_type="text-editor.default">
				<div class="elementor-widget-container">
<h6 dir="auto" data-sourcepos="5:1-5:347">In this blog post, we will explore the AI pipeline for person detection, recognition, and reidentification using OpenVINO on BrainyPi. OpenVINO, short for Open Visual Inference and Neural Network Optimization, is a powerful toolkit by Intel that enables developers to deploy deep learning models efficiently on various hardware platforms. Let&#8217;s implement the Cross Road Camera Demo on Brainy Pi!</h6><h6 dir="auto" data-sourcepos="7:1-7:338">The ability to detect, recognize, and reidentify persons is a crucial component in many computer vision applications, such as surveillance systems, crowd analysis, and personalized marketing. By leveraging OpenVINO&#8217;s optimization techniques and the <a href="https://brainypi.com/">Brainy Pi</a> platform, we can build a robust and efficient solution for person-related tasks.</h6><p dir="auto" data-sourcepos="9:1-9:95"><img decoding="async" src="https://brainypi.com/wp-content/uploads/2023/06/camerademo1.gif" /></p><h2 dir="auto" data-sourcepos="11:1-11:22"><strong>Installing OpenVINO</strong></h2><h6 dir="auto" data-sourcepos="13:1-13:128">To get started, we need to install OpenVINO and its dependencies on BrainyPi. 
Open a terminal and execute the following command:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">sudo apt install openvino-toolkit libopencv-dev</pre><h6 dir="auto" data-sourcepos="19:1-19:102">This command will install OpenVINO and the necessary OpenCV development files on your BrainyPi system.</h6><h2 dir="auto" data-sourcepos="21:1-21:18"><strong>Compiling Demos</strong></h2><h6 dir="auto" data-sourcepos="23:1-23:245">Once OpenVINO is installed, we can proceed to compile the demos provided by Open Model Zoo. These demos serve as excellent starting points for understanding and exploring the capabilities of OpenVINO. Follow the steps below to compile the demos:</h6><ol dir="auto" data-sourcepos="25:1-43:0"><li data-sourcepos="25:1-30:0"><h6 data-sourcepos="25:4-25:73">Set up the OpenVINO environment by sourcing the <code>setupvars.sh</code> script:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">source /opt/openvino/setupvars.sh</pre></li><li data-sourcepos="31:1-37:0"><h6 data-sourcepos="31:4-31:101">Clone the Open Model Zoo repository, which contains the demos, by executing the following command:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">git clone --recurse-submodules https://github.com/openvinotoolkit/open_model_zoo.git
<span lang="shell">cd open_model_zoo/demos/</span></pre></li><li data-sourcepos="38:1-43:0"><h6 data-sourcepos="38:4-38:58">Now, build the demos by running the <code>build_demos.sh</code> script:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">./build_demos.sh</pre></li></ol><h6 dir="auto" data-sourcepos="44:1-44:94">This process will compile the demo applications and make them ready for execution on BrainyPi.</h6><h2 dir="auto" data-sourcepos="46:1-46:20"><strong>Running Cross Road Camera Demo<br /></strong></h2><h6 dir="auto" data-sourcepos="48:1-48:130">With the demos compiled, we can now download the required models and run them using OpenVINO. Follow these steps to run the demos:</h6><ol dir="auto" data-sourcepos="50:1-68:0"><li data-sourcepos="50:1-55:0"><h6 data-sourcepos="50:4-50:80">Download the models required for the demo by executing the following command:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">omz_downloader --list ~/open_model_zoo/demos/crossroad_camera_demo/cpp/models.lst -o ~/models/ --precision FP16</pre></li><li data-sourcepos="56:1-62:0"><h6 data-sourcepos="56:4-56:72">Download the test video that we will use for demonstration purposes:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; 
max-width: 100%; max-height: 500px; overflow: hidden auto;">cd ~/
<span lang="shell">wget https://raw.githubusercontent.com/intel-iot-devkit/sample-videos/master/people-detection.mp4</span></pre></li><li data-sourcepos="63:1-68:0"><h6 data-sourcepos="63:4-63:120">Once the models and the test video are downloaded, you can run the object detection demo using the following command:</h6><pre style="padding: 5px 10px; font-family: Monaco, Menlo, Consolas, 'Courier New', monospace; font-size: 13px; color: #f8f8f2; border-radius: 3px; margin-top: 5px; margin-bottom: 5px; line-height: 20px; background-color: #23241f; border: 1px solid #d3d3d3; max-width: 100%; max-height: 500px; overflow: hidden auto;">~/omz_demos_build/aarch64/Release/crossroad_camera_demo -i ~/people-detection.mp4 -m ~/models/intel/person-vehicle-bike-detection-crossroad-0078/FP16/person-vehicle-bike-detection-crossroad-0078.xml -m_pa ~/models/intel/person-attributes-recognition-crossroad-0230/FP16/person-attributes-recognition-crossroad-0230.xml -m_reid ~/models/intel/person-reidentification-retail-0287/FP16/person-reidentification-retail-0287.xml</pre></li></ol><h6 dir="auto" data-sourcepos="69:1-69:239">In the above command, we specify the input video, as well as the paths to the downloaded models for person detection, person attributes recognition, and person reidentification. 
Feel free to adjust these paths based on your specific setup.</h6><h2 dir="auto" data-sourcepos="71:1-71:12"><strong>Reference</strong></h2><h6 dir="auto" data-sourcepos="72:1-72:72"><a href="https://docs.openvino.ai/2022.3/omz_demos_crossroad_camera_demo_cpp.html" target="_blank" rel="nofollow noreferrer noopener">https://docs.openvino.ai/2022.3/omz_demos_crossroad_camera_demo_cpp.html</a></h6><h2 dir="auto" data-sourcepos="74:1-74:13"><strong>Conclusion</strong></h2><h6 dir="auto" data-sourcepos="76:1-76:160">In this blog post, we have walked through the process of setting up OpenVINO on BrainyPi and using it to build an AI pipeline for person detection, recognition, and reidentification. By leveraging the power of OpenVINO and the BrainyPi platform, developers and entrepreneurs can integrate these capabilities into their own computer vision products.</h6><h6 dir="auto" data-sourcepos="80:1-80:385">The ability to accurately detect and recognize persons opens up numerous possibilities for applications in various domains, including security, retail analytics, and customer experience enhancement. With the step-by-step instructions provided in this blog, you now have a solid foundation to start implementing your own AI-powered computer vision solutions using OpenVINO and BrainyPi.</h6>								</div>
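<h6 dir="auto">Reidentification works by comparing the descriptor vectors produced by the reidentification model: the same person seen in two frames should yield highly similar embeddings. The sketch below is an illustrative matcher, not the demo&#8217;s actual implementation; the embeddings, the gallery structure, and the 0.7 threshold are assumptions chosen for the example:</h6>

```python
import math

def cosine_sim(u, v):
    # Cosine similarity between two descriptor vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def match_identity(query, gallery, threshold=0.7):
    """Return the index of the gallery embedding that best matches
    `query`, or None if no similarity clears `threshold`. Mirrors, in
    spirit, how reidentification descriptors are compared across
    frames; the demo's matcher and threshold may differ."""
    best_i, best_s = None, threshold
    for i, g in enumerate(gallery):
        s = cosine_sim(query, g)
        if s > best_s:
            best_i, best_s = i, s
    return best_i
```

<h6 dir="auto">A query embedding nearly parallel to a gallery entry matches it, while an orthogonal one (similarity 0) falls below the threshold and yields no match, so a new identity would be registered.</h6>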
				</div>
					</div>
		</div>
					</div>
		</section>
				</div>
		<p>The post <a href="https://brainypi.com/cross-road-camera-demo-on-brainy-pi/">Cross Road Camera Demo On Brainy Pi</a> appeared first on <a href="https://brainypi.com">Brainy Pi</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://brainypi.com/cross-road-camera-demo-on-brainy-pi/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure url="https://raw.githubusercontent.com/intel-iot-devkit/sample-videos/master/people-detection.mp4" length="5482579" type="video/mp4" />

			</item>
	</channel>
</rss>
