Internet of Things articles https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/bg-p/IoTBlog Internet of Things articles Sat, 23 Oct 2021 13:51:33 GMT IoTBlog 2021-10-23T13:51:33Z IoT at Microsoft Ignite https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/iot-at-microsoft-ignite/ba-p/2874658 <P>It is fall season in the northern hemisphere, spring season in the southern and time for a new edition of Microsoft Ignite!</P> <P>If you have not yet registered for the free virtual event, you can do so at&nbsp;<A href="#" target="_blank" rel="noopener">https://myignite.microsoft.com</A>.</P> <P>&nbsp;</P> <P>As you navigate through the many sessions and opportunities to connect with Microsoft's product teams, you will want to build yourself some learning path, and if you are interested in learning more about IoT, here are some pointers to help you out.</P> <P>Below is the list of all the IoT related content featured at the event. As you will notice this edition is focused on roundtables to give you an opportunity to not just learn about the latest in IoT technologies at Microsoft but also to provide feedback, share what your needs and pain points are to the teams developing the next generation of IoT tools and services.</P> <P>&nbsp;</P> <TABLE border="1" width="100%"> <TBODY> <TR> <TD width="20%"><STRONG>Title</STRONG></TD> <TD width="15%"><STRONG>Type</STRONG></TD> <TD width="15%"><STRONG>Speakers</STRONG></TD> <TD width="50%"><STRONG>Description</STRONG></TD> </TR> <TR> <TD width="20%"><A class="session-block__link dark-theme" title="PRT080 - Developing AI Edge modules on Windows for IoT devices" href="#" target="_blank" rel="noopener" aria-label="Developing AI Edge modules on Windows for IoT devices" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog - Developing AI Edge modules on Windows for IoT devices&quot;,&quot;cN&quot;:&quot;Developing AI Edge modules on Windows for IoT devices - Martin Tuip,Terry Warwick&quot;}" data-cy="session-block-link">Developing AI Edge modules on Windows for IoT devices</A></TD> <TD width="15%">Product Rountables</TD> <TD width="15%"><A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Martin Tuip" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Martin Tuip&quot;,&quot;cN&quot;:&quot;Martin Tuip&quot;}"><SPAN class="mwf-button-content">Martin Tuip</SPAN></A>,&nbsp;<A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Terry Warwick" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Terry Warwick&quot;,&quot;cN&quot;:&quot;Terry Warwick&quot;}"><SPAN class="mwf-button-content">Terry Warwick</SPAN></A></TD> <TD width="50%"><SPAN>Linux is often used for AI modules but the hurdle is the management of the devices in the infrastructure. What if you could use the best of both of them? 
Join the Azure EFLOW team to provide feedback and input on developing IoT Edge Modules, what types of devices you want to create, features you think we should add</SPAN></TD> </TR> <TR> <TD width="20%"><A class="session-block__link dark-theme" title="CONLL106 - Deploy IoT Solutions with Azure SQL Database" href="#" target="_blank" rel="noopener" aria-label="Deploy IoT Solutions with Azure SQL Database" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog - Deploy IoT Solutions with Azure SQL Database&quot;,&quot;cN&quot;:&quot;Deploy IoT Solutions with Azure SQL Database - Anna Hoffman,Davide Mauri&quot;}" data-cy="session-block-link">Deploy IoT Solutions with Azure SQL Database</A></TD> <TD width="15%">Connection Zone</TD> <TD width="15%"><A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Anna Hoffman" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Anna Hoffman&quot;,&quot;cN&quot;:&quot;Anna Hoffman&quot;}"><SPAN class="mwf-button-content">Anna Hoffman</SPAN></A>,&nbsp;<A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Davide Mauri" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Davide Mauri&quot;,&quot;cN&quot;:&quot;Davide Mauri&quot;}"><SPAN class="mwf-button-content">Davide Mauri</SPAN></A></TD> <TD width="50%"><SPAN>Many organizations are investing in IoT to increase operational efficiency, deliver better customer experiences, increase levels of security, enhance workplace safety, and reduce costs. This session will introduce how Azure SQL Database provides a price-performant backend, including templates that simplify deploying and configuring IoT solutions for any scenario.</SPAN></TD> </TR> <TR> <TD width="20%"><A class="session-block__link dark-theme" title="PRT090 - IoT in manufacturing" href="#" target="_blank" rel="noopener" aria-label="IoT in manufacturing" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog - IoT in manufacturing&quot;,&quot;cN&quot;:&quot;IoT in manufacturing - Fabian Frank,Jehona Morina,Sean Parham,Ranga Vadlamudi,David Walker&quot;}" data-cy="session-block-link">IoT in manufacturing</A></TD> <TD width="15%">Product Roundtables</TD> <TD width="15%"><A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Fabian Frank" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Fabian Frank&quot;,&quot;cN&quot;:&quot;Fabian Frank&quot;}"><SPAN class="mwf-button-content">Fabian Frank</SPAN></A>,&nbsp;<SPAN>Jehona Morina,&nbsp;<SPAN class="speaker-badges__name">Sean Parham,&nbsp;Ranga Vadlamudi,&nbsp;David Walker</SPAN></SPAN></TD> <TD width="50%"><SPAN>Tell the Azure IoT engineering team about your experiences with connected devices in manufacturing settings. We'll share some materials on which we'd like your feedback. Your insights will help inform investments for upcoming releases</SPAN></TD> </TR> <TR> <TD width="20%"><A class="session-block__link dark-theme" title="PRT089 - IoT devices are typically an organizations weakest security link. Help us prioritize features to help strengthen your IoT security posture" href="#" target="_blank" rel="noopener" aria-label="IoT devices are typically an organizations weakest security link. 
Help us prioritize features to help strengthen your IoT security posture" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog - IoT devices are typically an organizations weakest security link. Help us prioritize features to help strengthen your IoT security posture&quot;,&quot;cN&quot;:&quot;IoT devices are typically an organizations weakest security link. Help us prioritize features to help strengthen your IoT security posture - Yossi Basha,Nir Krumer,Phil Neray&quot;}" data-cy="session-block-link">IoT devices are typically an organizations weakest security link. Help us prioritize features to help strengthen your IoT security posture</A></TD> <TD width="15%">Product Roundtables</TD> <TD width="15%"><SPAN>Yossi Basha,&nbsp;<A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Nir Krumer" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Nir Krumer&quot;,&quot;cN&quot;:&quot;Nir Krumer&quot;}"><SPAN class="mwf-button-content">Nir Krumer</SPAN></A>,&nbsp;<A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Phil Neray" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Phil Neray&quot;,&quot;cN&quot;:&quot;Phil Neray&quot;}"><SPAN class="mwf-button-content">Phil Neray</SPAN></A></SPAN></TD> <TD width="50%"><SPAN>Attacks on Enterprise IoT devices are increasing and it’s no wonder. IoT devices are not designed with security in mind like you’ll find with nearly ever every other traditional type of endpoint (workstations, servers mobile devices). In addition, the ability to monitor such devices is limited due to the lack of a deployable agent that can perform configuration management (</SPAN><A class="c-hyperlink c-hyperlink-url" href="#" target="_blank" rel="noopener noreferrer">e.gf</A><SPAN>.: settings and patching) and internal security monitoring using high fidelity endpoint signals. 
In this session, we are seeking your feedback so that we can learn and properly prioritize the roadmap for delivering IoT security capabilities to the Microsoft Defender suite.</SPAN></TD> </TR> <TR> <TD width="20%"><A class="session-block__link dark-theme" title="BRK250 - How to Develop a Security Vision and Strategy for Cyber-Physical and IoT/OT Systems" href="#" target="_blank" rel="noopener" aria-label="How to Develop a Security Vision and Strategy for Cyber-Physical and IoT/OT Systems" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog - How to Develop a Security Vision and Strategy for Cyber-Physical and IoT/OT Systems&quot;,&quot;cN&quot;:&quot;How to Develop a Security Vision and Strategy for Cyber-Physical and IoT/OT Systems - Phil Neray,Katell Thielemann&quot;}" data-cy="session-block-link">How to Develop a Security Vision and Strategy for Cyber-Physical and IoT/OT Systems</A></TD> <TD width="15%">Breakout</TD> <TD width="15%"><A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Phil Neray" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Phil Neray&quot;,&quot;cN&quot;:&quot;Phil Neray&quot;}"><SPAN class="mwf-button-content">Phil Neray</SPAN></A>,&nbsp;<SPAN>Katell Thielemann</SPAN></TD> <TD width="50%"><SPAN>Recent ransomware attacks that halted production for a gas pipeline operator and food processor have raised board-level awareness about IoT and Operational Technology (OT) risk. Security leaders are now responsible for new threats from cyber physical systems (CPS) and parts of the organization they never traditionally worried about. Join Katell Thielemann from Gartner® to discuss how to develop a CPS risk strategy using the “language of the business” to show security as a strategic business enabler. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.</SPAN></TD> </TR> <TR> <TD width="20%"><A class="session-block__link dark-theme" title="OD148 - Onboarding to Azure IoT" href="#" target="_blank" rel="noopener" aria-label="Onboarding to Azure IoT" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog - Onboarding to Azure IoT&quot;,&quot;cN&quot;:&quot;Onboarding to Azure IoT - Ricardo Minguez Pablos,Cory Newton-Smith&quot;}" data-cy="session-block-link">Onboarding to Azure IoT</A></TD> <TD width="15%">On-Demand</TD> <TD width="15%"><SPAN>Ricardo Minguez Pablos,&nbsp;<A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Cory Newton-Smith" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Cory Newton-Smith&quot;,&quot;cN&quot;:&quot;Cory Newton-Smith&quot;}"><SPAN class="mwf-button-content">Cory Newton-Smith</SPAN></A></SPAN></TD> <TD width="50%"><SPAN>Learn how to onboard to Azure IoT through our recommended approach to solution building. We’ll share how to get started with an aPaaS offering that helps you crystalize your IoT solution needs, set a strategic direction with confidence, and deliver value to your organization. 
You will learn how Azure IoT is simplifying IoT by providing an out-of-the-box and ready-to-use UX and API surface that works with customization options.</SPAN></TD> </TR> <TR> <TD width="20%"><A href="#" target="_blank" rel="noopener">US Local Connection: Data is the new air – how customers use data &amp; apps to drive innovation</A></TD> <TD width="15%">Connection Zone</TD> <TD width="15%">&nbsp;</TD> <TD width="50%"><SPAN>How do we break down data silos and derive valuable insights about your customers to deliver transformational experiences? Join us for this session to learn how to leverage data, AI, and IoT to build innovative applications that benefit both your employees and customers.</SPAN></TD> </TR> <TR> <TD width="20%"><A class="session-block__link dark-theme" title="PRT088 - Integrating Azure Sphere into products" href="#" target="_blank" rel="noopener" aria-label="Integrating Azure Sphere into products" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog - Integrating Azure Sphere into products&quot;,&quot;cN&quot;:&quot;Integrating Azure Sphere into products - Barry Bond,Susmitha Kothari,James Scott&quot;}" data-cy="session-block-link">Integrating Azure Sphere into products</A></TD> <TD width="15%">Product Roundtables</TD> <TD width="15%"><A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Barry Bond" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Barry Bond&quot;,&quot;cN&quot;:&quot;Barry Bond&quot;}"><SPAN class="mwf-button-content">Barry Bond</SPAN></A>,&nbsp;<SPAN class="speaker-badges__name">Susmitha Kothari,&nbsp;<A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for James Scott" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_James Scott&quot;,&quot;cN&quot;:&quot;James Scott&quot;}"><SPAN class="mwf-button-content">James Scott</SPAN></A></SPAN></TD> <TD width="50%"><SPAN>Calling all developers and device builders who have used Azure Sphere or want to know more about how MT3620 can accelerate your IoT journey. 
Join the Azure Sphere product team to hear what’s new and upcoming for the MT3620, and to share your feedback on what could be done to make the product better suit your needs.</SPAN></TD> </TR> <TR> <TD width="20%"><A class="session-block__link dark-theme" title="BRK228 - Manufacturing a Resilient Future with Microsoft Cloud for Manufacturing" href="#" target="_blank" rel="noopener" aria-label="Manufacturing a Resilient Future with Microsoft Cloud for Manufacturing" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog - Manufacturing a Resilient Future with Microsoft Cloud for Manufacturing&quot;,&quot;cN&quot;:&quot;Manufacturing a Resilient Future with Microsoft Cloud for Manufacturing - Çağlayan Arkan,Arun Kumar Bhaskara-baba,Zikar Dawood,Satish Thomas&quot;}" data-cy="session-block-link">Manufacturing a Resilient Future with Microsoft Cloud for Manufacturing</A></TD> <TD width="15%">Breakout</TD> <TD width="15%"><A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Çağlayan Arkan" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Çağlayan Arkan&quot;,&quot;cN&quot;:&quot;Çağlayan Arkan&quot;}"><SPAN class="mwf-button-content">Çağlayan Arkan</SPAN></A>,&nbsp;<A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Arun Kumar Bhaskara-baba" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Arun Kumar Bhaskara-baba&quot;,&quot;cN&quot;:&quot;Arun Kumar Bhaskara-baba&quot;}"><SPAN class="mwf-button-content">Arun Kumar Bhaskara-baba</SPAN></A>,&nbsp;<A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Zikar Dawood" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Zikar Dawood&quot;,&quot;cN&quot;:&quot;Zikar Dawood&quot;}"><SPAN class="mwf-button-content">Zikar Dawood,&nbsp;</SPAN></A><A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Satish Thomas" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Satish Thomas&quot;,&quot;cN&quot;:&quot;Satish Thomas&quot;}">Satish Thomas</A></TD> <TD width="50%"> <DIV data-f-expanded="false" aria-expanded="false"><SPAN class="">Microsoft Cloud for Manufacturing drives new levels of asset and workforce productivity while streamlining and improving security for IT, OT, and industrial IoT across the manufacturing value chain. By aligning cloud services to industry-specific requirements, we give customers a starting point in the cloud that easily integrates into their existing operations. 
Microsoft Cloud for manufacturing provides access to the broader portfolio of Microsoft cloud services enabling manufacturers to begin where the need for technology or business transformation is most urgent.</SPAN></DIV> </TD> </TR> <TR> <TD width="20%"><A class="session-block__link dark-theme" title="CONATEBRK228 - Ask the Experts: Manufacturing a Resilient Future with Microsoft Cloud for Manufacturing" href="#" target="_blank" rel="noopener" aria-label="Ask the Experts: Manufacturing a Resilient Future with Microsoft Cloud for Manufacturing" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog - Ask the Experts: Manufacturing a Resilient Future with Microsoft Cloud for Manufacturing&quot;,&quot;cN&quot;:&quot;Ask the Experts: Manufacturing a Resilient Future with Microsoft Cloud for Manufacturing - Valerio Frediani,Colin Masson,Pepijn Richter,Severin Wandji&quot;}" data-cy="session-block-link">Ask the Experts: Manufacturing a Resilient Future with Microsoft Cloud for Manufacturing</A></TD> <TD width="15%">Connection Zone</TD> <TD width="15%"> <DIV> <DIV class="speaker-badges__profile-image-container"> <DIV id="tinyMceEditorOlivierBloch_0" class="mceNonEditable lia-copypaste-placeholder">&nbsp;</DIV> <P>&nbsp;</P> </DIV> <DIV class="speaker-badges__details"><A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Valerio Frediani" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Valerio Frediani&quot;,&quot;cN&quot;:&quot;Valerio Frediani&quot;}"><SPAN class="mwf-button-content">Valerio Frediani,</SPAN></A>&nbsp;<SPAN class="speaker-badges__name" style="font-family: inherit; background-color: transparent;">Colin Masson,&nbsp;</SPAN><SPAN class="speaker-badges__name" style="font-family: inherit; background-color: transparent;">Pepijn Richter,&nbsp;</SPAN><A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Severin Wandji" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Severin Wandji&quot;,&quot;cN&quot;:&quot;Severin Wandji&quot;}"><SPAN class="mwf-button-content">Severin Wandji</SPAN></A></DIV> </DIV> </TD> <TD width="50%"><SPAN>Microsoft Cloud for Manufacturing drives new levels of asset and workforce productivity while streamlining and improving security for IT, OT, and industrial IoT across the manufacturing value chain. By aligning cloud services to industry-specific requirements, we give customers a starting point in the cloud that easily integrates into their existing operations. 
Microsoft Cloud for manufacturing provides access to the broader portfolio of Microsoft cloud services enabling manufacturers to begin where the need for technology or business transformation is most urgent.</SPAN></TD> </TR> <TR> <TD width="20%"><A class="session-block__link dark-theme" title="OD150 - Scaling Unreal Engine in Azure with Pixel Streaming and Integrating Azure Digital Twins " href="#" target="_blank" rel="noopener" aria-label="Scaling Unreal Engine in Azure with Pixel Streaming and Integrating Azure Digital Twins" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog - Scaling Unreal Engine in Azure with Pixel Streaming and Integrating Azure Digital Twins &quot;,&quot;cN&quot;:&quot;Scaling Unreal Engine in Azure with Pixel Streaming and Integrating Azure Digital Twins - Steve Busby,Erik Jansen,Maurizio Sciglio,Aaron Sternberg,David Weir-McCall&quot;}" data-cy="session-block-link">Scaling Unreal Engine in Azure with Pixel Streaming and Integrating Azure Digital Twins</A></TD> <TD width="15%">On-Demand</TD> <TD width="15%"> <DIV class="speaker-badges__profile-image-container"> <DIV id="tinyMceEditorOlivierBloch_0" class="mceNonEditable lia-copypaste-placeholder">&nbsp;</DIV> <DIV> <DIV class="speaker-badges__details"><A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Steve Busby" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Steve Busby&quot;,&quot;cN&quot;:&quot;Steve Busby&quot;}"><SPAN class="mwf-button-content">Steve Busby</SPAN></A>,&nbsp;<A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" style="font-family: inherit;" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Erik Jansen" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Erik Jansen&quot;,&quot;cN&quot;:&quot;Erik Jansen&quot;}"><SPAN class="mwf-button-content">Erik Jansen</SPAN></A>,&nbsp;<SPAN style="background-color: transparent;">Maurizio Sciglio,&nbsp;</SPAN><SPAN class="speaker-badges__name">Aaron Sternberg,&nbsp;</SPAN><A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for David Weir-McCall" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_David Weir-McCall&quot;,&quot;cN&quot;:&quot;David Weir-McCall&quot;}"><SPAN class="mwf-button-content">David Weir-McCall<BR /></SPAN></A></DIV> </DIV> </DIV> </TD> <TD width="50%"><SPAN>Deep dive into how Unreal Engine games or apps can be easily deployed and auto-scaled in Azure using Pixel Streaming, leveraging a new solution built by Azure Engineering that can save hundreds of hours of developer’s time to build the infrastructure and management to deploy at massive scale. 
Additionally, learn how Unreal Engine can integrate with Azure Digital Twins to deliver immersive and live visualizations of your IoT and digital twin environments.</SPAN></TD> </TR> <TR> <TD width="20%"><A class="session-block__link dark-theme" title="OD414 - Green Transformation: driving measurable sustainability every step of the way" href="#" target="_blank" rel="noopener" aria-label="Green Transformation: driving measurable sustainability every step of the way" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog - Green Transformation: driving measurable sustainability every step of the way&quot;,&quot;cN&quot;:&quot;Green Transformation: driving measurable sustainability every step of the way - Rockford Lhotka,Anil Nagaraj,Deepak (Vensi) Ramchandani&quot;}" data-cy="session-block-link">Green Transformation: driving measurable sustainability every step of the way</A></TD> <TD width="15%">On-Demand</TD> <TD width="15%"> <DIV class="speaker-badges__profile-image-container"> <DIV id="tinyMceEditorOlivierBloch_0" class="mceNonEditable lia-copypaste-placeholder">&nbsp;</DIV> <DIV> <DIV class="speaker-badges__details"><A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Rockford Lhotka" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Rockford Lhotka&quot;,&quot;cN&quot;:&quot;Rockford Lhotka&quot;}"><SPAN class="mwf-button-content">Rockford Lhotka</SPAN></A>,&nbsp;<A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" style="font-family: inherit;" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Anil Nagaraj" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Anil Nagaraj&quot;,&quot;cN&quot;:&quot;Anil Nagaraj&quot;}"><SPAN class="mwf-button-content">Anil Nagaraj</SPAN></A>,&nbsp;<A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Deepak (Vensi) Ramchandani" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Deepak (Vensi) Ramchandani&quot;,&quot;cN&quot;:&quot;Deepak (Vensi) Ramchandani&quot;}"><SPAN class="mwf-button-content">Deepak (Vensi) Ramchandani</SPAN></A></DIV> </DIV> </DIV> </TD> <TD width="50%"><SPAN>Microsoft Cloud services are up to 98% more carbon efficient than traditional datacenters, but the sustainability benefits don’t stop there. In this session, our experts discuss how to infuse your digital transformation with sustainable practices every step of the way — from migration and application development to creating more value for clients and measuring results. 
We’ll also explore a real-world scenario utilizing IoT and edge technology to power green initiatives for a global manufacturer.</SPAN></TD> </TR> <TR> <TD width="20%"><A class="session-block__link dark-theme" title="OD408 - Powering Cloud at the Edge: A Data Story Told Across Three Horizons " href="#" target="_blank" rel="noopener" aria-label="Powering Cloud at the Edge: A Data Story Told Across Three Horizons" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog - Powering Cloud at the Edge: A Data Story Told Across Three Horizons &quot;,&quot;cN&quot;:&quot;Powering Cloud at the Edge: A Data Story Told Across Three Horizons - Jai Mishra,Girish Phadke&quot;}" data-cy="session-block-link">Powering Cloud at the Edge: A Data Story Told Across Three Horizons</A></TD> <TD width="15%">On-Demand</TD> <TD width="15%"> <DIV class="speaker-badges__profile-image-container"> <DIV id="tinyMceEditorOlivierBloch_0" class="mceNonEditable lia-copypaste-placeholder">&nbsp;</DIV> <P><SPAN>Jai Mishra, <A class="mwf-button f-lightweight c-button mwf-button__no-margin speaker-badges__name" href="#" target="_blank" rel="noopener" aria-label="Speaker details for Girish Phadke" data-m="{&quot;aN&quot;:&quot;Catalog&quot;,&quot;id&quot;:&quot;Catalog_Girish Phadke&quot;,&quot;cN&quot;:&quot;Girish Phadke&quot;}"><SPAN class="mwf-button-content">Girish Phadke</SPAN></A></SPAN></P> </DIV> </TD> <TD width="50%"><SPAN>As your organization builds its digital core, how can you develop crucial cloud-native capabilities that will help you quickly scale? Join us to learn what to consider when building edge-to-cloud solutions, and the unique roles that AI, IoT, and cognitive services play in being able to get the most out of the edge. We’ll also share use cases that highlight end-to-end scenarios, the respective Azure edge solutions deployed, and the roles that data, IoT and AI can play.</SPAN></TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> Fri, 22 Oct 2021 20:21:39 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/iot-at-microsoft-ignite/ba-p/2874658 OlivierBloch 2021-10-22T20:21:39Z 5 Reasons Why I Dread Writing Embedded GUIs https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/5-reasons-why-i-dread-writing-embedded-guis/ba-p/2873988 <!-- wp:paragraph {"dropCap":true} --> <P class="has-drop-cap">A consequence of the massive adoption of Internet of Things technologies across all industries is an increasing need for <STRONG>embedded development skills</STRONG>. Yet, embedded development has historically been a pretty complex domain, and not something that one can add to their skillset overnight.</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>Luckily, over the last decade, silicon vendors have put a lot of effort into simplifying embedded development, especially for people with little to no experience in the domain. Communities such as <A href="#" target="_blank" rel="noreferrer noopener">Arduino</A> and <A href="#" target="_blank" rel="noreferrer noopener">PlatformIO</A> have also immensely contributed to providing easy-to-use tools and high-level libraries that can hide most of the scary details—yes, <STRONG>assembly</STRONG>, I'm looking at you!—of embedded programming while still allowing for professional applications to be written.</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>In my experience though, there is at least one area where things remain overly cumbersome: <STRONG>graphical user interface (GUI) development</STRONG>. 
Many applications require at least <EM>some </EM>kind of graphical user interface: the display might be small and monochrome, with hardly any buttons for the user to press, but it's still a UI, eh?</P> <P>I am sure many of you will relate: GUI development can be a lot of fun… until it isn't!</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>In this article, I have compiled <STRONG>5 reasons why I tend to not enjoy writing embedded GUI code so much anymore</STRONG>. And since you might not be interested in simply reading a rant, I am also sharing some tips and some of the tools I use to help keep GUI development enjoyable :)</img>.</P> <P>&nbsp;</P> <!-- /wp:paragraph --> <H2 class="has-primary-color has-text-color">Hardware Integration &amp; Portability</H2> <!-- /wp:heading --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>Most display devices out there come with sample code and drivers that will give you a head start in being able to at least display&nbsp;<EM>something</EM>.</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>But there is more to a GU<STRONG>I</STRONG> than just a screen, as an <STRONG>I</STRONG>nterface is also made up of inputs, right? How about those push buttons, touch screen inputs, and other sensors in your system that may all participate in your interactions?</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="hello_world.jpg" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/319273i1A3F302612AB801F/image-size/medium?v=v2&amp;px=400" role="button" title="hello_world.jpg" alt="Image credit: Seeed Studio." /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image credit: Seeed Studio.</span></span></P> <P>&nbsp;</P> <!-- /wp:paragraph --><!-- /wp:image --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>It might not seem like much, but <STRONG>properly handling simple hardware inputs </STRONG>such as buttons being pressed <STRONG>can be a lot of work</STRONG> when running on a constrained system, and you can quickly end up having to deal with complex timing or interrupt-management issues (see <A href="https://gorovian.000webhostapp.com/?exam=#event-management" target="_blank" rel="noopener">Event Management and "super-loop"</A> section below for more). And as these often involve low-level programming, they tend to be pretty hardware-dependent and not easily portable.</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>A lot of embedded development is done using C, so, except for low-level bootstrapping code, <STRONG>embedded code can in theory be fairly portable</STRONG>. However, writing <STRONG>portable GUI code</STRONG> is a whole different story and unless you're building on top of an existing framework such as <A href="#" target="_blank" rel="noreferrer noopener">LVGL</A> or <A href="#" target="_blank" rel="noreferrer noopener">Azure RTOS GUIX</A>, it requires a lot of effort to abstract all the hardware dependencies, even more so when trying to keep the <A href="https://gorovian.000webhostapp.com/?exam=#Performance" target="_blank" rel="noopener">performance</A> optimal.</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>Of course, it is not always necessary (or possible) to have GUI code that's 100% portable. 
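<P>&nbsp;</P> <P>Still, to make the abstraction idea a bit more concrete, below is a minimal sketch of a thin input layer that keeps button handling out of the GUI code. Everything in it is illustrative: the <CODE>board_read_button()</CODE> and <CODE>board_millis()</CODE> functions are made-up placeholders for whatever your vendor HAL or register-level code provides, and the debouncing is deliberately naive.</P> <P>&nbsp;</P>
<PRE class="wp-block-code"><CODE>#include &lt;stdbool.h&gt;
#include &lt;stdint.h&gt;

/* Hypothetical porting hooks - whatever your vendor HAL provides goes here. */
extern bool     board_read_button(int btn);  /* raw, bouncy GPIO level       */
extern uint32_t board_millis(void);          /* monotonic millisecond tick   */

/* The only API the GUI code ever sees. */
typedef enum { GUI_BTN_OK, GUI_BTN_BACK, GUI_BTN_COUNT } gui_button_t;

#define DEBOUNCE_MS 20u

static bool     stable_state[GUI_BTN_COUNT];
static bool     last_raw[GUI_BTN_COUNT];
static uint32_t last_change_ms[GUI_BTN_COUNT];

/* Returns true exactly once per debounced button press. */
bool gui_input_pressed(gui_button_t btn)
{
    bool     raw = board_read_button((int)btn);
    uint32_t now = board_millis();

    if (raw != last_raw[btn]) {            /* level changed: restart window */
        last_raw[btn] = raw;
        last_change_ms[btn] = now;
        return false;
    }
    if ((now - last_change_ms[btn]) &lt; DEBOUNCE_MS) {
        return false;                      /* still settling                */
    }

    bool was_pressed = stable_state[btn];
    stable_state[btn] = raw;
    return raw &amp;&amp; !was_pressed;            /* report the rising edge only   */
}</CODE></PRE>
<P>&nbsp;</P> <P>The GUI layer only ever calls <CODE>gui_input_pressed()</CODE>, so moving to a different board (or to a touch controller) means rewriting one small port file rather than the UI logic itself. Whether that level of indirection is worth the effort depends on the project.</P>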
However, in these times of<STRONG> <A href="#" target="_blank" rel="noreferrer noopener">global chip shortage</A></STRONG>, it can prove very handy to not have a hard dependency on a specific kind of micro-controller or LCD display.</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:heading --> <H2>Memory Management</H2> <!-- /wp:heading --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>Like I mentioned in the introduction, manipulating pixels can be a lot of fun—it really is! However, constrained systems have <STRONG>limited amounts of memory</STRONG>, and those pixels that you're manipulating in your code can quickly add up to <STRONG>thousands of bytes of precious RAM and Flash </STRONG>storage.</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>Let's take the example of a tiny <STRONG>128×64 pixels monochrome display</STRONG>. As the screen only supports black &amp; white, each pixel can be represented in memory using just a single <STRONG>bit</STRONG>, meaning a <STRONG>byte</STRONG> can hold up to 8 pixels—yay! But if you do the maths:</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:image {"align":"center","width":408,"height":86} --> <DIV class="wp-block-image"> <FIGURE class="aligncenter is-resized"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="gif.gif" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/319269i4549C449A87F803D/image-size/medium?v=v2&amp;px=400" role="button" title="gif.gif" alt="gif.gif" /></span> <P>&nbsp;</P> </FIGURE> </DIV> <!-- /wp:image --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>That's already <STRONG><SPAN style="text-decoration: underline;">1KB of RAM</SPAN></STRONG>, which is quite significant if your MCU only has, say, 16KB total. Interested in displaying a handful of 32×32px icons? That will be an additional 128 bytes for each of these tiny icons!</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="128bytes (1).png" style="width: 200px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/319270iCA6345E8A27DB1F2/image-size/small?v=v2&amp;px=200" role="button" title="128bytes (1).png" alt="A 32x32px bitmap uses 128 bytes of memory!" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">A 32x32px bitmap uses 128 bytes of memory!</span></span></P> <P>&nbsp;</P> <!-- /wp:paragraph --> <P>In short: your graphical user interface will likely eat up a lot of your memory, and you need to be extra clever to leave enough room for your actual application. As an example, a quick way to save on graphics memory is to double-check whether some of your raster graphics (ex. icons) can be replaced by a vector equivalent: surely it takes a lot less code and RAM to <STRONG>directly draw</STRONG> a simple 32x32px red square on the screen, instead of having it stored as a bitmap in memory.</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:heading --> <H2>Resource Management</H2> <!-- /wp:heading --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>It can be tricky to properly <STRONG>manage the various resources that make up a GUI project</STRONG>.</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>More specifically, and whether you are lucky enough to work with a graphics designer or not, your GUI mockups will likely consist of a variety of image files, icons, fonts, etc. 
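<P>&nbsp;</P> <P>Each of those assets ultimately ends up as bytes in RAM or Flash, so it helps to keep the earlier arithmetic visible in the code itself. The snippet below is only a back-of-the-envelope sketch (the sizes match the earlier 128×64 example; the icon contents and names are invented for illustration), showing how quickly a monochrome frame buffer and a few icons eat into a small memory budget:</P> <P>&nbsp;</P>
<PRE class="wp-block-code"><CODE>#include &lt;stdint.h&gt;

/* Illustrative sizes only - adjust to your actual display and assets. */
#define DISPLAY_WIDTH  128
#define DISPLAY_HEIGHT  64

/* 1 bit per pixel: (128 * 64) / 8 = 1024 bytes of precious RAM. */
static uint8_t framebuffer[(DISPLAY_WIDTH * DISPLAY_HEIGHT) / 8];

/* A single 32x32 monochrome icon: (32 * 32) / 8 = 128 bytes, typically
 * stored in Flash as a const array produced by some conversion tool.   */
static const uint8_t icon_warning_32x32[(32 * 32) / 8] = {
    0x00, 0x3C, 0x42, 0x81, /* ...bit pattern shortened for readability... */
};

/* One frame buffer plus five such icons is already 1024 bytes of RAM and
 * 640 bytes of Flash, before any application logic has been written.    */</CODE></PRE>
<P>&nbsp;</P>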
However, in an embedded context, you typically can't expect to be able to directly manipulate that nice transparent PNG file or TrueType font in your code! It first needs to be converted in a format that allows it to be manipulated in your embedded code.</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>If you are a seasoned embedded developer, I am sure that you (or someone at your company) have probably developed your very own macros and tools to help you streamline the conversion/<A href="https://gorovian.000webhostapp.com/?exam=#Event_Handling_Performance" target="_blank" rel="noopener">optimization </A>of your binary assets, but in my experience it's always been a lot of tinkering, and one-off, quick and dirty, conversion scripts, which impact long-term maintainability. Add version control to the mix, and it becomes pretty hairy to keep your assets tidy at all times!</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:heading --> <H2>Event Handling &amp; Performance</H2> <!-- /wp:heading --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>Graphical programming is by nature very much <STRONG>event-driven</STRONG>. It is therefore quite natural to expect embedded GUI code to look as follows (pseudo-code):</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:code --> <PRE class="wp-block-code"><CODE>Button btnOK; btnOK.onClick = btnOK_click; btnOK_click = function() { // handle click on button btnOK // ... }</CODE></PRE> <!-- /wp:code --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>As you can imagine, things are not always that simple…</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>Firstly, C, which is still the most used embedded programming language, is <STRONG>not exactly object-oriented</STRONG>. As a consequence, even if it's perfectly possible to aim for a high-level API that looks like the above, there is a good chance you will find yourself juggling with error-prone function pointers whenever adding/updating an event handler to one of your UI elements.</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>Assuming that you have indeed found an elegant way to associate event handlers to the various pieces of your UI, you still need to implement some kind of <STRONG>event loop</STRONG>. Indeed, you must regularly process the events happening in your system ("button A pressed", etc.) and dispatch them to the proper event handlers. A common pattern in embedded programming consists in doing so through a so-called <STRONG>super loop</STRONG>: the program is running an infinite loop that invokes each task the system needs to perform, <EM>ad infinitum</EM>.</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:code --> <PRE class="wp-block-code"><CODE>int main() { setup(); while (1) { read_sensors(); refresh_ui(); // etc. } /* program's execution will never reach here */ return 0; }</CODE></PRE> <!-- /wp:code --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>A benefit of this approach is that <STRONG>the execution flow remains pretty readable and straightforward</STRONG>, and it also avoids some potential headaches that may be induced by complex multi-threading or interrupt handling. However, any event handler running for too long (or crashing!) 
can compromise the performance and stability of your main application.</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>As <STRONG>embedded real-time operating systems</STRONG> such as <A href="#" target="_blank" rel="noreferrer noopener">FreeRTOS</A>, or <A href="#" target="_blank" rel="noopener">Azure RTOS ThreadX</A> are becoming more popular, a more modern approach is to have the <STRONG>UI event loop run in a dedicated background task</STRONG>. The operating system can therefore ensure that this task, given its lower priority, will not compromise the performance of your main application.</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>An <STRONG>embedded GUI does not always need to be performant</STRONG> as in <EM>fast</EM> and <EM>responsive</EM>. However, it is considered a good practice to use embedded resources as efficiently as possible. Making sure that your GUI &amp; app code are as performant as reasonably possible can potentially save you a lot of money as it means you can stick to using the smallest possible MCU for the task.</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:heading --> <H2>Tooling</H2> <!-- /wp:heading --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>Last but not least: tooling. To be honest, I have never been a big fan of designing graphical user interfaces using a <A href="#" target="_blank" rel="noreferrer noopener">WYSIWYG</A> (What You See Is What You Get) approach. That being said, coding a graphical user interface that has more than just a couple of screens requires at least <EM>some</EM> tooling support, since most of the "boring" glue code can often be automatically generated.</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>What's more, testing an embedded GUI can quickly become painful, as <STRONG>re-compiling your app and downloading it to your target can take quite some time</STRONG>. This can be <SPAN style="text-decoration: underline;">very</SPAN> frustrating when you need to wait minutes to e.g test how that new fancy animation you coded looks like. :unamused_face:</img></P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>Over the past few months, I have started to use <A href="#" target="_blank" rel="noreferrer noopener">Renode</A> more often. It is a pretty complete open-source tool suite for emulating embedded hardware—including their display!—straight from your computer. In a future post, I plan on sharing more on how I started to use Renode for drastically shortening the "inner embedded development loop", i.e. the time between making a code change and being able to test it live on your—emulated!—device.</P> <P>&nbsp;</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:video {"autoplay":true,"id":4375,"loop":true,"muted":false,"preload":"auto","src":"https://blog.benjamin-cabe.com/wp-content/uploads/2021/10/renode-guix.webm"} --> <FIGURE class="wp-block-video"><VIDEO src="https://blog.benjamin-cabe.com/wp-content/uploads/2021/10/renode-guix.webm" preload="auto" autoplay="autoplay" loop="loop" controls="controls" width="600" height="300"></VIDEO></FIGURE> <!-- /wp:video --> <P>&nbsp;</P> <!-- wp:separator --><HR /><!-- /wp:separator --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>I would be curious to hear about your experience, and your pain points when working with embedded GUIs. 
Let me know in the comments below!</P> <!-- /wp:paragraph --> <P>&nbsp;</P> <!-- wp:paragraph --> <P>Like I already mentioned, stay tuned for upcoming articles where I will be covering some of the tools and frameworks that I have started to use (and love) and that make my life <STRONG><SPAN style="text-decoration: underline;">SO</SPAN></STRONG> much easier when it comes to GUI development!</P> <!-- /wp:paragraph --> <P>&nbsp;</P> Fri, 22 Oct 2021 13:57:44 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/5-reasons-why-i-dread-writing-embedded-guis/ba-p/2873988 kartben 2021-10-22T13:57:44Z Monitoring 3D Print Quality using Azure Percept DK https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/monitoring-3d-print-quality-using-azure-percept-dk/ba-p/2812741 <P>A few weeks back I was running a demo on Azure Percept DK with one of Microsoft’s Global Partners and the Dev Kit I had set up on a tripod fell off! So, I decided to do something about it.&nbsp;I set up my 3D printer and, after some time using <STRONG>Autodesk Fusion 360</STRONG> and <STRONG>Ultimaker Cura</STRONG>, I managed to print a custom mount that fits a standard tripod fitting and the Azure Percept DK 80/20 rail.&nbsp;</P> <P>&nbsp;</P> <P>However, the process started me thinking: could I use the Azure Percept DK to monitor the 3D printer for fault detection?&nbsp;Could I create a Machine Learning model that would understand what a fault was? And had this been created already? 3D printing can be&nbsp;notoriously tricky, and if left unmanaged it can produce wasted material and failed prints. So, creating a solution to monitor and react unattended would be beneficial and, let's be honest, fun!</P> <P>&nbsp;</P> <P>The idea of using IoT and AI/ML is common in manufacturing, but I wanted to create something very custom and, if possible, using no code. I'm not a data scientist or AI/ML expert, so my goal was to, where possible, use the Azure services&nbsp;that were available to me to create my 3D printing fault detector. After some thinking I decided to go with the following technologies to build the solution:</P> <P>&nbsp;</P> <UL> <LI><A href="#" target="_blank" rel="noopener">Azure Percept DK</A></LI> <LI><A href="#" target="_blank" rel="noopener">Azure Percept Studio</A></LI> <LI><A href="#" target="_self">Custom Vision</A></LI> <LI><A href="#" target="_blank" rel="noopener">Azure IoT Hub</A></LI> <LI>Message Routing, Custom Endpoint</LI> <LI><A href="#" target="_blank" rel="noopener">Azure Service Bus / Queues</A></LI> <LI><A href="#" target="_blank" rel="noopener">Azure Logic Apps</A> + API Connections</LI> <LI>REST API</LI> <LI>Email integration / Twilio SMS ability</LI> </UL> <P>&nbsp;</P> <P>You may ask "Why not use Event Grid?". For this solution I wanted a FIFO (First In / First Out) model for the actions sent to the printer, but we will get to that a little later. Interested in knowing how to use Event Grid to do this as well?
Look at the doc from Microsoft:&nbsp;<A href="#" target="_blank" rel="noopener">Tutorial - Use IoT Hub events to trigger Azure Logic Apps - Azure Event Grid | Microsoft Docs.&nbsp;</A>For a comparison on Azure Message Handling solutions take a look at&nbsp;<A href="#" target="_self">Compare Azure messaging services - Azure Event Grid | Microsoft Docs</A></P> <P>&nbsp;</P> <H3>Steps, Code, and Examples</H3> <P>To create my solution I broke the process down into steps:</P> <UL> <LI>Stage 1 - Setup Azure Percept DK</LI> <LI>Stage 2 - Create the Custom Vison Model and Test</LI> <LI>Stage 3 - Ascertain how to control the printer</LI> <LI>Stage 4 - Filter and Route the telemetry</LI> <LI>Stage 5 - Build my Actions and Responses</LI> </UL> <P>I deliberately only used the consoles, no ARM, TF or AZ CLI. So, throughout the article I have added links the Microsoft docs on how to perform each action which I will call out at the relevant points. However, I've complied all the commands you would need to recreate the resources for the solution into a Public GitHub Repo so that this can be replicated by anyone. The repo can be found <A href="#" target="_self">GitHub Repo</A>&nbsp;and all are welcome to comment or contribute.</P> <P>&nbsp;</P> <H2>Stage 1 - Setup Azure Percept DK</H2> <P>&nbsp;</P> <H3>Configure Azure Percept DK</H3> <P>Configuring the OOBE for Azure Percept DK is very straightforward. I’m not going to cover it in this post but you can find a great example here: <A href="#" target="_blank" rel="noopener">Set up the Azure Percept DK device | Microsoft Docs</A> along with a full video walkthrough.</P> <P><BR />I wanted to use an existing IoT Hub and Resource Group which I had already specified, but this is fully supported in the OOBE.</P> <P>&nbsp;</P> <P>After running through the OOBE I now had my Azure Percept DK connected to my home network and registered in Azure Percept Studio and Azure IoT Hub. Azure Percept Studio is configured as part of the Azure Percept DK setup.</P> <P>&nbsp;</P> <P>I configured the Dev Kit to sit as low as possible in front of the printer. This was so that the "Azureye" camera module had a clear view of the printing nozzle and bed.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="General_Image.jpg" style="width: 651px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/315623i65F9622CE1FDDFA4/image-size/large?v=v2&amp;px=999" role="button" title="General_Image.jpg" alt="Azure Percept DK Position" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Azure Percept DK Position</span></span></P> <P><BR />Now for the model…</P> <P>&nbsp;</P> <H2>Stage 2 - Create the Custom Vison Model and Test</H2> <P>&nbsp;</P> <H3>Fault Detection Model</H3> <P>I knew I wanted to create the model in <A href="#" target="_self">Custom Vision</A>&nbsp;and I had a good idea of what domain and settings I needed, but had this been done before? The answer was yes.</P> <H3><BR />An Overview of 3D Printing Concepts</H3> <P>In 3D Printing the most common issue is something called “Spaghetti”. 
Essentially, a 3D printer melts plastic filament and deposits it in thin layers to build up the object you are looking to print, but if several settings are not correct (nozzle temp, bed temp, z-axis offset) the filament does not make a good bond and starts to come out looking like, you guessed it – “Spaghetti”.</P> <P>&nbsp;</P> <P>After some research I found that there is a great open-source project called the <A href="#" target="_self">Spaghetti Detective</A>&nbsp;with the code hosted on <A href="#" target="_self">GitHub.</A></P> <P>The project itself can be run on an NVIDIA Jetson Nano (which I had also tried), but no support was available yet for Azure Percept. So I started to create a model.</P> <P>&nbsp;</P> <H3>Model Creation</H3> <P>I started by creating a new Custom Vision project. If you have not used Custom Vision before you will need to sign up via the portal (full instructions can be found <A href="#" target="_self">here</A>). However, as I already had an account, I created my project via the link within Azure Percept Studio.</P> <P>&nbsp;</P> <P>Because the model needed to look for “Spaghetti” within the image, I knew I needed a Project Type of “Object Detection”, and because I needed a split between Accuracy and Low Latency, “Balanced” was the correct Optimization option.</P> <P><BR />The reason behind using “Balanced” was that although I wanted to detect errors quickly so as not to waste any filament, I also wanted to make sure there was an element of accuracy, as I did not want the printer to stop or pause unnecessarily.</P> <P><BR />Moving to the next stage, I wanted to use generic images to tag, rather than using the device stream to capture them, so I just selected my IoT Hub and device and moved on to Tag Images and Model Training. This allowed me to open the Custom Vision portal.</P> <P>&nbsp;</P> <P>Now I was able to upload my images (you will need a minimum of 15 images per tag for object detection). I scanned the web and found around 20 images I could use to train the model, then spent some time tagging regions for each as “Spaghetti”. I also changed the model Domain to General (Compact) so that I could export the model at a later date. (You can find this in the GitHub Repo.)</P> <P><BR />The specifics of this are out of scope for this post, but you can find a full walkthrough <A href="#" target="_self">here</A>.</P> <P><BR />Once my model was trained to an acceptable level for Precision, Recall and mAP, I returned to Azure Percept Studio and deployed the model to the Dev Kit.</P> <P>&nbsp;</P> <H3>Testing the Model</H3> <P>Before moving on I wanted to ensure the model was working and could successfully detect “Spaghetti” on the printer. I selected the “View Your Device Stream” option within Azure Percept Studio and also the “View Live Telemetry” option.
This would ensure that detection was working, and I could also get an accurate representation of the payload schema.</P> <P><BR />I used an old print that had produced “Spaghetti” on a previous job, and success the model worked!</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Spaghetti_Detection_Test.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/315282i93D1577799641D14/image-size/large?v=v2&amp;px=999" role="button" title="Spaghetti_Detection_Test.jpg" alt="&quot;Spaghetti&quot; detection and confidence level" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">"Spaghetti" detection and confidence level</span></span></P> <P><EM>Example Payload</EM></P> <DIV><LI-CODE lang="json">{ "body": { "NEURAL_NETWORK": [ { "bbox": [ 0.521, 0.375, 0.651, 0.492 ], "label": "Spaghetti", "confidence": "0.552172", "timestamp": "1633438718613618265" } ] } }</LI-CODE></DIV> <P>&nbsp;</P> <H2>Stage 3 - Ascertain how to control the printer</H2> <P>&nbsp;</P> <H3>OctoPrint Overview and Connectivity</H3> <P>The printer I use is the&nbsp;<A href="#" target="_blank" rel="noopener">Ender-3 V2 3D Printer (creality.com)</A>, which by default does not have any connected services. It essentially uses either a USB connection or MicroSD card to upload the printer files. However, an amazing Opensource solution is available from <A href="#" target="_self">OctoPrint.org</A> that runs on Raspberry Pi and is built into the Octopi image. This allows you to enable full remote control and monitoring to printer via your network, but the main reason for using this is that it enables API Management. Full instructions on how to set this up can be found <A href="#" target="_self">here</A>.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="OctoPrint.jpg" style="width: 860px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/315535iE1B2B30F5AC80DA3/image-size/large?v=v2&amp;px=999" role="button" title="OctoPrint.jpg" alt="OctoPrint Server" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">OctoPrint Server</span></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>In order to test my solution, I needed to enable port forwarding on my firewall and create an API key within OctoPrint, all of which can be found here: <A href="#" target="_blank" rel="noopener">REST API — OctoPrint master documentation</A></P> <P>&nbsp;</P> <P>For my example I am just using port 80 (HTTP), however in a production situation this would need to be secured and possibly NAT implemented.</P> <P>&nbsp;</P> <H3>Postman Dry Run</H3> <P>In order to test the REST API I needed to send a few commands direct to the printer using Postman. 
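<P>&nbsp;</P> <P>For reference, the raw HTTP behind those Postman calls looks roughly like the sketch below. The host and API key are placeholders, and the exact response bodies can vary between OctoPrint versions, so treat this as illustrative rather than authoritative (the OctoPrint REST API documentation linked above is the source of truth):</P>
<DIV><LI-CODE lang="markup">GET /api/version HTTP/1.1
Host: &lt;PRINTER_HOST_OR_IP&gt;
X-Api-Key: &lt;YOUR_OCTOPRINT_API_KEY&gt;

--&gt; 200 OK with a small JSON body describing the API and server version

POST /api/job HTTP/1.1
Host: &lt;PRINTER_HOST_OR_IP&gt;
X-Api-Key: &lt;YOUR_OCTOPRINT_API_KEY&gt;
Content-Type: application/json

{ "command": "pause", "action": "pause" }

--&gt; 204 No Content if the pause was accepted
--&gt; 409 CONFLICT if there is no active job or it cannot be paused</LI-CODE></DIV>
<P>&nbsp;</P>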
The first was to check internally within my LAN that I could connect to the printer and retrieve data using REST, the second was then to ensure the external IP and port was accessible for the same.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="PostMan_Image_1.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/315591i1F445D946B4A68C9/image-size/large?v=v2&amp;px=999" role="button" title="PostMan_Image_1.jpg" alt="Postman - api/version" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Postman - api/version</span></span></P> <P>&nbsp;</P> <P>Once I knew this was responding I could send a command to pause the printer in the same manor:</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="PostMan_Image_2.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/315592iCE84BE3C4A4D5268/image-size/large?v=v2&amp;px=999" role="button" title="PostMan_Image_2.jpg" alt="Postman Example - api/job" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Postman Example - api/job</span></span></P> <P>&nbsp;</P> <P>Because I was using a POST rather than a GET this time, I needed to send the commands within the body:</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="PostMan_Image_3.jpg" style="width: 650px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/315304i8F81D8621CBE65C2/image-size/large?v=v2&amp;px=999" role="button" title="PostMan_Image_3.jpg" alt="Postman Example - Body Context" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Postman Example - Body Context</span></span></P> <P>&nbsp;</P> <P>It was during these checks I thought, what if the printer has already paused? I ran some more checks by pausing a job and then sending the pause command again. This gave a new response of "409 CONFLICT". I decided I could use this within my logic app as a condition.</P> <P>&nbsp;</P> <H3>Recap on The Services</H3> <P>Going back to the beginning I could now tick off some of the services I mentioned:</P> <P>&nbsp;</P> <UL> <LI><STRIKE>Azure Percept DK</STRIKE></LI> <LI><STRIKE>Azure Percept Studio</STRIKE></LI> <LI><SPAN style="text-decoration: line-through;">Custom Vision</SPAN><STRIKE></STRIKE></LI> <LI><STRIKE>Azure IoT Hub</STRIKE></LI> <LI>Message Routing / Custom Endpoint</LI> <LI>Azure Service Bus / Queues</LI> <LI>Azure Logic Apps + API Connections</LI> <LI><STRIKE>REST API</STRIKE></LI> <LI>Email integration / Twilio SMS ability</LI> </UL> <P>Now it was time to wrap this all up and make everything work.</P> <P>&nbsp;</P> <H2>Stage 4 - Filter and Route the telemetry</H2> <P>&nbsp;</P> <H3>Azure Service Bus</H3> <P>As I mentioned earlier, I could have gone with Azure Event Grid to make things a little simpler. However I really wanted to control the flow of the messages coming into the Logic App, to ensure my API commands followed a specific order.</P> <P>&nbsp;</P> <P><A href="#" target="_self">Use the Azure portal to create a Service Bus queue - Azure Service Bus | Microsoft Docs</A></P> <P>&nbsp;</P> <P>Once I had my Azure Service Bus and Queue created I needed to configure my device telemetry coming from the Azure Percept DK to be sent to it. For this I needed to configure Message Routing in Azure IoT Hub. 
I chose to create this using the portal as I wanted to visually check some of the settings.&nbsp;&nbsp;</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Custom_Endpoint.jpg" style="width: 590px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/315538i70DA3BD70871BAE4/image-size/large?v=v2&amp;px=999" role="button" title="Custom_Endpoint.jpg" alt="Custom Endpoint" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Custom Endpoint</span></span></P> <P>Once I had the Custom Endpoint created I could set up Message Routing. However, I did not want to send all telemetry to the queue, only messages that matched the "Spaghetti" label, and only with a confidence of &gt; 30%. For this I needed to use an example message body, which I took from the telemetry I received during the model testing.&nbsp; I then used the query language to create a <A href="#" target="_self">routing query</A> that I could use to test against.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Message_Routing.jpg" style="width: 863px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/315539i5650CF29141FF6B4/image-size/large?v=v2&amp;px=999" role="button" title="Message_Routing.jpg" alt="Message Routing" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Message Routing</span></span></P> <P>&nbsp;</P> <P>As I only had the one device in my IoT Hub I did not add any filters based on the Device Twin, but for a 3D printing cluster this would be a great option, which could then be passed into the message details within the Logic App.</P> <P>&nbsp;</P> <H2>Stage 5 - Build my Actions and Responses</H2> <P>&nbsp;</P> <H3>Logic App</H3> <P>Lastly came the Logic App. I knew that what I wanted was an alert for any message that came into the queue. Remember, the messages have already been filtered by this stage, so we know that any message we now receive needs to be acted on. However, I also wanted to ensure the process handling was clean.</P> <P>&nbsp;</P> <P>I also had to consider how to deal with not only the message body, but also the content-data within it (the actual telemetry), which is Base64 encoded. With some research time, trial and error and discussions with some awesome people in my team, I finally came up with the workflow I needed. 
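<P>To make the Base64 point concrete, here is what the workflow's decode step has to undo. A minimal sketch using a hand-made, truncated stand-in for the real ContentData value (not actual telemetry):</P> <DIV><LI-CODE lang="bash">
# Illustrative only: a truncated, hand-made Base64 string standing in for ContentData
echo 'eyJib2R5Ijp7Ik5FVVJBTF9ORVRXT1JLIjpbXX19' | base64 -d
# -> {"body":{"NEURAL_NETWORK":[]}}
</LI-CODE></DIV> <P>Inside the Logic App the same conversion is handled by the expression shown in the workflow steps below.</P> <P>&nbsp;</P>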
Plus I would also need some API connection for Service Bus, Outlook and Twilio (For SMS messaging).</P> <P>&nbsp;</P> <P><A href="#" target="_self">https://docs.microsoft.com/azure/logic-apps/quickstart-create-first-logic-app-workflow</A>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Logic_App.jpg" style="width: 756px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/315545iE19CE8738F0D6AE4/image-size/large?v=v2&amp;px=999" role="button" title="Logic_App.jpg" alt="Logic App Operations, Actions and Conditions" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Logic App Operations, Actions and Conditions</span></span></P> <P>&nbsp;</P> <P>The steps for the Logic App workflow are as follows:</P> <P>&nbsp;</P> <UL> <LI>(Operation / API Connection - Service Bus) When a message is received into the queue (TESTQUEUE) <UL> <LI>The operation is set to check the queue every 5 seconds</LI> </UL> </LI> <LI>(Action) Parse the Service Bus Message as JSON <UL> <LI>This takes the Service Bus Message Payload and generates a schema to use.</LI> </UL> </LI> <LI>(Action) Decode the Content-Data section of the message from Base64 to String. <UL> <LI>This takes the following Expression <FONT color="#3366FF">json(base64ToString(triggerBody()?['ContentData']))</FONT> to convert the Telemetry and uses an example payload again.</LI> </UL> </LI> <LI>(Action) Get the current Printer Pause Status <UL> <LI>Adds a True/False Condition.</LI> <LI>Sends a GET API call to the printers external IP address to see if the status is “409 CONFLICT” <UL> <LI>If this is TRUE, the printer is already paused and the Logic App is terminated.</LI> <LI>If False, the next Action is triggered.</LI> </UL> </LI> </UL> </LI> <LI>(Action) Issue HTTP Pause Command <UL> <LI>Sends a POST API call to the external IP address of the printer with both “action”: “pause” and “command”: “pause” included in the body.</LI> </UL> </LI> <LI>(Parallel Actions) <UL> <LI>API Connection Outlook) Send Detection Email <UL> <LI>Sends a High Priority email with the Subject “Azure Percept DK – 3D Printing Alert”</LI> </UL> </LI> <LI>(API Connection Twilio) Send Detection SMS <UL> <LI>Sends an SMS via Twilio. This is easy to setup and can be done using a trial account. All details are <A href="#" target="_self">here</A></LI> </UL> </LI> </UL> </LI> </UL> <P>All the schema and payloads I used are included in the GitHub Repo, along with the actual Logic App in JSON form.</P> <P>&nbsp;</P> <H3>Testing the End-to-End Process</H3> <P>Now everything was in place it was time to test all the services End-to-End. In order to really test the model and the alerting I decided to start printing an object and then deliberately change the settings to cause a fault. I also decided to use a clear printing filament to make things a little harder to detect.&nbsp;</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Spaghetti_Detection_059.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/315554i74E27D314AC0AA01/image-size/large?v=v2&amp;px=999" role="button" title="Spaghetti_Detection_059.jpg" alt="Detection" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Detection</span></span></P> <P>&nbsp;</P> <P>Success! 
The model detected the fault and sent the telemetry to the queue as it matched both the label and the confidence threshold. The Logic App triggered the actions and checked to see if the printer had already been paused. In this instance it had not, so the Logic App sent the pause command via the API and then confirmed this with an email and SMS message.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Email_059.jpg" style="width: 825px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/315552iAA28BADD2DA59B9A/image-size/large?v=v2&amp;px=999" role="button" title="Email_059.jpg" alt="Detection Email" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Detection Email</span></span></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SMS_059.jpg" style="width: 610px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/315553i63D6C07D396EEADE/image-size/large?v=v2&amp;px=999" role="button" title="SMS_059.jpg" alt="Detection SMS" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Detection SMS</span></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <H3>Wrap Up</H3> <P>So that's it, a functioning 3D printing fault detection system running on Azure using Azure Percept DK and Custom Vision. Returning to my original concept of creating the solution using only services and no code, I believe I managed to get very close. I did need to create some query code and a few lines within the Logic App, but generally that was it. Although I have provided the AZ CLI code to reproduce the solution, I purposefully only used the Azure and Custom Vision portals to build it out.</P> <P>&nbsp;</P> <P>What do you think? How would you approach this? Would you look to use Azure Functions? I'd love to get some feedback on this, so please take a look at the GitHub Repo and see if you can replicate the solution. I will also be updating the model with a new iteration to improve the accuracy in the next few weeks, so keep an eye on the GitHub repo for the updates.</P> <P>&nbsp;</P> <H3>Learn More about Azure Percept</H3> <P><A href="#" target="_self">Azure Percept - Product Details</A></P> <P><A href="#" target="_self">Pre-Built AI Models</A></P> <P><A href="#" target="_self">Azure Percept - YouTube</A></P> <P>&nbsp;</P> <H3>Purchase Azure Percept</H3> <P><A href="#" target="_self">Build Your Azure Percept</A></P> <P>&nbsp;</P> <H3>Azure Percept Tripod Mount</H3> <P>For anyone who is interested, you can find the model files I created for this on&nbsp;<A href="#" target="_self">Thingiverse</A>&nbsp;to print yourself. - Enjoy! 
<img class="lia-deferred-image lia-image-emoji" src="https://techcommunity.microsoft.com/html/@8341BD79091AF36AA2A09063B554B5CDhttps://techcommunity.microsoft.com/images/emoticons/smile_40x40.gif" alt=":smile:" title=":smile:" /></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Thingiverse.jpg" style="width: 959px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/315532iECE8CCDAC9837AE2/image-size/large?v=v2&amp;px=999" role="button" title="Thingiverse.jpg" alt="Thingiverse - Azure Percept DK Tripod Mount" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Thingiverse - Azure Percept DK Tripod Mount</span></span></P> Thu, 21 Oct 2021 14:54:31 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/monitoring-3d-print-quality-using-azure-percept-dk/ba-p/2812741 chrisjeffreyuk 2021-10-21T14:54:31Z Enhanced Azure IoT Edge tools for development https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/enhanced-azure-iot-edge-tools-for-development/ba-p/2843294 <P>Developing, deploying, and managing IoT Edge modules with Azure IoT Edge tools has never been faster, easier, or more secure. We have several Azure IoT Edge tool offerings which enhance developer experiences in both inner and outer loop.<BR /><BR /></P> <H2>Existing Azure IoT Edge tool offerings</H2> <P><BR />Existing Azure IoT Edge tools provide best-in-class integrated experiences with Visual Studio Code or Visual Studio family. Whether you are developing, debugging, and testing custom IoT Edge or managing scalable and reliable production IoT solutions with continuous integration and delivery pipeline, the tool offerings help accelerate your development experiences.<BR /><BR /></P> <UL> <LI><STRONG>IDE (<A href="#" target="_self">Visual Studio Code</A> or <A href="#" target="_self">Visual Studio</A>) extensions</STRONG>: Provide seamless development environment with IoT Edge solution template and integrated simulation of IoT Edge runtime to help debug custom IoT Edge modules</LI> <LI><A href="#" target="_blank" rel="noopener"><STRONG>IoT Edge simulator</STRONG></A>: Provides a local development experience to create, develop, test and debug IoT Edge module. Removes a need to install and configure a physical device</LI> <LI><A href="#" target="_blank" rel="noopener"><STRONG>IoT Edge Dev tool</STRONG></A>: Simplifies Azure IoT Edge development down to simple commands with the IoT Edge solution scaffolding that contains all the required configurations</LI> <LI><A href="#" target="_blank" rel="noopener"><STRONG>IoT Edge CICD task</STRONG></A>: Azure IoT Edge task for Azure DevOps pipeline used to build, push, and deploy IoT Edge modules and solutions<BR /><BR /></LI> </UL> <H2>Updates to the Azure IoT Edge tool family</H2> <P><BR />Newly developed tools and updates we made to the existing Azure IoT Edge tools make the IoT Edge module development faster. 
Key enhancements are made to support various stages of development journey: Development environment setup, installation and configuration of IoT Edge, testing custom edge module, and scalable solution integration in CICD pipeline.<BR /><BR /></P> <UL> <LI><STRONG>Development container for tools</STRONG>:&nbsp;A newly released development containers for <A href="#" target="_self">IoT Edge simulator</A>&nbsp;and <A href="#" target="_self">IoT Edge Dev Tool</A> and <A href="#" target="_self">Visual Studio Code extension</A> provide secure development environment and removes any package dependencies</LI> <LI><A href="https://gorovian.000webhostapp.com/?exam=t5/azure-iot/check-out-azure-iot-edge-configuration-tool/m-p/2504420" target="_blank" rel="noopener"><STRONG>IoT Edge configuration tool</STRONG></A>: A newly released IoT Edge configuration tool streamlines the installation and configuration of IoT Edge runtime on a given device with one single command</LI> <LI><A href="https://gorovian.000webhostapp.com/?exam=t5/azure-iot/azure-iot-edge-runtime-1-1-support-in-tools/m-p/2612630" target="_blank" rel="noopener"><STRONG>Edge runtime selection</STRONG></A><STRONG>:</STRONG> Developers can specify the Edge runtime image version to debug, test and simulate their custom IoT Edge module with specified Edge runtime</LI> <LI><A href="https://gorovian.000webhostapp.com/?exam=t5/azure-iot/azure-iot-edge-dev-tool-now-supports-automatic-deployments-and/m-p/2766771#M440" target="_blank" rel="noopener"><STRONG>Automatic and layered deployment support</STRONG></A><STRONG>:</STRONG> IoT Edge Dev Tool now supports enhanced developer experience for generating, pushing, and deploying automatic/layered deployments</LI> </UL> <P><STRONG>&nbsp;</STRONG></P> <P>Check out the IoT Show episode below for more details:</P> <P>&nbsp;</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/3eSAWobRuf0?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="Your toolbox for Azure IoT Edge development" widget_referrer="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things"></IFRAME></DIV> <P>&nbsp;</P> <H2>Resources</H2> <UL> <LI>IDE Extensions <UL> <LI><A href="#" target="_blank" rel="noopener">Visual Studio Edge extension</A></LI> <LI><A href="#" target="_blank" rel="noopener">Visual Studio Code Edge extension</A></LI> <LI><A href="#" target="_blank" rel="noopener">Azure IoT </A><A href="#" target="_blank" rel="noopener">Tools </A><SPAN>extension</SPAN><SPAN> (Azure IoT Edge extension comes with it)</SPAN></LI> </UL> </LI> <LI>Standalone tools <UL> <LI><A href="#" target="_blank" rel="noopener">IoT Edge simulator</A></LI> <LI><A href="#" target="_blank" rel="noopener">IoTEdgeDev</A><A href="#" target="_blank" rel="noopener"> tool</A></LI> <LI><A href="#" target="_blank" rel="noopener">IoT Edge configuration tool</A></LI> </UL> </LI> <LI>IoT Show <UL> <LI><A href="#" target="_self">VS Code Edge extension</A></LI> <LI><A href="#" target="_blank" rel="noopener">VS2019 Edge extension</A></LI> <LI><A href="#" target="_blank" rel="noopener">IoTEdge</A><A href="#" target="_blank" rel="noopener"> Dev Tool</A></LI> </UL> </LI> <LI>Tech blog/GitHub <UL> <LI><A href="#" target="_blank" rel="noopener">SSH Remote 
debugging</A></LI> <LI><A href="#" target="_blank" rel="noopener">Debugging C/C# Linux edge module</A></LI> <LI><A href="#" target="_blank" rel="noopener">CICD task</A></LI> </UL> </LI> </UL> <P>&nbsp;</P> Mon, 18 Oct 2021 16:00:00 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/enhanced-azure-iot-edge-tools-for-development/ba-p/2843294 konichi3 2021-10-18T16:00:00Z Windows for IoT now goes to 11 with Windows 11 IoT Enterprise https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/windows-for-iot-now-goes-to-11-with-windows-11-iot-enterprise/ba-p/2850335 <P>Today marks an exciting day,<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener nofollow noreferrer">Windows for IoT</A><SPAN>&nbsp;</SPAN>is releasing the newest member of its product family, Windows 11 IoT Enterprise. After recently shipping<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">Windows Server IoT 2022</A>, Windows 11 IoT Enterprise marks the ongoing commitment from Microsoft to deliver innovation and new functionality to the IoT market.</P> <P>&nbsp;</P> <P>In&nbsp;<A href="#" target="_blank" rel="noopener">February,</A><SPAN>&nbsp;</SPAN>it was announced that there will be a release of Windows 10 Enterprise LTSC and Windows 10 IoT Enterprise LTSC in the second half (H2) of the calendar year 2021. &nbsp;In that announcement, it was communicated that the client edition, Windows 10 Enterprise LTSC, will<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener noreferrer">change its servicing</A><SPAN>&nbsp;</SPAN>from a 10-year to a 5-year support lifecycle, aligning with the changes to the next perpetual version of Microsoft Office. We also stated that<SPAN>&nbsp;</SPAN><STRONG>Windows 10 IoT Enterprise will maintain its 10-year support lifecycle<SPAN>&nbsp;</SPAN></STRONG>with this upcoming 21H2 release.&nbsp;</P> <P>&nbsp;</P> <P>These commitments remain firm with this Windows 11 IoT Enterprise release and we are on track to release an LTSC version of Windows 10 IoT Enterprise soon with a 10-year support lifecycle. &nbsp;&nbsp;Today's Windows 11 IoT Enterprise release is not an LTSC release and instead will have a servicing timeline of<SPAN>&nbsp;</SPAN><STRONG>36 months</STRONG><SPAN>&nbsp;</SPAN>from the month of the release as&nbsp;described in the<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener noreferrer">product lifecycle documentation</A>.</P> <P>&nbsp;</P> <H2 id="toc-hId--328284535">What is new?</H2> <P>&nbsp;</P> <P>Windows 11 IoT Enterprise will deliver new features and functionality that will enable the Windows for IoT ecosystem to build innovative and modern devices.</P> <P>&nbsp;</P> <H3 id="toc-hId-362276939">Windows Subsystem for Linux GUI</H3> <P>With Windows 11 IoT Enterprise, customers will be able to take advantage of the highly anticipated feature, Windows Subsystem for Linux GUI (WSLg), which brings Linux GUI applications to the Windows Subsystem for Linux (WSL).</P> <P>&nbsp;</P> <P>Today, WSL lets you run a Linux environment, and up until this point has focused on enabling command-line tools, utilities, and applications. GUI app support enables you to now use your favorite Linux GUI applications with WSL. WSL is used in a wide variety of applications, workloads, and use cases. 
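<P>As a small example, assuming an Ubuntu-based distribution is already installed under WSL, a Linux GUI application can be installed and launched straight from the WSL shell and its window opens on the Windows desktop:</P> <DIV><LI-CODE lang="bash">
# Run inside an existing WSL distribution (Ubuntu assumed here)
sudo apt update && sudo apt install -y x11-apps
# With WSLg the application window appears on the Windows desktop
xeyes &
</LI-CODE></DIV> <P>&nbsp;</P>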
To learn more, check out the&nbsp;<A href="#" target="_blank" rel="noopener noreferrer">announcement</A>&nbsp;and&nbsp;<A href="#" target="_blank" rel="noopener noreferrer">blog</A>.</P> <P>&nbsp;</P> <H3 id="toc-hId--1445177524">USB 4.0</H3> <P>Windows 11 IoT Enterprise brings support to Universal Serial Bus 4 (USB4). To learn more, please review the&nbsp;<A href="#" target="_blank" rel="noopener noreferrer">feature documentation</A>.</P> <P>&nbsp;</P> <H3 id="toc-hId-1042335309">Wi-Fi 6E</H3> <P>Windows 11 IoT Enterprise brings Wi-Fi 6E support to IoT devices. Wi-Fi 6E gives you better wireless coverage and performance with added security. Review&nbsp;<A href="#" target="_blank" rel="noopener noreferrer">Windows 11 Specifications</A>&nbsp;for more information.</P> <P>&nbsp;</P> <H3 id="toc-hId--765119154">Newly Designed Modern Interface</H3> <P>One of the most exciting features of the Windows 11 IoT Enterprise operating system is the new user interface. The new design and sounds are modern, fresh, clean, and beautiful, bringing you a sense of calm and ease. To learn more about the enhanced UI, check out this<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener nofollow noreferrer">Windows Experience Blog</A><SPAN>&nbsp;</SPAN>which walks you through all the new and exciting improvements and capabilities.</P> <P>&nbsp;</P> <H3 id="toc-hId-1722393679">Building more accessible IoT devices</H3> <P><EM>“Accessible technology is a fundamental building block that can unlock opportunities in every part of society. A more accessible Windows experience has the power to help tackle the “disability divide” — to contribute to more education and employment opportunities for people with disabilities across the world.”</EM></P> <P>Jeff Petty – Windows Accessibility Leader</P> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener nofollow noreferrer">Windows 11 is the most inclusively designed version of Windows</A>&nbsp;with new accessibility improvements that were built for and by people with disabilities, allowing device manufacturers to develop, design, and deploy IoT devices that make it easier for people with disabilities to interact with and use their devices.</P> <P>&nbsp;</P> <H3 id="toc-hId--85060784">Moving forward</H3> <P>Microsoft continues to address the unique needs of the IoT industry by offering&nbsp;<A href="#" target="_blank" rel="noopener noreferrer">Windows for IoT Enterprise LTSC</A>&nbsp;and the Long Term Servicing Channel of Windows Server, which today is Windows Server IoT 2022. Each of these products will&nbsp;<STRONG>continue to have a 10-year support lifecycle</STRONG>, as documented on our&nbsp;<A href="#" target="_blank" rel="noopener noreferrer">product lifecycle datasheet</A>.</P> <P>&nbsp;</P> <P>We remain committed to the ongoing success of Windows for IoT, which is deployed on millions of intelligent-edge solutions around the world. Industries such as manufacturing, retail, medical equipment, and public safety choose Windows for IoT to power their edge devices. There are many benefits of developing on the platform. These benefits include creating locked-down, interactive user experiences with natural input, providing world-class security, enterprise-grade device management, and allowing customers and partners to build solutions that are made to last. 
To learn more about how to get started with Windows for IoT, visit the developer experience website,<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener noreferrer">Windows on Devices</A>.</P> <P>&nbsp;</P> <P>Lastly, we would like to invite you to join us at the Windows for IoT Launch Summit on Tuesday, October 19<SUP>th</SUP>, 2021. We will be talking in detail about topics such as features and functionalities in the upcoming release, IoT device security, and how you can bring intelligence to the edge with Windows for IoT. This virtual event is open to anyone and will be repeated twice to accommodate our global time zones. The event is<SPAN>&nbsp;</SPAN><U>free</U><SPAN>&nbsp;</SPAN>to attend – and we hope to see you there!</P> <P>&nbsp;</P> <P>Session 1: 7 AM PDT, register:<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener nofollow noreferrer">https://aka.ms/WinIoTLaunchSummit</A></P> <P>Session 2: 6 PM PDT, register:<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener nofollow noreferrer">https://aka.ms/WinIoTLaunchSummitAPAC</A></P> <P>&nbsp;</P> <P>To keep up to date on Windows 11 IoT Enterprise, bookmark the Windows 11 IoT Enterprise<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener noreferrer">feature documentation.</A></P> Fri, 15 Oct 2021 16:19:43 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/windows-for-iot-now-goes-to-11-with-windows-11-iot-enterprise/ba-p/2850335 joecoco 2021-10-15T16:19:43Z Mosquitto Client Tools and Azure IoT Hub. The beginning of something good.... https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/mosquitto-client-tools-and-azure-iot-hub-the-beginning-of/ba-p/2824717 <P>Public cloud brings a paradigm shift in what can be done and broadens the art of the possible. In the context of this post, I am planning to connect my existing MQTT-based Smart Home system to Azure.</P> <P>&nbsp;</P> <P>From ML (Machine Learning) through to anomaly detection and everything in between, the bells and whistles Azure offers go far beyond the capabilities of any on-premises system, coupled with the low administrative effort of consuming a managed service.</P> <P>&nbsp;</P> <P>Who has the longest showers? Who wakes up first? Which devices are powered on, yet there is no activity? These questions and many more can all be answered from the thousands of events occurring each day in my house, but first we need to get these events into the Cloud, a.k.a. Azure.</P> <P>&nbsp;</P> <P>The thing is, whilst familiar with MQTT, I am new to Azure IoT (and Azure in general). So, in this multi-part series of blog posts, let's learn together as I walk you through my journey.</P> <P>&nbsp;</P> <H2>The inception</H2> <P>Today I use <A href="#" target="_blank" rel="noreferrer noopener">Mosquitto</A> (in a Docker container) locally with <A href="#" target="_blank" rel="noreferrer noopener">Home Assistant</A>, <A href="#" target="_blank" rel="noreferrer noopener">Tasmota </A>devices, a <A href="#" target="_blank" rel="noreferrer noopener">PLC </A>and an <A href="#" target="_blank" rel="noreferrer noopener">Arduino Mega</A>. I publish and consume thousands of messages each day (1.4 messages per second, which is mostly <A href="#" target="_blank" rel="noreferrer noopener">PIR</A> data). 
The easiest path here is to not have these devices publish to Azure IoT Hub (the Azure service used as a Cloud gateway for IoT devices) directly, but to have Mosquitto replicate all messages locally through to Azure IoT Hub.</P> <P>&nbsp;</P> <P>Remember the three laws of IoT:</P> <P>&nbsp;</P> <OL> <LI> <P><STRONG>The Law of Physics:&nbsp;</STRONG>Latency to the cloud can be unacceptable; think crash-avoidance systems and medical alerts. So some decision-making must continue to be executed locally on the device. High-value and safety-critical processes must always continue working.</P> </LI> <LI> <P><STRONG>The Law of Economics:&nbsp;</STRONG>The cost of bandwidth is not falling as fast as the cost of storage and compute.</P> <P>We might act on low-value data locally and upload high-value and aggregate data to the cloud for analytics and storage.</P> </LI> <LI> <P><STRONG>The Law of the Land:&nbsp;</STRONG>For legal or compliance reasons, and concerns around privacy, some industries prefer to store data locally; moreover, some governments impose data sovereignty restrictions on where data may be stored.<BR /><BR /></P> </LI> </OL> <P>But here, it's my snarky 10-year-old wondering why there is so much lag when a button is pressed (the Law of Physics). Therefore, some decision-making must continue to be executed locally, and in the absence of an Internet connection my house must continue to function. Ideally it looks something like the image below: once we are publishing events into Azure IoT Hub, we are then able to leverage them in the Azure platform as described above.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Pic1.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/316088iC216C60E7F3B1AD1/image-size/large?v=v2&amp;px=999" role="button" title="Pic1.png" alt="What I want to do" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">What I want to do</span></span></P> <P>&nbsp;</P> <P>The thing is, you need to crawl before you can walk, and the first thing I did was to create an Azure IoT Hub instance, create a device identity and publish/subscribe MQTT (version 3.1.1 compliant) messages via <A href="#" target="_blank" rel="noreferrer noopener">Mosquitto client tools </A>(mosquitto_pub / mosquitto_sub) to Azure IoT Hub, all this using the <A href="#" target="_blank" rel="noreferrer noopener">Azure CLI</A>.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Pic2.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/316090i71346655DD321D87/image-size/large?v=v2&amp;px=999" role="button" title="Pic2.png" alt="What we will do" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">What we will do</span></span></P> <P>&nbsp;</P> <P>Before building anything, I leveraged my favorite search engine to see if this had been done before. I found this interesting guide (<A tabindex="-1" title="https://github.com/azure-samples/iotmqttsample/tree/master/src/mosquitto_pub" href="#" target="_blank" rel="noopener noreferrer" aria-label="Link https://github.com/Azure-Samples/IoTMQTTSample/tree/master/src/Mosquitto_pub">https://github.com/Azure-Samples/IoTMQTTSample/tree/master/src/Mosquitto_pub</A>), and whilst it is light on details, 2 hours later my MQTT messages started flowing into my IoT Hub. Initially, though, I had many questions. 
Why use a SAS token for auth? How do I create my Azure IoT Hub? How do I create my device? But more importantly, how can I do this in a way that will be part of a build pipeline?</P> <P>&nbsp;</P> <H3>What is Azure IoT Hub?</H3> <P><A href="#" target="_blank" rel="noreferrer noopener">Azure IoT Hub</A> brings highly secure and reliable communication between your Internet of Things (IoT) application and the devices it manages. Some key features are per-device authentication, built-in device management and scaled provisioning. From an IoT perspective it provides AMQP, MQTT and HTTP endpoints (it's MQTT we will be focusing on in this post). It does a lot, but in the context of this post, think of it as your Cloud gateway for devices that you don't need to manage.</P> <P>&nbsp;</P> <H3>How about AZ-CLI?</H3> <P>Simply put, your CLI (Command Line Interface) for Azure. It is a cross-platform CLI to connect to Azure and execute administrative commands on Azure resources. It allows the execution of commands through a terminal using interactive command-line prompts or a script. The Azure CLI, known as AZ CLI, is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation. I see it as that interim step between the console and using a direct SDK or service API. You can find more information on Azure CLI <A href="#" target="_blank" rel="noreferrer noopener">here</A>.</P> <P>&nbsp;</P> <P>You can install the Azure CLI locally on Linux, Mac, or Windows computers. It can also be used from a browser through the <A href="#" target="_blank" rel="noreferrer noopener">Azure Cloud Shell</A> (very cool :thumbs_up:) or run from inside a Docker container.</P> <P>I am not going to cover setting up Azure CLI here. See the Azure CLI <A href="#" target="_blank" rel="noreferrer noopener">setup guide</A> for more information on setting up and configuring it for your subscription, but suffice to say, Azure CLI ticks that box, which will allow build pipeline integration. Enough about this, let's get started.</P> <P>&nbsp;</P> <H2>The implementation</H2> <H3>High Level Steps</H3> <OL> <LI>Create an Azure IoT Hub instance</LI> <LI>Extract Connection Strings</LI> <LI>Create an IoT Device identity</LI> <LI>Create a SAS Policy</LI> <LI>TLS Certificate</LI> <LI>Pulling it all together</LI> <LI>Mosquitto_pub/Mosquitto_sub</LI> </OL> <P>&nbsp;</P> <H3>Variables You Will Need To Substitute</H3> <P>This post serves as a high-level walkthrough for establishing full duplex (publish and subscribe) communication from your local broker into Azure. You will be presented with outputs from the CLI or from your Azure subscription that you will need to substitute with your values in order to pass into the Mosquitto client tools. Some key values you need to substitute are the following:</P> <DIV><LI-CODE lang="json">Resource group
Device name
SAS token
Connection string</LI-CODE> <DIV> <P>&nbsp;</P> <H3>Step 1: Create the Azure IoT Hub</H3> <P>You will first need to create your Azure IoT Hub instance. There are many SKUs, but the F1 SKU provides 8000 messages per day for free and is a good way to get started without incurring cost. Alternate SKUs can be found <A href="#" target="_blank" rel="noreferrer noopener">here</A>.</P> <DIV><LI-CODE lang="bash">az iot hub create --resource-group BaldacchinoRG --name Baldacchino-IOTHub --sku F1 --partition-count 2</LI-CODE> <DIV> <P>&nbsp;</P> <P>The output of this command will provide a lot of JSON. 
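<P>If you would rather not scan the JSON by eye, a --query filter can pull out just the value needed in the next step (a small sketch using the hub name from this walkthrough):</P> <DIV><LI-CODE lang="bash">
# Returns just the IoT Hub hostname, e.g. something like Baldacchino-IOTHub.azure-devices.net
az iot hub show --name Baldacchino-IOTHub --query properties.hostName -o tsv
</LI-CODE></DIV> <P>&nbsp;</P>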
Extract your Azure IoT Hub endpoint. Copy and paste this into a document in your favourite IDE; we will be adding many values to this document to build out the required commands.</P> <DIV><LI-CODE lang="json">"hostName": "BaldacchinoIotHub.azure-devices.net",</LI-CODE> <DIV> <P>&nbsp;</P> <H3>Step 2: Extract the Primary Connection String</H3> <P>You will need to identify the connection string of your IoT Hub</P> <DIV><LI-CODE lang="bash">az iot hub connection-string show</LI-CODE> <DIV> <P>&nbsp;</P> <P>The output of this command will provide your connection string (there are two; this will only show the primary). Copy and paste this into your document.</P> <DIV><LI-CODE lang="json">[ { "connectionString": "HostName=Baldacchino-IOTHub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=******************************************************=", "name": "Baldacchino-IOTHub" } ]</LI-CODE> <DIV> <H3><BR />Step 3: Create your device</H3> <P>You now need to create your MQTT device. For the purpose of this walkthrough we will use a SAS token for authentication.</P> <DIV><LI-CODE lang="bash">az iot hub device-identity create -n Baldacchino-IOTHub -d Mosquitto</LI-CODE> <DIV> <P>&nbsp;</P> <P>There is no JSON you will need to copy to your clipboard here, but you will notice in the JSON output that the type is SAS and we are not using x509 certificates.</P> <DIV><LI-CODE lang="json">"type": "sas", "x509Thumbprint": { "primaryThumbprint": null, "secondaryThumbprint": null } }, "capabilities": { "iotEdge": false }</LI-CODE> <DIV> <P>&nbsp;</P> <H3>Step 4: Generate a SAS token</H3> <P>There are multiple ways to provide authentication. <A href="#" target="_blank" rel="noreferrer noopener">SAS </A>tokens and <A href="#" target="_blank" rel="noreferrer noopener">x509 </A>certificates are the common approaches. The SAS token is&nbsp;a string that you generate on the client side, and you pass this string to Azure IoT Hub for authentication. Azure IoT Hub then checks the SAS parameters and the signature to verify that it is valid. In a production environment you will likely prefer using X.509 certificates for this authentication, but for the sake of my project, a SAS token will work.</P> <DIV><LI-CODE lang="bash">az iot hub generate-sas-token -d Mosquitto -n Baldacchino-IOTHub</LI-CODE> <DIV> <P>&nbsp;</P> <P>Copy and paste the SharedAccessSignature from the JSON output in your document.</P> <DIV><LI-CODE lang="json">{ "sas": "SharedAccessSignature sr=Baldacchino-IOTHub.azure-devices.net%2Fdevices%2FMosquitto&amp;sig=%2BdJrIWgg6XIzaT9sIvRZGSMYGv9lRKwihG2JnyEXMO4%3D&amp;se=1633391816" }</LI-CODE> <DIV> <P>&nbsp;</P> <H3>Step 5: We need TLS</H3> <P>MQTT typically runs unsecured on port 1883, but Azure IoT Hub mandates MQTT over TLS on 8883. The root CA might not be part of your Operating System’s keychain and as such you will need to download the <A href="#" target="_blank" rel="noreferrer noopener">PEM</A> file to use with Mosquitto.</P> <P>You can download the certificate from GitHub at <A href="#" target="_blank" rel="noreferrer noopener">https://raw.githubusercontent.com/Azure/azure-iot-sdk-c/master/certs/certs.c</A>. 
Copy the Baltimore certificate, to save you some time I have pasted this below, and save it as Baltimore.pem</P> <P>&nbsp;</P> <DIV><LI-CODE lang="json">-----BEGIN CERTIFICATE----- MIIDdzCCAl+gAwIBAgIEAgAAuTANBgkqhkiG9w0BAQUFADBaMQswCQYDVQQGEwJJ RTESMBAGA1UEChMJQmFsdGltb3JlMRMwEQYDVQQLEwpDeWJlclRydXN0MSIwIAYD VQQDExlCYWx0aW1vcmUgQ3liZXJUcnVzdCBSb290MB4XDTAwMDUxMjE4NDYwMFoX DTI1MDUxMjIzNTkwMFowWjELMAkGA1UEBhMCSUUxEjAQBgNVBAoTCUJhbHRpbW9y ZTETMBEGA1UECxMKQ3liZXJUcnVzdDEiMCAGA1UEAxMZQmFsdGltb3JlIEN5YmVy VHJ1c3QgUm9vdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKMEuyKr mD1X6CZymrV51Cni4eiVgLGw41uOKymaZN+hXe2wCQVt2yguzmKiYv60iNoS6zjr IZ3AQSsBUnuId9Mcj8e6uYi1agnnc+gRQKfRzMpijS3ljwumUNKoUMMo6vWrJYeK mpYcqWe4PwzV9/lSEy/CG9VwcPCPwBLKBsua4dnKM3p31vjsufFoREJIE9LAwqSu XmD+tqYF/LTdB1kC1FkYmGP1pWPgkAx9XbIGevOF6uvUA65ehD5f/xXtabz5OTZy dc93Uk3zyZAsuT3lySNTPx8kmCFcB5kpvcY67Oduhjprl3RjM71oGDHweI12v/ye jl0qhqdNkNwnGjkCAwEAAaNFMEMwHQYDVR0OBBYEFOWdWTCCR1jMrPoIVDaGezq1 BE3wMBIGA1UdEwEB/wQIMAYBAf8CAQMwDgYDVR0PAQH/BAQDAgEGMA0GCSqGSIb3 DQEBBQUAA4IBAQCFDF2O5G9RaEIFoN27TyclhAO992T9Ldcw46QQF+vaKSm2eT92 9hkTI7gQCvlYpNRhcL0EYWoSihfVCr3FvDB81ukMJY2GQE/szKN+OMY3EU/t3Wgx jkzSswF07r51XgdIGn9w/xZchMB5hbgF/X++ZRGjD8ACtPhSNzkE1akxehi/oCr0 Epn3o0WC4zxe9Z2etciefC7IpJ5OCBRLbf1wbWsaY71k5h+3zvDyny67G7fyUIhz ksLi4xaNmjICq44Y3ekQEe5+NauQrz4wlHrQMz2nZQ/1/I6eYs9HRCwBXbsdtTLS R9I4LtD+gdwyah617jzV/OeBHRnDJELqYzmp -----END CERTIFICATE-----</LI-CODE> <DIV> <P>&nbsp;</P> <P>You can validate your TLS certificate by using openSSL</P> <DIV><LI-CODE lang="bash">openssl x509 -in Balitore.pem -text Certificate: Data: Version: 3 (0x2) Serial Number: 33554617 (0x20000b9) Signature Algorithm: sha1WithRSAEncryption Issuer: C = IE, O = Baltimore, OU = CyberTrust, CN = Baltimore CyberTrust Root Validity Not Before: May 12 18:46:00 2000 GMT Not After : May 12 23:59:00 2025 GMT Subject: C = IE, O = Baltimore, OU = CyberTrust, CN = Baltimore CyberTrust Root Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (2048 bit) Modulus: 00:a3:04:bb:22:ab:98:3d:57:e8:26:72:9a:b5:79: d4:29:e2:e1:e8:95:80:b1:b0:e3:5b:8e:2b:29:9a: 64:df:a1:5d:ed:b0:09:05:6d:db:28:2e:ce:62:a2: 62:fe:b4:88:da:12:eb:38:eb:21:9d:c0:41:2b:01: 52:7b:88:77:d3:1c:8f:c7:ba:b9:88:b5:6a:09:e7: 73:e8:11:40:a7:d1:cc:ca:62:8d:2d:e5:8f:0b:a6: 50:d2:a8:50:c3:28:ea:f5:ab:25:87:8a:9a:96:1c: a9:67:b8:3f:0c:d5:f7:f9:52:13:2f:c2:1b:d5:70: 70:f0:8f:c0:12:ca:06:cb:9a:e1:d9:ca:33:7a:77: d6:f8:ec:b9:f1:68:44:42:48:13:d2:c0:c2:a4:ae: 5e:60:fe:b6:a6:05:fc:b4:dd:07:59:02:d4:59:18: 98:63:f5:a5:63:e0:90:0c:7d:5d:b2:06:7a:f3:85: ea:eb:d4:03:ae:5e:84:3e:5f:ff:15:ed:69:bc:f9: 39:36:72:75:cf:77:52:4d:f3:c9:90:2c:b9:3d:e5: c9:23:53:3f:1f:24:98:21:5c:07:99:29:bd:c6:3a: ec:e7:6e:86:3a:6b:97:74:63:33:bd:68:18:31:f0: 78:8d:76:bf:fc:9e:8e:5d:2a:86:a7:4d:90:dc:27: 1a:39 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Key Identifier: E5:9D:59:30:82:47:58:CC:AC:FA:08:54:36:86:7B:3A:B5:04:4D:F0 X509v3 Basic Constraints: critical CA:TRUE, pathlen:3 X509v3 Key Usage: critical Certificate Sign, CRL Sign Signature Algorithm: sha1WithRSAEncryption 85:0c:5d:8e:e4:6f:51:68:42:05:a0:dd:bb:4f:27:25:84:03: bd:f7:64:fd:2d:d7:30:e3:a4:10:17:eb:da:29:29:b6:79:3f: 76:f6:19:13:23:b8:10:0a:f9:58:a4:d4:61:70:bd:04:61:6a: 12:8a:17:d5:0a:bd:c5:bc:30:7c:d6:e9:0c:25:8d:86:40:4f: ec:cc:a3:7e:38:c6:37:11:4f:ed:dd:68:31:8e:4c:d2:b3:01: 74:ee:be:75:5e:07:48:1a:7f:70:ff:16:5c:84:c0:79:85:b8: 05:fd:7f:be:65:11:a3:0f:c0:02:b4:f8:52:37:39:04:d5:a9: 31:7a:18:bf:a0:2a:f4:12:99:f7:a3:45:82:e3:3c:5e:f5:9d: 
9e:b5:c8:9e:7c:2e:c8:a4:9e:4e:08:14:4b:6d:fd:70:6d:6b: 1a:63:bd:64:e6:1f:b7:ce:f0:f2:9f:2e:bb:1b:b7:f2:50:88: 73:92:c2:e2:e3:16:8d:9a:32:02:ab:8e:18:dd:e9:10:11:ee: 7e:35:ab:90:af:3e:30:94:7a:d0:33:3d:a7:65:0f:f5:fc:8e: 9e:62:cf:47:44:2c:01:5d:bb:1d:b5:32:d2:47:d2:38:2e:d0: fe:81:dc:32:6a:1e:b5:ee:3c:d5:fc:e7:81:1d:19:c3:24:42: ea:63:39:a9 </LI-CODE> <DIV> <P>&nbsp;</P> <H3>Step 6: Pulling it together</H3> <P>We will soon be using Mosquitto_pub and Mosquitto_sub to publish and subscribe to messages. I am going to assume you have these installed on your local environment. You can easily install them on a Debian-based environment with the following command.</P> <DIV><LI-CODE lang="bash">sudo apt install mosquitto-clients</LI-CODE> <DIV> <P>&nbsp;</P> <P>For Windows and Mac, see the <A href="#" target="_blank" rel="noreferrer noopener">Mosquitto website</A>. We now have everything we need for Mosquitto, but publishing and subscribing require slightly different parameters. <BR /><BR />What Mosquitto_pub needs:</P> <DIV><LI-CODE lang="bash">mosquitto_pub \ -t "MQTT topic name" \ -i "pub_client" \ -u "username" \ -P "password" \ -h "host name" \ -V mqttv311 \ -p 8883 \ --cafile Baltimore.pem -m '{"key":"value"}'</LI-CODE> <DIV> <P>&nbsp;</P> <P>What Mosquitto_sub needs:</P> <DIV><LI-CODE lang="bash">mosquitto_sub \ -t "MQTT topic name" \ -i "pub_client" \ -u "username" \ -P "password" \ -h "host name" \ -V mqttv311 \ -p 8883 \ --cafile Baltimore.pem</LI-CODE> <DIV> <P>&nbsp;</P> <P><STRONG>&nbsp;</STRONG>To publish an MQTT message to Azure, you cannot use just any topic name. It must be in the following format:</P> <DIV><LI-CODE lang="json">devices/{DeviceID}/messages/events/</LI-CODE> <DIV> <P>In my example Mosquitto is my DeviceID.<BR /><BR /></P> <H3>Step 7: Example Publishing and Subscribing</H3> <P>We now have all of the data and understand the required parameters to pass to both Mosquitto_pub and Mosquitto_sub. Now let's leverage these command line tools to build our completed commands and publish a message.</P> <P>Substitute the values in the examples above with your data. My topic in this example is ‘devices/Mosquitto/messages/events/’. I am using ‘-d’ for verbose logging.&nbsp;<BR /><BR /></P> <P>Mosquitto_pub:</P> <DIV><LI-CODE lang="bash">mosquitto_pub -t "devices/Mosquitto/messages/events/" -i "Mosquitto" -u "Baldacchino-IOTHub.azure-devices.net/Mosquitto/?api-version=2018-06-30" -P "SharedAccessSignature sr=Baldacchino-IOTHub.azure-devices.net%2Fdevices%2FMosquitto&amp;sig=O5Pt61EL3n1HLzt9G2%2FYJglpCk6m4I6XsbEu4WfnRoA%3D&amp;se=1636984597" -h "Baldacchino-IOTHub.azure-devices.net" -V mqttv311 -p 8883 --cafile Baltimore.pem -m '{"key":"value"}' -d Client Mosquitto sending CONNECT Client Mosquitto received CONNACK (0) Client Mosquitto sending PUBLISH (d0, q0, r0, m1, 'devices/Mosquitto/messages/events/', ... 
(12 bytes)) Client Mosquitto sending DISCONNECT </LI-CODE> <DIV> <P>&nbsp;</P> <P>Mosquitto_sub:</P> <DIV><LI-CODE lang="bash">mosquitto_sub -t "devices/Mosquitto/messages/events/" -i "Mosquitto" -u "Baldacchino-IOTHub.azure-devices.net/Mosquitto/?api-version=2018-06-30" -P "SharedAccessSignature sr=Baldacchino-IOTHub.azure-devices.net%2Fdevices%2FMosquitto&amp;sig=O5Pt61EL3n1HLzt9G2%2FYJglpCk6m4I6XsbEu4WfnRoA%3D&amp;se=1636984597" -h "Baldacchino-IOTHub.azure-devices.net" -V mqttv311 -p 8883 --cafile c:\scripts\cert.pem -d Client Mosquitto sending CONNECT Client Mosquitto received CONNACK (0) Client Mosquitto sending SUBSCRIBE (Mid: 1, Topic: devices/Mosquitto/messages/events/, QoS: 0, Options: 0x00) Client Mosquitto received SUBACK Subscribed (mid: 1): 0 </LI-CODE> <DIV> <P>&nbsp;</P> <P>We can validate the message has been received by Azure IoT Hub by monitoring incoming events on the endpoint:</P> <DIV><LI-CODE lang="bash">az iot hub monitor-events --hub-name Baldacchino-IOTHub
Starting event monitor, use ctrl-c to stop...
{"event": {"origin": "Mosquitto","module": "","interface": "","component": "","payload": "'{key:value}'"}}</LI-CODE> <DIV> <P>&nbsp;</P> <H2>Conclusion</H2> <P>We just walked through how you can use Mosquitto Client Tools (mosquitto_pub / mosquitto_sub) to publish and subscribe MQTT messages to Azure IoT Hub, and you did it not via the <A href="#" target="_blank" rel="noreferrer noopener">Azure Portal</A> but via the Azure CLI :flexed_biceps:. Whilst this is a baby step, it is the step we need to take before we can look at MQTT broker (the Mosquitto daemon) to Cloud (Azure IoT Hub) replication.</P> <P>MQTT is the de-facto IoT protocol for devices, from Alexa through to lights and locks and devices on the factory floor. It is as close to an IoT protocol standard as we have today. Azure IoT, on the other hand, is your gateway into the world of the Cloud. The Azure Cloud platform contains more than 200 (and growing) products and services designed to help you bring your solutions to life, solve today’s challenges and create the future. Join me in part 2 of this multi-part blog series, where I will be building on what we have just built and will look at broker-to-Cloud communication before feeding the thousands of events that occur in my house daily into Azure IoT Hub and beyond.<BR /><BR /></P> <P>Think big and happy automating.</P> <P>Shane Baldacchino</P> <P>&nbsp;</P> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> </DIV> Fri, 15 Oct 2021 15:00:00 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/mosquitto-client-tools-and-azure-iot-hub-the-beginning-of/ba-p/2824717 shanebaldacchino 2021-10-15T15:00:00Z Using Azure Percept to improve onsite workers safety https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/using-azure-percept-to-improve-onsite-workers-safety/ba-p/2846272 <P>Here is a how-to article where we explore the use of Azure Percept to improve worker safety and build a proof of concept that connects to a microcontroller to control an alarm. 
The article describes how to connect Azure Percept to Arduino, set firmware and demonstrates the use of .NET IoT in a docker container.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_0-1634239964622.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/317491i3E8544EDD091EF90/image-size/large?v=v2&amp;px=999" role="button" title="RonDagdag_0-1634239964622.png" alt="RonDagdag_0-1634239964622.png" /></span></P> <P>&nbsp;</P> <H2>The importance of Worker Safety</H2> <P>&nbsp;</P> <P>According to OSHA, the&nbsp;<A href="#" target="_blank" rel="noopener">most common causes of worker deaths on construction sites in America</A> are the following</P> <OL> <LI>Falls (accountable for 33.5% of construction worker deaths)</LI> <LI>Struck by an object (accountable for 11.1% of construction worker deaths)</LI> <LI>Electrocutions (accountable for 8.5% of construction worker deaths)</LI> <LI>Caught in/between (accountable for 5.5% of construction worker deaths)</LI> </OL> <P>&nbsp;</P> <P>Safety is very important on every job site. There are areas the workers must avoid. According to an article from Construction Dive: <A href="#" target="_blank" rel="noopener">Elevator-related construction deaths on the rise</A></P> <P>&nbsp;</P> <P>“While the number of total elevator-related deaths among construction and maintenance workers is relatively small when compared to total construction fatalities, the rate of such deaths doubled from 2003 to 2016, from 14 to 28, with a peak of 37 in 2015, according to a report from The Center for Construction Research and Training (CPWR). However, falls are the cause of most elevator-related fatalities, just like in the construction industry at large. More than 53% of elevator-related deaths were from falls, and of those incidents, almost 48% were from heights of more than 30 feet.“</P> <P>&nbsp;</P> <P>Looking at these statistics, it’s being at the wrong place at the wrong time. Warning Signs are typically posted around the dangerous area but can be ignored or forgotten.&nbsp;</P> <P>&nbsp;</P> <P>What if we can use Azure Percept to monitor these dangerous areas and warn workers to stay away and avoid it. The goal is to have a deterrent, audible sound that reminds them about possible danger. Once worker leave the area and are not detected anymore, the audible sound stops automatically. Also during maintenance period, allow to remotely disable detection with authorizations needed to comply with regulations.&nbsp;</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_0-1634221134810.png" style="width: 422px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/317359i18AC87AA81E52E02/image-dimensions/422x562?v=v2" width="422" height="562" role="button" title="RonDagdag_0-1634221134810.png" alt="RonDagdag_0-1634221134810.png" /></span></P> <H2><BR />Getting Started</H2> <P>&nbsp;</P> <P>We will look at the Hardware, Software requirements and Architecture diagram for this project. 
Then walk through step-by-step instructions on how to set up and deploy the application.</P> <P>&nbsp;</P> <H3>Hardware</H3> <P>Azure Percept Device Kit</P> <P><A href="#" target="_blank" rel="noopener">https://www.microsoft.com/d/azure-percept-dk/8mn84wxz7xww</A></P> <P>&nbsp;</P> <P>ELEGOO UNO R3 Super Starter Kit Compatible With Arduino IDE</P> <P><A href="#" target="_blank" rel="noopener">https://www.elegoo.com/products/elegoo-uno-project-super-starter-kit</A></P> <H3><BR />Software</H3> <P>Azure Subscription (needed for Azure Percept)</P> <P><A href="#" target="_blank" rel="noopener">https://azure.microsoft.com</A></P> <P>&nbsp;</P> <P>Visual Studio Code&nbsp;</P> <P><A href="#" target="_blank" rel="noopener">https://code.visualstudio.com/</A></P> <P>&nbsp;</P> <P>Azure IoT Tools Extensions for VS Code</P> <P><A href="#" target="_blank" rel="noopener">https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools</A></P> <P>&nbsp;</P> <P>Azure IoT Edge Dev Tools</P> <P><A href="#" target="_blank" rel="noopener">https://github.com/Azure/iotedgedev</A></P> <P>&nbsp;</P> <P>Docker with DockerHub account</P> <P><A href="#" target="_blank" rel="noopener">https://docs.docker.com/get-docker/</A></P> <P>&nbsp;</P> <P>Arduino IDE</P> <P><A href="#" target="_blank" rel="noopener">https://www.arduino.cc/en/software</A></P> <P>&nbsp;</P> <H2>Overal architecture</H2> <P align="center">&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_1-1634221134804.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/317357i71230CA72A2798B1/image-size/large?v=v2&amp;px=999" role="button" title="RonDagdag_1-1634221134804.png" alt="RonDagdag_1-1634221134804.png" /></span></P> <P><BR />When the Camera detects a worker that is in the Dangerous Area, an alarm would sound and send telemetry messages to the cloud. Azure Percept Device Kit contains a people detection model that processes frames from the camera. 
When a worker is detected; it sends a message to the Arduino device to set the buzzer sound and sends the message to IoT Hub.&nbsp;</P> <P>&nbsp;</P> <H2>Instructions</H2> <P>&nbsp;</P> <H3>Set People Detection Model</H3> <P>Once the Azure Percept Device Kit is connected to the cloud, we can specify to use People Detection Model by going to Vision Tab -&gt; Deploy a Sample Model</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_2-1634221134846.png" style="width: 819px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/317358iE57B6A252F79D526/image-dimensions/819x336?v=v2" width="819" height="336" role="button" title="RonDagdag_2-1634221134846.png" alt="RonDagdag_2-1634221134846.png" /></span></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_3-1634221134806.png" style="width: 607px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/317361iCA893D62126E92DA/image-dimensions/607x499?v=v2" width="607" height="499" role="button" title="RonDagdag_3-1634221134806.png" alt="RonDagdag_3-1634221134806.png" /></span></P> <H3><BR />Set up Arduino module</H3> <P>I followed the instructions on .NET IoT github page.</P> <P><A href="#" target="_blank" rel="noopener">https://github.com/dotnet/iot/tree/main/src/devices/Arduino#quick-start</A></P> <P>&nbsp;</P> <P>You can download the Firmata firmware that I used for this project at:</P> <P><A href="#" target="_blank" rel="noopener">https://github.com/rondagdag/arduino-percept/blob/main/firmata/percept-uno/percept-uno.ino</A></P> <P>&nbsp;</P> <P>Here are the steps:</P> <UL> <LI>Open the Arduino IDE</LI> <LI>Go to the library manager and check that you have the "ConfigurableFirmata" library installed</LI> <LI>Open "Percept-Uno.ino" from the device binding folder or go to <A href="#" target="_blank" rel="noopener">http://firmatabuilder.com/</A> to create your own custom firmata firmware. Make sure you have at least the features checked that you will need.</LI> <LI>Upload this sketch to your Arduino.</LI> </UL> <H3><BR />Send alert to arduino module</H3> <P>Open the Percept Edge Solution project in Visual Studio Code</P> <P><A href="#" target="_blank" rel="noopener">https://github.com/rondagdag/arduino-percept/tree/main/PerceptEdgeSolution</A></P> <P>&nbsp;</P> <P>This module can run locally on your machine if you have Azure IoT Edge Dev tool and also Azure IoT Tools Extensions for VS Code</P> <P>&nbsp;</P> <P>To run it locally on your machine, you may have to change this module.json</P> <P><A href="#" target="_blank" rel="noopener">https://github.com/rondagdag/arduino-percept/blob/main/PerceptEdgeSolution/modules/ArduinoModule/module.json</A></P> <P>&nbsp;</P> <P>Change repository to your Dockerhub username.&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="bash">"repository": "rondagdag/arduino-percept-module"</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P><SPAN style="font-family: inherit;"><BR />The Arduino Module is a C# application that controls the arduino device. It’s a docker container that uses .NET IoT bindings. 
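<P>One practical note before looking at the module internals: the Uno can enumerate on different serial ports, so it helps to confirm which one it is actually on. A quick check over SSH on the device (standard Linux commands, nothing Percept-specific) might look like this:</P> <DIV><LI-CODE lang="bash">
# List candidate Arduino serial devices on the Percept DK
ls -l /dev/ttyACM*
# Kernel log entries for the USB CDC-ACM device (may require sudo)
dmesg | grep -i ttyACM
</LI-CODE></DIV> <P>&nbsp;</P>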
Here are the </SPAN><A style="font-family: inherit; background-color: #ffffff;" href="#" target="_blank" rel="noopener">Nuget packages</A><SPAN style="font-family: inherit;"> I used.&nbsp;</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="markup"> &lt;PackageReference Include="Microsoft.Azure.Devices.Client" Version="1.38.0" /&gt; &lt;PackageReference Include="Microsoft.Extensions.Configuration" Version="5.0.0" /&gt; &lt;PackageReference Include="Microsoft.Extensions.Configuration.Abstractions" Version="5.0.0" /&gt; &lt;PackageReference Include="Microsoft.Extensions.Configuration.Binder" Version="5.0.0" /&gt; &lt;PackageReference Include="Microsoft.Extensions.Configuration.EnvironmentVariables" Version="5.0.0" /&gt; &lt;PackageReference Include="Microsoft.Extensions.Configuration.FileExtensions" Version="5.0.0" /&gt; &lt;PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="5.0.0" /&gt; &lt;PackageReference Include="System.Runtime.Loader" Version="4.3.0" /&gt; &lt;PackageReference Include="System.IO.Ports" Version="5.0.1" /&gt; &lt;PackageReference Include="System.Device.Gpio" Version="1.5.0" /&gt; &lt;PackageReference Include="Iot.Device.Bindings" Version="1.5.0" /&gt; &lt;PackageReference Include="Microsoft.Extensions.Logging.Console" Version="5.0.0" /&gt;</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P><BR />The module tries to connect to two Arduino USB ports. I noticed that sometimes during reboot it can be either one of these ports that the arduino is connected to.</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="bash">/dev/ttyACM0 /dev/ttyACM1</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;In order to run this locally on your machine, you may have to change the port number to what Arduino IDE determines UNO is connected to (COM3 for Windows or /dev/ttyS1 for Mac)</P> <P>&nbsp;</P> <P>It connects to 115200 Baud Rate.</P> <P>The Buzzer is connected to PIN 12 shown below:</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_4-1634221134805.png" style="width: 675px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/317362i551EF489FA1A2AF4/image-dimensions/675x473?v=v2" width="675" height="473" role="button" title="RonDagdag_4-1634221134805.png" alt="RonDagdag_4-1634221134805.png" /></span></P> <P><BR />As messages are received from the IoT Edge Hub, it can be processed to detect if a worker has been detected. If the payload contains items in the NEURAL NETWORK node, then we can send the alert to Arduino.&nbsp;&nbsp;&nbsp;</P> <P>&nbsp;</P> <P>Here’s the <A href="#" target="_blank" rel="noopener">code</A> to analyze.&nbsp;</P> <P>&nbsp;</P> <P>It might be tricky to run this locally on your machine and may need some modifications. You may have to modify <A href="#" target="_blank" rel="noopener">deployment template</A> and the code to receive the payload coming from Simulated Temperature.</P> <P>&nbsp;</P> <P>Notice how the ports are mapped on the docker container. 
<STRONG>/dev/ttyACM0 and /dev/ttyACM1</STRONG></P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="json"> "ArduinoModule": { "version": "1.0", "type": "docker", "status": "running", "restartPolicy": "always", "settings": { "image": "${MODULES.ArduinoModule}", "createOptions": { "HostConfig": { "Privileged": true, "Devices": [ { "PathOnHost": "/dev/ttyACM0", "PathInContainer": "/dev/ttyACM0", "CgroupPermissions": "rwm" }, { "PathOnHost": "/dev/ttyACM1", "PathInContainer": "/dev/ttyACM1", "CgroupPermissions": "rwm" } ] } } } }</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>Locally the EdgeHub mapping looks like this. The Simulated Temperature sensor is passing the output to the Arduino Module. Then the Arduino Module filters the data, controls the arduino and sends the telemetry up to the cloud.</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="json">"$edgeHub": { "properties.desired": { "schemaVersion": "1.2", "routes": { "sensorToArduinoModule": "FROM /messages/modules/SimulatedTemperatureSensor/outputs/temperatureOutput INTO BrokeredEndpoint(\"/modules/ArduinoModule/inputs/input1\")", "ArduinoModuleToIoTHub": "FROM /messages/modules/ArduinoModule/outputs/* INTO $upstream" }, "storeAndForwardConfiguration": { "timeToLiveSecs": 7200 } }</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P><BR />You can test this out on the Simulator</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_5-1634221134801.png" style="width: 658px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/317360iD7106069EBB23E0C/image-dimensions/658x339?v=v2" width="658" height="339" role="button" title="RonDagdag_5-1634221134801.png" alt="RonDagdag_5-1634221134801.png" /></span></P> <H3><BR />Deploy setup</H3> <P>In order to deploy this to Azure Percept, IoT Hub device -&gt; Set Modules -&gt; Add -&gt; IoT Edge Module</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_6-1634221134842.png" style="width: 497px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/317364iAAB09214311D54E9/image-dimensions/497x592?v=v2" width="497" height="592" role="button" title="RonDagdag_6-1634221134842.png" alt="RonDagdag_6-1634221134842.png" /></span></P> <P><BR />Fill in Module Name and Image URI.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_7-1634221134851.png" style="width: 554px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/317363iB0A263F4AEF842F5/image-dimensions/554x809?v=v2" width="554" height="809" role="button" title="RonDagdag_7-1634221134851.png" alt="RonDagdag_7-1634221134851.png" /></span></P> <P>&nbsp;</P> <P>Also the Container Create Options to map the correct port.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_8-1634221134803.png" style="width: 558px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/317365iC98ED637AFE56ACE/image-dimensions/558x646?v=v2" width="558" height="646" role="button" title="RonDagdag_8-1634221134803.png" alt="RonDagdag_8-1634221134803.png" /></span></P> <P><BR />Specify the Routes. 
In this case we have to push data coming from Azure Eye Module To Arduino Module.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_9-1634221134800.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/317366i5B18B893AC0AEE57/image-size/large?v=v2&amp;px=999" role="button" title="RonDagdag_9-1634221134800.png" alt="RonDagdag_9-1634221134800.png" /></span></P> <P>&nbsp;</P> <P>Here’s what the mapping would look like</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="json">"$edgeHub": { "properties.desired": { "routes": { "AzureSpeechToIoTHub": "FROM /messages/modules/azureearspeechclientmodule/outputs/* INTO $upstream", "AzureEyeModuleToArduinoModule": "FROM /messages/modules/azureeyemodule/outputs/* INTO BrokeredEndpoint(\"/modules/ArduinoModule/inputs/input1\")", "ArduinoModuleToIoTHub": "FROM /messages/modules/ArduinoModule/outputs/* INTO $upstream" },</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P><BR />Here’s an explainer video:</P> <P>&nbsp;</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/UKxLKuMhHr4?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="Using Azure Percept to keep people out of danger zones" widget_referrer="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things"></IFRAME></DIV> <P>&nbsp;</P> <H2><BR />Summary</H2> <P>&nbsp;</P> <P>Keeping workers safe is very important in any job site. Azure Percept can help with people detection and connection with different alerting systems. We’ve demonstrated setting up the Azure Percept Dev Kit to use people detection. We’ve connected Arduino with a buzzer to trigger an audible sound. This actually enables Azure Percept Dev Kit to expand capabilities and broaden use cases. Let me know if this blog helped you in any way by commenting below, I am interested in learning different ideas and use cases.&nbsp;</P> <H2><BR />Resources</H2> <P>&nbsp;</P> <P>Reference Code:</P> <P><A href="#" target="_blank" rel="noopener">https://github.com/rondagdag/arduino-percept</A></P> <P>&nbsp;</P> <P>Getting Started With Azure Percept:</P> <P><A href="#" target="_blank" rel="noopener">https://docs.microsoft.com/azure/azure-percept/</A></P> <P>&nbsp;</P> <H2>Join the adventure, Get your own Percept here</H2> <P>&nbsp;</P> <P>Get Azure</P> <P><A href="#" target="_blank" rel="noopener">https://www.azure.com/account/free</A></P> <P>&nbsp;</P> <P>Purchase Azure Percept</P> <P>Available to customers – <A href="#" target="_blank" rel="noopener">Build your Azure Percept</A></P> <H2><BR />Ron Dagdag</H2> <H4>Lead Software Engineer</H4> <P>During the day, Ron Dagdag is a Lead Software Engineer with 20+ years of experience working on a number of business applications using a diverse set of frameworks and languages. He currently supports developers at Spacee with their IoT, Cloud and ML development. On the side, Ron Dagdag is an active participant in the community as a Microsoft MVP, speaker, maker and blogger. He is passionate about Augmented Intelligence, studying the convergence of Augmented Reality/Virtual Reality, Machine Learning and the Internet of Things. 
<A href="#" target="_blank" rel="noopener">https://www.linkedin.com/in/rondagdag/</A></P> <P><A href="https://gorovian.000webhostapp.com/?exam=t5/user/viewprofilepage/user-id/114538" target="_blank" rel="noopener">@rondagdag</A></P> Thu, 14 Oct 2021 22:41:50 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/using-azure-percept-to-improve-onsite-workers-safety/ba-p/2846272 Ron Dagdag 2021-10-14T22:41:50Z Windows for IoT Long-term Servicing Channel explained https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/windows-for-iot-long-term-servicing-channel-explained/ba-p/2836254 <P>Specialized systems—such as devices that control medical equipment, point-of-sale systems, and ATMs—often require a longer servicing option because of their purpose. These devices typically perform a single important task and don’t need feature updates as frequently as other devices in the organization. It’s more important that these devices be kept as stable and secure as possible than up to date with user interface changes. The LTSC servicing model prevents Enterprise LTSC devices from receiving the usual feature updates and provides only quality updates to ensure that device security stays up to date. With this in mind, quality updates are still immediately available to Windows 10 IoT Enterprise LTSC devices, but customers can choose to defer them by using one of the servicing tools.</P> <P>&nbsp;</P> <P><STRONG>When to use an LTSC?</STRONG></P> <P>&nbsp;</P> <P>The Long-term Servicing channel is not intended for deployment on most or all the devices in an organization; it should be used only for special-purpose devices such as an IoT device. As a general guideline, a device with Microsoft Office installed is a general-purpose device, typically used by an information worker, and therefore it is better suited for the General Availability channel. Microsoft never publishes feature updates through Windows Update on devices that run Windows 10 IoT Enterprise LTSC. Instead, it typically offers new LTSC releases every 2–3 years, and organizations can choose to install them as in-place upgrades or even skip releases over a 10-year life cycle.</P> <P>&nbsp;</P> <P><STRONG>LTSC Support?</STRONG></P> <P>&nbsp;</P> <P>LTSC releases will support the currently released processors and chipsets at the time of release of the LTSC. As future CPU generations are released, support will be created through future LTSC releases that customers can deploy for those systems. You can find more information on what the latest processor and chipsets are supported on Windows in <A href="#" target="_blank" rel="noopener">Lifecycle support policy FAQ - Windows Products</A>.</P> <P>&nbsp;</P> <P><STRONG>But what about an LTSC release for Windows 11 IoT Enterprise?</STRONG></P> <P>&nbsp;</P> <P>Microsoft currently makes available a new&nbsp;<A href="#" target="_blank" rel="noopener">Windows 10 IoT Enterprise LTSC release</A>&nbsp;approximately every three years. Each Windows 10 IoT Enterprise LTSC release is its own SKU and contains all the new capabilities and support updates included in the Windows 10 IoT Enterprise features updates since the previous LTSC release. To access these feature updates, a new Windows 10 IoT Enterprise LTSC SKU license must be purchased. 
For example, to get access to the new security, deployment, and management updates and features released since the launch of Windows 10 IoT Enterprise 2016 LTSC, a license for Windows 10 IoT Enterprise 2019 LTSC must be purchased, and an update applied to the device. Please note that due to the long life of the LTSC releases and the benefit of remaining on a specific release for 10 years, an upgrade fee will be charged for customers moving from one LTSC release to another. &nbsp;Our LTSC for this year (2021) is based on Windows 10 because of its maturity and stability and our next LTSC will be based on Windows 11.</P> <P>&nbsp;</P> <P>Please review the&nbsp;<A href="#" target="_blank" rel="noopener">Fixed Lifecycle Policy</A>&nbsp;for more information.</P> <P>&nbsp;</P> <P>Lastly, we would like to invite you to join us at the Windows for IoT Launch Summit on Tuesday, October 19th, 2021. We will be talking in detail about topics such as features and functionalities in the upcoming release, IoT device security, and how you can bring intelligence to the edge with Windows for IoT. This virtual event is open to anyone and will be repeated twice to accommodate our global time zones. The event is&nbsp;free&nbsp;to attend – and we hope to see you there!</P> <P>&nbsp;</P> <P>Session 1: 7 AM PDT, register:&nbsp;<A href="#" target="_blank" rel="noopener">https://aka.ms/WinIoTLaunchSummit</A></P> <P>Session 2: 6 PM PDT, register:&nbsp;<A href="#" target="_blank" rel="noopener">https://aka.ms/WinIoTLaunchSummitAPAC</A></P> <P><STRONG>&nbsp;</STRONG></P> Tue, 12 Oct 2021 18:00:00 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/windows-for-iot-long-term-servicing-channel-explained/ba-p/2836254 95twr 2021-10-12T18:00:00Z Azure Sphere MT3620 Insights - September 2021 https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-sphere-mt3620-insights-september-2021/ba-p/2835962 <P>Azure Sphere can do so many things across many domains, but Security is a deep domain, so vast it seems sometimes to impact every line of code ever written. Sometimes security is a big, bright, and showy focal point, other times it sits humbly away from the spotlights; you might not even notice it if you don’t peer into the shadows. Through this series, I intend to share our team perspective and insights to give some context around the work we’re doing for the MT3620.</P> <P>&nbsp;</P> <P>In our 21.09 Azure Sphere OS, we’re shipping a kernel update to version 5.10.60. This update has no new features. So, why take the risk of making the change then?</P> <P>&nbsp;</P> <P>Can a product be known to be secure? Or is a locked-down product one that just has not yet been exploited? Evidence suggests that security is not a binary state but a spectrum of how often and how long a product can establish trust and regain trust once it has been lost. The key difference is time—how long it takes to find and exploit unknown vulnerabilities and how trusted something can be over time. By updating the Azure Sphere OS kernel version, we’re lowering the risk of exploitation. This minor kernel update includes bug fixes and security patches, and its configuration is newer and less familiar to exploits that have been packaged for mass application.</P> <P>&nbsp;</P> <P>Last year we ran a public Azure Sphere research challenge. During the challenge we received a couple of complaints from legitimately annoyed researchers. 
Though the challenge spanned several months, each month the researchers' findings were patched, making it hard to string together multiple vulnerabilities into exploits that dig deeper into the system. We had to explain that Azure Sphere's security isn't based on one build of its OS. Regular updates are part of the security story of Azure Sphere; making changes that disrupt exploits isn't a bug, it's a feature. Azure Sphere's security is designed to take this into account: changes, patches, fixes, and updates build up a product whose trust can be established and renewed over time. We've taken to heart any complaints from participants to better design future challenges that play to researchers' strengths. But we continue to make updates available for operational devices, like this latest kernel update, because these changes are an essential part of our security story, a story that drives us to renew trust at every opportunity and every interaction with the platform.</P> <P>&nbsp;</P> <P>Azure Sphere defense-in-depth features are designed to mitigate potential impacts. Nominally, by the time a proper exploit has been packaged, Azure Sphere devices will be on a newer version, through another update, maybe even several updates from now. Azure Sphere changes with time. I think that's worth thinking about when it comes to security.</P> Tue, 12 Oct 2021 16:00:00 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-sphere-mt3620-insights-september-2021/ba-p/2835962 josephalloyd 2021-10-12T16:00:00Z General availability: Azure Sphere version 21.10 expected on Oct. 20, 2021 https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/general-availability-azure-sphere-version-21-10-expected-on-oct/ba-p/2819768 <P>Azure Sphere OS version&nbsp;21.10&nbsp;is now available for evaluation in the <STRONG>Retail Eval</STRONG> feed.&nbsp;The retail evaluation period provides 2 weeks for backwards compatibility testing. During this time, please verify that your applications and devices operate properly with this release&nbsp;before it's deployed broadly via the Retail feed. The Retail feed will continue to deliver OS version&nbsp;21.09&nbsp;until we publish 21.10.&nbsp;</P> <P>&nbsp;</P> <P>The evaluation release of version 21.10 includes an OS update only; it does not include an updated SDK. When 21.10 is generally available later in October, an updated SDK will be included.</P> <P class="paragraph">&nbsp;</P> <P class="paragraph"><FONT size="4">Compatibility testing with version 21.10</FONT></P> <P>Areas of special focus for compatibility testing with 21.10 should include apps and functionality using wolfSSL.</P> <P>&nbsp;</P> <P><FONT size="4">Notes about this release</FONT></P> <P>The most recent RF tools (version 21.01) are expected to remain compatible with OS version 21.10.</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>For more information on Azure Sphere OS feeds and setting up an evaluation device group,&nbsp;see&nbsp;<A href="#" target="_blank" rel="noopener" data-cke-saved-href="#">Azure Sphere OS feeds</A> and <A href="#" target="_self" data-cke-saved-href="#">Set up devices for OS evaluation</A>.</P> <P>&nbsp;</P> <P>For self-help technical inquiries, please visit <A href="#" target="_self" data-cke-saved-href="#">Microsoft Q&amp;A</A> or <A href="#" target="_self" data-cke-saved-href="#">Stack Overflow</A>. If you require technical support and have a support plan, please submit a support ticket in <A href="#" target="_self" data-cke-saved-href="#">Microsoft Azure Support</A> or work with your Microsoft Technical Account Manager. 
If you would like to purchase a support plan, please explore the <A href="#" target="_self" data-cke-saved-href="#">Azure support plans</A>.</P> Thu, 07 Oct 2021 00:01:18 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/general-availability-azure-sphere-version-21-10-expected-on-oct/ba-p/2819768 AzureSphereTeam 2021-10-07T00:01:18Z Rapidly equip a drone with AI using the Azure Percept DK https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/rapidly-equip-a-drone-with-ai-using-the-azure-percept-dk/ba-p/2779509 <H2>Find out how to add AI to aerial use cases using the Azure Percept DK and a drone</H2> <P>&nbsp;</P> <P>As a commercial FAA Part 107 drone pilot I look for various ways to use drones to perform tasks that are dangerous, repetitive, or otherwise not optimal for humans to perform. <BR />One of these dangerous tasks is inspecting rooftops for damage or deficiencies. <BR />During regular routine inspections the task can be dangerous, but after damage from a storm or other event where the structure of the roof is in question then a drone is a great fit to get eyes on the rooftop.</P> <P>&nbsp;</P> <P>I frequently fly drones using an autonomous flight path and I was interested in adding artificial intelligence to the drone to identify damage in real-time. <BR />The Azure Percept DK is purpose built for rapid prototyping and I found that by using the Azure Percept DK and an Azure Custom Vision model I was able to use artificial intelligence to recognize damage on a roof in under a week. <BR />Using the Azure Percept DK was by far the most rapid prototyping experience and quite frankly, the simplest path from concept to execution. <BR />Learn more about Azure Percept <A title="Learn more about Azure Percept" href="#" target="_blank" rel="noopener">here</A>.</P> <P>&nbsp;</P> <P>There are commercially available Artificial Intelligence systems that are used with drones, but the focus has been on improving flight autonomy.<BR />Unmanned Aerial Systems, typically known as UAS or “drone” are the go-to platform for gathering environment data. <BR />This data can be video, photos, or other environment properties via a sensor array. <BR /><BR />As UAS use cases expand, the need for intelligent systems to assist UAS operators continues to grow.<BR /><BR />I’ve created a walkthrough that you can use to get started with the Azure Percept DK and Azure Custom Vision to recognize objects.<BR />I’ll also demonstrate the Azure Percept DK mounted to a drone performing a real inspection of a roof using the Azure Custom Vision model I created using this walkthrough.</P> <P>&nbsp;</P> <H3>Introducing the Azure Percept DK and AI at the Edge</H3> <P>&nbsp;</P> <P>The Azure Percept DK is a development kit that can rapidly accelerate prototyping of AI at the Edge solutions.<BR />AI at the Edge is a concept where all processing of gathered data happens on a device. <BR />There is not a need for an uplink to another location to process the gathered data. 
<BR />Typically, the data processing happens in real time on the device.</P> <P>&nbsp;</P> <H3>How using an Azure Percept DK can speed up the adoption of AI at the Edge in a UAS use case</H3> <P>&nbsp;</P> <P>What I’ve done is mounted an Azure Percept DK to a custom built UAS platform.<BR />You will see that the Azure Percept DK is modular and fits on a small footprint.<BR />The UAS is a medium sized 500mm platform very similar to a <A title="Holybro x500" href="#" target="_blank" rel="noopener">HolyBro x500</A>.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="onlandignpad.png" style="width: 488px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314465iA0636B7996F87648/image-size/large?v=v2&amp;px=999" role="button" title="onlandignpad.png" alt="Image: Mounted Azure Percept DK on a custom UAS platform" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image: Mounted Azure Percept DK on a custom UAS platform</span></span></P> <P>&nbsp;</P> <H3>Typical UAS operation</H3> <P>&nbsp;</P> <H5>USE CASE: Residential Roof Inspection</H5> <P>&nbsp;</P> <P>There are two typical ways a UAS will fly over a target area such as a residential roof.<BR /><BR /></P> <OL> <LI>Using a Ground Control Station that runs software such as <A title="QGroundControl" href="#" target="_blank" rel="noopener">QGroundControl</A> or <A title="Mission Planner" href="#" target="_blank" rel="noopener">ArduPilot Mission </A><BR /><A title="Mission Planner" href="#" target="_blank" rel="noopener">Planner</A> a flight path is created then uploaded to the UAS flight computer. This flight path is then <BR />flown by the UAS autonomously. <BR />Additional vendors such as <A title="Pix4D software" href="#" target="_blank" rel="noopener">Pix4D</A> have published their own versions of Ground Control Station software for multiple brands and configuration of UAS.<BR />Other companies such as <A title="Auterion" href="#" target="_blank" rel="noopener">Auterion</A> have built flight control computers and specialized Ground Control Station software for industry verticals.<BR /><BR /></LI> <LI>Using a radio transmitter, the UAS operator controls the flight path over the target area themselves.<BR /><BR /></LI> </OL> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="mpgrid.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314466iFFEB25E95037BAAE/image-size/large?v=v2&amp;px=999" role="button" title="mpgrid.png" alt="Image: The route I created to inspect a residential roof using ArduPilot Mission Planner" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image: The route I created to inspect a residential roof using ArduPilot Mission Planner</span></span></P> <P>&nbsp;</P> <P>Using either of the operation methods above a flight for inspecting a residential roof the UAS operator will make several passes over the target area to look for defects or damage in the surface or structure of the roof.<BR /><BR />Typically, the operator will take photos or a video of the roof as the UAS passes overhead. <BR />The visual data would then be processed later and an inspection report would be delivered to the customer highlighting what the inspection found.<BR /><BR /></P> <P>The time spent reviewing the visual data to create an inspection report can take a long time to complete. 
<BR />Additionally, the expertise to spot damage and defects can take a long time to acquire.</P> <P>&nbsp;</P> <P>Now imagine if you can collect visual data and have your UAS identify and catalog damage and defects on a residential roof as the UAS passes over the roof.</P> <P>&nbsp;</P> <P>Using an Azure Percept DK mounted on your UAS you can bring AI at the Edge to your inspection project to spot and highlight damage and defects without specialized AI skills.</P> <P>&nbsp;</P> <P>Using Azure Percept Studio, UAS operators can explore the library of pre-built AI models or build custom models themselves without coding.</P> <P>&nbsp;</P> <H3>But How Easy is it?</H3> <P>&nbsp;</P> <P>Let’s walk through the steps to create a custom vision model to identify defects on a residential roof.<BR />You will see that there is no need for specialized AI skills, just the knowledge of how to tag photos with attributes.</P> <P>&nbsp;</P> <OL> <LI><EM>Prerequisites<BR /><BR /></EM><A title="Purchase an Azure Percept DK" href="#" target="_blank" rel="noopener">Purchase an Azure Percept DK</A><BR /><A title="Set up the Azure Percept DK device" href="#" target="_blank" rel="noopener">Set up the Azure Percept DK device</A><BR />Of course, you must start with purchasing an Azure Percept DK.<BR />You will also need an Azure subscription and using the Azure Percept DK setup experience you connected to a wi-fi network, created an Azure IoT Hub, and connected the Azure Percept DK to the IoT Hub.<BR /><BR /></LI> <LI><EM>Azure Percept Studio<BR /><BR /></EM><A title="Overview" href="#" target="_self">Overview</A><BR />After completing the prerequisites and you have opened the Azure Portal you can then open the Azure Percept Studio. Click the link above to learn more about the Azure Percept Studio and how to access it from the Azure Portal and how to get started using it.<BR /><BR /></LI> <LI><EM>Custom vision prototype<BR /><BR /></EM><A title="Create a no-code vision solution in Azure Percept Studio" href="#" target="_blank" rel="noopener">Create a no-code vision solution in Azure Percept Studio</A><BR />Next you can follow the tutorial to create a no-code vision solution in Azure Percept Studio.<BR />I will continue to highlight how to complete a residential roof inspection project.<BR /><BR />For the Residential Roof Inspection project we are performing object detection, we are looking for&nbsp;defects and training the AI model to highlight these defects.<BR /><BR />The optimization setting is best set for balanced for this project, more information can be found if you hover over the information pop-up, or research more in the tutorial.<BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="perceptstudio1.png" style="width: 775px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314467i70486D7CC8559488/image-size/large?v=v2&amp;px=999" role="button" title="perceptstudio1.png" alt="Image: Start with naming your Residential Roof Inspection project" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image: Start with naming your Residential Roof Inspection project</span></span></LI> <LI><EM>Image Capture<BR /></EM>At this point we will not capture pictures via Automatic image capture or via the device stream. 
<BR />Make sure you select the IoT Hub you created in the Prerequisites step.<BR />Also select the Device you setup in the Prerequisites step and will work with for this Custom Vision project.<BR />We can move to the next screen by clicking the Next: Tag images and model training button.<BR /><BR /> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="perceptstudio2.png" style="width: 772px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314468i256B85C229839F16/image-size/large?v=v2&amp;px=999" role="button" title="perceptstudio2.png" alt="Image: Select the IoT Hub and Device you setup in the Prerequisites step before clicking the Next: Tag images and model training button" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image: Select the IoT Hub and Device you setup in the Prerequisites step before clicking the Next: Tag images and model training button</span></span></P> <P>&nbsp;</P> </LI> <LI><EM>Tag images and model training<BR /><BR /></EM><A title="Custom Vision Overview" href="#" target="_blank" rel="noopener">Custom Vision Overview</A><BR /><A title="Custom Vision Projects" href="#" target="_blank" rel="noopener">Custom Vision Projects</A><BR /><BR />On this screen we will Open the project in Custom Vision in a new browser window.<BR /><BR /><STRONG>TIP: Leave the Azure Percept Studio page open, we will return to it soon<BR /></STRONG><BR />The Custom Vision service uses a machine learning algorithm to analyze images. <BR />You submit groups of images that feature and lack the characteristics in question. <BR />You label the images yourself at the time of submission. <BR />Then, the algorithm trains to this data and calculates its own accuracy by testing itself on those same images.<BR />In our example we will add images to the Custom Vision project by uploading images we&nbsp;gathered using the UAS.<BR />We will then label the uploaded images by creating a box around characteristics such as “lifting”,&nbsp;“scrape” and “discoloration”.<BR />After we have labeled the images we will train the Custom Vision algorithm to recognize the characteristics in images based on the tags.<BR /><BR />In this example I selected Advanced Training, more information on the choices and the difference between them can be found via the information pop-ups or via the Custom Vision Overview article.<BR />For most Custom Vision Projects the Quick Training is sufficient for getting started.<BR /><BR /> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="perceptstudio3.png" style="width: 773px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314469i4D6BAFA1607AEE73/image-size/large?v=v2&amp;px=999" role="button" title="perceptstudio3.png" alt="Image: Tag images and model training screen, click Open project in Custom Vision" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image: Tag images and model training screen, click Open project in Custom Vision</span></span></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="perceptstudio4.png" style="width: 780px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314470iE9BC46434A83D64E/image-size/large?v=v2&amp;px=999" role="button" title="perceptstudio4.png" alt="Image: Select Add images to start uploading your collected images" /><span class="lia-inline-image-caption" 
onclick="event.preventDefault();">Image: Select Add images to start uploading your collected images</span></span></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="perceptstudio5.png" style="width: 778px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314471i7242C513D3E39133/image-size/large?v=v2&amp;px=999" role="button" title="perceptstudio5.png" alt="Image: Preview of your images, in this example I’m uploading 156 images" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image: Preview of your images, in this example I’m uploading 156 images</span></span></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="perceptstudio6.png" style="width: 781px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314472i63283059A6D54599/image-size/large?v=v2&amp;px=999" role="button" title="perceptstudio6.png" alt="Image: The image upload process will show the progress, time will vary based on the number of images you upload" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image: The image upload process will show the progress, time will vary based on the number of images you upload</span></span></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="perceptstudio7.png" style="width: 780px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314473i367D03AE33E316AC/image-size/large?v=v2&amp;px=999" role="button" title="perceptstudio7.png" alt="Image: The image upload process results, I successfully uploaded 131 images with 25 duplicates" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image: The image upload process results, I successfully uploaded 131 images with 25 duplicates</span></span></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="perceptstudio8.png" style="width: 771px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314474iE779DF7CBD13950A/image-size/large?v=v2&amp;px=999" role="button" title="perceptstudio8.png" alt="Image: Highlighting and labeling a characteristic found in the image, in this example the characteristic of lifting is found and tagged" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image: Highlighting and labeling a characteristic found in the image, in this example the characteristic of lifting is found and tagged</span></span></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="perceptstudio9.png" style="width: 770px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314475i7A60469EBEF73C78/image-size/large?v=v2&amp;px=999" role="button" title="perceptstudio9.png" alt="Image: In this example a scrape is found in the image and the characteristic is highlighted and tagged" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image: In this example a scrape is found in the image and the characteristic is highlighted and tagged</span></span></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="perceptstudio10.png" style="width: 766px;"><img 
src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314476i950906BF41CA9E0F/image-size/large?v=v2&amp;px=999" role="button" title="perceptstudio10.png" alt="Image: Last example is discoloration is found in the image, it is highlighted and tagged" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image: Last example is discoloration is found in the image, it is highlighted and tagged</span></span></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="perceptstudio11.png" style="width: 780px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314477iF9927ECD4F6C63E2/image-size/large?v=v2&amp;px=999" role="button" title="perceptstudio11.png" alt="Image: After tagging the collection of images you will see how many tags were created" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image: After tagging the collection of images you will see how many tags were created</span></span></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="perceptstudio12.png" style="width: 769px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314478i3EDB09AD7F22B4AA/image-size/large?v=v2&amp;px=999" role="button" title="perceptstudio12.png" alt="Image: Next is to select the button Train to start algorithm training" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image: Next is to select the button Train to start algorithm training</span></span></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="perceptstudio13.png" style="width: 780px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314479i3C535309F533301B/image-size/large?v=v2&amp;px=999" role="button" title="perceptstudio13.png" alt="Image: For most projects the Quick Training selection is sufficient, click Train to start training the algorithm" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image: For most projects the Quick Training selection is sufficient, click Train to start training the algorithm</span></span></P> <P>&nbsp;</P> </LI> <LI><EM>Evaluate the Detector<BR /><BR /></EM><A title="Evaluate the detector" href="#" target="_blank" rel="noopener">Evaluate the detector</A><BR /><BR />After training has completed, the model's performance is calculated and displayed. <BR />The Custom Vision service uses the images that you submitted for training to calculate precision, recall, and mean average precision. Precision and recall are two different measurements of the effectiveness of a detector:<BR /><BR />• <STRONG>Precision</STRONG> indicates the fraction of identified classifications that were correct.<BR />For example, if the model identified 100 images as dogs, and 99 of them were actually of dogs, then the precision would be 99%.<BR />• <STRONG>Recall</STRONG> indicates the fraction of actual classifications that were correctly identified. <BR />For example, if there were actually 100 images of apples, and the model identified 80 as apples, the recall would be 80%.<BR />• <STRONG>Mean average precision</STRONG> is the average value of the average precision (AP). 
<BR />AP is the area under the precision/recall curve (precision plotted against recall for each prediction made).<BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="perceptstudio14.png" style="width: 771px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314481i828C378C277018CC/image-size/large?v=v2&amp;px=999" role="button" title="perceptstudio14.png" alt="Image: The results I came up with after training 3 iterations with a mix of Advanced and Quick Training" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image: The results I came up with after training 3 iterations with a mix of Advanced and Quick Training</span></span> <P>&nbsp;</P> </LI> <LI><EM>Evaluate and Deploy in Azure Percept Studio<BR /><BR /></EM>Return to Azure Percept Studio and click the Evaluate and deploy link.<BR />This screen shows a composite view of all the tasks you have completed up to this point.<BR />You have connected your Azure Percept DK device and you should see the device connected now.<BR />You have uploaded images and tagged them<BR />You have trained an algorithm and received results on the model’s performance <P>&nbsp;</P> <H3>What is left to do?</H3> <P>Deploy the Custom Vision model to your Azure Percept DK device.<BR />Make sure you select the IoT Hub you created in the Prerequisite step.<BR />Select the Azure Percept DK device you are using<BR />Verify and select the Model Iteration you wish to deploy onto the Azure Percept DK device<BR />Click the Deploy model button<BR />Verify the Device deployment is successful by watching for the Azure Portal notification stating that the deployment is successful<BR /><BR /></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="perceptstudio15.png" style="width: 779px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314482i0796A4C86C9D721B/image-size/large?v=v2&amp;px=999" role="button" title="perceptstudio15.png" alt="Image: The Evaluate and deploy page for your project in Azure Percept Studio with the Deploy model button highlighted" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image: The Evaluate and deploy page for your project in Azure Percept Studio with the Deploy model button highlighted</span></span></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="perceptstudio16.png" style="width: 533px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314483i390B8DEBB041CB54/image-size/large?v=v2&amp;px=999" role="button" title="perceptstudio16.png" alt="Image: The Azure Portal notification that the Device deployment is successful" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image: The Azure Portal notification that the Device deployment is successful</span></span></P> <P>&nbsp;</P> </LI> <LI><EM>View the device stream to see real time identification of the characteristics you tagged via&nbsp;</EM><EM>inference<BR /><BR /></EM>Access the device stream via the Vision section of Azure Percept Studio<BR />Click on the link View Stream in the View your device stream section<BR />Watch for the popup in the Azure Portal notification area that says your stream is ready<BR />Click the View stream link to open the Webstream Video webpage<BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-center" 
image-alt="perceptstudio17.png" style="width: 759px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314484i44C1E500D28E11B3/image-size/large?v=v2&amp;px=999" role="button" title="perceptstudio17.png" alt="Image: On the Vision page click the link View stream" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image: On the Vision page click the link View stream</span></span> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="perceptstudio18.png" style="width: 547px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314485i6644D1BA1B0518B0/image-size/large?v=v2&amp;px=999" role="button" title="perceptstudio18.png" alt="Image: Click the View stream link to open the Webstream Video webpage" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image: Click the View stream link to open the Webstream Video webpage</span></span></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="perceptstudio19.png" style="width: 684px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314486iFCD8364D429E5182/image-size/large?v=v2&amp;px=999" role="button" title="perceptstudio19.png" alt="Image: The Webstream Video webpage contains real time inference using the tags you create" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image: The Webstream Video webpage contains real time inference using the tags you create</span></span></P> <P>&nbsp;</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/Nk60OeIZvYQ?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="Webstream inference output from Azure Percept DK" widget_referrer="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things"></IFRAME></DIV> <P>&nbsp;</P> </LI> <LI><EM>Here is the flight plan I created using ArduPilot Mission Planner</EM> <P>&nbsp;</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/EpNF9D0XHsY?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="Roof grid survey using Mission Planner" widget_referrer="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things"></IFRAME></DIV> </LI> </OL> <P>&nbsp;</P> <H3>Conclusion: Using Azure Percept DK along with your UAS the time to prototype the use of AI is reduced.</H3> <P>&nbsp;</P> <P>What I just demonstrated is the capability to identify three characteristics of roof damage or deficiencies. <BR />This identification was done in real-time using Artificial Intelligence without coding, a team of Data Scientists, or a purpose-built companion computer. 
<BR />The great thing is that you can improve the identification of characteristics in data as time goes on by tagging additional images, re-training the algorithm, and re-deploying the model to the Azure Percept DK.</P> <P>&nbsp;</P> <P>Using Azure Percept DK to rapidly prototype UAS use cases that can take advantage of Artificial Intelligence will put you at an advantage when incorporating new capabilities into your workflow.</P> <P>&nbsp;</P> <P>Think about this rapid prototype and how easy it is to incorporate Artificial Intelligence into your systems and workflow.</P> <P>&nbsp;</P> <H3>UAS use cases where Artificial Intelligence could be used</H3> <P>&nbsp;</P> <P>Asset Inspection<BR />Autonomous mapping<BR />Package delivery<BR />Monitoring and Detection<BR />Pedestrian and vehicle counting</P> <P>&nbsp;</P> <H3>You now can get started with Azure Percept DK:</H3> <P>&nbsp;</P> <P><A title="Purchase an Azure Percept DK" href="#" target="_blank" rel="noopener">Purchase an Azure Percept DK</A><BR /><A title="Learn more about Azure Percept DK sample AI models" href="#" target="_blank" rel="noopener">Learn more about Azure Percept DK sample AI models</A><BR /><A title="Learn more about Azure Cognitive Services – Custom Vision" href="#" target="_blank" rel="noopener">Learn more about Azure Cognitive Services – Custom Vision</A><BR /><A title="Dive deeper with Industry Use Cases and Community Projects" href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/bg-p/IoTBlog/label-name/Azure%20Percept" target="_blank" rel="noopener">Dive deeper with Industry Use Cases and Community Projects</A></P> Tue, 05 Oct 2021 15:00:00 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/rapidly-equip-a-drone-with-ai-using-the-azure-percept-dk/ba-p/2779509 nealmcfee 2021-10-05T15:00:00Z Azure Sphere SDK compatibility with Windows 11 https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-sphere-sdk-compatibility-with-windows-11/ba-p/2800910 <P>Windows 11 will be available starting October 5, 2021. Currently, Windows 11 is not a <EM><A href="#" target="_blank" rel="noopener">supported </A></EM><A href="#" target="_blank" rel="noopener">platform</A> for the Azure Sphere SDK, although in early tests the latest SDK does install and operate on Windows 11. The Azure Sphere team is working on integrating Windows 11 into our test framework, which is a prerequisite for declaring Windows 11 a supported OS. We will publish an announcement on this blog when Windows 11 is supported.</P> <P>&nbsp;</P> <P>Right now, we do not recommend that Azure Sphere customers use Windows 11 for production infrastructure (e.g. devops or manufacturing operations). We welcome testing with the Azure Sphere SDK on Windows 11 in non-production settings and would like customer feedback about any issues encountered.</P> <P>&nbsp;</P> <UL> <LI><STRONG>Important: When testing the Azure Sphere SDK on Windows 11, use <A href="#" target="_blank">Azure Sphere SDK 21.07 Update 2 for Windows</A> or any later release.</STRONG> The 21.07 Update 2 SDK fixes an installer bug that prevents installation and uninstallation of the SDK on Windows 11.</LI> </UL> <P>&nbsp;</P> <P>To provide feedback, please visit <A href="#" target="_blank" rel="noopener">the support feedback forum</A>.</P> <P>&nbsp;</P> <P>For self-help technical inquiries, please visit <A href="#" target="_blank" rel="noopener">Microsoft Q&amp;A</A> or <A href="#" target="_blank" rel="noopener">Stack Overflow</A>. 
If you require technical support and have a support plan, please submit a support ticket in <A href="#" target="_blank" rel="noopener">Microsoft Azure Support</A> or work with your Microsoft Technical Account Manager. If you would like to purchase a support plan, please explore the <A href="#" target="_blank" rel="noopener">Azure support plans</A>.</P> Thu, 30 Sep 2021 20:56:09 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-sphere-sdk-compatibility-with-windows-11/ba-p/2800910 AzureSphereTeam 2021-09-30T20:56:09Z Deploying Azure Percept with the Intel OpenVINO toolkit https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/deploying-azure-percept-with-the-intel-openvino-toolkit/ba-p/2790429 <P>Developers are experimenting with the features of Azure Percept and <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/discover-the-possibilities-with-azure-percept/ba-p/2733947" target="_blank" rel="noopener">discovering more possibilities</A> for how they can use the Azure Percept development kit since Microsoft unveiled it six months ago.</P> <P>&nbsp;</P> <P>Many also are learning firsthand how the <A href="#" target="_blank" rel="noopener">Intel® Distribution of OpenVINO™</A> toolkit can optimize the setup and operation of <A href="#" target="_blank" rel="noopener">Azure Percept</A>. The OpenVINO toolkit enables users to optimize, tune, and run comprehensive AI inferencing using its model optimizer and runtime development tool.</P> <P>&nbsp;</P> <P>Paired with the Azure Percept development kit, OpenVINO enables developers to build applications more easily for vision and other use cases. It has the unique capability to provide high-performance solutions for inference workloads, so it can be used in real time with image and sound processing solutions. Listed in the next section are the steps to follow if you're looking to use Intel OpenVINO with Azure Percept.</P> <P>&nbsp;</P> <H2>Employing OpenVINO to enable Azure Percept applications</H2> <P>&nbsp;</P> <P>Using OpenVINO with Azure Percept for a range of applications requires either building a model or choosing a pre-built, pre-trained one for the use case you want to address. These can be scenarios that take advantage of the Azure Percept Vision camera or its audio capabilities to detect anomalies, monitor everything from people in checkout lines to inventory, and more.</P> <P>&nbsp;</P> <P>To use Azure Percept with OpenVINO, it is recommended you use the following steps:</P> <OL> <LI>Azure Percept supports TensorFlow and ONNX models. So, first identify a TensorFlow or ONNX model you want to use. OpenVINO also has scores of pre-trained models available for use with Azure Percept.</LI> <LI>Convert your TensorFlow or ONNX model by running it through the OpenVINO model optimizer. Once you’ve converted a model with OpenVINO, you receive an intermediate representation (IR) file.</LI> <LI>Use the IR file with the OpenVINO model compiler to generate blobs compatible with the Azure Percept VPU. As with the model optimizer, this step can be done either online or offline. 
Note that models should not be large, as those larger than 100 MB (after conversion to .blob format) sometimes have trouble loading.</LI> <LI>Once the blobs are generated and loaded onto the <A href="#" target="_blank" rel="noopener">Intel® Movidius™ Myriad™ X VPU</A>, the Azure Percept camera can be triggered and captured frames can be sent to the VPU for image processing and inference.</LI> </OL> <P>The application can then publish results of image analysis, store the information, and publish it at the edge. In addition, the images and analysis can be upstreamed to Azure services for storage in the cloud, where results also can be displayed.</P> <P>&nbsp;</P> <H2>Designed for differing technical levels</H2> <P>&nbsp;</P> <P>Azure Percept and its dev kit are designed to bring together Azure Percept fully managed services and be <A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/under-the-hood-with-azure-percept-deploy-edge-ai-to-iot-devices/ba-p/2144442" target="_blank" rel="noopener">deployed in a matter of minutes</A> rather than hours. The <A href="#" target="_blank" rel="noopener">Azure Percept Studio</A> hosts prebuilt models and workflows that integrate numerous Azure services. OpenVINO similarly offers features such as intuitive graph APIs designed for no- or low-code scenarios up to advanced, customizable situations. To implement a no-code graph API, the user doesn’t have to do anything other than click on it to run a model.</P> <P>&nbsp;</P> <P>The goal is to make the features and applications of Azure Percept and OpenVINO accessible to both beginners working on simple applications and advanced developers working to deploy new solutions. In particular, the Azure Percept dev kit is designed to run AI and computer vision at the edge even without a connection to the cloud excluding complexity or overly complicated steps.</P> <P>&nbsp;</P> <P>With the idea of advanced capabilities and user simplicity in mind, the Azure Percept development kit uses a custom Intel Myriad Development Kit (MDK) stack for both image signal processing pipelines and AI inferencing instead of standard OpenVINO, which only supports AI inferencing. The kit also incorporates a Movidius™ Myriad™ X machine learning accelerator from Intel.</P> <P>&nbsp;</P> <H2>Learn more about using OpenVINO and Azure Percept</H2> <P>&nbsp;</P> <P>Intel and Microsoft are continually innovating on how Azure Percept and OpenVINO can work together as the technology behind both the platforms evolve. In the meantime, you can get started by <A href="#" target="_blank" rel="noopener">downloading the Intel Distribution of OpenVINO toolkit</A>.</P> <P>&nbsp;</P> <P>If you are looking to dive deeper, you can <A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-dk-the-one-about-ai-model-possibilities/ba-p/2579617" target="_blank" rel="noopener">install both mandatory and optional development tools for Azure Percept</A>, including OpenVINO, as well as view a list that can help you navigate known issues. 
You also can read more <A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-dk-the-one-about-ai-model-possibilities/ba-p/2579617" target="_blank" rel="noopener">about pre-trained models that work with the Azure Percept development kit</A>, as well as<SPAN data-ogsc="rgb(36, 36, 36)" data-ogsb="yellow">&nbsp;<A href="#" target="_blank" rel="noopener noreferrer" data-auth="Verified" data-linkindex="3" data-ogsc="">how to build and train your own vision models</A>. You can also check out this repo with&nbsp;</SPAN><SPAN data-ogsb="yellow"><A href="#" target="_blank" rel="noopener noreferrer" data-auth="Verified" data-linkindex="4" data-ogsc="">reference solutions for Percept</A>.</SPAN></P> Thu, 30 Sep 2021 16:41:51 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/deploying-azure-percept-with-the-intel-openvino-toolkit/ba-p/2790429 David Kurth 2021-09-30T16:41:51Z Expanding Azure Support for Constrained Devices: Azure IoT middleware for FreeRTOS https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/expanding-azure-support-for-constrained-devices-azure-iot/ba-p/2782396 <P>Here is a new open-source middleware ready for production that makes connecting your FreeRTOS device to Azure IoT simpler.</P> <P>&nbsp;</P> <P>We want to meet developers where they are, and many developers are using FreeRTOS.&nbsp;While it was already possible to connect FreeRTOS devices to Azure IoT, it became less challenging thanks to this new middleware.</P> <P>&nbsp;</P> <P>The Azure IoT middleware for FreeRTOS provides fully tested APIs, documentation, and sample implementations on popular embedded platforms.&nbsp;Features provided by the middleware include establishing a secure MQTT connection to Azure IoT services, provisioning via the Device Provisioning Service, sending telemetry messages, receiving commands, and interacting as an <A href="#" target="_blank" rel="noopener">IoT Plug and Play device</A>.&nbsp; You get both the benefits of open source – available on GitHub under the MIT license – and a set of libraries supported by Microsoft.</P> <P>&nbsp;</P> <P>Some of the hardware platforms with out-of-box support include <A href="#" target="_blank" rel="noopener">Espressif ESP32</A>, STMicroelectronics <A href="#" target="_blank" rel="noopener">STM32L475</A>, <A href="#" target="_blank" rel="noopener">STM32L4+</A>, <A href="#" target="_blank" rel="noopener">STM32H745</A>; <A href="#" target="_blank" rel="noopener">NXP1060</A> and simulators for both <A href="#" target="_blank" rel="noopener">Linux</A> and <A href="#" target="_blank" rel="noopener">Windows</A>.</P> <P>&nbsp;</P> <P>If you’re a FreeRTOS developer connecting devices to Azure IoT, we look forward to supporting you with our middleware.</P> <P>&nbsp;</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/PNykfuJ3VDs?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="PUT YOUR VIDEO TITLE HERE"></IFRAME></DIV> <P>&nbsp;</P> <H2>Middleware Layer</H2> <P>&nbsp;</P> <P>To optimize it for FreeRTOS and your embedded scenarios, we made specific architectural choices:</P> <P>&nbsp;</P> <UL> <LI>The Azure IoT middleware for FreeRTOS is non-allocating: You must allocate data structures and then pass them 
into the middleware functions.&nbsp;</LI> <LI>The middleware operates at the MQTT level: you can leverage the middleware APIs to establish MQTT connection and disconnection, subscribing and unsubscribing from topics, and sending and receiving of messages.</LI> <LI>You are in control of the TLS/TCP connection to the endpoint: This allows for flexibility between software or hardware implementations of either.</LI> <LI>There are no background threads created by the middleware: Messages are sent and received synchronously.</LI> </UL> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="freertos-architecture.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/312690i068DDE2FCE5FD0C2/image-size/large?v=v2&amp;px=999" role="button" title="freertos-architecture.png" alt="freertos-architecture.png" /></span></P> <P>&nbsp;</P> <P>The diagram above shows the middleware components used in our samples for MQTT, TLS and TCP – which you can adapt and change to your own requirements. It also highlights that the Embedded C SDK, FreeRTOS middleware and MQTT abstraction (in green) is provided and supported by Microsoft. The remaining blocks (in blue) of the solution components are 3<SUP>rd</SUP> party and you are free to choose whichever works better with your platform.</P> <P>&nbsp;</P> <H2>How to get started?</H2> <P>&nbsp;</P> <P>We created samples to help you get started with the Azure IoT middleware for FreeRTOS, and they are divided into:</P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px"><STRONG>Azure IoT Central</STRONG>: These samples leverage Azure DPS (Device Provisioning Service), Azure IoT Central and, IoT Plug and Play. Azure subscription is not required during the initial 7 days of the <A href="#" target="_blank" rel="noopener">IoT Central Free Trial</A>.</P> <UL> <LI>Espressif: <A href="#" target="_blank" rel="noopener">ESP32-Azure IoT Kit</A></LI> <LI>STMicroelectronics: <A href="#" target="_blank" rel="noopener">B-L475E-IOT01A</A></LI> </UL> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P class="lia-indent-padding-left-30px"><STRONG>Azure IoT Hub</STRONG>: These samples can connect directly to IoT Hub or connect first to DPS for provisioning and then connecting to IoT Hub (which is configurable)</P> <UL> <LI>ESPRESSIF: <A href="#" target="_blank" rel="noopener">ESP32</A>,&nbsp;<A href="#" target="_blank" rel="noopener">ESP32-Azure Iot Kit</A></LI> <LI>NXP: <A href="#" target="_blank" rel="noopener">MIMXRT1060-EVK</A></LI> <LI>STMicroelectronics: <A href="#" target="_blank" rel="noopener">B-L475E-IOT01A</A>, <A href="#" target="_blank" rel="noopener">B-L4S5I-IOT01A</A> , <A href="#" target="_blank" rel="noopener">STM32H745I-DISCO</A></LI> <LI>PC Simulation: <A href="#" target="_blank" rel="noopener">Linux</A>, <A href="#" target="_blank" rel="noopener">Windows</A></LI> </UL> <P>&nbsp;</P> <H2>Let us know what you think!</H2> <P>&nbsp;</P> <P>As we continue to work on improving the experience for IoT developers, we encourage you to join this open source project on <A href="#" target="_blank" rel="noopener">GitHub</A> , provide your feedback, file issues, contribute with your own pull requests and stay tuned on&nbsp;<A href="#" target="_blank" rel="noopener">Azure Updates</A>&nbsp;for any new Azure IoT SDK announcements.</P> <P>&nbsp;</P> <P>Middleware samples repo: <A href="#" target="_blank" 
rel="noopener">https://github.com/Azure-Samples/iot-middleware-freertos-samples</A></P> <P>Middleware source code: <A href="#" target="_blank" rel="noopener">https://github.com/Azure/azure-iot-middleware-freertos</A></P> <P>Middleware source code documentation: <A href="#" target="_blank" rel="noopener">https://azure.github.io/azure-iot-middleware-freertos/</A></P> Tue, 28 Sep 2021 15:00:00 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/expanding-azure-support-for-constrained-devices-azure-iot/ba-p/2782396 wduraes 2021-09-28T15:00:00Z Azure Percept and beyond: How certification makes building IoT solutions easier https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-and-beyond-how-certification-makes-building-iot/ba-p/2759143 <P>So you’ve been seeing a lot of posts about Azure Percept and a whole lot of<A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/bg-p/IoTBlog/label-name/Azure%20Percept" target="_self"> amazing Edge AI projects</A> that have come from the developer community from it. But did you know that the Azure Percept is just one of over a thousand products that are Microsoft-approved to make the IoT solution building process easier? When building with Azure, <A href="#" target="_blank" rel="noopener">the <STRONG>Azure Certified Device </STRONG>program</A> makes device selection simple, helping solution builders get started faster and with less investment time by promoting high-performing devices such as Azure Percept.</P> <P>&nbsp;</P> <H2>What does “certified” mean?</H2> <P>&nbsp;</P> <P>An Azure Certified device has gone through standardized testing for connectivity to Azure that will ensure a high-quality performance when implementing IoT solutions. All Azure Certified devices have demonstrated successful connectivity to IoT Hub using the Device Provisioning Service (DPS) and produce compatible telemetry. Simply, any Azure Certified Device can connect to Azure services and successfully communicate information through the cloud.</P> <P>&nbsp;</P> <P>The <A href="#" target="_self"><STRONG>Azure Percept Devkit (DK)</STRONG></A>, the pilot-ready kit with camera-enabled Azure Percept Vision, is one great example of a certified device. But the Azure Percept DK is not just an Azure Certified Device – it’s gone extra steps to prove that it can connect to Azure and use the IoT Edge runtime. This enables simple module deployment to run apps and containers like AI models at the edge, earning it an additional certification acknowledgement, known as the Edge Managed badge. By qualifying for this additional title, the Azure Percept DK has demonstrated that it is easy to set up and comes pre-installed with IoT Edge runtime, features that establish the Percept as an elite device for IoT solution building purposes. Learn more about Azure Percept <A href="#" target="_self">here</A>.&nbsp;</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="percept tile.png" style="width: 153px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/310996i44C83E547C40B945/image-size/small?v=v2&amp;px=200" role="button" title="percept tile.png" alt="percept tile.png" /></span></P> <P>&nbsp;</P> <H2>But wait – there’s more!</H2> <P>&nbsp;</P> <P>The Azure Percept DK is only one example of over a thousand devices that Microsoft partners have certified through this program to share with customers looking to optimize their IoT solution building journey. 
These devices vary from tiny MCUs and development kits, to sensors and edge gateways, all running on a varied set of operating systems, with a gamut of other capabilities geared toward customer need. All of these devices are easily found on the <A href="#" target="_blank" rel="noopener">Azure Certified Device catalog</A>, where customers can use the filter experience to find the perfect device for their solution.</P> <P>&nbsp;</P> <P>Azure Percept signifies Microsoft’s entry into the beginning of a vast ecosystem of IoT partners that are using Azure to build great IoT solutions. With the Azure Certified logo, you can easily pick out devices like Azure Percept to work together with Microsoft services quickly and with confidence. <STRONG>You can even view and buy Azure Percept DK&nbsp;<A href="#" target="_blank" rel="noopener">today</A> from our catalog!</STRONG></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="acdblogstats.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/310992i465F8E6F7FD59715/image-size/large?v=v2&amp;px=999" role="button" title="acdblogstats.png" alt="acdblogstats.png" /></span></P> <P>&nbsp;</P> <H2>Next steps</H2> <P>&nbsp;</P> <P>Your IoT solution is only a couple clicks away from getting started! Whether it’s with Azure Percept DK or another Azure Certified Device, we hope to see you get building in Azure. For more information and on the certification program and capabilities of certified devices, visit our&nbsp;<A href="#" target="_blank" rel="noopener">Microsoft Docs page</A>.</P> Thu, 23 Sep 2021 14:51:11 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-and-beyond-how-certification-makes-building-iot/ba-p/2759143 nkuntjoro 2021-09-23T14:51:11Z General availability: Azure Sphere OS version 21.09 https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/general-availability-azure-sphere-os-version-21-09/ba-p/2776484 <P><SPAN>Azure Sphere OS version 21.09&nbsp;is now available in the <STRONG>Retail </STRONG>feed</SPAN><SPAN>.&nbsp;</SPAN>This quality release includes bug fixes in the Azure Sphere OS; it does not include an updated SDK. 
If your devices are connected to the internet, they will receive the updated OS from the cloud.</P> <H3>&nbsp;</H3> <H4>Updates to the Azure Sphere OS include:</H4> <UL> <LI>Upgraded Linux Kernel to 5.10.60.</LI> <LI>Improvements to crash handling to prevent hangs.</LI> </UL> <P>&nbsp;</P> <H4>New and updated Gallery projects for 21.09</H4> <UL> <LI><A href="#" target="_blank" rel="noopener">RS-485 real-time driver</A>&nbsp;&nbsp;demonstrates how to use an M4F core on MT3620 to implement reliable RS-485 communication with inter-core communication to the high-level app on the A7 core.</LI> </UL> <P>&nbsp;</P> <P>For more information on documentation updates, see <A href="#" target="_blank" rel="noopener">New and revised documentation in the 21.09 release</A>.</P> <P><SPAN>&nbsp;</SPAN></P> <P><SPAN>For more information on Azure Sphere OS feeds and setting up an evaluation device group,&nbsp;see&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN>Azure Sphere OS feeds</SPAN></A><SPAN> and </SPAN><A href="#" target="_blank" rel="noopener">Set up devices for OS evaluation</A><SPAN>.</SPAN></P> <P>&nbsp;</P> <P>For self-help technical inquiries, please visit <A href="#" target="_blank" rel="noopener">Microsoft Q&amp;A</A><SPAN> o</SPAN>r <A href="#" target="_blank" rel="noopener">Stack Overflow</A>. If you require technical support and have a support plan, please submit a support ticket in <A href="#" target="_blank" rel="noopener">Microsoft Azure Support</A> or work with your Microsoft Technical Account Manager. If you would like to purchase a support plan, please explore the <A href="#" target="_blank" rel="noopener">Azure support plans</A>.</P> Wed, 22 Sep 2021 22:58:34 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/general-availability-azure-sphere-os-version-21-09/ba-p/2776484 AzureSphereTeam 2021-09-22T22:58:34Z Discover the possibilities with Azure Percept for IoT solutions https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/discover-the-possibilities-with-azure-percept-for-iot-solutions/ba-p/2762264 <P><SPAN data-contrast="auto"><STRONG>Experimenting&nbsp;with technology&nbsp;and applying it to new scenarios&nbsp;expands our skills, feeds our ingenuity, and&nbsp;stretches&nbsp;the limits of what’s possible—something we’re always doing here at Microsoft.</STRONG>&nbsp;Today, the intelligent edge is transforming business&nbsp;across every industry. As&nbsp;we evolve from connected assets to connected environments and ultimately, to entire connected ecosystems,&nbsp;Microsoft continues to invest heavily&nbsp;to&nbsp;accelerate&nbsp;innovation&nbsp;of&nbsp;edge AI.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Azure Percept</SPAN></A><SPAN data-contrast="none">&nbsp;</SPAN><SPAN data-contrast="none">is&nbsp;an end-to-end, low-code platform that streamlines the creation of highly secure artificial intelligence (AI) solutions at the edge.</SPAN><SPAN data-contrast="none">&nbsp;The&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Azure Percept development kit</SPAN></A><SPAN data-contrast="auto">&nbsp;includes pre-built AI models that can detect people, vehicles, general objects, and even products on a shelf. 
The best part is that this end-to-end, low-code platform for building edge AI solutions has almost <STRONG>endless possibilities for how it can be used across industries: manufacturing, retail, smart cities, smart buildings, and even transportation.</STRONG></SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN><SPAN data-contrast="auto">From helping keep people safer during the pandemic by offering insights into pedestrian movement and density to helping keep your favorite products in stock and on shelves, there’s no end to the innovative use cases for Azure Percept. It’s even being used to help interpret sign language.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><STRONG>Discover the possible ways you could&nbsp;modernize and&nbsp;transform your business and operations when you start using Azure Percept in your end-to-end IoT solutions.&nbsp;<A href="#" target="_blank" rel="noopener">Read the full blog post</A></STRONG><SPAN data-contrast="auto">&nbsp;for current use case examples and useful resources that show how others are already getting started with the&nbsp;Azure Percept&nbsp;dev kit.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> Tue, 21 Sep 2021 18:07:37 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/discover-the-possibilities-with-azure-percept-for-iot-solutions/ba-p/2762264 Amiyouss 2021-09-21T18:07:37Z InnovateFPGA contest seeks sustainable solutions using Microsoft Azure and Intel IoT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/innovatefpga-contest-seeks-sustainable-solutions-using-microsoft/ba-p/2758507 <P>Microsoft and Intel empower technologies that can build a more environmentally sustainable future and reduce the demand we place on Earth's resources. That's why Microsoft <A href="#" target="_blank" rel="noopener">Azure IoT and Intel</A> are partnering with the <A href="#" target="_blank" rel="noopener">InnovateFPGA Design Contest</A>, where developers can harness Azure cloud computing and Intel® FPGA technology to create solutions that reduce environmental impacts.</P> <P>&nbsp;</P> <P>The design contest is looking for inventive developers from around the globe who can create cutting-edge solutions. Internet of Things (IoT) solutions are already reducing climate impact: from smart lighting that turns off when not needed to transportation systems that reduce pollution-exacerbating car congestion. New IoT solutions could address unique challenges, such as reducing the waste of resources such as water, tracking ocean life, or increasing protection of endangered species.</P> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener">FPGA Cloud Connectivity Kits</A>, provided to qualified teams in the InnovateFPGA contest, show how an Intel® Edge-Centric FPGA (field programmable gate array) can connect seamlessly with Azure IoT. In addition, development teams will have a choice to use Azure IoT Central or IoT Hub to connect their solution to the Azure cloud. 
Developers interested in building sustainable solutions using this technology can register their team at <A href="#" target="_blank" rel="noopener">www.innovatefpga.com</A>.</P> <P>&nbsp;</P> <H2>Working jointly to encourage innovative developers</H2> <P>&nbsp;</P> <P>Azure IoT and Intel already <A href="#" target="_blank" rel="noopener">work jointly together</A> in advanced solutions, complementing each other's technology to address challenges in a variety of industries. The FPGA Cloud Connectivity Kit enables innovative developers to perform new tasks and solve more challenges, as real-time handling of Azure workloads is easier with a dedicated FPGA-based hardware accelerator. FPGA circuits also can be configured specifically for various workloads to provide better performance and flexibility.</P> <P>&nbsp;</P> <P>Additionally, Azure IoT Central, IoT Hub, and IoT Plug and Play help builders and developers connect devices to the cloud. <A href="#" target="_blank" rel="noopener">IoT Central</A> offers a secure platform, centralized management to reconfigure and update devices, and a growing selection of IoT app templates for common scenarios. <A href="#" target="_blank" rel="noopener">IoT Hub</A>, meanwhile, enables secure and reliable communication via the cloud between IoT applications and the devices they manage. Finally, <A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/iot-plug-and-play-simplifies-iot-adoption-enabling-intelligent/ba-p/2167059" target="_blank" rel="noopener">IoT Plug and Play</A> seeks to remove technical barriers, reducing development time, cost, and complexity.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Competition-Allup-Microsoft-InnovateFGPA-LI-Twitter-Option1 V5.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/310905iCCEC2EF8841599B8/image-size/large?v=v2&amp;px=999" role="button" title="Competition-Allup-Microsoft-InnovateFGPA-LI-Twitter-Option1 V5.jpg" alt="Competition-Allup-Microsoft-InnovateFGPA-LI-Twitter-Option1 V5.jpg" /></span></P> <P>&nbsp;</P> <H2>How Microsoft Azure and Intel can reduce environmental impacts</H2> <P>&nbsp;</P> <P>Sustainability and environmental efforts by Microsoft and Intel go beyond their support of the InnovateFPGA contest. As part of our commitment to put sustainable technologies at the heart of our innovation, we're focused on <A href="#" target="_blank" rel="noopener">four key areas</A> of environmental impact to local communities: carbon, water, waste, and ecosystems. We're striving to use 100% renewable energy by 2025, replenish more water than we consume by 2030, create zero waste annually by that same year, and achieve net-zero deforestation from new construction. Enterprises looking to understand and potentially reduce their environmental impacts can use the <A href="#" target="_blank" rel="noopener">Sustainability Calculator</A>.</P> <P>&nbsp;</P> <P>Just as Azure cloud computing can help reduce energy use, Intel FPGA-enabled architecture used with the cloud is economical and power-efficient, making high-performance computing more sustainable. 
Microsoft is also <A href="#" target="_blank" rel="noopener">researching new approaches</A> for improving computing and cloud data handling while simultaneously using less energy and more sustainable materials.</P> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener">Project 15 from Microsoft</A> also is helping to accelerate conservation organizations and scientific teams reduce costs and complexity in deploying IoT technologies through its <A href="#" target="_blank" rel="noopener">open-source software platform</A><I>. </I>The <A href="#" target="_blank" rel="noopener">Global Environment Facility Small Grants Programme (GEF SGP)</A>, implemented by the United Nations Development Program, is collaborating with the InnovateFPGA contest to task teams to create solutions for biodiversity, sustainable agriculture, and marine conservation. The SGP has shared <A href="#" target="_blank" rel="noopener">use cases from the Project 15 initiative</A> on the contest website.</P> <P>&nbsp;</P> <H2>Submit design proposals to the InnovateFPGA Design Contest</H2> <P>&nbsp;</P> <P>Regional teams of university students, builders, and professional engineers with the creativity and ingenuity to design solutions for challenging environmental issues are encouraged to participate in the InnovateFPGA Design Contest. The deadline for teams to <A href="#" target="_blank" rel="noopener">submit their solution proposals</A> has been extended to Oct. 31, 2021. After registering, free FPGA Cloud Connectivity Kits will be shipped to teams whose proposals are selected to advance forward.</P> <P>&nbsp;</P> <P>Qualifying teams will develop solutions for sustainability until Spring 2022, where they will compete in their respective regional competition. Teams at this level will have a chance to win cash prizes and an invitation to the Grand Finale event at Intel headquarters in San Jose, California, in 2022. For more details, rules, and an FAQ page, go to <A href="#" target="_blank" rel="noopener">www.innovatefpga.com</A>.</P> Tue, 05 Oct 2021 18:37:07 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/innovatefpga-contest-seeks-sustainable-solutions-using-microsoft/ba-p/2758507 Justin_Slade 2021-10-05T18:37:07Z A Visual Guide to Azure Percept https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/a-visual-guide-to-azure-percept/ba-p/2747730 <P>&nbsp;</P> <P>Welcome to an illustrated guide to <A href="#" target="_blank" rel="noopener">Azure Percept</A> – a new end-to-end <A href="#" target="_blank" rel="noopener">edge AI</A> platform from Microsoft that helps&nbsp;IoT practitioners go seamlessly from silicon to services when developing &amp; deploying intelligent edge applications. The guide gives an overview of the Azure Percept platform capabilities and components, explains how it solves a key problem for edge AI developers today, and concludes with a list of relevant resources to jumpstart your own prototyping journey.<BR /><BR /></P> <H3>About Visual Guides:</H3> <P>Did you know <A href="#" target="_blank" rel="noopener">65% of us are visual learners</A>? Our brains are wired to absorb information from visual cues and use them to detect patterns and make connections faster, and with better recall. Visual guides offer a “big picture” summary of the topic that you can use as a <EM>pre-read (</EM>before diving into documentation) or a <EM>post-recap resource</EM> (to identify gaps in learning or coverage). 
Download a <A href="#" target="_blank" rel="noopener">high-resolution image</A> of the visual guide to Azure Percept and use it as desktop wallpaper or print it out as a handy reference to support your learning journey.<BR /><BR /></P> <H3>About AI at the Edge:</H3> <P><A href="#" target="_blank" rel="noopener">Edge Computing</A> defines a distributed architecture with compute resources placed closer to information gathering resources to reduce network latency and bandwidth usage in cloud computing. By pairing an intelligent edge with an intelligent cloud, we get faster decision making, offline operation, optimized network usage, and data privacy protections.&nbsp;<A href="#" target="_blank" rel="noopener">Edge AI</A> uses edge compute resources to run machine learning and data analytics processes on-device (e.g., for real-time insights, intelligent decision-making, and workflow automation solutions) making such platforms critical to&nbsp;<A href="#" target="_blank" rel="noopener">hybrid cloud</A> strategies.</P> <P>&nbsp;</P> <H2>Why Azure Percept?</H2> <P>&nbsp;</P> <P>Many edge AI solutions today must be built <EM>from the ground up&nbsp;</EM>using diverse hardware (devices) and software (services) that need to be integrated and managed manually, creating workflow complexity for developers. Creating and deploying AI models also assumes a level of data science and machine learning expertise that many traditional IoT developers lack.<BR /><BR /></P> <P><A href="#" target="_blank" rel="noopener">Azure Percept</A> is an end-to-end technology platform from Microsoft that was designed to tackle these challenges, making it easier for IoT practitioners to rapidly prototype, deploy, and manage, their edge AI solutions. Azure Percept has three core aspects:</P> <UL> <LI><A href="#" target="_blank" rel="noopener"><EM>E2E edge AI platform</EM></A>&nbsp;with hardware accelerators, integrated <A href="#" target="_blank" rel="noopener">Azure AI</A>&nbsp;&amp;&nbsp;<A href="#" target="_blank" rel="noopener">Azure IoT</A> services.</LI> <LI><A href="#" target="_blank" rel="noopener"><EM>Pre-built AI models&nbsp;</EM></A>for rapid prototyping using customizable vision and speech solutions.</LI> <LI><A href="#" target="_blank" rel="noopener"><EM>Built-in security measures</EM></A>&nbsp;on-devices &amp; with services, to protect sensitive, high-value assets.</LI> </UL> <P>With Azure Percept, practitioners can build and deploy <STRONG>custom AI models</STRONG>, setup and <STRONG>manage IoT device collections,</STRONG> and integrate seamlessly with a rich set of <STRONG>Azure cloud services</STRONG> – for edge AI application prototyping &amp; deployment at scale. Azure Percept fits seamlessly into familiar <A href="#" target="_blank" rel="noopener">Azure IoT architectures</A>, lowering the learning curve for adoption. It supports rich <A href="#" target="_blank" rel="noopener">tools and documentation</A> for <STRONG>low-code development,&nbsp;</STRONG>so developers can build &amp; deploy edge AI solutions without needing deep data science or cloud computing expertise. Let's explore the visual guide to Azure Percept!<BR /><BR /></P> <H2>A Visual Guide To Azure Percept</H2> <P>&nbsp;</P> <P>The illustrated guide below gives a visual summary of the <A href="#" target="_blank" rel="noopener">Azure Percept Overview</A> documentation. 
I recommend you&nbsp;<A href="#" target="_blank" rel="noopener">download the high resolution image</A>&nbsp;of this guide and use it as a reference for the rest of this post.</P> <P><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Azure-005-AzurePercept.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/310191iA821ABE132D8BE1C/image-size/large?v=v2&amp;px=999" role="button" title="Azure-005-AzurePercept.png" alt="Azure-005-AzurePercept.png" /></span></P> <P>&nbsp;</P> <P>In the next few sections, we’ll explore some sections of the visual guide (with relevant links for self-guided deep dives) with a focus on three aspects: the <STRONG>big picture</STRONG>, the<STRONG> core components</STRONG>, and <STRONG>next steps</STRONG> to get started! Let's dive in!</P> <P>&nbsp;</P> <H3>1. Azure Percept: The Big Picture<BR /><BR /></H3> <TABLE style="border-style: none; width: 100%;" border="0" width="100%"> <TBODY> <TR> <TD width="33.333333333333336%"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="nityan_0-1631667650387.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/310459i3585E08325AF9636/image-size/medium?v=v2&amp;px=400" role="button" title="nityan_0-1631667650387.png" alt="nityan_0-1631667650387.png" /></span></TD> <TD width="33.333333333333336%"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="nityan_1-1631667650397.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/310458i7D4A67C488EFD43F/image-size/medium?v=v2&amp;px=400" role="button" title="nityan_1-1631667650397.png" alt="nityan_1-1631667650397.png" /></span></TD> <TD width="33.333333333333336%"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="nityan_2-1631667650405.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/310460iFE19016DFFE8EE50/image-size/medium?v=v2&amp;px=400" role="button" title="nityan_2-1631667650405.png" alt="nityan_2-1631667650405.png" /></span></TD> </TR> </TBODY> </TABLE> <P><BR />Azure Percept is <A href="#" target="_blank" rel="noopener">a family of hardware, software and services </A>&nbsp;that covers the full stack from silicon to services, <EM>to help customers solve integration challenges of using AI at the edge</EM>, at scale. It tackles three main points of friction for edge AI integrations:</P> <UL> <LI>Selecting the right <STRONG>silicon</STRONG> (hardware integrations) to power the edge AI solution.</LI> <LI>Providing end-to-end<STRONG style="font-family: inherit; background-color: transparent;"> security</STRONG><SPAN style="font-family: inherit; background-color: transparent;"> (hardware, software, models, data) for edge AI.</SPAN></LI> <LI>Building &amp; managing edge AI solutions (device &amp; service deployment) at <STRONG style="font-family: inherit; background-color: transparent;">scale</STRONG><SPAN style="font-family: inherit; background-color: transparent;">.</SPAN></LI> </UL> <H3><BR />2. Azure Percept: Core Components</H3> <P>To achieve this, the Azure Percept platform provides support for <EM>on-device prototyping</EM> (dev kit), <EM>cross-device workflow and solution management </EM>(portal), and <EM>guidance for best practices</EM> in each case. 
Let’s look at the three aspects briefly:<BR /><BR /></P> <TABLE border="1" width="100%"> <TBODY> <TR> <TD width="33.333333333333336%" height="30px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="nityan_0-1631667778270.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/310461iBA39AEAA4D088398/image-size/medium?v=v2&amp;px=400" role="button" title="nityan_0-1631667778270.png" alt="nityan_0-1631667778270.png" /></span></TD> <TD width="33.333333333333336%" height="30px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="nityan_1-1631667778283.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/310462i82D7EAC3CAC44ADB/image-size/medium?v=v2&amp;px=400" role="button" title="nityan_1-1631667778283.png" alt="nityan_1-1631667778283.png" /></span></TD> <TD width="33.333333333333336%" height="30px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="nityan_2-1631667778292.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/310463iA5B9086961A47E2F/image-size/medium?v=v2&amp;px=400" role="button" title="nityan_2-1631667778292.png" alt="nityan_2-1631667778292.png" /></span></TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <UL> <LI><A href="#" target="_blank" rel="noopener"><STRONG>Azure Percept Development Kit</STRONG></A> – with <EM>flexible hardware options for diverse AI prototyping scenarios using computer vision or speech</EM>. Provides build-in hardware acceleration and trusted security solutions. Supports <A href="#" target="_blank" rel="noopener">80/20 railing system</A> for limitless device mounting configurations. Integrates with Azure AI and Azure IoT services – but also runs AI models on-device without a connection to the cloud, for reliable and efficient real-time insights. It also integrates seamlessly with <A href="#" target="_blank" rel="noopener">Azure Percept Audio</A>, an optional accessory for speech solutions.</LI> <LI><A href="#" target="_blank" rel="noopener"><STRONG>Azure Percept Studio</STRONG></A> – <EM>a single launch and management portal for your edge AI models and solutions.</EM> Access pre-built AI models or develop custom versions for your app requirements. Use guided workflows to seamlessly integrate AI-capable hardware (edge devices) and cloud services (Azure AI and Azure IoT) – using a low code approach to development. Create end-to-end edge AI solutions quickly without extensive data science or cloud computing experience.</LI> <LI><STRONG>AI Hardware Reference Design and Certification Programs</STRONG> – from best practices (e.g., <A href="#" target="_blank" rel="noopener">security recommendations</A>) to device specifications (e.g., datasheets <A href="#" target="_blank" rel="noopener">for Azure Percept Vision</A>, <A href="#" target="_blank" rel="noopener">Azure Percept DK</A>) and support for <A href="#" target="_blank" rel="noopener">firmware updates</A> and <A href="#" target="_blank" rel="noopener">device deployments</A>.</LI> </UL> <H3><BR />3. 
Azure Percept Studio: Getting Started</H3> <P>To get started prototyping your edge AI applications or solutions, you’ll need access to suitable device hardware and an Azure account for service integration and solution management needs.</P> <UL> <LI>The Azure Percept Development Kit is currently available for purchase <A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-dk-and-azure-percept-audio-now-available-in-more/ba-p/2712969" target="_self">here</A> .</LI> <LI>The Azure Percept Studio is accessible<A href="#" target="_blank" rel="noopener"> here</A> (with a valid Microsoft Azure account)</LI> </UL> <P>Start by logging into Azure Percept Studio portal – you'll see an entry page like the one below.</P> <UL> <LI>The sidebar menu shows how the portal supports both&nbsp;<EM>device management</EM> (view and deploy edge devices) and <EM>application management</EM> (create and manage AI vision or speech projects).</LI> <LI>The portal also provides handy links to Vision and Speech <EM>demos &amp; tutorials</EM> for development – with two complete sample applications (<EM>People Counting</EM> showcasing Live Video Analytics, and <EM>Vision On Edge </EM>showcasing end-to-end pipelines with 100+ camera feeds deployed) that can be deployed to Azure with one click, for hands-on experimentation.</LI> <LI>Finally, the portal provides access to advanced tools for cloud development (e.g., <A style="font-family: inherit; background-color: #ffffff;" href="#" target="_blank" rel="noopener">Azure ML Notebooks</A><SPAN style="font-family: inherit;">), local device deployment (e.g., </SPAN><A style="font-family: inherit; background-color: #ffffff;" href="#" target="_blank" rel="noopener">AI dev toolchain</A><SPAN style="font-family: inherit;">) and security measures (e.g., </SPAN><A style="font-family: inherit; background-color: #ffffff;" href="#" target="_blank" rel="noopener">AI model protection</A><SPAN style="font-family: inherit;">). </SPAN></LI> <LI><SPAN style="font-family: inherit;">Plus, access other advanced development content in preview mode </SPAN><A style="font-family: inherit; background-color: #ffffff;" href="#" target="_blank" rel="noopener">on this GitHub repo</A><SPAN style="font-family: inherit;">.<BR /><BR /></SPAN></LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="nityan_1-1631668535536.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/310465i8D0252D5E9BD4D11/image-size/large?v=v2&amp;px=999" role="button" title="nityan_1-1631668535536.png" alt="nityan_1-1631668535536.png" /></span></P> <P><BR />The&nbsp;<EM>"Create a Prototype"</EM> option is a good place to start your prototyping journey. The Azure Percept Studio portal provides &nbsp;<A href="#" target="_blank" rel="noopener">computer vision</A> and <A href="#" target="_blank" rel="noopener">speech (voice assistant)</A> tutorials with a <STRONG>low-code approach</STRONG> (using visual interfaces &amp; templates for interactive configuration) that walks you through the process of exploring <A href="#" target="_blank" rel="noopener">sample AI models</A>, creating custom models, and deploying these prototypes to your edge devices – all from one unified interface. 
Sample models exist for people detection, vehicle detection, general object detection and products-on-shelf detection use Azure <A href="#" target="_blank" rel="noopener">IoT Hub</A> and <A href="#" target="_blank" rel="noopener">Azure IoT Edge</A> service integrations for seamless deployment.</P> <P><STRONG>Interested in deploying AI at the Edge in your solutions and projects</STRONG>? We’d love to hear from you about the application domain and usage scenarios and keep you updated on what’s coming next. <EM>Drop a comment below, subscribe to the blog, stay in touch</EM>!</P> <P>&nbsp;</P> <H3>4. Summary &amp; Next Steps</H3> <P><EM>Thanks for reading!&nbsp;</EM>This was a quick visual introduction to Azure Percept – an end-to-end edge AI platform that supports the “Sense. Know. Act.” requirements for intelligent edge applications!</P> <UL> <LI>Azure Percept consists of an <EM>Azure Percept Development Kit </EM>(for devices), a backend <EM>Azure Percept Studio portal </EM>(for service integrations and solution management), and <EM>sample AI models </EM>(for computer vision and speech) for rapid prototyping of edge AI applications.</LI> <LI>The illustrated guide to Azure Percept summarizes the main takeaways from the&nbsp;<A style="font-family: inherit; background-color: #ffffff;" href="#" target="_blank" rel="noopener">Azure Percept Overview&nbsp;</A><SPAN style="font-family: inherit;">with </SPAN><A style="font-family: inherit; background-color: #ffffff;" href="#" target="_blank" rel="noopener">a downloadable hi-resolution image</A><SPAN style="font-family: inherit;"> for pre-read or post-recap help.</SPAN></LI> </UL> <P>Want to learn more on your own? Here are some relevant resources to get you going:</P> <P>&nbsp;</P> <P><STRONG>Azure IoT Fundamentals:</STRONG></P> <UL> <LI><A href="#" target="_blank" rel="noopener">An Introduction to Azure IoT</A> (8-module learning path)</LI> <LI><A href="#" target="_blank" rel="noopener">Build the intelligent edge with Azure IoT Edge</A> (3-module learning path)</LI> <LI><A href="#" target="_blank" rel="noopener">Securely connect IoT devices (with IoT Hub) to the cloud</A> (6-module learning path)</LI> <LI><A href="#" target="_blank" rel="noopener">Develop IoT solutions with Azure IoT Central</A> (5-module learning path)</LI> <LI><A href="#" target="_blank" rel="noopener">Azure IoT reference architecture</A> (learn how various edge and cloud components interact)<BR /><BR /></LI> </UL> <P><STRONG>Azure Percept:</STRONG></P> <UL> <LI><A href="#" target="_blank" rel="noopener">Azure Percept Product Page</A> – for features and FAQ</LI> <LI><A href="#" target="_blank" rel="noopener">Azure Percept Documentation</A> – for developers</LI> <LI><A href="#" target="_blank" rel="noopener">Azure Percept Visual Guide</A> - downloadable hi-res image</LI> <LI><A href="#" target="_blank" rel="noopener">Azure Percept on YouTube</A> – video playlists (Architecture, How-to)</LI> <LI><A href="#" target="_blank" rel="noopener">Azure Percept Sample AI Models</A> – for prototyping</LI> <LI><A href="#" target="_blank" rel="noopener">Azure Percept Advanced Development</A> – for samples &amp; previews</LI> <LI><A href="#" target="_blank" rel="noopener">Advanced Development (contd.)</A> - via Jupyter Lab &amp; Azure Machine Learning</LI> <LI><A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/bg-p/IoTBlog/label-name/Azure%20Percept" target="_blank" rel="noopener">Azure Percept Community</A> – learn more about community projects</LI> <LI><A href="#" 
target="_blank" rel="noopener">New innovations bring AI to the edge</A> – Microsoft Ignite 2021 announcement (<A href="#" target="_blank" rel="noopener">blog</A>)</LI> </UL> <P>&nbsp;</P> <P>&nbsp;</P> <P><BR /><BR /></P> Thu, 16 Sep 2021 15:00:00 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/a-visual-guide-to-azure-percept/ba-p/2747730 nityan 2021-09-16T15:00:00Z You can now keep a close eye on your IoT Edge devices health directly from IoT Central https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/you-can-now-keep-a-close-eye-on-your-iot-edge-devices-health/ba-p/2698676 <P data-unlink="true">Having visibility into production edge devices and corresponding deployments is critical for device operators. We previously <A href="#" target="_self">shared</A>&nbsp; IoT Edge's integration with Azure Monitor which is in public preview. Today we are pleased to unlock the same benefits for IoT Central customers using IoT Edge.</P> <P data-unlink="true">&nbsp;</P> <P>&nbsp;</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/Az1dehqLAug?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="IoT Show with Olivier Bloch and Ranga Vadlamudi discussing and demoing IoT Edge Device observability in IoT Central"></IFRAME></DIV> <P>&nbsp;</P> <P><SPAN>To enable this capability on your device, add the metrics-collector module to your deployment and configure it to collect and transport metrics to your Azure IoT Central Application or Azure Monitor .</SPAN></P> <P>&nbsp;</P> <H2><FONT size="5">#1 - Enable edge device observability for device operator to have a single pane of glass</FONT></H2> <P>&nbsp;</P> <P><SPAN>Device operators and developers can choose to&nbsp;configure deployments to send edge device observability metrics to IoT Central application.&nbsp;For operator to have a single pane of glass you can transport metrics collected from edge modules to Azure IoT Central for operator to visualize the metrics data within IoT Central application.</SPAN></P> <P>&nbsp;</P> <P><SPAN>IoT Central provides a standard metrics interface to map the data flowing into IoT Central. 
Cloud developers can build dashboards based off the capabilities listed in the metrics interface to provide a single pane of glass for operators to manage the edge device and view metrics all from IoT Central application.</SPAN></P> <P>&nbsp;</P> <P><SPAN>To view the metrics from your IoT Edge device in your IoT Central application:</SPAN></P> <OL> <LI><SPAN>Add the IoT Edge Metrics standard interface as an inherited interface to your device template<span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="add-metrics-interface.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/306681i87FE24CA29BCF853/image-size/large?v=v2&amp;px=999" role="button" title="add-metrics-interface.png" alt="add-metrics-interface.png" /></span></SPAN></LI> <LI><SPAN>Use the telemetry values defined in the interface to build any dashboards you need to monitor your IoT Edge devices<span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="iot-edge-metrics-telemetry.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/306682i03E9062BD1DD7B17/image-size/large?v=v2&amp;px=999" role="button" title="iot-edge-metrics-telemetry.png" alt="iot-edge-metrics-telemetry.png" /></span></SPAN></LI> <LI><SPAN>Create Dashboards in IoT Central to visualize the metrics</SPAN></LI> </OL> <P class="lia-indent-padding-left-30px"><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="metricsiotc.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/309945i960E544CF554A884/image-size/large?v=v2&amp;px=999" role="button" title="metricsiotc.png" alt="metricsiotc.png" /></span></SPAN></P> <P><SPAN>&nbsp;</SPAN></P> <H2><SPAN><FONT size="5">#2 - Optionally use Azure Monitor for unified monitoring&nbsp;</FONT></SPAN></H2> <P>&nbsp;</P> <P><SPAN>Your company operations team can choose to configure the edge devices to collect and transport metrics to Azure Monitor to workbooks to unify cloud and edge data for rich visualization of the metrics data.&nbsp;&nbsp;</SPAN><SPAN><U>Note</U>: IoT Central currently does not support alerts in the Azure monitor workbooks for metrics solution.&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN>You can enable Azure Monitor workbooks from the Azure portal. Find your IoT Central application in Azure portal and navigate to Workbook section from the left navigation. 
We have enabled public workbooks.&nbsp;</SPAN></P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="centralportal.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/310178i6C45969E2E29C7C3/image-size/large?v=v2&amp;px=999" role="button" title="centralportal.png" alt="centralportal.png" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>Azure Monitor workbooks&nbsp;curated visualizations enable you to quickly and easily analyze the efficiency of your solution</SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="metricsportal1.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/306674i4DFAC7439AE05EEA/image-size/large?v=v2&amp;px=999" role="button" title="metricsportal1.png" alt="metricsportal1.png" /></span></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="metricsportal2.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/306676iB280F100E7C5B719/image-size/large?v=v2&amp;px=999" role="button" title="metricsportal2.png" alt="metricsportal2.png" /></span></P> <H2 id="toc-hId-127084153">Additional Resources</H2> <P>&nbsp;</P> <P>Following are learning resources.</P> <OL> <LI><A href="#" target="_self">Docs: Collect and transport metrics (Preview)</A></LI> <LI><A href="#" target="_blank" rel="noopener nofollow noreferrer">Create your IoT Central App</A></LI> <LI><A href="#" target="_blank" rel="noopener noreferrer">IoT Central GitHub Page</A></LI> </OL> <P>&nbsp;</P> Wed, 15 Sep 2021 15:00:00 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/you-can-now-keep-a-close-eye-on-your-iot-edge-devices-health/ba-p/2698676 Ranga Vadlamudi 2021-09-15T15:00:00Z Connect an ESP32 To Azure IoT with .NET nanoFramework https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/connect-an-esp32-to-azure-iot-with-net-nanoframework/ba-p/2731691 <H1 id="you-dont-need-a-big-device-to-connect-to-iothub"><FONT size="5">You don’t need a big device to connect to Azure IoT Hub</FONT></H1> <P>When you are looking for IoT devices to connect to Azure IoT Hub most are minicomputers or devkits that are cheap but limited in what they can do or just very power hungry. But what about using a Microprocessor I hear you say, well these are great, but you are then normally limited to programming in C which we all know isn’t great.</P> <P>&nbsp;</P> <P>The Azure SDK for Embedded C is designed to allow small embedded (IoT) devices to communicate with Azure services and can be found&nbsp;<A href="#" target="_blank" rel="noopener">here&nbsp;</A>but again it’s C you are forced to use, and it’s targeted at larger device as mentioned before due to the memory required.</P> <P>&nbsp;</P> <P>So what options do we have for us .NET developers that is smaller than a Raspberry Pi and allows us to use C#. Well, we have the awesome&nbsp;<A href="#" target="_blank" rel="noopener">Wilderness Labs F7&nbsp;</A>but what about even lower powered off the shelf from Amazon devices like the<SPAN>&nbsp;ESP32-CAM-MB</SPAN>that are just £10/$13 each or even cheaper if you shop around.</P> <P>&nbsp;</P> <P>This blog post is hopefully going to show you how to take one of these super cheap and easy to purchase devices and with your .NET skills connect to Azure IoT Hub all using C#. 
For this magic to happen I will be using the .NET&nbsp;<A href="#" target="_blank" rel="noopener">nanoFramework&nbsp;</A>which makes it easy to write C# code for a select number of IoT Devices.</P> <P>&nbsp;</P> <P>Let’s dive in…</P> <P>&nbsp;</P> <H1 id="device-being-used"><FONT size="5">Device being used</FONT></H1> <P>The device I am using is an ESP32-CAM and you can get them online just search your favourite online store for <EM><STRONG>ESP32-CAM-MB</STRONG></EM> boards.</P> <P>&nbsp;</P> <P><EM><STRONG><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-24-22-29-38.png" style="width: 782px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308869iE7DB445EFBA5A9CA/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-24-22-29-38.png" alt="2021-07-24-22-29-38.png" /></span></STRONG></EM></P> <H1 id="connecting-the-device"><FONT size="5">Connecting the device</FONT></H1> <P>When you first connect the device to your PC/Laptop you will hear the Windows Bing-bong but looking in the Device Manager you will notice that it appears listed under the&nbsp;<CODE>Other Devices&nbsp;</CODE>section.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-24-22-34-04.png" style="width: 609px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308870iC53BE31A938B00C6/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-24-22-34-04.png" alt="2021-07-24-22-34-04.png" /></span></P> <P>&nbsp;</P> <P>To use the ESP32-CAM-MB you need the CH340 Drivers otherwise it will not connect as a USB device. You can get installation software for the drivers from&nbsp;<A href="#" target="_blank" rel="noopener">Sparkfun</A> as this is the quickest and easiest way to install them.</P> <P>&nbsp;</P> <P>After you have the driver installed the device then appears as an Com device in the Device Manager:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-24-22-36-36.png" style="width: 632px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308871iB3F281FCC675931F/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-24-22-36-36.png" alt="2021-07-24-22-36-36.png" /></span></P> <P>&nbsp;</P> <H1 id="set-up-visual-studio"><FONT size="5">Set-up Visual Studio</FONT></H1> <P>First I need to point out that you can use ANY version of Visual Studio to follow along, from the FREE Community Edition all the way up to the Enterprise version. 
You can grab the FREE Community edition&nbsp;<A href="#" target="_blank" rel="noopener">here</A>.</P> <P>&nbsp;</P> <P>We now need to install the nanoFramework VS2019 extension. This is a simple installation from either the&nbsp;<A href="#" target="_blank" rel="noopener">Marketplace&nbsp;</A>or by using the Extension Manager in VS19 and searching for&nbsp;<CODE>nanoFramework</CODE>.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-08-10-17-24-15.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308873iA34A0918C1EEBD28/image-size/large?v=v2&amp;px=999" role="button" title="2021-08-10-17-24-15.png" alt="2021-08-10-17-24-15.png" /></span></P> <P>&nbsp;</P> <H1 id="configure-the-esp32-board"><FONT size="5">Configure the ESP32 Board</FONT></H1> <P>Now that the VS19 extension is installed, you need to flash the .NET nanoFramework firmware onto the ESP32 board. This includes the required nanoFramework firmware and bootloaders so that you can connect the board to the PC with Visual Studio and target the board for development.</P> <P>&nbsp;</P> <P>The tool needed is&nbsp;<A href="#" target="_blank" rel="noopener">open source here on GitHub&nbsp;</A>from the nanoFramework team if you want to read up on it.</P> <P>&nbsp;</P> <P>From here you can see that you need to install the CLI tools, which is really easy using the&nbsp;<CODE>dotnet CLI</CODE>, so within your terminal program, like&nbsp;<A href="#" target="_blank" rel="noopener">Windows Terminal</A>, run this command.</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="powershell">dotnet tool install -g nanoff</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P><SPAN>Now that&nbsp;</SPAN><CODE>nanoff</CODE><SPAN>&nbsp;is installed you can connect your board and find its COM port in your Device Manager; right-clicking the Windows logo and selecting&nbsp;</SPAN><CODE>Device Manager</CODE><SPAN>&nbsp;is the quickest way I find. Under&nbsp;</SPAN><CODE>Ports (COM &amp; LPT)</CODE><SPAN>&nbsp;you will see your device; if you have multiple, unplug yours and see which one disappears to work out which port you need. 
Make a note of the COM number so for me it’s COM6.</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-24-22-36-36.png" style="width: 632px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308874i3A6AAEB0413936E4/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-24-22-36-36.png" alt="2021-07-24-22-36-36.png" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>Now you have the Com port you can flash the firmware so back to the Terminal program and run this command, changing the digit after the&nbsp;<CODE>COM</CODE>&nbsp;to match your device COM Port</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="powershell">nanoff --target ESP32_WROOM_32 --serialport COM6 --update --preview </LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P><SPAN>You should see it whirl away for a minute or two as it erases the flash and then burns the new firmware.</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-24-22-44-46.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308875i4269F96CD0062B9D/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-24-22-44-46.png" alt="2021-07-24-22-44-46.png" /></span></SPAN></P> <P>&nbsp;</P> <H1 id="create-the-project"><FONT size="5">Create the Project</FONT></H1> <P>We are now ready to create our project, so within Visual Studio create a new project on opening and in the search box enter&nbsp;<CODE>nano&nbsp;</CODE>and you will see that you can create a Blank Application, give it a name and save location as you normally do in Visual Studio.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-15-14-38-29.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308876i367E314BD41D2E55/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-15-14-38-29.png" alt="2021-07-15-14-38-29.png" /></span></P> <P>&nbsp;</P> <P><SPAN>When you have the default blank app showing you now need to install some&nbsp;</SPAN><CODE>NuGet packages</CODE><SPAN>&nbsp;so that you can use the GPIO of the device and these can be found by typing&nbsp;</SPAN><CODE>nanoFramework.Windows.Devices.Gpio</CODE><SPAN>&nbsp;into the search box, you may also need to select the ‘Include Pre-release’ box as I did at the time of writing.</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="powershell">GPIO = General Purpose Input Output</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-24-22-47-57.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308877iDA3E41BE980CFD3C/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-24-22-47-57.png" alt="2021-07-24-22-47-57.png" /></span></P> <P>&nbsp;</P> <P><SPAN>We will also need the following NuGets so may as well install them now but be warned these are currently Preview packages so be sure to tick the ‘Include Pre-release` check box:</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="powershell">AMQPNetLite.nanoFramework nanoFramework.NetWorkHelper</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <H1 id="write-some-code"><FONT size="5">Write some code.</FONT></H1> <P>I have based this project on the AMQP examples on the&nbsp;<A href="#" target="_blank" rel="noopener">nanoFramework Github page&nbsp;</A>but 
as I don't like just reporting temperature and prefer something a little more fun and useful, here is my version. I am generating a latitude and longitude based on a random bearing and distance from the last waypoint, to simulate tracking a device in the real world. You can cut and paste this code into Visual Studio and have a read over it, but there are a few more steps to complete before we can upload the code to the device.</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="csharp">using Amqp;
using nanoFramework.Networking;
using System;
using System.Diagnostics;
using System.Text;
using System.Threading;

using AmqpTrace = Amqp.Trace;

namespace ConnectESP32ToIoTHub
{
    public class Program
    {
        // Set-up Wifi Credentials so we can connect to the web.
        private static string Ssid = "CAS-Wifi";
        private static string WifiPassword = "JacobAgius0406!";

        // Azure IoTHub settings
        const string _hubName = "CAS-Learning-IoTHub";
        const string _deviceId = "nanoFramework-Device1";
        const string _sasToken = "SharedAccessSignature sr=CAS-Learning-IoTHub.azure-devices.net%2Fdevices%2FnanoFramework-Device1&amp;sig=0eGgNE7BqjzWKfeffojGJYaTEQgFV72bTytkUCkU8qQ%3D&amp;se=1631051254";

        // Lat/Lon Points
        static double Latitude;
        static double Longitude;

        const double radius = 6378; // Radius of earth in Kilometers at the equator, yes it's a big planet. Fun Fact it's 6356Km pole to pole so the planet is an oblate spheroid or a squashed ball.

        private static Random _random = new Random();

        static bool TraceOn = false;

        public static void Main()
        {
            // Set-up first Point and I have chosen to use the great Royal Observatory, Greenwich, UK where East meets West.
            Latitude = 51.476852;
            Longitude = 0.0;

            Debug.WriteLine("Waiting for network up and IP address...");
            bool success = false;
            CancellationTokenSource cs = new(60000);

            success = NetworkHelper.ConnectWifiDhcp(Ssid, WifiPassword, setDateTime: true, token: cs.Token);
            if (!success)
            {
                Debug.WriteLine($"Can't get a proper IP address and DateTime, error: {NetworkHelper.ConnectionError.Error}.");
                if (NetworkHelper.ConnectionError.Exception != null)
                {
                    Debug.WriteLine($"Exception: {NetworkHelper.ConnectionError.Exception}");
                }
                return;
            }
            else
            {
                Debug.WriteLine($"YAY! Connected to Wifi - {Ssid}");
            }

            // setup AMQP
            // set trace level
            AmqpTrace.TraceLevel = TraceLevel.Frame | TraceLevel.Information;
            // enable trace
            AmqpTrace.TraceListener = WriteTrace;
            Connection.DisableServerCertValidation = false;

            // launch worker thread
            new Thread(WorkerThread).Start();

            Thread.Sleep(Timeout.Infinite);
        }

        private static void WorkerThread()
        {
            try
            {
                // parse Azure IoT Hub Map settings to AMQP protocol settings
                string hostName = _hubName + ".azure-devices.net";
                string userName = _deviceId + "@sas." + _hubName;
                string senderAddress = "devices/" + _deviceId + "/messages/events";
                string receiverAddress = "devices/" + _deviceId + "/messages/deviceBound";

                Connection connection = new Connection(new Address(hostName, 5671, userName, _sasToken));
                Session session = new Session(connection);
                SenderLink sender = new SenderLink(session, "send-link", senderAddress);
                ReceiverLink receiver = new ReceiverLink(session, "receive-link", receiverAddress);

                receiver.Start(100, OnMessage);

                while (true)
                {
                    string messagePayload = $"{{\"Latitude\":{Latitude},\"Longitude\":{Longitude}}}";

                    // compose message
                    Message message = new Message(Encoding.UTF8.GetBytes(messagePayload));
                    message.ApplicationProperties = new Amqp.Framing.ApplicationProperties();

                    // send message with the new Lat/Lon
                    sender.Send(message, null, null);

                    // data sent
                    Debug.WriteLine($"*** DATA SENT - Lat - {Latitude}, Lon - {Longitude} ***");

                    // update the location data
                    GetNewDestination();

                    // wait before sending the next position update
                    Thread.Sleep(5000);
                }
            }
            catch (Exception ex)
            {
                Debug.WriteLine($"-- D2C Error - {ex.Message} --");
            }
        }

        private static void OnMessage(IReceiverLink receiver, Message message)
        {
            try
            {
                // command received
                Double.TryParse((string)message.ApplicationProperties["setlat"], out Latitude);
                Double.TryParse((string)message.ApplicationProperties["setlon"], out Longitude);
                Debug.WriteLine($"== Received new Location setting: Lat - {Latitude}, Lon - {Longitude} ==");
            }
            catch (Exception ex)
            {
                Debug.WriteLine($"-- C2D Error - {ex.Message} --");
            }
        }

        static void WriteTrace(TraceLevel level, string format, params object[] args)
        {
            if (TraceOn)
            {
                Debug.WriteLine(Fx.Format(format, args));
            }
        }

        // Starting at the last Lat/Lon move along the bearing and for the distance to reset the Lat/Lon at a new point...
        public static void GetNewDestination()
        {
            // Get a random Bearing and Distance...
            double distance = _random.Next(10); // Random distance from 0 to 10km...
            double bearing = _random.Next(360); // Random bearing from 0 to 360 degrees...

            double lat1 = Latitude * (Math.PI / 180);
            double lon1 = Longitude * (Math.PI / 180);
            double brng = bearing * (Math.PI / 180);

            double lat2 = Math.Asin(Math.Sin(lat1) * Math.Cos(distance / radius) + Math.Cos(lat1) * Math.Sin(distance / radius) * Math.Cos(brng));
            double lon2 = lon1 + Math.Atan2(Math.Sin(brng) * Math.Sin(distance / radius) * Math.Cos(lat1), Math.Cos(distance / radius) - Math.Sin(lat1) * Math.Sin(lat2));

            Latitude = lat2 * (180 / Math.PI);
            Longitude = lon2 * (180 / Math.PI);
        }
    }
}</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P>
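<P>A couple of notes on what the code above is doing, by way of summary rather than anything extra you need to add: every five seconds the worker thread sends a device-to-cloud message whose body is a small JSON document along the lines of <CODE>{"Latitude":51.4768,"Longitude":0.0052}</CODE>, and the <CODE>OnMessage</CODE> handler listens for cloud-to-device messages carrying <CODE>setlat</CODE>/<CODE>setlon</CODE> properties to reset the position. The <CODE>GetNewDestination</CODE> method is the standard spherical destination-point calculation: working in radians, <CODE>lat2 = asin(sin lat1 * cos(d/R) + cos lat1 * sin(d/R) * cos brng)</CODE> and <CODE>lon2 = lon1 + atan2(sin brng * sin(d/R) * cos lat1, cos(d/R) - sin lat1 * sin lat2)</CODE>, where <CODE>d</CODE> is the random distance, <CODE>brng</CODE> the random bearing and <CODE>R</CODE> the Earth's radius, before converting back to degrees.</P> <P>&nbsp;</P>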
<H1 id="fill-in-the-blanks"><FONT size="5">Fill in the Blanks.</FONT></H1> <P>As you can see at the top of the class, there are some placeholders you need to fill in with your own data. The easy ones are the Wifi credentials: you should know these for your local network, and they allow the ESP32 module to connect to the internet so that it can communicate with the Azure IoT Hub.</P> <P>&nbsp;</P> <P>Next we have the Azure IoT Hub settings. These tell the ESP32 which hub and device it is, as well as the connection string or SAS token it should use to connect to the IoT Hub. So where do we get these from? 
<P>The Azure IoT Hub name and DeviceID we can get from the Azure IoT Hub in the Azure portal, so let's head over to our Azure IoT Hub and add a device. If you want more detail on creating your first Azure IoT Hub and adding a device there is no better place than&nbsp;<A href="#" target="_blank" rel="noopener">Docs.Microsoft.com</A></P> <P>&nbsp;</P> <P>When in the <A href="#" target="_blank" rel="noopener">Azure Portal</A> and you have found or created your IoT Hub, on the left menu under the Explorer tab click&nbsp;<CODE>IoT Devices&nbsp;</CODE>and then&nbsp;<CODE>+ Add Device</CODE>.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-08-17-10-53-07.png" style="width: 930px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308878i464B181189D11C3D/image-size/large?v=v2&amp;px=999" role="button" title="2021-08-17-10-53-07.png" alt="2021-08-17-10-53-07.png" /></span></P> <P>&nbsp;</P> <P><SPAN>On the new pane enter a meaningful name for your device and ensure that&nbsp;</SPAN><CODE>Symmetric Keys</CODE><SPAN>&nbsp;is selected, and when you're happy click the&nbsp;</SPAN><CODE>Save</CODE><SPAN>&nbsp;button at the bottom of the page.</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-08-17-10-57-53.png" style="width: 743px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308879i1F9D262013AFC923/image-size/large?v=v2&amp;px=999" role="button" title="2021-08-17-10-57-53.png" alt="2021-08-17-10-57-53.png" /></span></SPAN></P> <P>&nbsp;</P> <P>This means we now have the&nbsp;<CODE>_hubName</CODE>, which is the name of your IoT Hub and is shown at the top of the page, as you can see in the first image in this section; for me it’s&nbsp;<CODE>CAS-Learning-IoTHub</CODE>.</P> <P>&nbsp;</P> <P>Enter that into the placeholder in the code. Next is the device name you just created, so you know this already and can enter it for the <CODE>_deviceId</CODE> placeholder. But what about that <CODE>_sasToken</CODE>? That one is a little more tricky.</P> <H1 id="creating-the-sas-token"><FONT size="5">Creating the SAS Token.</FONT></H1> <P>You can read all about SAS tokens on the <A href="#" target="_blank" rel="noopener">Docs website here</A>, but how do we go about creating the token for our device?</P> <P>&nbsp;</P> <P>There are a few ways. As the docs suggest, we can use the CLI extension command <A href="#" target="_blank" rel="noopener">az iot hub generate-sas-token</A>, or the <A href="#" target="_blank" rel="noopener">Azure IoT Tools for Visual Studio Code</A>.</P> <H1 id="lets-try-the-cli"><FONT size="5">Let's try the CLI</FONT></H1> <P>In the Azure portal, in the top right where your account details are shown, you can click to open the Azure CLI:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-08-17-14-37-17.png" style="width: 255px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308884iADB51AC8EEF34215/image-size/large?v=v2&amp;px=999" role="button" title="2021-08-17-14-37-17.png" alt="2021-08-17-14-37-17.png" /></span></P> <P><SPAN>This will open the Azure CLI at the bottom of the browser window; you can drag to resize it or use the icons on the top right of that window.</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-08-17-14-39-20.png" style="width: 860px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308885iC42E461F5CE34090/image-size/large?v=v2&amp;px=999" role="button" title="2021-08-17-14-39-20.png" alt="2021-08-17-14-39-20.png" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>Now we can use the&nbsp;<A href="#" target="_blank" rel="noopener">az iot hub CLI commands</A>&nbsp;to request a SAS token for our device, and this will look something like this:</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="powershell">az iot hub generate-sas-token --hub-name CAS-Learning-IoTHub --device-id nanoFramework-Device1 --login 'HostName=CAS-Learning-IoTHub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=&lt;SECRET KEY&gt;'</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P><SPAN>As you can see, you need to use the hub name and the device ID, which you already have from the previous steps, but what about that login part? This we get from the Azure IoT Hub menu in the portal: click ‘Shared Access Policies’, then under the list of policies click&nbsp;</SPAN><CODE>iothubowner</CODE><SPAN>&nbsp;and finally copy the ‘Primary Connection String’. It’s this you need to use as the login in the CLI; just beware that you have to surround the string in the CLI with SINGLE quote marks for it to be accepted and used.</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-08-17-14-56-16.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308886i1F087D8997C28906/image-size/large?v=v2&amp;px=999" role="button" title="2021-08-17-14-56-16.png" alt="2021-08-17-14-56-16.png" /></span></SPAN></P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-08-17-14-51-17.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308887iDD7976E9C6CA33F5/image-size/large?v=v2&amp;px=999" role="button" title="2021-08-17-14-51-17.png" alt="2021-08-17-14-51-17.png" /></span></SPAN></P> <P>&nbsp;</P> <P>Now we have the generated SAS token we can copy it into our placeholder in the code; obviously I have blurred the parts that are my keys, but you hopefully get the idea here.</P> <H1 id="what-about-vscode"><FONT size="5">What about VSCode</FONT></H1> <P>If you have not discovered it already, there is a fantastic <A href="#" target="_blank" rel="noopener">Azure IoT extension for VSCode</A> and it’s this that will be used here.</P> <P>&nbsp;</P> <P>Install the extension and follow the set-up instructions on the marketplace so that you can connect to your Azure account. Personally I use the 2nd option to sign in to my account when doing this on a new machine, as it’s easier:</P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px"><STRONG><EM>Setup Azure IoT Hub through Sign in to Azure</EM></STRONG></P> <P>&nbsp;</P> <P><SPAN>Now you're connected, in your explorer you will see your ‘AZURE IoT HUB’ tab, which for me is at the bottom; open this and select your IoT Hub and then Devices.</SPAN></P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-08-17-15-05-39.png" style="width: 411px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308888iB20B67A27FE08FB6/image-size/large?v=v2&amp;px=999" role="button" title="2021-08-17-15-05-39.png" alt="2021-08-17-15-05-39.png" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>Now right clicking on 
the device will show a context menu where you can just click the ‘Generate SAS token’</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-08-17-15-07-15.png" style="width: 515px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308889i5B63650329D7F0A2/image-size/large?v=v2&amp;px=999" role="button" title="2021-08-17-15-07-15.png" alt="2021-08-17-15-07-15.png" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>The VSCode Command Palette will then show asking for the number of&nbsp;<CODE>HOURS</CODE>&nbsp;that you wish the token to live for so enter a number.</SPAN></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px"><EM><STRONG>Just beware that this is in hours and not seconds like the CLI so it makes the maths in your head a little easier...</STRONG></EM></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-08-17-15-08-14.png" style="width: 606px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308890i12DAAF23963AE636/image-size/large?v=v2&amp;px=999" role="button" title="2021-08-17-15-08-14.png" alt="2021-08-17-15-08-14.png" /></span></P> <P>&nbsp;</P> <P><SPAN>Once you enter this you will notice in the VSCode output window at the bottom your new SASToken will be displayed (CTRL+ALT+O if the window not displayed).</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-08-17-15-11-58.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308891iE8F9A2BAFF389B9C/image-size/large?v=v2&amp;px=999" role="button" title="2021-08-17-15-11-58.png" alt="2021-08-17-15-11-58.png" /></span></SPAN></P> <P>&nbsp;</P> <P>Again take this and place it into the placeholder in the code.</P> <H1 id="root-certificate"><FONT size="5">Root Certificate</FONT></H1> <P>This one got me at first as it’s a very easy thing to miss so I hope me putting it here helps you or even future ME! get this right.</P> <P>&nbsp;</P> <P>We need to ensure that the device is connecting to the web and thus IoT Hub with the correct certificate, but they are currently in the process of switching them over from the Baltimore CyberTrust to DigiCert Global G2 now rather than me try to explain this <A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-iot-tls-critical-changes-are-almost-here-and-why-you/ba-p/2393169" target="_blank" rel="noopener">here is a blog post all about it</A></P> <P>&nbsp;</P> <P>So at the time of writing this post Azure IoT will only use the Baltimore CyberTrust so lets use that, the process is the same for both anyway. First we need to download the Certificate and there is a page in the <A href="#" target="_blank" rel="noopener">Docs</A> that has the links.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-08-17-15-32-28.png" style="width: 989px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308892i39A349D0C3B3E30A/image-size/large?v=v2&amp;px=999" role="button" title="2021-08-17-15-32-28.png" alt="2021-08-17-15-32-28.png" /></span></P> <P>&nbsp;</P> <P><SPAN>Once downloaded we need to upload this file to the ESP32 module and this is where the awesome nanoFramework team have us covered. 
Connect your device to your PC/Laptop again and in Visual Studio open the nanoFramework Device Explorer</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-08-17-15-41-46.png" style="width: 984px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308893i683363AEAD2F8290/image-size/large?v=v2&amp;px=999" role="button" title="2021-08-17-15-41-46.png" alt="2021-08-17-15-41-46.png" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>In here we should see our device showing up, if not click the magnifying glass to scan and be sure that the cable you are using is a data cable and not a charging only cable (I throw charging only out now I have wasted too many hours of my life being caught by them…). Once your device is shown click the ‘Edit Network Configuration’ icon as shown boxed in the screen grab.</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-08-17-15-45-41.png" style="width: 739px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308894i971C27D5FC52C214/image-size/large?v=v2&amp;px=999" role="button" title="2021-08-17-15-45-41.png" alt="2021-08-17-15-45-41.png" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>This opens a dialog box and clicking the last tab ‘General’ we can see the ‘Root CA’ where we can browse for the Cert we downloaded and have it uploaded to the ESP32 device.</SPAN></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px"><EM><STRONG>Note: </STRONG></EM></P> <P class="lia-indent-padding-left-30px"><EM><STRONG>There is no feedback sadly that the certificate is selected to be uploaded so you will have to trust in yourself that you did it right, or like me do it twice to be sure.</STRONG></EM></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-08-17-15-49-24.png" style="width: 802px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308895i4F8CC56C7D374BB9/image-size/large?v=v2&amp;px=999" role="button" title="2021-08-17-15-49-24.png" alt="2021-08-17-15-49-24.png" /></span></P> <P>&nbsp;</P> <H1 id="upload-our-code"><FONT size="5">Upload our code</FONT></H1> <P>Now we have all the placeholders filled in and the Root cert prepared we are ready to upload and test, now as a .NET developer what is your muscle memory for running code…. Yep click F5… How awesome is that.</P> <P>&nbsp;</P> <P>The first time you do this it takes a minute or two but subsequent builds are much faster.</P> <P>&nbsp;</P> <P>Don’t forget you have the full power of the Visual Studio debugger so feel free to add a breakpoint or two and step through the code to watch the magic.</P> <P>&nbsp;</P> <P>We now have the code running on out little ESP device and we can step through and check the variables etc but how can we see the data arriving at the IoT Hub? 
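</P> <P>&nbsp;</P> <P>As an aside that isn't in the original walkthrough: because the IoT Hub built-in endpoint is Event Hubs compatible, you can also read the telemetry from a plain .NET console app with the <CODE>Azure.Messaging.EventHubs</CODE> package. The snippet below is only a rough sketch of that approach; the connection string is a placeholder for the Event Hub-compatible endpoint connection string you can copy from the hub's ‘Built-in endpoints’ page.</P> <P>&nbsp;</P> <LI-CODE lang="csharp">using System;
using System.Threading;
using Azure.Messaging.EventHubs.Consumer;

// Placeholder: copy this from IoT Hub -&gt; Built-in endpoints -&gt; Event Hub-compatible endpoint.
const string connectionString = "&lt;Event Hub-compatible endpoint connection string&gt;";

// $Default consumer group is fine for quick checks like this.
await using var consumer = new EventHubConsumerClient(
    EventHubConsumerClient.DefaultConsumerGroupName,
    connectionString);

// Runs until you stop the app (Ctrl+C).
await foreach (PartitionEvent partitionEvent in consumer.ReadEventsAsync(CancellationToken.None))
{
    // Each event body is the JSON payload sent by the ESP32, e.g. {"Latitude":51.476852,"Longitude":0.0}
    Console.WriteLine(partitionEvent.Data.EventBody.ToString());
}</LI-CODE> <P>&nbsp;</P>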
<P>Well, that can be done very easily using either the VSCode extension or the Azure CLI.</P> <H1 id="monitor-using-the-cli"><FONT size="5">Monitor using the CLI</FONT></H1> <P>Using the same method as shown above, open the Azure CLI and use the <A href="#" target="_blank" rel="noopener">Azure CLI Command</A></P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="powershell">az iot hub monitor-events --output table --hub-name {YourIoTHubName} --login 'HostName=CAS-Learning-IoTHub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=&lt;SECRET KEY&gt;'</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P><SPAN>You will then see the data arriving and displayed inside the CLI. The cool advantage of using the CLI commands is that, once you get used to them, you can use the Mobile App on your Android/iOS device (which is a Xamarin app, by the way) to query your Azure IoT Hub and devices.</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-09-07-18-06-20.png" style="width: 473px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308896i0507FAE7716E703F/image-size/large?v=v2&amp;px=999" role="button" title="2021-09-07-18-06-20.png" alt="2021-09-07-18-06-20.png" /></span></SPAN></P> <P>&nbsp;</P> <H1 id="monitor-using-vscode"><FONT size="5">Monitor using VSCode</FONT></H1> <P>This is by far the easier way: right click on the device like you did to generate the SAS token, but now select <CODE>Start Monitoring Built-In Event Endpoint</CODE> from the context menu.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-09-07-17-40-02.png" style="width: 441px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308897i044532ACC2BF84F9/image-size/large?v=v2&amp;px=999" role="button" title="2021-09-07-17-40-02.png" alt="2021-09-07-17-40-02.png" /></span></P> <P>&nbsp;</P> <P><SPAN>You should now see the data start to arrive in the Output window at the bottom.</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-09-07-17-41-42.png" style="width: 865px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308898iEF7F8F0FA1A3E36F/image-size/large?v=v2&amp;px=999" role="button" title="2021-09-07-17-41-42.png" alt="2021-09-07-17-41-42.png" /></span></SPAN></P> <P>&nbsp;</P> <H1 id="consuming-messages"><FONT size="5">Consuming Messages</FONT></H1> <P>If like me you have an <CODE>S1 Standard</CODE> tier IoT Hub then you have a daily limit of 400,000 messages, which sounds like a lot, but let's think about how far that actually goes. Our device is sending a message every 5 seconds, so given the 24 hours in the day that's:</P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px"><EM>24 * 60 * 60 = 86,400 seconds per day</EM></P> <P class="lia-indent-padding-left-30px"><EM>Divide by 5 gives 17,280 messages per device per day.</EM></P> <P class="lia-indent-padding-left-30px"><EM>With a limit of 400,000 per day, that is only 23 devices before the quota is all used up.</EM></P> <P>&nbsp;</P> <P>Therefore it's always a good idea when planning your system to think about the number of devices and also the number of messages each will send. Do you really need an update every 5 seconds, for example, or will every 30 seconds do?</P>
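<P>&nbsp;</P> <P>If you want to play with those numbers yourself, here is a tiny back-of-the-envelope sketch (not from the original post) that you can drop into a console app; the 400,000 figure is the S1 daily message quota discussed above.</P> <P>&nbsp;</P> <LI-CODE lang="csharp">using System;

// Daily message quota for a single S1 IoT Hub unit, and the number of seconds in a day.
const int dailyQuota = 400_000;
const int secondsPerDay = 24 * 60 * 60; // 86,400

int DevicesSupported(int sendIntervalSeconds)
{
    int messagesPerDevicePerDay = secondsPerDay / sendIntervalSeconds;
    return dailyQuota / messagesPerDevicePerDay; // integer division: whole devices only
}

Console.WriteLine(DevicesSupported(5));  // 23  (17,280 messages per device per day)
Console.WriteLine(DevicesSupported(30)); // 138 (2,880 messages per device per day)</LI-CODE> <P>&nbsp;</P>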
<P>That would change this from 23 devices to 138 devices, as an example.</P> <P>&nbsp;</P> <P>Of course, if you do need more you can step up to S2, giving 6 million messages per day, or S3, giving 300 million, but at a higher cost.</P> <P>&nbsp;</P> <P>The other alternative is to add more S1 units and get multiples of the 400,000. It's always worth doing the maths and picking the correct tier rather than having your boss ask why the Azure bill just jumped; you can look at the pricing in more detail <A href="#" target="_blank" rel="noopener">Here</A></P> <P>&nbsp;</P> <P>You can check your message usage on the Overview page of the IoT Hub and look at the usage graphs as well.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-09-07-17-56-05.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/308899i7D626918758FC964/image-size/large?v=v2&amp;px=999" role="button" title="2021-09-07-17-56-05.png" alt="2021-09-07-17-56-05.png" /></span></P> <P>&nbsp;</P> <H1 id="conclusion"><FONT size="5">Conclusion</FONT></H1> <P>So that's it, congratulations! You now have a cheap £10/$13 ESP32 IoT device connecting over WiFi to Azure IoT Hub and sending messages, all using C# and .NET, and I hope you agree it opens the IoT space to .NET developers, which is awesome.</P> <P>&nbsp;</P> <P>Connect a sensor or two to the board (there are millions of sensors to choose from out there, even some temperature and humidity sensors if you really have to) and you have an IoT solution for your business or your home automation, much cheaper and easier than many other methods.</P> <P>&nbsp;</P> <P>Those of you that didn't just cut and paste the code from above, like you do over at Stack Overflow, will have noticed I added an <CODE>OnMessage()</CODE> method. I will use this in the next post I write, where I will show that you can also use nanoFramework and IoT Hub to send Cloud-to-Device messages. These are messages that are sent from the Azure cloud down to the device to change a setting or configure a value; in this case I will be editing the Lat/Lon among other things, as well as plotting the points on a Bing Map using Power BI and Stream Analytics. If you can't wait you can take a quick peek at the <A href="#" target="_blank" rel="noopener">Microsoft Docs</A> for that part.</P> <P>&nbsp;</P> <P>It just leaves me to say thanks to the <A href="#" target="_blank" rel="noopener">amazing team behind nanoFramework</A> and also to the <A href="#" target="_blank" rel="noopener">.NET Foundation</A> for supporting and helping the project grow.</P> <P>&nbsp;</P> <P>Lastly, if you have any projects that need support and help from a freelance .NET IoT and Mobile developer, or you just want to chat, geek out or comment on this blog post, you can find me over at <A href="#" target="_blank" rel="noopener">Twitter @CliffordAgius</A>; DMs are open.</P> <P>&nbsp;</P> <P>Happy Coding!</P> Mon, 13 Sep 2021 18:24:30 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/connect-an-esp32-to-azure-iot-with-net-nanoframework/ba-p/2731691 Clifford Agius 2021-09-13T18:24:30Z Scale an Azure Percept DK configuration to multiple devices https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/scale-an-azure-percept-dk-configuration-to-multiple-devices/ba-p/2743865 <P>Azure Percept DK is an easy-to-use platform for creating edge AI solutions. It combines different Azure services (e.g., Azure Cognitive Services and Machine Learning) to deliver real-time audio and vision insights. 
The device comes pre-built with Azure Percept Studio, a service which makes deploying on this device even easier to do. To find out more about the Azure Percept DK visit <A href="#" target="_blank" rel="noopener">this page</A>.</P> <P>&nbsp;</P> <P>Previously, we successfully managed <A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/deploying-azure-stream-analytics-edge-as-a-module-on-azure/ba-p/2708359" target="_blank" rel="noopener">deploying a custom azure stream analytics edge module on the Azure Percept DK</A>. Now, <STRONG>our question was – how do we replicate the modules with their configuration on multiple Azure Percept DK at large scale?</STRONG> One way is to deploy and configure each module on devices separately - but this would be very time consuming. <STRONG>The answer to this is: Azure IoT Edge automatic deployments. </STRONG></P> <P>&nbsp;</P> <P>Azure IoT Edge provides two different methods to configure modules for IoT Edge devices. The method we are outlining today is to create a deployment manifest and then apply the configuration to a particular device by its name. The configuration can be applied on multiple devices at the same time by using device twin tags.</P> <P>&nbsp;</P> <P><STRONG>All you need to do is to provide the device connection string (from IoT Hub) to your end Azure Percept DK user. </STRONG>You can go through the Azure Percept DK setup and use this connection string to get connected to Azure IoT hub. The reference architecture containing the AzureEye, StreamAnalytics, HostIp, ImageCapturing and WebStream modules is showed in the diagram below:</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_1-1631451963726.jpeg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/309855iC53B30BF181ED50A/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_1-1631451963726.jpeg" alt="Sargithan_Senthil_1-1631451963726.jpeg" /></span></P> <P>&nbsp;</P> <P>We can talk hours about the features of Azure Percept DK, but now it’s time to get things working. In order to get started, <STRONG>log onto the Azure portal, navigate to your IoT Hub resource and follow the below instructions:</STRONG></P> <P>&nbsp;</P> <OL> <LI><STRONG>In IoT Hub main page, under Automatic Device Management section click on ‘IoT Edge’ </STRONG></LI> <LI><STRONG>Select ‘IoT Edge Deployments’ tab and click on ‘Add Deployment’</STRONG></LI> <LI><STRONG>Specify the deployment name and add labels if needed</STRONG></LI> </OL> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_2-1631451963707.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/309856iF5533E7AEC0243FF/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_2-1631451963707.png" alt="Sargithan_Senthil_2-1631451963707.png" /></span></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">4. <STRONG>Go on the ‘modules’ tab</STRONG></P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P class="lia-indent-padding-left-30px">5. 
<STRONG>Click on Runtime Settings button and configure the Edge Agent and Edge Hub, if not already configured by default (configuration template available at the end of the article)</STRONG></P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P class="lia-indent-padding-left-30px"><STRONG><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="edge agent.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/310290iEF71080AF7EE4549/image-size/large?v=v2&amp;px=999" role="button" title="edge agent.PNG" alt="edge agent.PNG" /></span></STRONG></P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P class="lia-indent-padding-left-30px"><STRONG><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="edgehub.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/310291i79A523CC50518215/image-size/large?v=v2&amp;px=999" role="button" title="edgehub.PNG" alt="edgehub.PNG" /></span></STRONG></P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P class="lia-indent-padding-left-30px">6. <STRONG>Under IoT Edge Modules click on add and select ‘IoT Edge Module’. Give it a name, in this case: azureeyemodule and paste the Image URI under ‘Module Settings’ tab (links available at the end of the document)</STRONG></P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="Sargithan_Senthil_3-1631451963710.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/309854i182F3635153FC1F8/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_3-1631451963710.png" alt="Sargithan_Senthil_3-1631451963710.png" /></span></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">7. <STRONG>Select ‘Container Create Option’ tab and paste the Docker container configuration</STRONG></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_4-1631451963712.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/309857i29DD5CC8F6B748E4/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_4-1631451963712.png" alt="Sargithan_Senthil_4-1631451963712.png" /></span></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">8. <STRONG>Select the ‘Module Twin Settings’ tab and paste the module twin desired properties. Set the property to “properties.desired”</STRONG></P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_5-1631451963715.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/309858iC763B2E0BAD3B90B/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_5-1631451963715.png" alt="Sargithan_Senthil_5-1631451963715.png" /></span></P> <P class="lia-indent-padding-left-30px">9. <STRONG>Save the newly created IoT Edge Module by selecting the ‘add’ button</STRONG></P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P class="lia-indent-padding-left-30px">10. 
<STRONG>If deploying ASA Edge follow this step, otherwise skip this step: Add the Azure Stream Analytics Edge module by selecting the ‘Azure Stream Analytics’ Module option, select the Edge Job you want to deploy and click ‘Save’</STRONG></P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_6-1631451963730.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/309859iAC5032B12E2034A3/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_6-1631451963730.png" alt="Sargithan_Senthil_6-1631451963730.png" /></span></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">11. <STRONG>Add the ImageCapturingModule by following the same steps shown for azureyeemodule</STRONG></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_22-1631452367759.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/309875iEC7F3159CCE1A874/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_22-1631452367759.png" alt="Sargithan_Senthil_22-1631452367759.png" /></span></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">12. <STRONG>Select the ‘Environment Variables’ tab and specify 3 variables as shown in the screenshot below</STRONG></P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_23-1631452399029.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/309876iDCA3F430F1A158D4/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_23-1631452399029.png" alt="Sargithan_Senthil_23-1631452399029.png" /></span></P> <P class="lia-indent-padding-left-30px">13. <STRONG>Press on ‘Add’ to save the module</STRONG></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">14. <STRONG>Add the HostIpModule by following the same steps as for the ImageCapturingModule</STRONG></P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_24-1631452505829.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/309877iC959F10B4C712E9C/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_24-1631452505829.png" alt="Sargithan_Senthil_24-1631452505829.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">15. <STRONG>Select ‘Container Create Option’ tab and paste the Docker container configuration</STRONG></P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_25-1631452538523.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/309878iCC5376A243F835C2/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_25-1631452538523.png" alt="Sargithan_Senthil_25-1631452538523.png" /></span></P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P class="lia-indent-padding-left-30px">16. <STRONG>Click ‘Add’</STRONG></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">17. 
<STRONG>Add the WebStreamModule by following the same steps as shown above</STRONG></P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_26-1631452601495.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/309880iE5960E7049E04FA7/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_26-1631452601495.png" alt="Sargithan_Senthil_26-1631452601495.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_27-1631452640364.png" style="width: 973px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/309881i1EF9C0CE8D7FBC3A/image-dimensions/973x661?v=v2" width="973" height="661" role="button" title="Sargithan_Senthil_27-1631452640364.png" alt="Sargithan_Senthil_27-1631452640364.png" /></span></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_28-1631452657741.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/309882i4D8F68161394B34A/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_28-1631452657741.png" alt="Sargithan_Senthil_28-1631452657741.png" /></span></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">18. <STRONG>Click ‘Add’ to save the module</STRONG></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">19. <STRONG>Select the ‘Routes’ tab to specify how the device will route the data (we are specifying two routes – one route from azureeyemodule to the ASA job and the second route from ASA job to IoT Hub). In case you are planning to receive telemetry from the azureeyemodule only, use the third route only which is shown below (AzureEyeModuleToIoTHub)</STRONG></P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P class="lia-indent-padding-left-30px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="routes.article.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/310285i061CE0C97653F8EE/image-size/large?v=v2&amp;px=999" role="button" title="routes.article.PNG" alt="routes.article.PNG" /></span></P> <P class="lia-indent-padding-left-30px">20. <STRONG>Optionally, you can specify some metrics to monitor IoT Edge deployments. In our case we have specified one metric to state if the device has been successfully configured</STRONG></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_31-1631452847182.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/309886i19CB717D7AB0E5D6/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_31-1631452847182.png" alt="Sargithan_Senthil_31-1631452847182.png" /></span></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">21. <STRONG>Click on ‘Target Devices’ tab and specify the priority number for the deployment (higher values indicate higher priority) and the target condition by specifying the device/s you want to apply the deployment manifest on (in our case deviceid = ‘Percept_Excalibur’). 
You can click on view devices to check if the target condition is correct</STRONG></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_32-1631452879816.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/309887i05607678A42A7C64/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_32-1631452879816.png" alt="Sargithan_Senthil_32-1631452879816.png" /></span></P> <P class="lia-indent-padding-left-30px">22. <STRONG>Select ‘Review + create’ tab – if the validation passed, you are ready to create the deployment manifest by clicking on the create button</STRONG></P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_33-1631452959323.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/309888i06EF650E4D209CEA/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_33-1631452959323.png" alt="Sargithan_Senthil_33-1631452959323.png" /></span></P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P class="lia-indent-padding-left-30px">23. <STRONG>Now you can get the IoT Edge Device Connection string, using Azure IoT Hub, and pass it on to the Azure Percept DK user. The user will need to configure the Azure Percept DK to use the connection string</STRONG></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">24. <STRONG>You should see the newly created deployment listed in ‘IoT Edge Deployments’ tab – wait a few minutes for the deployment manifest to be applied to the target device</STRONG></P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_35-1631453067468.png" style="width: 698px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/309890i573059BF1AB034DF/image-dimensions/698x64?v=v2" width="698" height="64" role="button" title="Sargithan_Senthil_35-1631453067468.png" alt="Sargithan_Senthil_35-1631453067468.png" /></span></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">25. 
<STRONG>If successfully, the system metrics will report success</STRONG></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_37-1631453124423.png" style="width: 694px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/309892i88E0967DE0CD004A/image-dimensions/694x58?v=v2" width="694" height="58" role="button" title="Sargithan_Senthil_37-1631453124423.png" alt="Sargithan_Senthil_37-1631453124423.png" /></span></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">26.<STRONG> The modules should be deployed on the device up and running showing no errors</STRONG></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_38-1631453158983.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/309893i482488AC35CFE332/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_38-1631453158983.png" alt="Sargithan_Senthil_38-1631453158983.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>--------------------------------------------------------------------------------------------------------------------------------------------------------</P> <P>&nbsp;</P> <P><STRONG>Edge deployment configuration template:</STRONG></P> <P><STRONG>Edge Hub settings</STRONG></P> <P>Image URI:&nbsp;mcr.microsoft.com/azureiotedge-hub:1.1</P> <P>Environment variables:&nbsp;</P> <P>OptimizeForPerformance (Text) - False<BR />mqttSettings__ThreadCount (Text) - 4<BR />SslProtocols (Text) - tls1.2</P> <P>&nbsp;</P> <P><STRONG>Container Create Options:</STRONG></P> <P>&nbsp;</P> <LI-CODE lang="json">{ "HostConfig": { "PortBindings": { "443/tcp": [ { "HostPort": "443" } ], "5671/tcp": [ { "HostPort": "5671" } ], "8883/tcp": [ { "HostPort": "8883" } ] } } }</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P><STRONG>Edge Agent settings</STRONG></P> <P>Image URI:&nbsp;mcr.microsoft.com/azureiotedge-agent:1.0</P> <P>Environment Variables:</P> <P>BackupConfigFilePath (Text) -&nbsp;/tmp/edgeAgent/backup.json</P> <P>&nbsp;</P> <P><STRONG>Azureeyemodule settings</STRONG></P> <P>IoT Edge Module Name: azureeyemodule</P> <P>Image URI: mcr.microsoft.com/azureedgedevices/azureeyemodule:preload-devkit</P> <P>&nbsp;</P> <P><STRONG>Container Create Options:</STRONG></P> <P>&nbsp;</P> <LI-CODE lang="json">{ "ExposedPorts": { "8554/tcp": {} }, "HostConfig": { "Binds": [ "/dev/bus/usb:/dev/bus/usb" ], "DeviceCgroupRules": [ "c 189:* rmw" ], "PortBindings": { "8554/tcp": [ { "HostPort": "8554" } ] } } } </LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P><STRONG>Module Twin Settings:</STRONG></P> <P>&nbsp;</P> <LI-CODE lang="json">{ "ExposedPorts": { "8554/tcp": {} }, "HostConfig": { "Binds": [ "/dev/bus/usb:/dev/bus/usb" ], "DeviceCgroupRules": [ "c 189:* rmw" ], "PortBindings": { "8554/tcp": [ { "HostPort": "8554" } ] } } } </LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P><STRONG>ImageCapturingModule</STRONG></P> <P>IoT Edge Module Name: ImageCapturingModule</P> <P>Image URI: mcr.microsoft.com/azureedgedevices/imagecapturingmodule:latest-arm64v8</P> <P>Environment Variables:</P> <P class="lia-indent-padding-left-30px">RTSP_IP – azureeyemodule</P> <P class="lia-indent-padding-left-30px">RTSP_PORT – 8554</P> <P class="lia-indent-padding-left-30px">RTSP_PATH – raw</P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P><STRONG>HostIpModule </STRONG></P> <P>IoT Edge Module Name: HostIpModule</P> <P>Image URI: 
mcr.microsoft.com/azureedgedevices/hostipmodule:latest-arm64v8</P> <P>&nbsp;</P> <P><STRONG>Container Create Options:</STRONG></P> <P>&nbsp;</P> <LI-CODE lang="json">{ "NetworkingConfig": { "EndpointsConfig": { "host": {} } }, "HostConfig": { "NetworkMode": "host" } }</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P><STRONG>WebStreamModule</STRONG></P> <P>IoT Edge Module Name: WebStreamModule</P> <P>Image URI: mcr.microsoft.com/azureedgedevices/webstreammodule:preload-devkit</P> <P>Environment Variables:</P> <P class="lia-indent-padding-left-30px">RTSP_IP – azureeyemodule</P> <P class="lia-indent-padding-left-30px">RTSP_PORT – 8554</P> <P class="lia-indent-padding-left-30px">RTSP_PATH – raw</P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P><STRONG>Container Create Options:</STRONG></P> <P>&nbsp;</P> <LI-CODE lang="json">{ "ExposedPorts": { "2999/tcp": {}, "3000/tcp": {}, "3002/tcp": {}, "3004/tcp": {}, "3006/tcp": {}, "3008/tcp": {}, "3010/tcp": {} }, "HostConfig": { "PortBindings": { "2999/tcp": [ { "HostPort": "2999" } ], "3000/tcp": [ { "HostPort": "3000" } ], "3002/tcp": [ { "HostPort": "3002" } ], "3004/tcp": [ { "HostPort": "3004" } ], "3006/tcp": [ { "HostPort": "3006" } ], "3008/tcp": [ { "HostPort": "3008" } ], "3010/tcp": [ { "HostPort": "3010" } ] } } } </LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P><STRONG>Deployment routes</STRONG></P> <P>&nbsp;</P> <LI-CODE lang="json">{ "routes": { "telemetrytoAsa": "FROM /messages/modules/azureeyemodule/* INTO BrokeredEndpoint(\"/modules/perceptasa/inputs/edgeinput\")", "asatoiothub": "FROM /messages/modules/perceptasa/* INTO $upstream", "AzureEyeModuleToIoTHub": "FROM /messages/modules/azureeyemodule/outputs/* INTO $upstream" } }</LI-CODE> <P>&nbsp;</P> <P>--------------------------------------------------------------------------------------------------------------------------------------------------------</P> <P aria-level="2"><STRONG>Now Try out Azure Percept yourself</STRONG></P> <P><LI-WRAPPER></LI-WRAPPER></P> <P>&nbsp;</P> <P>To get started on building your own solutions with Azure Percept, it’s easy to <A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-dk-and-azure-percept-audio-now-available-in-more/ba-p/2712969" target="_blank" rel="noopener">purchase a developer kit </A>to try out pilot projects before deciding to deploy at scale. You also can visit the <A href="#" target="_blank" rel="noopener">Azure Percept YouTube channel</A> for videos about getting started with the developer kit.</P> Tue, 14 Sep 2021 14:58:23 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/scale-an-azure-percept-dk-configuration-to-multiple-devices/ba-p/2743865 Sargithan_Senthil 2021-09-14T14:58:23Z Building multi-tenant solutions with Azure IoT Central https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/building-multi-tenant-solutions-with-azure-iot-central/ba-p/2617416 <P><EM>In just minutes, scale your IoT solutions from one customer to many with <STRONG>organizations</STRONG> in IoT Central, the new, easy way to extend your solutions while maintaining flexible access control.</EM></P> <P>&nbsp;</P> <P>Scale your IoT solutions beyond a single tenant—simply and securely. <STRONG>Organizations</STRONG>, a new feature from IoT Central helps customers control and manage access to devices, users, and experiences—in an interface as familiar as folder management. 
&nbsp;</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="org-tree.png" style="width: 916px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/303053iA2D3697E8C552F6A/image-size/large?v=v2&amp;px=999" role="button" title="org-tree.png" alt="Depiction of an IoT project’s organization structure with three customers and a few sub-customers/departments" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Depiction of an IoT project’s organization structure with three customers and a few sub-customers/departments</span></span></P> <P>Because most IoT solution architectures today consist of a single cloud ingestion endpoint, scaling for multiple tenants is challenging. Currently, customers either spin up an instance of their IoT solution for each customer, or ingest all the data through a single endpoint, applying access control at the application layer. Both approaches increase the management overhead and create opportunities for security breaches. <STRONG>Organizations</STRONG> makes it easier to build, maintain and extend your IoT solution without additional work – or new vulnerabilities.</P> <P>&nbsp;</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/uuVrosZOD8E?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="IoTShow episode about Azure IoT Central Organizations feature"></IFRAME></DIV> <P>&nbsp;</P> <P>An <STRONG>organization</STRONG> can represent an entity, a physical location, or your organizational structure. Once configured, devices and experiences – like dashboards, device groups, and jobs – are assigned to an organization. Invited users can access one or more organizations in the application – each with a different role, if desired. The application administrator manages devices across all organizations – ensuring end users see only the devices and experiences they’re authorized to access.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="side-by-side-device-views.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/303064i52BA5AC1FA8730D3/image-size/large?v=v2&amp;px=999" role="button" title="side-by-side-device-views.png" alt="Left: Administrator view showing all devices in their solution. Right: End user with access to only one organization showing only a subset of the devices in the IoT solution." /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Left: Administrator view showing all devices in their solution. Right: End user with access to only one organization showing only a subset of the devices in the IoT solution.</span></span></P> <P>&nbsp;</P> <P>“<EM>The ability to define organizations opens up a new approach to implement IoT Central for large scale environments</EM>”, said Bernard Lenssens, Founder of Codit, a solution integrator with a strong focus on Azure IoT and Edge and an early adopter of organizations in IoT Central. Lenssens adds that “<EM>this new feature has been the missing piece to unlock certain types of projects. 
It allows customers to simplify complexity, gain velocity and lower costs when building multi-tenant solutions</EM>.”</P> <P>&nbsp;</P> <P>John Rogan, President of CEI, an IT/Technology consultant and services provider, and another early adopter, is using <STRONG>organizations</STRONG> to help a client “<EM>monitor and control HVAC and lighting at sites spread over the U.S., ensuring that users can only monitor sites assigned to them</EM>.”</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="admin-org-view.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/303063i959EEAB9BF09ABE5/image-size/large?v=v2&amp;px=999" role="button" title="admin-org-view.png" alt="An organization structure in IoT Central with three organizations and some sub-organizations." /><span class="lia-inline-image-caption" onclick="event.preventDefault();">An organization structure in IoT Central with three organizations and some sub-organizations.</span></span></P> <P>&nbsp;</P> <P><STRONG>Ready to start using organizations?</STRONG> <BR />Existing customers can head to <STRONG>Administration</STRONG> &gt; <STRONG>Organizations</STRONG> in the IoT Central app.&nbsp;<A href="#" target="_self">Read more</A> about configuring organizations.</P> <P>&nbsp;</P> <P><STRONG>Still haven’t tried IoT Central?</STRONG> <BR />See how Fortune 500 companies are using Azure to invent with purpose. Visit <U><A href="#" target="_self">www.azureiotcentral.com</A></U> to start your 7-day free trial.</P> Thu, 09 Sep 2021 20:30:00 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/building-multi-tenant-solutions-with-azure-iot-central/ba-p/2617416 lmasieri 2021-09-09T20:30:00Z General availability: Azure Sphere OS version 21.09 expected on Sept 22 https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/general-availability-azure-sphere-os-version-21-09-expected-on/ba-p/2733496 <P><SPAN>Azure Sphere OS version 21.09 is now available for evaluation in the <STRONG>Retail Eval</STRONG> feed</SPAN><SPAN>.&nbsp;The retail evaluation period provides 14 days for backwards compatibility testing. During this time, please verify that your applications and devices operate properly with this release&nbsp;before it is deployed broadly to devices in the Retail feed. The Retail feed will continue to deliver OS version&nbsp;21.08&nbsp;until we publish 21.09 there&nbsp;in two weeks.&nbsp;</SPAN></P> <P>&nbsp;</P> <P>This evaluation release of 21.09 includes enhancements and bug fixes for the OS only; it does not include an updated SDK.</P> <P>&nbsp;</P> <P>Areas of special focus for compatibility testing with the 21.09 release include:</P> <UL> <LI>Apps and functionality utilizing crash handling</LI> <LI>Apps and functionality utilizing kernel services and device drivers</LI> </UL> <P>&nbsp;</P> <P><SPAN>For more information on Azure Sphere OS feeds and setting up an evaluation device group,&nbsp;see&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN>Azure Sphere OS feeds</SPAN></A><SPAN> and </SPAN><A href="#" target="_blank" rel="noopener">Set up devices for OS evaluation</A><SPAN>.</SPAN></P> <P>&nbsp;</P> <P>For self-help technical inquiries, please visit <A href="#" target="_blank" rel="noopener">Microsoft Q&amp;A</A><SPAN> o</SPAN>r <A href="#" target="_blank" rel="noopener">Stack Overflow</A>. 
If you require technical support and have a support plan, please submit a support ticket in <A href="#" target="_blank" rel="noopener">Microsoft Azure Support</A> or work with your Microsoft Technical Account Manager. If you would like to purchase a support plan, please explore the <A href="#" target="_blank" rel="noopener">Azure support plans</A>.</P> Wed, 08 Sep 2021 23:20:24 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/general-availability-azure-sphere-os-version-21-09-expected-on/ba-p/2733496 AzureSphereTeam 2021-09-08T23:20:24Z Innodisk and Azure Sphere win gold at Computex Taipei 2021 https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/innodisk-and-azure-sphere-win-gold-at-computex-taipei-2021/ba-p/2712864 <P>The Innodisk InnoAGE SSD with Azure Sphere won the Golden Award in the <A href="#" target="_blank" rel="noopener">Computex 2021 Best Choice Awards</A>, a recognition as the world’s first flash storage product designed for AIoT (Artificial Intelligence of Things) architecture. This solution enables multifunctional management: smart data analysis and updates, data security, and remote control through the cloud, while benefitting from the power of the Azure Sphere to guarantee secured communications.</P> <P><BR />The InnoAGE SSD solves the conflict of management versus function in applications for which compute resources are vital and security is required. Let’s consider a large-scale example: the datacenters&nbsp; and compute infrastructure that comprise some of the world’s most intelligent systems are—ironically—difficult to manage with the very same cloud and AI benefits these systems provide. If we look at the same conflict in IoT deployments, it’s even more difficult to manage the compute resources distributed around the world in connected edge machines, devices, and experiences. Directly instrumenting these edge applications either diminishes their flexibility or takes precious resources away from their primary tasks. Out-of-band management can allow hardware components to be monitored independent from their function, however this process and the data it produces are critically sensitive, making it difficult to add value without compromising security or trapping data on-premises.</P> <P><BR />To address this, each InnoAGE SSD can provide wired and even wireless out-of-band management in industrial applications in a manner that is secured at every device by Azure Sphere’s hardware root of trust and managed identity; enabling secure, confidential transmission of telemetry to the cloud with hardened end-to-end encryption. With an InnoAGE SSD, compute infrastructure from datacenters to IoT devices in the field can now be connected to cloud-based AI for health and performance monitoring in a trusted, confidential, and secure manner. This innovation provides the best of out-of-band management not just to compute infrastructure, but to any device that can be fielded with an SSD.</P> <P><BR />The InnoAGE SSD enables management of the SSD via smart data analysis and updates while benefitting from the security of Azure Sphere for device-to-cloud communication, as well as cloud-to-device communication. Azure Sphere and Azure IoT enable Innodisk to deliver a customized cloud management platform that allows SSD debugging, monitoring of Read/Write patterns and reversion to a default setting. By monitoring the Read/Write patterns, the storage lifespan of the SSD can be increased. 
Furthermore, the ability to remotely execute debugging messages and revert to a default setting in the case of a device or system crash can be crucial for scenarios that require both in-band and out-of-band management.</P> <P><BR />For any device deployed in the field, the InnoAGE SSD delivers encrypted device-to-cloud communication and over-the-air updates that can help keep remote devices secured over time from emergent threats. Azure resources like IoT Hub, Deployment Planning Services (DPS), and Azure Web Apps receive telemetry from the SSD and host the management console. Remote recovery and management commands can also be delivered from this console which is received and executed by the Azure Sphere . For example, if there is data corruption or a device is misbehaving, devices can be updated or restored to original state remotely.</P> <P>&nbsp;</P> <P><LI-VIDEO vid="https://www.youtube.com/watch?v=_sfp8AWTb9c&amp;feature=emb_imp_woyt&amp;ab_channel=InnodiskCorporation" align="center" size="medium" width="400" height="225" uploading="false" thumbnail="https://i.ytimg.com/vi/_sfp8AWTb9c/hqdefault.jpg" external="url"></LI-VIDEO></P> <P><BR />Innodisk customers are using the InnoAGE SSD in a variety of applications, including vending machines, factories, and improving shopping experiences. Collaborating with ecosystem partners DFI and Supermicrco, Innodisk created a smart factory concept which allows factory staff to manage devices without being onsite. Furthermore, in the event of the failure of a component, devices are still manageable remotely through out-of-band management. In retail scenarios, the InnoAGE SSD integrates into Point-of-Sale (POS) systems, and embedded PCs used in display kiosks. Azure Sphere ensures the highest level of security for managing devices while Innodisk’s high-performing DRAM offers the performance for advanced AI features including seamless facial recognition.</P> <P><BR />For anyone interested in learning, please visit the product documentation available here:</P> <UL> <LI>Azure Sphere documentation: <A href="#" target="_self">Azure Sphere Documentation | Microsoft Docs</A></LI> <LI>Getting started with the InnoAGE SSD: <A href="#" target="_self">InnoAGE™ 2.5” SATA SSD 3TI7 | Industrial Grade SSD | Solutions – Innodisk</A></LI> <LI>InnoAGE has also acquired <A href="#" target="_self">Azure IoT Plug and Play certification</A>, making it even easier to work with the Azure IoT ecosystem.</LI> </UL> Tue, 14 Sep 2021 15:52:58 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/innodisk-and-azure-sphere-win-gold-at-computex-taipei-2021/ba-p/2712864 suhuruli 2021-09-14T15:52:58Z Azure Percept DK and Azure Percept Audio now available in more regions! https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-dk-and-azure-percept-audio-now-available-in-more/ba-p/2712969 <P>Percept is a comprehensive, easy-to-use platform with added security for creating edge AI solutions. Start your proof of concept in minutes with hardware accelerators built to integrate seamlessly with Azure AI and Azure IoT services. Azure Percept works out of the box with Azure Cognitive Services, Azure Machine Learning, and other Azure services to deliver vision and audio insights in real time. 
Learn about the Azure Percept development kit here: <A href="#" target="_blank" rel="noopener">http://aka.ms/getazurepercept</A></P> <P>&nbsp;</P> <P>The Azure Percept Devkit and Percept Audio SoM are currently available in the following 16 markets:</P> <TABLE> <TBODY> <TR> <TD width="114px"> <P><STRONG>Country</STRONG></P> </TD> <TD width="180px"> <P><STRONG>Azure Percept Dev Kit</STRONG></P> </TD> <TD width="186px"> <P><STRONG>Azure Percept Audio SoM</STRONG></P> </TD> </TR> <TR> <TD width="114px"> <P>United States</P> </TD> <TD width="180px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> <TD width="186px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> </TR> <TR> <TD width="114px"> <P>Canada</P> </TD> <TD width="180px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> <TD width="186px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> </TR> <TR> <TD width="114px"> <P>France</P> </TD> <TD width="180px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> <TD width="186px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> </TR> <TR> <TD width="114px"> <P>Germany</P> </TD> <TD width="180px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> <TD width="186px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> </TR> <TR> <TD width="114px"> <P>Netherlands</P> </TD> <TD width="180px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> <TD width="186px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> </TR> <TR> <TD width="114px"> <P>Sweden</P> </TD> <TD width="180px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> <TD width="186px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> </TR> <TR> <TD width="114px"> <P>United Kingdom</P> </TD> <TD width="180px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> <TD width="186px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> </TR> <TR> <TD width="114px"> <P>Australia</P> </TD> <TD width="180px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> <TD width="186px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> </TR> <TR> <TD width="114px"> <P>Austria</P> </TD> <TD width="180px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> <TD width="186px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> </TR> <TR> <TD width="114px"> <P>Hungary</P> </TD> <TD width="180px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> <TD width="186px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> </TR> <TR> <TD width="114px"> <P>Ireland</P> </TD> <TD width="180px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> <TD width="186px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> </TR> <TR> <TD width="114px"> <P>Japan</P> </TD> <TD width="180px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> <TD width="186px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> </TR> <TR> <TD width="114px"> <P>New Zealand</P> </TD> <TD width="180px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> <TD width="186px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> </TR> <TR> <TD width="114px"> <P>Portugal</P> </TD> <TD width="180px"> <P><A 
href="#" target="_blank" rel="noopener">order now</A></P> </TD> <TD width="186px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> </TR> <TR> <TD width="114px"> <P>Spain</P> </TD> <TD width="180px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> <TD width="186px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> </TR> <TR> <TD width="114px"> <P>Taiwan</P> </TD> <TD width="180px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> <TD width="186px"> <P><A href="#" target="_blank" rel="noopener">order now</A></P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P><STRONG>Want to learn more about Azure Percept, check out the following resources:</STRONG></P> <UL> <LI><STRONG>Azure Percept Studio</STRONG>, which allows you to connect, build, customize and manage your Azure Percept edge solutions is described here: <A href="#" target="_blank" rel="noopener">https://aka.ms/azureperceptbuild</A>.</LI> </UL> <P>&nbsp;</P> <UL> <LI><STRONG>Azure Percept Devices</STRONG> come with built-in security on every device and information about it can be found here: <A href="#" target="_blank" rel="noopener">https://aka.ms/azureperceptsecure</A>.<BR /><BR /></LI> <LI><STRONG>Azure Percept library of AI models</STRONG> is available here: <A href="#" target="_blank" rel="noopener">https://aka.ms/azureperceptexplore</A>.<BR /><BR /></LI> <LI><STRONG>Architecture and Technology</STRONG> <A href="#" target="_blank" rel="noopener">B</A><A href="#" target="_blank" rel="noopener">uild &amp; Deploy to edge AI devices in minutes</A><SPAN><BR /><BR /></SPAN></LI> <LI><STRONG>Industry Use Cases and Community Projects</STRONG> <A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/bg-p/IoTBlog/label-name/Azure%20Percept" target="_blank" rel="noopener">Internet of Things - Microsoft Tech Community</A></LI> </UL> <P>&nbsp;</P> <P>&nbsp;</P> Thu, 02 Sep 2021 18:48:00 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-dk-and-azure-percept-audio-now-available-in-more/ba-p/2712969 TheOtherDave 2021-09-02T18:48:00Z Deploying Azure Stream Analytics Edge as a module on Azure Percept DK https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/deploying-azure-stream-analytics-edge-as-a-module-on-azure/ba-p/2708359 <P class="lia-align-justify">This article describes how to control the number of messages being sent by Azure Percept DK and how to deploy a Stream Analytics Edge module to Azure Percept DK.&nbsp;Azure Percept DK can consume a lot of messages in Azure IoT very fast. How can this be controlled? Can a custom logic be deployed on Azure Percept DK?</P> <P class="lia-align-justify">&nbsp;</P> <P class="lia-align-justify">Those were the initial questions that we asked once we understood how the Azure Percept DK worked. Very quickly we noticed the Azure Percept DK was even sending messages to Azure IoT Hub when no objects were detected with label = null. I still remember we consumed 8000 messages in around 2 hours with the Azure Percept DK (Azure IoT Hub free tier) – daily limit. We upgraded to S1 Tier but then thought what if we were to scale our solution to include multiple Azure Percept DKs? How do we ensure we do not use up the daily limit on this tier again? 
(To learn more about Azure Percept DK check out this link: <A href="#" target="_self">https://azure.microsoft.com/en-gb/services/azure-percept/</A>)</P> <P class="lia-align-justify">&nbsp;</P> <P class="lia-align-justify">As a starter, we can control the number of messages that are being sent to IoT Hub through Azure Percept DK’s<SPAN>&nbsp;</SPAN><STRONG>azureeyemodule</STRONG><SPAN>&nbsp;</SPAN>module identity twin. The<SPAN>&nbsp;</SPAN><STRONG>azureeyemodule</STRONG><SPAN>&nbsp;</SPAN>is responsible to analyze the unencoded video frames with the deployed machine learning model and send the results to other modules. Controlling how often azureeyemodule sends messages to other modules, e.g. edgeHub, can limit the number of messages being sent to Azure IoT Hub.&nbsp;</P> <P>&nbsp;</P> <P>To do this:</P> <OL> <LI>Navigate to your Azure IoT Edge device through Azure IoT Hub</LI> <LI>Under Modules section, select ‘azureeyemodule’</LI> <LI>Select ‘Module Identity Twin’</LI> <LI>In the module device twin json, find the property named ‘TelemetryIntervalNeuralNetworkMs’:</LI> </OL> <DIV class="slate-resizable-image-embed slate-image-embed__resize-middle lia-indent-padding-left-180px"><span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="Sargithan_Senthil_12-1630511312520.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/307331i9446842F5BBDB2D7/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_12-1630511312520.png" alt="Sargithan_Senthil_12-1630511312520.png" /></span> <P>&nbsp;</P> <P>&nbsp;</P> </DIV> <P class="lia-indent-padding-left-30px">5. This value determines how often messages are being sent from the neural network. We can increase this value to limit the number of messages being sent out (<STRONG>remember it is in milliseconds!</STRONG>)</P> <P class="lia-indent-padding-left-30px">6. Then make sure you press ‘Save’</P> <P class="lia-indent-padding-left-30px">7. Your Azure Percept DK will now sync up with the module and read in your updated TelemetryIntervalNeuralNetworkMs desired property value. After few seconds, you should see less frequent messages arriving in IoT Hub.</P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <P>On point 7, you can easily view what telemetries are being sent (including the text) using Azure IoT explorer (<A href="#" target="_blank" rel="nofollow noopener">https://docs.microsoft.com/en-us/azure/iot-fundamentals/howto-use-iot-explorer</A>).</P> <P>&nbsp;</P> <P class="lia-align-justify">The method described above is the easiest way to control the frequency of messages arriving in Azure IoT Hub. With some quick math, we can calculate the ideal 'Telemetry IntervalNeuralNetworkMs' for x number of devices. The downside of this approach: the higher the number, the less ‘real-time’ it becomes.</P> <P>&nbsp;</P> <P class="lia-align-justify">Our requirement was to send telemetries to Azure IoT Hub as soon as possible once azureeyemodule detects an object. So, the above solution was not suitable for us. This is where Azure Stream Analytics Edge came into the picture…</P> <P>&nbsp;</P> <P class="lia-align-justify">Azure Stream Analytics edge is designed for low latency, resiliency, efficient use of bandwidth, and compliance. The ability to implement custom logic, including only to send messages to IoT Hub when objects are detected was appealing. 
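A rough sketch of such a filtering query is shown below (the real query gets written in step 9 later); the input alias, output alias, and the 'detections' field name are placeholders that you would replace with the aliases you create in steps 5 and 7 and with the array field used by your own telemetry schema.</P> <P>&nbsp;</P> <LI-CODE lang="sql">-- Hedged sketch of an edge query that only forwards messages containing at least one detection.
-- [asa-input], [asa-output], and 'detections' are placeholders; match them to your own job and telemetry.
SELECT
    *
INTO
    [asa-output]
FROM
    [asa-input]
WHERE
    GetArrayLength(detections) > 0</LI-CODE> <P>&nbsp;</P> <P class="lia-align-justify">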
&nbsp;As part of the implementation, we will need to modify the routes on the Azure Percept DK from module-to-module in a specific way as shown below:</P> <P class="lia-align-justify">&nbsp;</P> <P class="lia-indent-padding-left-30px"><span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="Sargithan_Senthil_16-1630511515737.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/307332iF97AA083044A4961/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_16-1630511515737.png" alt="Sargithan_Senthil_16-1630511515737.png" /></span></P> <P>Follow these steps to achieve this:</P> <P class="lia-indent-padding-left-30px">1.&nbsp;&nbsp;&nbsp;&nbsp;Create Stream Analytics job on edge: Go on stream analytics jobs main page and click the ‘Create’ button</P> <P class="lia-indent-padding-left-30px">2.&nbsp;&nbsp;&nbsp;&nbsp;Give it a name, choose the location and make sure the<STRONG><SPAN>&nbsp;</SPAN>Hosting environment – edge is selected</STRONG>. Select ‘Create’</P> <DIV class="slate-resizable-image-embed slate-image-embed__resize-middle lia-indent-padding-left-300px"><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_20-1630511633346.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/307336iFB64136101361344/image-size/medium?v=v2&amp;px=400" role="button" title="Sargithan_Senthil_20-1630511633346.png" alt="Sargithan_Senthil_20-1630511633346.png" /></span></DIV> <P class="lia-indent-padding-left-30px">3.&nbsp;&nbsp;&nbsp;&nbsp;Under Job Topology section click on ‘Inputs’:<span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_28-1630512127860.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/307344iE8CE7A52F284DCE2/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_28-1630512127860.png" alt="Sargithan_Senthil_28-1630512127860.png" /></span></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">4.&nbsp;&nbsp;&nbsp;&nbsp;Click on ‘Add stream input’ and select ‘Edge Hub’</P> <P class="lia-indent-padding-left-30px">5.&nbsp;&nbsp;&nbsp;&nbsp;Enter an input alias name and click 'Save':</P> <DIV class="slate-resizable-image-embed slate-image-embed__resize-middle lia-indent-padding-left-300px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_21-1630511695802.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/307337i32CC273E0936E97D/image-size/medium?v=v2&amp;px=400" role="button" title="Sargithan_Senthil_21-1630511695802.png" alt="Sargithan_Senthil_21-1630511695802.png" /></span> <P>&nbsp;</P> </DIV> <P class="lia-indent-padding-left-30px">6.&nbsp;&nbsp;&nbsp;&nbsp;Select ‘Outputs’, click on ‘Add’ and select ‘Edge Hub’</P> <P class="lia-indent-padding-left-30px">7.&nbsp;&nbsp;&nbsp;&nbsp;Enter an output alias name and click ‘Save’</P> <P class="lia-indent-padding-left-30px">8.&nbsp;&nbsp;&nbsp;&nbsp;Click on Query</P> <P class="lia-indent-padding-left-30px">9.&nbsp;&nbsp;&nbsp;&nbsp;Write the SQL query for filtering the telemetry messages and save the query (you may want to apply a different logic here):<span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_33-1630512491397.png" style="width: 999px;"><img 
src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/307349i3E2F13C02876CB93/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_33-1630512491397.png" alt="Sargithan_Senthil_33-1630512491397.png" /></span></P> <P class="lia-indent-padding-left-30px">10.&nbsp;To deploy this logic as an IoT module: Navigate to your Azure Percept DK device in IoT Hub</P> <P class="lia-indent-padding-left-30px">11.&nbsp;Click on ‘Set Modules’ button</P> <P class="lia-indent-padding-left-30px">12.&nbsp;Under IoT Edge Modules section click on ‘Add – Azure Stream Analytics Module’:</P> <DIV class="slate-resizable-image-embed slate-image-embed__resize-middle lia-indent-padding-left-300px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_23-1630511717722.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/307339i52685564DCA9E7CB/image-size/medium?v=v2&amp;px=400" role="button" title="Sargithan_Senthil_23-1630511717722.png" alt="Sargithan_Senthil_23-1630511717722.png" /></span> <P>&nbsp;</P> </DIV> <P class="lia-indent-padding-left-30px">13.&nbsp;Choose your newly created edge job and click ‘Save’:</P> <DIV class="slate-resizable-image-embed slate-image-embed__resize-middle lia-indent-padding-left-300px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_24-1630511749435.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/307340i881880340481A6EC/image-size/medium?v=v2&amp;px=400" role="button" title="Sargithan_Senthil_24-1630511749435.png" alt="Sargithan_Senthil_24-1630511749435.png" /></span></DIV> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">14.&nbsp;The ASA edge module will appear under IoT Edge Modules list:</P> <P>&nbsp;</P> <DIV class="slate-resizable-image-embed slate-image-embed__resize-middle lia-indent-padding-left-300px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_25-1630511762615.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/307341iE6D2372ABA5CD99A/image-size/medium?v=v2&amp;px=400" role="button" title="Sargithan_Senthil_25-1630511762615.png" alt="Sargithan_Senthil_25-1630511762615.png" /></span> <P>&nbsp;</P> </DIV> <P class="lia-indent-padding-left-30px">&nbsp;15.&nbsp;Select the ‘Routes’ tab and specify the routes:<span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_30-1630512369557.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/307346iF1B176E9B6F78ABB/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_30-1630512369557.png" alt="Sargithan_Senthil_30-1630512369557.png" /></span></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">16.&nbsp;Click on ‘Review + Create’ button to set the module on device</P> <P>&nbsp;</P> <P>Now, wait a few seconds (give it a good minute if it’s your first time). 
To see if things are working, open Azure IoT Explorer and see if you can see telemetry and any additional json properties defined in your SQL syntax (Stream Analytics edge):</P> <P>&nbsp;</P> <P class="lia-indent-padding-left-90px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sargithan_Senthil_27-1630511780982.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/307343i1315348F1B28C7B2/image-size/large?v=v2&amp;px=999" role="button" title="Sargithan_Senthil_27-1630511780982.png" alt="Sargithan_Senthil_27-1630511780982.png" /></span></P> <P>&nbsp;</P> <P class="lia-align-justify">If you have any issues, it’s best to use module logs to identify the probable cause. In our experience, the logs were extremely helpful to identify the errors when we hit blockers!</P> <P>&nbsp;</P> <P>Lastly, I would like to thank my company Kagool for providing us with an Azure Percept DK and an Azure Subscription!</P> <P>&nbsp;</P> <P>Learn more about the Azure percept DK here:</P> <P><STRONG><U><SPAN>Learn about Azure Percept&nbsp;</SPAN></U></STRONG><U></U></P> <P><A href="#" target="_blank" rel="noopener noreferrer" data-auth="NotApplicable" data-linkindex="0"><SPAN>AZURE.COM page</SPAN></A><U><SPAN><BR /></SPAN></U><A href="#" target="_blank" rel="noopener noreferrer" data-auth="NotApplicable" data-linkindex="1"><SPAN>Product detail pages</SPAN></A><U><SPAN><BR /></SPAN></U><A href="#" target="_blank" rel="noopener noreferrer" data-auth="NotApplicable" data-linkindex="2"><SPAN>Pre-built AI models</SPAN></A><U><SPAN><BR /></SPAN></U><A href="#" target="_blank" rel="noopener noreferrer" data-auth="NotApplicable" data-linkindex="3"><SPAN>Azure Percept - YouTube</SPAN></A></P> <P><STRONG><SPAN>Purchase Azure Percept</SPAN></STRONG><SPAN><BR />Available to our customers –&nbsp;</SPAN><A href="#" target="_blank" rel="noopener noreferrer" data-auth="NotApplicable" data-linkindex="4"><SPAN>Build your Azure Percept</SPAN></A></P> Thu, 02 Sep 2021 17:55:38 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/deploying-azure-stream-analytics-edge-as-a-module-on-azure/ba-p/2708359 Sargithan_Senthil 2021-09-02T17:55:38Z Windows Server IoT 2022 now generally available https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/windows-server-iot-2022-now-generally-available/ba-p/2703521 <P>&nbsp;</P> <P>Dedicated-purpose devices have fixed functionality and are built to perform a pre-defined set of tasks. When this is the case, you can license Windows Server IoT 2022 with special dedicated use licensing terms. (Prior versions were referred to as Windows Server for Embedded Systems or Windows Storage Server).&nbsp; With Windows Server IoT 2022, customers can continue to securely run their workloads, enable new hybrid cloud scenarios, and modernize their applications to meet evolving business requirements.&nbsp; Windows Server IoT is an integral part of the Azure Edge Devices and Windows for IoT stack.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="stack.JPG" style="width: 794px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/306980iE8A7DABCEB121537/image-size/large?v=v2&amp;px=999" role="button" title="stack.JPG" alt="stack.JPG" /></span></P> <P>&nbsp;</P> <P><STRONG>What is new in Windows Server IoT 2022?</STRONG></P> <P>&nbsp;</P> <P><STRONG>Security</STRONG></P> <P>Security has always been a cornerstone of Windows Server IoT. 
With security top of mind for our customers, we are introducing numerous security enhancements in Windows Server IoT 2022. In this release, customers can take advantage of multi-layer security with&nbsp;<A href="#" target="_blank" rel="noopener">Secured-core server</A>&nbsp;and secured connectivity. Secured-core server means our hardware partners have provided hardware, firmware, and drivers to help customers harden the security of their critical systems. It allows IT and SecOps teams to apply comprehensive security broadly in their environment with Secured-core server’s advanced protection and preventive defense across hardware, firmware and virtualization layers.</P> <P>Secured connectivity in Windows Server IoT 2022 adds another layer to security during transport. The new release adds faster and more secure encrypted hypertext transfer protocol secure (HTTPS), and industry-standard AES-256 encryption with support for&nbsp;<A href="#" target="_blank" rel="noopener">server message block</A>&nbsp;(SMB) protocol.</P> <P>&nbsp;</P> <P><STRONG>Hybrid</STRONG></P> <P>Customers are choosing a hybrid and multicloud approach to digitally transform their businesses. They can now take advantage of cloud services with on-premises Windows Server IoT 2022 by connecting with&nbsp;<A href="#" target="_blank" rel="noopener">Azure Arc</A>.</P> <P>Additionally, in Windows Server IoT 2022 customers can take advantage of the File Server enhancements such as&nbsp;<A href="#" target="_blank" rel="noopener">SMB Compression</A>. SMB Compression improves application file transfer by compressing data while in transit over a network. Finally, <A href="#" target="_blank" rel="noopener">Windows Admin Center</A>, a tool loved by admins, brings modern server management experience such as with a new event viewer, and gateway proxy support for Azure connected scenarios.</P> <P>&nbsp;</P> <P><STRONG>Application platform and containers</STRONG></P> <P>Customers who upgrade to Windows Server IoT 2022 can take advantage of scalability improvements such as support for 48TB of memory and 2,048 logical cores running on 64 physical sockets for those demanding Tier1 applications. In this release, customers can also take advantage of advancements to Windows containers. For example, Windows Server IoT 2022 improves application compatibility of Windows containers, introduces HostProcess containers for node configuration, supports IPv6 and dual stack, and enables consistent network policy implementation with Calico.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="MSC17_dataCenter_052.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/307003i02D1C70858CC0318/image-size/large?v=v2&amp;px=999" role="button" title="MSC17_dataCenter_052.jpg" alt="MSC17_dataCenter_052.jpg" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P><STRONG>New licensing options now available</STRONG></P> <P>Windows Server IoT 2022 follows different licensing and distribution policies than Windows Server 2022. 
It’s only licensed through the OEM channel under special dedicated use rights.</P> <P>Fixed-function appliances using Windows Server IoT 2022 will be dedicated to specific information or transaction processing, aggregating data from downstream ‘things’ and analyzing it on-premises at scale; maintaining databases that are too big to transfer to the cloud; serving as a gateway to enterprise IT infrastructures; or leveraging Azure in hybrid scenarios with cloud-native apps managed by <A href="#" target="_blank" rel="noopener">Azure IoT Edge</A>.</P> <P>&nbsp;</P> <P>Microsoft offers six Windows Server IoT editions. The table below can help you identify which one may be right for the specific solution you want to deliver or build.&nbsp; &nbsp;</P> <P>&nbsp;</P> <TABLE style="border-style: solid; width: 889px;" width="889"> <TBODY> <TR> <TD width="525.578px"> <P><STRONG>What type of application&nbsp;</STRONG></P> </TD> <TD width="362.422px"> <P><STRONG>Editions</STRONG></P> </TD> </TR> <TR> <TD width="525.578px"> <P>A dedicated server with Active Directory integration (file, print, networking services) or those requiring a connected keyboard, monitor or mouse to perform its dedicated purpose.&nbsp;</P> </TD> <TD width="362.422px"> <P>Windows Server IoT 2022 Standard</P> </TD> </TR> <TR> <TD width="525.578px"> <P>A turnkey solution for highly virtualized datacenters or cloud environments that can consolidate several complex functions into a single server appliance. The solution may require Storage Spaces Direct.</P> </TD> <TD width="362.422px"> <P>Windows Server IoT 2022 Datacenter</P> </TD> </TR> <TR> <TD width="525.578px"> <P>A dedicated file server appropriate for Network Attached Storage, Storage Area Network Gateway or another storage solution.</P> </TD> <TD width="362.422px"> <P>Windows Server IoT 2022 Storage Standard</P> </TD> </TR> <TR> <TD width="525.578px"> <P>A small storage solution (for 50 users or less) that does NOT require network infrastructure services (file, print, etc.) or a connected keyboard, monitor or mouse.</P> </TD> <TD width="362.422px"> <P>Windows Server IoT 2022 Storage Workgroup</P> </TD> </TR> <TR> <TD width="525.578px"> <P>A specialized telecommunications application such as PBX, IP PBX, Automated Attendant, Interactive Voice Response (IVR) or teleconferencing.</P> </TD> <TD width="362.422px"> <P>Windows Server IoT 2022 Telecommunications</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>Note that we will no longer have an Essentials version for Windows Server IoT 2022. You can learn more about the other available versions of Windows for IoT <A href="#" target="_blank" rel="noopener">here</A>.</P> <P>&nbsp;</P> <P>We’ll dive into Windows Server 2022 and more during the&nbsp;<A href="#" target="_blank" rel="noopener">Windows Server Summit</A>&nbsp;on September 16th, 2021.&nbsp;<A href="#" target="_blank" rel="noopener">Join</A>&nbsp;us at this complimentary digital event.</P> <P>&nbsp;</P> Wed, 01 Sep 2021 16:00:00 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/windows-server-iot-2022-now-generally-available/ba-p/2703521 95twr 2021-09-01T16:00:00Z General availability: Azure Sphere OS version 21.08 https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/general-availability-azure-sphere-os-version-21-08/ba-p/2685986 <P><SPAN>Azure Sphere OS version 21.08&nbsp;is now available in the <STRONG>Retail </STRONG>feed</SPAN><SPAN>.&nbsp;</SPAN>This release includes bug fixes in the Azure Sphere OS; it does not include an updated SDK. 
If your devices are connected to the internet, they will receive the updated OS from the cloud.</P> <P>&nbsp;</P> <P>21.08 includes updates and enhancements in the following areas.</P> <UL> <LI>Security updates</LI> <LI>Improved stability for ethernet support</LI> <LI>Improved stability for I2C devices</LI> <LI>New Azure Sphere Gallery samples</LI> <LI>Documentation updates</LI> </UL> <P>&nbsp;</P> <P>In addition, 21.08 includes updates to mitigate against the following Common Vulnerabilities and Exposures (CVEs):</P> <UL> <LI><SPAN>CVE-2021-22924</SPAN></LI> </UL> <P>See <A href="#" target="_blank" rel="noopener">What’s new in Azure Sphere</A> for more details on the 21.08 release.</P> <P><SPAN>&nbsp;</SPAN></P> <P><SPAN>For more information on Azure Sphere OS feeds and setting up an evaluation device group,&nbsp;see&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN>Azure Sphere OS feeds</SPAN></A><SPAN> and </SPAN><A href="#" target="_blank" rel="noopener">Set up devices for OS evaluation</A><SPAN>.</SPAN></P> <P>&nbsp;</P> <P>For self-help technical inquiries, please visit <A href="#" target="_blank" rel="noopener">Microsoft Q&amp;A</A><SPAN> o</SPAN>r <A href="#" target="_blank" rel="noopener">Stack Overflow</A>. If you require technical support and have a support plan, please submit a support ticket in <A href="#" target="_blank" rel="noopener">Microsoft Azure Support</A> or work with your Microsoft Technical Account Manager. If you would like to purchase a support plan, please explore the <A href="#" target="_blank" rel="noopener">Azure support plans</A>.</P> Wed, 25 Aug 2021 23:00:55 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/general-availability-azure-sphere-os-version-21-08/ba-p/2685986 AzureSphereTeam 2021-08-25T23:00:55Z Design an AI enabled NVR with AVA Edge and Intel OpenVino https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/design-an-ai-enabled-nvr-with-ava-edge-and-intel-openvino/ba-p/2576473 <P>This is the&nbsp;<STRONG>first</STRONG> in a series of articles which explore how to integrate Artificial Intelligence into a video processing infrastructure using off-the-market cameras and Intel <STRONG>O</STRONG>pen<STRONG>V</STRONG>ino <STRONG>M</STRONG>odel <STRONG>S</STRONG>erver running at the edge. In the below sections we will learn some background trivia, hardware/software prerequisites for implementation, and steps to <STRONG>setup</STRONG> a production-ready <STRONG>AI</STRONG> enabled <STRONG>N</STRONG>etwork <STRONG>V</STRONG>ideo <STRONG>R</STRONG>ecorder that has the best of both worlds - Microsoft and Intel.</P> <P>&nbsp;</P> <H4><FONT size="4">What is a video analytics platform</FONT></H4> <P>In the last few years, video analytics, also known as video content analysis or <STRONG>intelligent video analytics</STRONG>, has attracted increasing interest from both industry and the academic world.&nbsp;Video Analytics products add artificial intelligence to cameras by <STRONG>analyzing</STRONG> video content in <STRONG>real-time</STRONG>, extracting <STRONG>metadata</STRONG>, sending out <STRONG>alerts</STRONG> and providing <STRONG>actionable intelligence</STRONG> to security personnel or other systems. Video Analytics can be embedded at the edge (even in-camera), in servers on-premise, and/or on-cloud. 
They extract only <STRONG>temporal</STRONG> and <STRONG>spatial</STRONG> <STRONG>events</STRONG> in a scene, <STRONG>filtering</STRONG> out <STRONG>noise</STRONG> such as lighting changes, weather, trees and animal movements. Here is a logical flow of how it works.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="KaushikRoy_0-1627062515741.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/298073i236B3682CFC67061/image-size/large?v=v2&amp;px=999" role="button" title="KaushikRoy_0-1627062515741.png" alt="KaushikRoy_0-1627062515741.png" /></span>&nbsp; &nbsp;&nbsp;</P> <P>Let’s face it: on-premises and <STRONG>legacy</STRONG> video surveillance infrastructure are still in the <STRONG>dark ages</STRONG>. Physical servers often have limited virtualization integration and support, as well as racks upon racks of servers that clog up performance regardless of whether the data center is using NVR, direct-attached storage, storage area network or hyper-converged infrastructure. It’s been that way for the last 10, if not 20, years.&nbsp;Buying and housing an <STRONG>NVR</STRONG> for five or six cameras is <STRONG>expensive</STRONG> and time-consuming from a management and maintenance point of view. With great improvements in connectivity, compression and data transfer methods, a <STRONG>cloud-native solution</STRONG> becomes an excellent option. Here are some of the popular use cases in this field and digram of a sample deployment for critical infrastructure.</P> <P>&nbsp;</P> <TABLE style="border-style: hidden; width: 100%;" border="1" width="100%"> <TBODY> <TR> <TD width="50%" height="330px"> <P>&nbsp;</P> <UL class="lia-list-style-type-square"> <LI>Motion Detection</LI> <LI>Intrusion Detection</LI> <LI>Line Crossing</LI> <LI>Object Abandoned</LI> <LI>License Plate Recognition</LI> <LI>Vehicle Detection</LI> <LI>Asset Management</LI> <LI>Face Detection</LI> <LI>Baby Monitoring</LI> <LI>Object Counting</LI> </UL> </TD> <TD width="50%" height="330px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="KaushikRoy_0-1627063795302.jpeg" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/298079iDC66891ECF422949/image-size/medium?v=v2&amp;px=400" role="button" title="KaushikRoy_0-1627063795302.jpeg" alt="KaushikRoy_0-1627063795302.jpeg" /></span> <P>&nbsp;</P> <P>&nbsp;</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>Common approaches for proposals to clients involve either a&nbsp;<STRONG>new</STRONG> installation (<STRONG>greenfield&nbsp;</STRONG>project) or a<SPAN>&nbsp;</SPAN><STRONG>lift </STRONG>and <STRONG>shift&nbsp;</STRONG>scenario (<STRONG>brownfield&nbsp;</STRONG>project).&nbsp;<SPAN style="font-family: inherit;">&nbsp;Video intelligence is one industry where it becomes important to follow a</SPAN><SPAN style="font-family: inherit;">&nbsp;</SPAN><STRONG style="font-family: inherit;">bluefield<SPAN style="font-family: inherit;">&nbsp;</SPAN></STRONG><SPAN style="font-family: inherit;">approach - meant to describe a <STRONG>combination</STRONG> of both</SPAN><SPAN style="font-family: inherit;">&nbsp;</SPAN><EM style="font-family: inherit;">brownfield</EM><SPAN style="font-family: inherit;">&nbsp;</SPAN><SPAN style="font-family: inherit;">and</SPAN><SPAN style="font-family: inherit;">&nbsp;</SPAN><EM style="font-family: inherit;">greenfield</EM><SPAN style="font-family: inherit;">, where some streams of 
information are already in motion and some will be new instances of technology. The reason is that the existing hardware and software installations are very expensive and although they are open to new ideas, they want to keep what is already working. The current article is about <STRONG>setting up</STRONG> this new technology in a way so it accepts <STRONG>pipelines</STRONG>, for <STRONG>inference</STRONG> and <STRONG>event generation</STRONG>, on <STRONG>live video</STRONG> for the above use cases in future.</SPAN></P> <P>&nbsp;</P> <H4><FONT size="4">The rise of AI NVRs</FONT></H4> <P data-unlink="true"><SPAN>Video Intelligence&nbsp;</SPAN>was invented in 1942 by German engineer, Walter Bruch, so that he and others could observe the launch of V2 rockets on a private system. While its purpose has not drastically changed in the past 75 years, the system itself has undergone radical changes. Since its development, users’ expectations have evolved exponentially, necessitating the development of faster, better, and more cost-effective technology.&nbsp;</P> <P data-unlink="true">&nbsp;</P> <P>Initially, they could only watch through live streams as they happened— recordings would become available much later (<A title="VCR" href="#" target="_blank" rel="noopener">VCR</A>). Until the recent past these were analog devices using analog cameras and a <STRONG>D</STRONG>igital <STRONG>V</STRONG>ideo <STRONG>R</STRONG>ecorder (<STRONG>DVR</STRONG>). Not unlike your everyday television box which used to run off these DVRs in every home! Recently, these have started getting replaced with <STRONG>P</STRONG>ower <STRONG>O</STRONG>ver <STRONG>E</STRONG>thernet (<STRONG>PoE</STRONG>) enabled counterparts, running off <STRONG>N</STRONG>etwork <STRONG>V</STRONG>ideo <STRONG>R</STRONG>ecorders (<STRONG>NVR</STRONG>). Here is a quick visual showing the difference between DVR and NVR.</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="dvr-vs-nvr.jpeg" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/298197iBC4659D97C84C9AE/image-size/medium?v=v2&amp;px=400" role="button" title="dvr-vs-nvr.jpeg" alt="dvr-vs-nvr.jpeg" /></span></P> <P> </P> <P><STRONG>AI NVR</STRONG> Video Analytics System is a plug-and-play turnkey solution, including video search for <STRONG>object detection</STRONG>, high-accuracy intrusion detection, <STRONG>face search</STRONG> and face recognition, <STRONG>license plate</STRONG> and <STRONG>vehicle recognition</STRONG>, <STRONG>people/vehicle counting</STRONG>, and abnormal-activity detection. All functions support <STRONG>live stream</STRONG> and batch mode processing, real-time alerts and <STRONG>GDPR</STRONG>-friendly privacy protection when desired. AI NVR overcomes the challenges of many complex environments and is fully integrated with AI video analytics features for various markets, including <STRONG>perimeter protection</STRONG> of businesses, <STRONG>access controls</STRONG> for campuses and airports, <STRONG>traffic management</STRONG> by law enforcement, and <STRONG>business intelligence</STRONG> for shopping centers. 
Here is a&nbsp; logical flow of an AI NVR from video capture to data-driven applications.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="crop1.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/298297i3FA4E3EB53ADFA4D/image-size/large?v=v2&amp;px=999" role="button" title="crop1.jpg" alt="crop1.jpg" /></span></P> <P>&nbsp;</P> <P>In this article we are going to see how to create such an AI NVR at the edge using <STRONG>A</STRONG>zure <STRONG>V</STRONG>ideo <STRONG>A</STRONG>nalyzer (<STRONG>AVA</STRONG>) and Intel products.</P> <P>&nbsp;</P> <H4>Azure Video Analyzer - a one-stop solution from Microsoft</H4> <P data-unlink="true"><SPAN>Azure Video Analyzer (<STRONG>AVA</STRONG>) is a brand new service to build intelligent video applications that span the edge and the cloud. It offers the capability to <STRONG>capture</STRONG>, <STRONG>record</STRONG>, and <STRONG>analyze</STRONG> live video along with <STRONG>publishing</STRONG> the results - video and/or video analytics. Video can be published to the <STRONG>edge</STRONG> or the Video Analyzer <STRONG>cloud</STRONG> service, while video analytics can be published to Azure services (in the cloud and/or the edge). With Video Analyzer, you can continue to use your existing&nbsp;<A title="video management systems" href="#" target="_blank" rel="noopener">video management systems</A> (VMS)&nbsp;&nbsp;and build video analytics apps independently. AVA can be used in conjunction with computer vision SDKs and toolkits to build cutting edge IoT solutions. The diagram below illustrates this.</SPAN></P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="KaushikRoy_0-1627265834078.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/298304i2FDB4E1C55D748E1/image-size/large?v=v2&amp;px=999" role="button" title="KaushikRoy_0-1627265834078.png" alt="KaushikRoy_0-1627265834078.png" /></span></P> <P>&nbsp;</P> <P>This is the <STRONG>most essential</STRONG> component of creating the AI NVR.&nbsp; As you may have guessed, in this article we are going to deploy an <STRONG>AVA module on IoT edge</STRONG> to coordinate between the model server and the video feeds through an http <STRONG>extension</STRONG>. You can <STRONG>'bring you own model'</STRONG> and call it through either <STRONG>http</STRONG> or <STRONG>grpc</STRONG> endpoint.</P> <P>&nbsp;</P> <H4>Intel OpenVINO toolkit</H4> <P>OpenVINO (<STRONG>Open</STRONG> <STRONG>V</STRONG>isual <STRONG>I</STRONG>nference and <STRONG>N</STRONG>eural network <STRONG>O</STRONG>ptimization)&nbsp;is a toolkit provided by Intel to facilitate faster <STRONG>inference</STRONG> of deep learning models. It helps developers to create cost-effective and robust computer vision applications. It enables deep learning inference at the edge and supports heterogeneous execution across computer vision accelerators — CPU, GPU, Intel® Movidius™ Neural Compute Stick, and FPGA. 
It supports a large number of deep learning models out of the box.</P> <P> </P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="avaopen.jpeg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/298502iBA29AE00CFCE9A0D/image-size/large?v=v2&amp;px=999" role="button" title="avaopen.jpeg" alt="avaopen.jpeg" /></span></P> <P> </P> <P>OpenVino uses its own&nbsp;<STRONG>I</STRONG>ntermediate <STRONG>R</STRONG>epresentation (<STRONG>IR</STRONG>) (<A title="link" href="#" target="_blank" rel="noopener">link</A>)(<A title="link" href="#" target="_blank" rel="noopener">link</A>) format similar to <STRONG>ONNX&nbsp;</STRONG>(<A title="link" href="#" target="_blank" rel="noopener">link</A>), and works with all your favourite deep learning tools like TensorFlow, PyTorch, etc. You can either <STRONG>convert</STRONG> your resulting model to OpenVINO or <STRONG>use/optimize</STRONG> the available <STRONG>pretrained</STRONG> models in the Intel <STRONG>model zoo</STRONG>. In this article we are specifically using the OpenVino Model Server (<STRONG>OVMS</STRONG>) available through <A title="this" href="#" target="_blank" rel="noopener">this</A> Azure Marketplace module. Out of the many models in their catalogue I am only using those that count <STRONG>faces</STRONG>, <STRONG>vehicles</STRONG>, and <STRONG>people</STRONG>. These are identified by their call signs - <A title="personDetection" href="#" target="_blank" rel="noopener"><EM>personDetection</EM></A>, <A title="faceDetection" href="#" target="_blank" rel="noopener"><EM>faceDetection</EM></A>, and <A title="vehicleDetection" href="#" target="_blank" rel="noopener"><EM>vehicleDetection</EM></A>.</P> <P> </P> <H4>Prerequisites</H4> <P>There are some hardware and software prerequisites for creating this platform.</P> <OL> <LI>ONVIF PoE camera able to send encoded RTSP streams (<A title="link" href="#" target="_blank" rel="noopener">link</A>)(<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI>Intel edge device with Ubuntu 18/20 (<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI>Active Azure subscription</LI> <LI>Development machine with VSCode &amp; IoT Extension</LI> <LI>Working knowledge of Computer Vision &amp; Model Serving</LI> </OL> <H4>&nbsp;</H4> <H5><STRONG>ONVIF PoE camera</STRONG></H5> <P><A title="ONVIF" href="#" target="_blank" rel="noopener">ONVIF</A> (the <STRONG>O</STRONG>pen <STRONG>N</STRONG>etwork <STRONG>V</STRONG>ideo <STRONG>I</STRONG>nterface <STRONG>F</STRONG>orum) is a global and open industry forum with the goal of facilitating the development and use of a global <STRONG>open standard</STRONG> for the interface of physical IP-based security <A title="products" href="#" target="_blank" rel="noopener">products</A>. ONVIF creates a standard for how IP products within video surveillance and other physical security areas can communicate with each other. This is different from proprietary equipment, and you can use all <STRONG>open source</STRONG> libraries with them. A decent quality camera like the <A title="Reolink 410" href="#" target="_blank" rel="noopener">Reolink 410</A> is enough. Technically you can use a <STRONG>wireless</STRONG> camera, but I would <STRONG>not recommend</STRONG> that in a professional setting.</P> <P>&nbsp;</P> <H5><STRONG>Intel edge device with Ubuntu</STRONG></H5> <P>This can be any device with one or more Intel CPUs. 
The Intel NUC makes a great low-cost IoT edge device, and even the cheap ones can handle around 10 cameras running at 30 fps. I am using a base <A title="model" href="#" target="_blank" rel="noopener">model</A> with a Celeron processor priced at around $130. The camera(s), device, and some cables are all you need to implement this. Optionally, like me, you may need a PoE <A title="switch" href="#" target="_blank" rel="noopener">switch</A>&nbsp;or network <A title="extender" href="#" target="_blank" rel="noopener">extender</A>&nbsp;to get connected. Check that the <STRONG>PoE</STRONG> wattage is at least <STRONG>5 W</STRONG> and the bandwidth at least <STRONG>20 Mbps</STRONG>&nbsp;<STRONG>per camera</STRONG>. You also need to install Ubuntu Linux.</P> <P>&nbsp;</P> <H5 id="toc-hId--156750218"><STRONG>Active Azure subscription</STRONG></H5> <P>Surely, you will need this one, but as we know Azure has an immense suite of products, and while ideally we want to have everything, it may not be practically feasible. For practical purposes you might have to ask for access to particular services, meaning you have to know ahead of time exactly which ones you want to use. We will need the following:</P> <UL> <LI>Azure IoT Hub (<A title="link" href="#" target="_blank" rel="noopener noreferrer">link</A>)</LI> <LI>Azure Container Registry (<A title="link" href="#" target="_blank" rel="noopener noreferrer">link</A>)</LI> <LI>Azure Media Services (<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI>Azure Video Analyzer (<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI>Azure Stream Analytics (<A title="link" href="#" target="_blank" rel="noopener noreferrer">link</A>)(future article)</LI> <LI>Power BI / React App (<A title="link" href="#" target="_blank" rel="noopener noreferrer">link</A>)(future article)</LI> <LI>Azure Linux VM (<A title="link" href="#" target="_blank" rel="noopener noreferrer">link</A>)(optional)</LI> </UL> <P>&nbsp;</P> <H5><STRONG>Computer Vision &amp; Model Serving</STRONG></H5> <P>Generally this prerequisite takes a lot of engineering and is expensive. Thankfully the OVMS extension from Intel is capable of serving high quality <A title="models" href="#" target="_blank" rel="noopener">models</A> from their zoo; without it you would have to build your own Flask/socket model server, and it wouldn't be half as good. For whatever models you need, you can mention their call signs and they will be served instantly for you at the edge by the extension. We will see more about this in the next article once things are set up. Note: we are making the platform in such a way that you can use <A title="Azure CustomVision" href="#" target="_blank" rel="noopener">Azure CustomVision</A> or <A title="Azure Machine Learning" href="#" target="_blank" rel="noopener">Azure Machine Learning</A> models on this same setup in the future with very minimal changes.</P> <P>&nbsp;</P> <H4>Reference Architecture</H4> <P>We are definitely living in interesting times when something as complex as video analytics is almost an OOTB feature!&nbsp;</P> <P>Here I present an <STRONG>alternate</STRONG> architecture that we followed and implemented, and with which we got <STRONG>comparable</STRONG> results to the official one. 
This is a stripped-down version of the official architecture; it contains only the <STRONG>necessary components</STRONG> of an MVP for an AI NVR, and is much easier to dissect.&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="aarch.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/298503iB17E3E2F9053D2A0/image-size/large?v=v2&amp;px=999" role="button" title="aarch.png" alt="aarch.png" /></span>Notice it looks somewhat similar to the logical flow of an AI NVR shown in one of the prior sections.</P> <P>&nbsp;</P> <H4>Inbound feed to the AI NVR</H4> <P>Before we go into the implementation I wanted to mention some aspects about the inputs and outputs of this system.</P> <UL> <LI>Earlier we said the system needs RTSP input, even though there are other streaming protocols such as RTMP (<A title="link" href="#" target="_blank" rel="noopener">link</A>), HTTP, etc. However, we chose <STRONG>RTSP</STRONG> mostly because it is <STRONG>optimized for&nbsp;viewing experience and scalability</STRONG>.&nbsp;</LI> <LI>For development purposes it is recommended to use the excellent <A title="RTSP Simulator" href="#" target="_blank" rel="noopener">RTSP Simulator</A> provided by Microsoft.</LI> <LI>To display the video being processed, use any of the following <A title="players" href="#" target="_blank" rel="noopener">players</A>.</LI> <LI>You can technically use a USB webcam and create your own RTSP stream (<A title="link" href="#" target="_blank" rel="noopener">link</A>)(<A title="link" href="#" target="_blank" rel="noopener">link</A>). However, underneath it uses <A title="GStreamer " href="#" target="_blank" rel="noopener">GStreamer,</A> <A title="RTSPServer" href="#" target="_blank" rel="noopener">RTSPServer</A>, and <A title="pipelines" href="#" target="_blank" rel="noopener">pipelines</A>. From my experience you should be careful using this method, especially since you will need an understanding of hardware/software media encoding (e.g. <A title="H.264" href="#" target="_blank" rel="noopener">H.264</A>) and GStreamer dockerization <img class="lia-deferred-image lia-image-emoji" src="https://techcommunity.microsoft.com/html/@B71AFCCE02F5853FE57A20BD4B04EADDhttps://techcommunity.microsoft.com/images/emoticons/cool_40x40.gif" alt=":cool:" title=":cool:" />.</LI> <LI>One very interesting option that I used as a video source was the <A title="RTSP Camera Server app" href="#" target="_blank" rel="noopener">RTSP Camera Server</A>&nbsp;app. This will instantly turn your smartphone camera into an RTSP feed that your AI NVR can consume <img class="lia-deferred-image lia-image-emoji" src="https://techcommunity.microsoft.com/html/@A027B0AAF3CA617A1E2E22C4E761B2FEhttps://techcommunity.microsoft.com/images/emoticons/stareyes_40x40.gif" alt=":stareyes:" title=":stareyes:" />!</LI> <LI>Last but not least, you should make sure that your <STRONG>incoming</STRONG> feed has the required <STRONG>resolution</STRONG> that your CV algorithms need. The trick is not to use cameras that are too good. 
<STRONG>4</STRONG> to <STRONG>5</STRONG> <STRONG>MP</STRONG> is fine for maintaining <STRONG>pixel&nbsp;distribution parity</STRONG> with available pretrained models.</LI> </UL> <P>&nbsp;</P> <H4>Outbound events from the AI NVR</H4> <P><SPAN>In Azure Video Analyzer, each inference object regardless of using HTTP-based contract or gRPC based contract follows the object model described below.</SPAN></P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="object-model.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/298770i632433CD3253FCC1/image-size/large?v=v2&amp;px=999" role="button" title="object-model.png" alt="object-model.png" /></span></P> <P> </P> <P><SPAN>The example below contains a single <STRONG>Inference</STRONG> event with <EM>vehicleDetection</EM>. We will see more of these in a future article.</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="json">{ "timestamp": 145819820073974, "inferences": [ { "type": "entity", "subtype": "vehicleDetection", "entity": { "tag": { "value": "vehicle", "confidence": 0.9147264 }, "box": { "l": 0.6853116, "t": 0.5035262, "w": 0.04322505, "h": 0.03426218 } } }</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>Apart from the inference events there are many other type of events, such as the&nbsp;<STRONG>MediaSessionEstablished</STRONG> event, which happens when you are recording the media either in <A title="File Sink" href="#" target="_blank" rel="noopener">File Sink</A> or <A title="Video Sink" href="#" target="_blank" rel="noopener">Video Sink</A>.</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="json">[IoTHubMonitor] [9:42:18 AM] Message received from [avasampleiot-edge-device/avaedge]: { "body": { "sdp": "SDP:\nv=0\r\no=- 1586450538111534 1 IN IP4 XXX.XX.XX.XX\r\ns=Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Server\r\ni=media/camera-300s.mkv\r\nt=0 0\r\na=tool:LIVE555 Streaming Media v2020.03.06\r\na=type:broadcast\r\na=control:*\r\na=range:npt=0-300.000\r\na=x-qt-text-nam:Matroska video+audio+(optional)subtitles, streamed by the LIVE555 Media Server\r\na=x-qt-text-inf:media/camera-300s.mkv\r\nm=video 0 RTP/AVP 96\r\nc=IN IP4 0.0.0.0\r\nb=AS:500\r\na=rtpmap:96 H264/90000\r\na=fmtp:96 packetization-mode=1;profile-level-id=4D0029;sprop-parameter-sets=XXXXXXXXXXXXXXXXXXXXXX\r\na=control:track1\r\n" }, "applicationProperties": { "dataVersion": "1.0", "topic": "/subscriptions/{subscriptionID}/resourceGroups/{name}/providers/microsoft.media/videoanalyzers/{ava-account-name}", "subject": "/edgeModules/avaedge/livePipelines/Sample-Pipeline-1/sources/rtspSource", "eventType": "Microsoft.VideoAnalyzers.Diagnostics.MediaSessionEstablished", "eventTime": "2021-04-09T09:42:18.1280000Z" } }</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>The above points are mentioned so as to show how some of the expected outputs look like. After all that, lets see how exactly you can create a foundation for your AI NVR.</P> <P>&nbsp;</P> <H4>Implementation</H4> <P><SPAN>In this section we will see how we can use these tools to our benefit. For the Azure resources I may not go through the entire creation or installation process as there are quite a few articles on the internet for doing those. I shall only mention the main things to look out for. 
Here is an outline of the steps involved in the implementation.</SPAN></P> <P>&nbsp;</P> <OL> <LI><STRONG>Create a resource group in Azure</STRONG><SPAN>&nbsp;</SPAN>(<A title="link" href="#" target="_blank" rel="noopener noreferrer">link</A>)</LI> <LI><STRONG>Create a IoT hub in Azure</STRONG>&nbsp;(<A title="link" href="#" target="_blank" rel="noopener noreferrer">link</A>)</LI> <LI><STRONG>Create a IoT Edge device in Azure</STRONG>&nbsp;(<A title="link" href="#" target="_blank" rel="noopener noreferrer">link</A>)</LI> <LI><STRONG>Create and name a new user-assigned managed identity</STRONG> (<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI><STRONG>Create Azure Video Analyzer Account</STRONG> (<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI><STRONG>Create AVA Edge provisioning token</STRONG></LI> <LI><STRONG>Install Ubuntu 18/20 on the edge device</STRONG></LI> <LI><STRONG>Prepare the device for AVA module</STRONG> (<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI><STRONG>Use Dev machine to turn on ONVIF camera(s) RTSP</STRONG> (<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI><STRONG>Set a local static IP for the camera(s)</STRONG> (<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI><STRONG>Use any of the players to confirm input streaming video</STRONG> (<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI><STRONG>Note down RTSP url(s), username(s), and password(s)</STRONG></LI> <LI><STRONG>Install docker on the edge device</STRONG>&nbsp;</LI> <LI><STRONG>Install VSCode on development machine</STRONG>&nbsp;</LI> <LI><STRONG>Install IoT Edge runtime on the edge device</STRONG>&nbsp;(<A title="link" href="#" target="_blank" rel="noopener noreferrer">link</A>)</LI> <LI><STRONG>Provision the device to Azure IoT using connection string</STRONG>&nbsp;(<A title="link" href="#" target="_blank" rel="noopener noreferrer">link</A>)</LI> <LI><STRONG>Check IoT edge Runtime is running good on the edge device and portal</STRONG>&nbsp;</LI> <LI><STRONG>Create an IoT Edge solution in VSCode</STRONG>&nbsp;(<A title="link" href="#" target="_blank" rel="noopener noreferrer">link</A>)</LI> <LI><STRONG>Add env file to solution with AVA/ACR/Azure details</STRONG></LI> <LI><STRONG>Add Intel OVMS, AVA Edge, and RTSP Simulator modules to manifest</STRONG>&nbsp;</LI> <LI><STRONG>Create deployment from template</STRONG>&nbsp;(<A title="link" href="#" target="_blank" rel="noopener noreferrer">link</A>)</LI> <LI><STRONG>Deploy the solution to the device</STRONG>&nbsp;</LI> <LI><STRONG>Check Azure portal for deployed modules running</STRONG></LI> </OL> <P>&nbsp;</P> <P>Lets go some of the items in the list in details.</P> <P>&nbsp;</P> <P>Steps <STRONG>1</STRONG> and <STRONG>2</STRONG> are common steps in many use cases and can be done by following this. For <STRONG>3</STRONG> you need to make sure you are creating an '<STRONG>IoT Edge</STRONG>' device and not a simple IoT device. Follow the link for <STRONG>4</STRONG> to create a <STRONG>managed</STRONG> <STRONG>identity</STRONG>. For <STRONG>5</STRONG> use the interface to create an <STRONG>AVA account</STRONG>. E<SPAN>nter a <STRONG>name</STRONG> for your Video Analyzer account. The name must be all <STRONG>lowercase</STRONG> letters or numbers with no spaces, and <STRONG>3 to 24 characters</STRONG> in length. 
Fill in the proper <STRONG>subscription</STRONG>, <STRONG>resource group</STRONG>, <STRONG>storage account,</STRONG> and <STRONG>identity</STRONG> from previous steps. You should now have a running AVA account. Use these steps to create the '<STRONG>Edge Provisioning Token</STRONG>' for step <STRONG>6</STRONG>. Remember, this is just for AVA Edge, not to be confused with provisioning through DPS. For <STRONG>7</STRONG>, Ubuntu <STRONG>Linux</STRONG> is good; support for this on Windows is a work in progress. After you create the account, keep the following information on standby.</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="json">AVA_PROVISIONING_TOKEN="&lt;Provisioning token&gt;" </LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P><SPAN>Step <STRONG>8</STRONG>, although simple, is an <STRONG>important</STRONG> step in the process. All you actually need to do is run the command below.</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="bash">bash -c "$(curl -sL https://aka.ms/ava-edge/prep_device)"</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>However, underneath this there is a lot going on in preparation for the NVR.&nbsp;<SPAN>The Azure Video Analyzer module should be configured to run on the IoT Edge device with a <STRONG>non-privileged local user</STRONG> account. The module needs certain local <STRONG>folders for storing application configuration data</STRONG>. The RTSP camera <STRONG>simulator</STRONG> module needs video files with which it can synthesize a <STRONG>live video</STRONG> feed.&nbsp;The <STRONG>prep-device script</STRONG> in the above command <STRONG>automates</STRONG> the tasks of creating input and configuration folders, downloading video input files, and creating user accounts with correct privileges.&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN>Steps <STRONG>9</STRONG>,<STRONG>10</STRONG>, and <STRONG>11</STRONG> are for setting up your <STRONG>ONVIF</STRONG> camera(<STRONG>s</STRONG>). Things to note here are that you need to set <STRONG>static class C IP</STRONG> addresses for each camera, and set the <STRONG>https</STRONG> protocol along with <STRONG>difficult-to-guess passwords</STRONG>. Again, take extra caution if you are doing this with a wireless camera. I use <STRONG>VLC</STRONG> to confirm the live camera feed from each camera. You may think this is obvious or choose to automate this, but I have seen a lot of issues either way. I personally recommend that clients <STRONG>confirm the feed/frame rate from every camera</STRONG> manually using the URLs. VLC is my player of choice, but you have many more choices.&nbsp;</SPAN></P> <P>&nbsp;</P> <P>Before you bring Azure into the picture, you must have all your <STRONG>RTSP URLs ready</STRONG> and tested in step <STRONG>12</STRONG>. Here is an example RTSP URL of the <STRONG>main</STRONG> feed. Notice the port number '<STRONG>554</STRONG>' and encoding '<STRONG>h264</STRONG>'.&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="markup">rtsp://username:difficultpassword@192.168.0.35:554//h264Preview_01_main</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>For <STRONG>13</STRONG> to <STRONG>18</STRONG> keep going by the book (links). 
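If it helps, the device-side portion of steps <STRONG>15</STRONG> and <STRONG>16</STRONG> boils down to a handful of commands. The sketch below assumes Ubuntu 18.04 and IoT Edge 1.2 (adjust the package list URL and package names for your distro and runtime version), with the connection string copied from the IoT Edge device created in step <STRONG>3</STRONG>.</P> <P>&nbsp;</P> <LI-CODE lang="bash"># Register the Microsoft package repository (Ubuntu 18.04 shown; pick the list matching your distro)
curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/

# Install the container engine and the IoT Edge runtime
sudo apt-get update
sudo apt-get install -y moby-engine aziot-edge

# Provision the device with the connection string of your IoT Edge device
sudo iotedge config mp --connection-string 'HostName=&lt;your-hub&gt;.azure-devices.net;DeviceId=&lt;your-device-id&gt;;SharedAccessKey=&lt;your-key&gt;'
sudo iotedge config apply

# Confirm the runtime is healthy before deploying modules
sudo iotedge system status
sudo iotedge check</LI-CODE> <P>&nbsp;</P> <P>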
For step <STRONG>19</STRONG>, fill in your details in the following block and create the '<STRONG>env</STRONG>' file.</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="bash">SUBSCRIPTION_ID="&lt;Subscription ID&gt;" RESOURCE_GROUP="&lt;Resource Group&gt;" AVA_PROVISIONING_TOKEN="&lt;Provisioning token&gt;" VIDEO_INPUT_FOLDER_ON_DEVICE="/home/localedgeuser/samples/input" VIDEO_OUTPUT_FOLDER_ON_DEVICE="/var/media" APPDATA_FOLDER_ON_DEVICE="/var/lib/videoAnalyzer" CONTAINER_REGISTRY_USERNAME_myacr="&lt;your container registry username&gt;" CONTAINER_REGISTRY_PASSWORD_myacr="&lt;your container registry password&gt;"</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>For <STRONG>20</STRONG> add the following <STRONG>module definitions</STRONG> in your deployment json. This will cover Azure <STRONG>AVA</STRONG>, Intel <STRONG>OVMS</STRONG>, and <STRONG>RTSP</STRONG> Simulator. Also follow this for more details.</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="json">"modules": { "avaedge": { "version": "1.1", "type": "docker", "status": "running", "restartPolicy": "always", "settings": { "image": "mcr.microsoft.com/media/video-analyzer:1", "createOptions": { "Env": [ "LOCAL_USER_ID=1010", "LOCAL_GROUP_ID=1010" ], "HostConfig": { "Dns": [ "1.1.1.1" ], "LogConfig": { "Type": "", "Config": { "max-size": "10m", "max-file": "10" } }, "Binds": [ "$VIDEO_OUTPUT_FOLDER_ON_DEVICE:/var/media/", "$APPDATA_FOLDER_ON_DEVICE:/var/lib/videoanalyzer" ] } } } }, "openvino": { "version": "1.0", "type": "docker", "status": "running", "restartPolicy": "always", "settings": { "image": "marketplace.azurecr.io/intel_corporation/open_vino:latest", "createOptions": { "HostConfig": { "Dns": [ "1.1.1.1" ] }, "ExposedPorts": { "4000/tcp": {} }, "Cmd": [ "/ams_wrapper/start_ams.py", "--ams_port=4000", "--ovms_port=9000" ] } } }, "rtspsim": { "version": "1.0", "type": "docker", "status": "running", "restartPolicy": "always", "settings": { "image": "mcr.microsoft.com/lva-utilities/rtspsim-live555:1.2", "createOptions": { "HostConfig": { "Dns": [ "1.1.1.1" ], "LogConfig": { "Type": "", "Config": { "max-size": "10m", "max-file": "10" } }, "Binds": [ "$VIDEO_INPUT_FOLDER_ON_DEVICE:/live/mediaServer/media" ] } } } } }</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P><STRONG>21</STRONG> to <STRONG>23</STRONG> are again the usual steps for all IoT solutions and once you <STRONG>deploy</STRONG> the template, you should have the following modules <STRONG>running</STRONG> as below.</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="WhatsApp Image 2021-08-20 at 10.20.07 PM.jpeg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/304832i914904803F84DD07/image-size/large?v=v2&amp;px=999" role="button" title="WhatsApp Image 2021-08-20 at 10.20.07 PM.jpeg" alt="WhatsApp Image 2021-08-20 at 10.20.07 PM.jpeg" /></span></P> <P>&nbsp;</P> <P>There, we have created the foundation for our Azure IoT Edge device to perform as a powerful AI NVR. Here '<STRONG>avaedge</STRONG>' is the <STRONG>Azure</STRONG> Video Analyzer service, '<STRONG>openvino</STRONG>' provides the <STRONG>model server</STRONG> extension, and '<STRONG>rtspsim</STRONG>' creates the simulated 'live' input <STRONG>video</STRONG> feed. 
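You can also confirm the same thing from the device itself; assuming the module names in the manifest above, a quick sanity check looks like this:</P> <P>&nbsp;</P> <LI-CODE lang="bash"># The IoT Edge runtime should list avaedge, openvino, and rtspsim as running
sudo iotedge list

# The same modules are visible as containers to the container engine
sudo docker ps</LI-CODE> <P>&nbsp;</P> <P>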
In the next article we will see how we can use this setup to detect faces or maybe cars and stuff.</P> <P>&nbsp;</P> <H4><EM>Future Work</EM></H4> <P>&nbsp;</P> <P>I hope you enjoyed this <SPAN style="font-family: inherit;">article on setting up an AI enabled NVR for video analytics</SPAN><SPAN style="font-family: inherit;">&nbsp;</SPAN><SPAN style="font-family: inherit;">application. We love to share our experiences and get feedback from the community as to how we are doing. Look out for upcoming articles and have a great time with Microsoft Azure.</SPAN></P> <P>To learn more about Microsoft apps and services, contact us at<SPAN>&nbsp;</SPAN><A title="contact@abersoft.ca" href="https://gorovian.000webhostapp.com/?exam=mailto:contact@abersoft.ca" target="_blank" rel="noopener nofollow noreferrer">contact@abersoft.ca</A><SPAN>&nbsp;</SPAN>or 1-833-455-1850!</P> <P>&nbsp;</P> <P>Please follow us here for regular updates:<SPAN>&nbsp;</SPAN><A title="https://lnkd.in/gG9e4GD" href="#" target="_blank" rel="noopener nofollow noreferrer">https://lnkd.in/gG9e4GD</A><SPAN>&nbsp;</SPAN>and check out our website<SPAN>&nbsp;</SPAN><A title="https://abersoft.ca/" href="#" target="_blank" rel="noopener nofollow noreferrer">https://abersoft.ca/</A><SPAN>&nbsp;</SPAN>for more information!</P> Mon, 13 Sep 2021 15:01:25 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/design-an-ai-enabled-nvr-with-ava-edge-and-intel-openvino/ba-p/2576473 KaushikRoy 2021-09-13T15:01:25Z The Moment: Azure Percept, Stream Analytics, and PowerBI all work together! https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/the-moment-azure-percept-stream-analytics-and-powerbi-all-work/ba-p/2660912 <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/SanX1mVc5oY?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="#PowerBI Stream data from #AzurePercept # IOT thru #Stream Analytics"></IFRAME></DIV> <H1>&nbsp;</H1> <H1>Making a Positive Impact in the World?&nbsp; How the Azure Percept made this possible!</H1> <P>&nbsp;</P> <P>About 4 months ago I was introduced to the Azure Percept, and I first learned about Edge Intelligence in a meaningful way.&nbsp; Before this, Edge Intelligence was just one of those wonderful ideas that I couldn't figure out an application for.&nbsp; Wow did things change for me!</P> <P>&nbsp;</P> <P>You can read about the Azure Percept and purchase a kit here :&nbsp;<A href="#" target="_blank" rel="noopener">The Azure Percept Page</A>&nbsp;. But I am a visual learner and I like to learn by actually applying the technology to real life projects and solve real life problems. So what is the Azure Percept for me?&nbsp; I had three visions in mind.&nbsp;</P> <P>&nbsp;</P> <P>My first vision is to <STRONG>take data from an IoT device, stream the data in real time to a dashboard, and then watch the data update in real time</STRONG>. The Azure Percept allowed me to this without having coding/developer background. This system allowed me to learn about so many Azure resources and connect them together.</P> <P>&nbsp;</P> <P>My second vision is to <STRONG>learn about Edge Intelligence and Vision AI</STRONG> and apply these concepts. 
At the end of this article, I have videos about two of my projects where I solve real world problems with the Azure Percept. The Maria Project, I am especially proud of because the project creates a customized sign language interpreter for people with range of motion constraints. The ABBy Project is my bold venture to bridge an Ancient Art Form (Ballet) with cutting edge technology (Vision AI/Edge Intelligence) and create an educational Art Exhibit.</P> <P>&nbsp;</P> <P>My third vision is to <STRONG>collect better data</STRONG>.&nbsp; After 3 years working with PowerBI, building star schema data models, and building beautiful data visualizations, I realize that the data collection side of things is not as developed as the output/visualization side.&nbsp; There is a lot of dirty data out there and a lot of data is stored and never looked at again.&nbsp; My dream is to collect better data (send alerts on the most important data), reduce storage size (ie not store video but store insights about the video), allow data to be collected even when there is no internet (remote areas, low signal areas), and allow faster response time to important trends in the data.&nbsp; Edge Intelligence offered hope to solve these issues!&nbsp; I was hooked and ready to learn more.</P> <P>&nbsp;</P> <P>The biggest issue for me is that I am not a developer, so accomplishing the dreams above was going to be a challenge.&nbsp; How was I going to bridge the gap?&nbsp; Along came the Azure Percept which within a few months, allowed me to apply Edge Intelligence, Stream Analytics, and Real Time PowerBI dashboards.&nbsp; The speed at which I was able to learn the new concepts and apply them is just incredible.&nbsp; I was able to make a positive impact to people's lives, which I never imagined was possible a year ago.&nbsp; I want to give back to the community and let others discover the incredible possibilities made possible by the Azure Percept (even if you aren't a developer).&nbsp; &nbsp;This is my second blog post, here is another one with more videos of my early experiments:&nbsp;<A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/face-your-fears-learn-new-stuff-and-push-out-on-the-edges-with/ba-p/2413280" target="_blank" rel="noopener">Fun Projects with the Azure Percept</A>&nbsp;</P> <P>&nbsp;</P> <H1>High Level Steps: Connect The Azure Percept to PowerBI</H1> <P>&nbsp;</P> <UL> <LI>Steps I took to make this happen: <UL> <LI>Route Azure Percept vision recognition data to an IoT Hub</LI> <LI>Route IoT Hub data through Stream Analytics</LI> <LI>Parse the JSON file into manageable fields using Stream Analytics Query Language</LI> <LI>Feed the parsed data into PowerBI without cost overruns</LI> <LI>And watch data update in PowerBI every second!</LI> </UL> </LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="CharlesElwood_0-1629296337663.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/304099i714AFFC978901DB2/image-size/large?v=v2&amp;px=999" role="button" title="CharlesElwood_0-1629296337663.png" alt="CharlesElwood_0-1629296337663.png" /></span></P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/mOesH79mqC0?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; 
left: 0; width: 100%; height: 100%;" class="video-iframe" title="The TOP 3 TIPS and TRICKS for Azure Percept and IoT HUB Setup"></IFRAME></DIV> <P>&nbsp;</P> <P>&nbsp;</P> <UL> <LI>My first issue: <UL> <LI>The Azure Percept generates a ton of data but how do you see the data that is being generated? <UL> <LI>I learned that there is a <STRONG>View Live Telemetry view in the Azure Percept Studio</STRONG> to do just that!</LI> </UL> </LI> </UL> </LI> </UL> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="CharlesElwood_1-1629296392738.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/304100iBBC9DE2F1A076046/image-size/large?v=v2&amp;px=999" role="button" title="CharlesElwood_1-1629296392738.png" alt="CharlesElwood_1-1629296392738.png" /></span></P> <H1>The IoT Hub Connection:</H1> <UL> <LI>Routing Data through IoT Hub: <UL> <LI>Luckily, the Out-of-Box experience (Setup Wizard) on the Azure Percept creates an IoT Hub for you and connects your Edge Device to the Hub.</LI> <LI>Now that the IoT HUB was connected, I needed to verify that messages were traveling through the HUB. <UL> <LI>I learned that in the <STRONG>Overview tab, you can scroll down through the overview page and see a chart of messages going from device to the cloud.</STRONG></LI> </UL> </LI> </UL> </LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="CharlesElwood_2-1629296438283.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/304101i4A70BCF75D804700/image-size/large?v=v2&amp;px=999" role="button" title="CharlesElwood_2-1629296438283.png" alt="CharlesElwood_2-1629296438283.png" /></span></P> <H1>The Stream Analytics Connection:</H1> <UL> <LI>This is now my favorite part of the whole system, but the most difficult for me to learn. <OL> <LI>I created a New Stream Analytics job and now had to connect the input to the IOT HUB. 
<OL> <LI><STRONG>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; </STRONG>This was surprisingly easy to do under the <STRONG>Job Topology -&gt; Inputs -&gt; + Add Stream Input -&gt; IOT HUB</STRONG>&nbsp;</LI> </OL> </LI> </OL> </LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="CharlesElwood_3-1629296512257.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/304102iA680A3B69EA8ED91/image-size/large?v=v2&amp;px=999" role="button" title="CharlesElwood_3-1629296512257.png" alt="CharlesElwood_3-1629296512257.png" /></span></P> <OL> <LI>Now I need to connect the Output side to a Workspace in PowerBI <OL> <LI>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Similar steps as above: <STRONG>Job Topology -&gt; Outputs -&gt; + Add -&gt; PowerBI</STRONG></LI> </OL> </LI> </OL> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="CharlesElwood_4-1629296543484.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/304103i92B2C6218E81585A/image-size/large?v=v2&amp;px=999" role="button" title="CharlesElwood_4-1629296543484.png" alt="CharlesElwood_4-1629296543484.png" /></span></P> <P>&nbsp;</P> <OL> <LI>Now I wanted to see if all the data would travel between the input and outputs <OL> <LI>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; In the Query editor: <OL> <LI>I used the FROM section to define the input</LI> <LI>I used the INTO section to define the output</LI> <LI>And to test I selected everything from the input by using * in the INPUT section</LI> </OL> </LI> <LI>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; I then turned on my Percept and waited for the Input Preview section to show incoming data.</LI> </OL> </LI> </OL> <UL> <LI>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Then I used the <STRONG>Test Query button and then clicked on Test Results to see output data</STRONG> from the query =)</img></LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="CharlesElwood_5-1629296669197.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/304105i00FF95B24D09DFE3/image-size/large?v=v2&amp;px=999" role="button" title="CharlesElwood_5-1629296669197.png" alt="CharlesElwood_5-1629296669197.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <OL> <LI>Let’s see what is being pushed into PowerBI <OL> <LI>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Open up your workspace</LI> <LI>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Look in Datasets, and when Data starts pushing into the Dataset, the set will magically appear.</LI> </OL> </LI> </OL> <UL> <LI>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Then Create a report from the dataset and drag all the fields into your PowerBI canvas.</LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="CharlesElwood_6-1629296725338.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/304106i4A233CDF1825BFA5/image-size/large?v=v2&amp;px=999" role="button" title="CharlesElwood_6-1629296725338.png" alt="CharlesElwood_6-1629296725338.png" /></span></P> <H1>Stream Analytics: The Cool Part:</H1> <UL> <LI>Now for the cool part!&nbsp; <UL> <LI>If we passed everything (*) into PowerBI, some of the data shows up as “array”</LI> <LI>So, I wanted to Parse the JSON to break out the individual recognized pieces of data. 
<UL> <LI>Note: a year ago I didn’t have any idea how JSON worked, so if I can figure it out, you can too.</LI> </UL> </LI> <LI>Copy and paste the following code into the Stream Analytics Edit Query window. <UL> <LI>I’ll admit, I didn’t know how any of this worked at the beginning, but I walked through each command in the Microsoft documentation and slowly figured each part out.&nbsp;</LI> <LI><A href="#" target="_blank" rel="noopener">Stream Analytics Query Language Documentation</A></LI> </UL> </LI> </UL> </LI> </UL> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="sql">SELECT
    Percept.ArrayValue.label,
    Percept.ArrayValue.confidence,
    GetArrayElement(Percept.ArrayValue.bbox, 0) AS bbox0,
    GetArrayElement(Percept.ArrayValue.bbox, 1) AS bbox1,
    GetArrayElement(Percept.ArrayValue.bbox, 2) AS bbox2,
    GetArrayElement(Percept.ArrayValue.bbox, 3) AS bbox3,
    Percept.ArrayValue.bbox,
    CAST (udf.main(Percept.ArrayValue.timestamp) as DateTime) as DETECTION_TIMESTAMP,
    Percept.ArrayValue.timestamp
INTO [Type in your PowerBI output here]
FROM [Type in your IoT HUB input here] as event
CROSS APPLY GetArrayElements(event.Neural_Network) AS Percept
WHERE CAST(Percept.ArrayValue.confidence as Float) &gt; 0.6</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="CharlesElwood_7-1629296759667.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/304107i53F4B57A77C50697/image-size/large?v=v2&amp;px=999" role="button" title="CharlesElwood_7-1629296759667.png" alt="CharlesElwood_7-1629296759667.png" /></span></P> <UL> <LI>A new problem arose.&nbsp; The timestamp was in a format I wasn’t used to.&nbsp; <UL> <LI>Some sample code to translate the time stamp was provided for a UDF (user defined function).</LI> <LI>I had no idea how this worked yet, but I tried copying it into the UDF section and it worked!&nbsp; Thanks to Kevin Saye for helping me with this part.</LI> </UL> </LI> </UL> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="javascript">function main(nanoseconds) {
    // Convert the nanosecond timestamp into seconds since the Unix epoch
    var epoch = nanoseconds * 0.000000001;
    // Build a Date from those epoch seconds and return it as an ISO 8601 string
    var d = new Date(0);
    d.setUTCSeconds(epoch);
    return (d.toISOString());
}</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="CharlesElwood_8-1629296892865.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/304108iEEBF91FBE193B5D5/image-size/large?v=v2&amp;px=999" role="button" title="CharlesElwood_8-1629296892865.png" alt="CharlesElwood_8-1629296892865.png" /></span></P> <UL> <LI>Copy the code above into a new function via <STRONG>Stream Analytics -&gt; Job Topology -&gt; Functions -&gt; add (call it “main”) -&gt; Javascript UDF </STRONG></LI> </UL> <H1>PowerBI: New Data: The “Hello World” Moment</H1> <UL> <LI><STRONG>This was one of the most exciting parts of the whole process for me!</STRONG> <UL> <LI>Log in to your PowerBI service again (you might have to delete the old dataset now that the incoming fields have changed)</LI> <LI>After a few minutes, a new dataset will appear with the same name as before. 
(if your Percept isn’t turned on or if stream analytics isn’t turned on, the dataset won’t magically appear) <UL> <LI><STRONG>Now create a report from this dataset</STRONG>, and then add all the new fields into a table visual.</LI> </UL> </LI> </UL> </LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="CharlesElwood_9-1629296921000.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/304109iDE11B8EF611EBAB7/image-size/large?v=v2&amp;px=999" role="button" title="CharlesElwood_9-1629296921000.png" alt="CharlesElwood_9-1629296921000.png" /></span></P> <H1>Cost estimation</H1> <UL> <LI>Cost in Azure is always a concern for me, but I finally figured how to use the cost estimator combined with the resource utilization to get a good estimate for projected costs. <UL> <LI>I went to the Azure calculator website, picked the default 3 streaming units, and projected that I would be running Stream Analytics for about 21 days.</LI> <LI>The cost was $166, so now I had to figure out how much load I was placing on the streaming units.</LI> <LI><STRONG><A href="#" target="_blank" rel="noopener">https://azure.microsoft.com/en-us/pricing/calculator/</A></STRONG></LI> </UL> </LI> </UL> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="CharlesElwood_10-1629296948588.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/304110iB8E049D13F26AAE8/image-size/large?v=v2&amp;px=999" role="button" title="CharlesElwood_10-1629296948588.png" alt="CharlesElwood_10-1629296948588.png" /></span></P> <H1>Monitor Your Usage</H1> <UL> <LI>Now that you are streaming data, and have an estimate on costs, its time to see how much of the stream analytics resource you are using. <UL> <LI>I went into my Stream Analytics Job and scrolled down through the overview section to the charts. <UL> <LI>The chart on the left shows me how many input messages are going into stream analytics and how many are leaving stream analytics to go into the PowerBI report.</LI> <LI><STRONG>The chart on the right is really interesting because I am only using 14% of my Streaming Units (STUs).</STRONG>&nbsp; I am nowhere near the 80% recommended resource utilization, so I plan to scale this back to 1 streaming unit.</LI> </UL> </LI> </UL> </LI> </UL> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="CharlesElwood_11-1629296980915.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/304111iA9CCAE9194100C66/image-size/large?v=v2&amp;px=999" role="button" title="CharlesElwood_11-1629296980915.png" alt="CharlesElwood_11-1629296980915.png" /></span></P> <H1>Visualizing the Data in Real Time</H1> <UL> <LI>Okay, not real time, but close!&nbsp; I noticed a 5 second delay between when I moved in front of the Percept to when my PowerBI dashboard showed the update. 
<UL> <LI>Here are the steps I took: <UL> <LI>I found the workspace where I was streaming to in Powerbi.com service.</LI> <LI>I scrolled to the dataset section and waited a few seconds for data to be pushed into the dataset, and then the dataset magically appeared.</LI> <LI>I then used the ellipses next to the dataset to create a report with visualizations to my liking.</LI> <LI>I then saved the report, and then on each visualization I wanted to update in real time, I pinned the visual to a dashboard.&nbsp;</LI> <LI><STRONG>Then I went to the dashboard and WOW, the data was streaming in and updating the visuals every second!</STRONG></LI> </UL> </LI> </UL> </LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="CharlesElwood_12-1629297011424.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/304112i044B9C485B707C2D/image-size/large?v=v2&amp;px=999" role="button" title="CharlesElwood_12-1629297011424.png" alt="CharlesElwood_12-1629297011424.png" /></span></P> <P>&nbsp;</P> <UL> <LI>The full video series: How to connect the Azure Percept to a Real Time Streaming PowerBI Dashboard: <OL> <LI><A href="#" target="_blank" rel="noopener">Connecting the Azure Percept to IoT HUB to Stream Analytics to PowerBI</A>&nbsp;</LI> <LI><A href="#" target="_blank" rel="noopener">Top 3 Tips to troubleshooting the Connection from Azure Percept to the IoT HUB</A>&nbsp;</LI> <LI>Coming Soon : Setting Up Stream Analytics with Azure Percept Data</LI> <LI>Coming Soon : Setting Up PowerBI live streaming Dashboards with Azure Percept Data</LI> </OL> </LI> </UL> <H1>Where I go from here? Dreaming Big and Improving People's Lives</H1> <H3><STRONG>The Maria Project</STRONG> — Customized Sign Language Interpreter: Removing barriers to communication&nbsp;</H3> <P>&nbsp;</P> <OL> <LI><U>&nbsp;The Problem Statement</U>: Children with Special Needs like Maria in the video can’t sign in sign language like everybody else.</LI> <LI><U>The Solution</U>: Use the Azure Percept to create a custom sign language translator for Maria to accommodate her range of motion.</LI> <LI><U>The Result</U>:<STRONG>&nbsp;Improved Quality of life for children all over the world who struggle with sign language </STRONG></LI> </OL> <P>Video about the Project here:</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/JwQQeeU-ip4?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="Custom #AI Sign Language Interpreter for Maria in Puerto Rico"></IFRAME></DIV> <H3><STRONG>The ABBy Project</STRONG> — Combining Ballet and Vision AI:&nbsp;My Bold Project to build a Bridge</H3> <OL> <LI><U>The Problem Statement</U>:&nbsp;<STRONG>People have heard of AI but very few have trained an AI model.</STRONG></LI> <LI><U>The Solution</U>: I entered an Art Festival called ArtPrize in Grand Rapids.&nbsp; <OL> <LI>I plan to have an Art Piece at the event called The ABBy : Ballet + AI + Bridge Project</LI> <LI>Four Percepts will be feeding vision AI data to webstreams and PowerBI reports that will show bounding boxes of recognized ballet steps that are mapped to emotions.</LI> <LI>On the weekends, members of the crowd will work with a ballet dancer to create new ballet steps that 
they map to an emotion they are feeling.</LI> <LI>They will walk through the training process to train the AI module, and then their ballet step/emotion becomes a part of the Art.</LI> </OL> </LI> <LI><U>The Result</U>:&nbsp;<STRONG>Democratizing AI and Edge Intelligence by showing people how to train edge intelligence devices.</STRONG></LI> </OL> <P>Videos about the Project here:</P> <P>&nbsp;</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/ddI4HP15vNo?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="World First #Ballet and Artificial Intelligence (#AI) #Art : Your own body language interpreter?"></IFRAME></DIV> <P>&nbsp;</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/E5L4j7Bvfeo?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="#ArtPrize 2021 : An Project to Connect #AzurePercept #AI to Ballet"></IFRAME></DIV> <P>&nbsp;</P> <H1>What are you waiting for? Join the adventure, Get your own Percept here:</H1> <P>&nbsp;</P> <P><STRONG>Learn about Azure Percept&nbsp; </STRONG></P> <P><A href="#" target="_blank" rel="noopener">AZURE.COM page</A><U><BR /></U><A href="#" target="_blank" rel="noopener">Product detail pages</A><U><BR /></U><A href="#" target="_blank" rel="noopener">Pre-built AI models</A><U><BR /></U><A href="#" target="_blank" rel="noopener">Azure Percept - YouTube</A></P> <P>&nbsp;</P> <P><STRONG>Purchase Azure Percept</STRONG> <BR />Available to our customers – <A href="#" target="_blank" rel="noopener">Build your Azure Percept</A></P> Mon, 23 Aug 2021 15:34:22 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/the-moment-azure-percept-stream-analytics-and-powerbi-all-work/ba-p/2660912 CharlesElwood 2021-08-23T15:34:22Z Troubleshooting blank frames on Azure Percept Device Kit https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/troubleshooting-blank-frames-on-azure-percept-device-kit/ba-p/2642760 <P class="c4"><FONT size="5"><SPAN>Introduction</SPAN></FONT></P> <P class="c4">&nbsp;</P> <P class="c4"><SPAN>I recently bought&nbsp;</SPAN><SPAN class="c2">an<SPAN>&nbsp;</SPAN></SPAN><SPAN class="c11 c13"><A class="c3" href="#" target="_blank" rel="noopener">Azure Percept Development Kit</A></SPAN><SPAN class="c2">.</SPAN><SPAN class="c2">&nbsp;The Out-of-Box experience is easy; from setup, connecting to Azure cloud and getting AI running on video frames, it’s seamless. I followed the<SPAN>&nbsp;</SPAN></SPAN><SPAN class="c9 c11"><A class="c3" href="#" target="_blank" rel="noopener">Quickstart guide</A></SPAN><SPAN class="c2">&nbsp;and<SPAN>&nbsp;</SPAN></SPAN><SPAN class="c9 c11"><A class="c3" href="#" target="_blank" rel="noopener">Setup</A></SPAN><SPAN class="c2">&nbsp;experience and I’m running in no time.<SPAN>&nbsp;</SPAN></SPAN><SPAN class="c0"><STRONG>Azure Percept Studio</STRONG> monitors and manages edge devices remotely. 
Everything was working fine until one day, when I checked the web stream on Azure Percept Studio, it was receiving blank frames. I am happy that there are resources to help troubleshoot the Azure Percept Device Kit.</SPAN></P> <P class="c4"><SPAN>Here’s my recent experience troubleshooting the dev kit and what I did to resolve an issue.</SPAN></P> <P class="c4">&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_0-1628800330770.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302875i160B400FC376DDCC/image-size/large?v=v2&amp;px=999" role="button" title="RonDagdag_0-1628800330770.png" alt="RonDagdag_0-1628800330770.png" /></span></P> <P>&nbsp;</P> <P class="c1"><FONT size="5">Troubleshooting</FONT></P> <P class="c1">&nbsp;</P> <P class="c4"><SPAN class="c0">Once logged into Azure Percept Studio, the Devices tab listed all registered Percept devices. Here is where I checked the status of the device. If the device status is connected, it means Azure can send and receive messages from the device. If the device is off or disconnected from the internet, the device status will be disconnected.</SPAN></P> <P class="c1">&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_3-1628799766382.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302858i18D1F0CA30D40A76/image-size/large?v=v2&amp;px=999" role="button" title="RonDagdag_3-1628799766382.png" alt="RonDagdag_3-1628799766382.png" /></span></P> <P>&nbsp;</P> <P class="c1">&nbsp;</P> <P class="c4"><SPAN class="c0">I can click on the device name, in this case percept-device-01, and get more information about the device. Go to the Vision tab to view the device stream. After a few seconds, it will prompt the link to view the video stream. Note: your laptop has to be on the same network as the device so it can view the video stream.</SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_4-1628799766119.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302857i02A54B2E3BCFA92C/image-size/medium?v=v2&amp;px=400" role="button" title="RonDagdag_4-1628799766119.png" alt="RonDagdag_4-1628799766119.png" /></span></P> <P>&nbsp;</P> <P class="c1">&nbsp;</P> <P class="c4"><SPAN class="c0">The device stream typically shows the camera feed. This time, though, the image was blank and not receiving any video frames. The camera is pointed at something, but the stream is blank. Uh-oh, at this point I’m not sure what happened.</SPAN></P> <P class="c1">&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_5-1628799765945.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302860i4D16077E9BA792E1/image-size/medium?v=v2&amp;px=400" role="button" title="RonDagdag_5-1628799765945.png" alt="RonDagdag_5-1628799765945.png" /></span></P> <P>&nbsp;</P> <P class="c1">&nbsp;</P> <P class="c4"><SPAN class="c0">I clicked the <EM>“View Live Telemetry”</EM> button and checked if Azure was receiving telemetry results from the device. 
I waited for a few minutes, it’s still empty.</SPAN></P> <P class="c4">&nbsp;</P> <P class="c4"><SPAN class="c0">Then I clicked on the <EM>“Open device in IoT Hub”</EM> button.</SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_7-1628799766492.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302862i0BCE894B3D7A77F2/image-size/large?v=v2&amp;px=999" role="button" title="RonDagdag_7-1628799766492.png" alt="RonDagdag_7-1628799766492.png" /></span></P> <P>&nbsp;</P> <P class="c1 c14">&nbsp;</P> <P class="c4"><SPAN>It opened up the IoT Hub page and then I saw the azureeyemodule runtime status is getting an error. I read through the troubleshooting guide for&nbsp;</SPAN><SPAN class="c9"><A class="c3" href="#" target="_blank" rel="noopener">Vision Solution</A></SPAN><SPAN class="c0">&nbsp;and started to research about azureeyemodule; it’s the IoT module responsible for running the AI workload on the Percept DK. It is responsible for sending those live telemetry data. &nbsp;</SPAN></P> <P class="c4"><SPAN>Open source on github at&nbsp;</SPAN><SPAN class="c9"><A class="c3" href="#" target="_blank" rel="noopener">https://github.com/microsoft/azure-percept-advanced-development/</A></SPAN></P> <P class="c1">&nbsp;</P> <P class="c4"><SPAN class="c0">Why is it having an error? &nbsp;There’s a <EM>“Troubleshoot”</EM> button so I can see logs from the IoT module. I selected azureeyemodule and clicked on the <EM>“Restart azureeyemodule”</EM>. Clicked the <EM>“Refresh”</EM> button to view the logs. It gave me a little clue on what’s happening on the device. It might be the device itself.</SPAN></P> <P class="c1">&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_8-1628799766584.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302865iB2FE6A499D4402F5/image-size/large?v=v2&amp;px=999" role="button" title="RonDagdag_8-1628799766584.png" alt="RonDagdag_8-1628799766584.png" /></span></P> <P>&nbsp;</P> <P class="c1">&nbsp;</P> <P class="c4"><SPAN>I read through the&nbsp;</SPAN><SPAN class="c9"><A class="c3" href="#" target="_blank" rel="noopener">Azure Percept DK troubleshooting guide</A></SPAN><SPAN>&nbsp;and&nbsp;</SPAN><SPAN>found out that I can SSH into it. I followed this guide:&nbsp;</SPAN><SPAN class="c9"><A class="c3" href="#" target="_blank" rel="noopener">Connect to your Azure Percept DK over SSH</A></SPAN><SPAN class="c0">. I know the ip address of the dev kit because of the webstream viewer. During the Azure Percept Dev Kit setup, it asked to assign username and password. That’s what I used to login.</SPAN></P> <P class="c1">&nbsp;</P> <P class="c4"><SPAN class="c0">Interesting to learn that it’s using CBL-Mariner linux distribution. 
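<P class="c1">&nbsp;</P> <P class="c4"><SPAN class="c0">For reference, the checks described in this section boil down to a handful of commands once you are connected over SSH. This is only a sketch of the kind of thing you can run: the IP address and username are placeholders from your own setup, and azureeyemodule is the module being investigated.</SPAN></P> <P class="c1">&nbsp;</P> <LI-CODE lang="bash"># Connect to the dev kit (placeholders: use the IP and account from your setup)
ssh yourusername@192.168.0.20

# Then, on the device: list every IoT Edge module and its runtime status
sudo iotedge list

# Show recent logs from the failing module for more detail
sudo iotedge logs azureeyemodule --tail 50

# Restart the module (same effect as the Restart button in the portal)
sudo iotedge restart azureeyemodule

# Check that the OS can see the USB-connected camera hardware
usb-devices</LI-CODE> <P class="c1">&nbsp;</P>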
I ran the <EM>“iotedge list”</EM> command, which lists the status of each IoT Edge module.</SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_9-1628799766245.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302864i6A5E276F9A8A9E9A/image-size/medium?v=v2&amp;px=400" role="button" title="RonDagdag_9-1628799766245.png" alt="RonDagdag_9-1628799766245.png" /></span></P> <P class="c1">&nbsp;</P> <P class="c4"><SPAN class="c0">azureeyemodule had a status of failed, while everything else was running.</SPAN></P> <P class="c1">&nbsp;</P> <P class="c4"><SPAN class="c0">I checked whether the USB cable was properly connected to the camera. It was plugged in properly.</SPAN></P> <P class="c4"><SPAN class="c0">I verified that the operating system could see the camera. I used the <EM>‘usb-devices’</EM> command, which gave information about the Azure Eye SoM Controller.</SPAN></P> <P class="c4"><SPAN class="c0">This means the USB camera is connected properly.</SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_10-1628799766267.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302863i5CC66E13344BA975/image-size/medium?v=v2&amp;px=400" role="button" title="RonDagdag_10-1628799766267.png" alt="RonDagdag_10-1628799766267.png" /></span></P> <P>&nbsp;</P> <P class="c4"><SPAN class="c0">At this point, I was stuck. Not sure how to solve the issue. I took a break.</SPAN></P> <P class="c4">&nbsp;</P> <P class="c4"><FONT size="5"><SPAN class="c0">The Solution</SPAN></FONT></P> <P class="c4">&nbsp;</P> <P class="c4"><SPAN class="c0">After a few days, I checked the other cables that connect to the camera itself. I wiggled it a little bit and <STRONG>it was loose</STRONG>! The camera flex ribbon cable was <STRONG>disconnected</STRONG>. It doesn’t look obvious because the cable is under a massive heatsink.</SPAN></P> <P class="c1">&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_17-1628800080449.jpeg" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302873i0552F9F22BD17B25/image-size/medium?v=v2&amp;px=400" role="button" title="RonDagdag_17-1628800080449.jpeg" alt="RonDagdag_17-1628800080449.jpeg" /></span></P> <P>&nbsp;</P> <P class="c4"><SPAN class="c0">So I unscrewed the plate covers (4 screws). I’m really happy that the Azure Percept DK provides the hex tool.&nbsp; I plugged the camera into the right side of the board. I noticed there’s another plug on the left side; I wonder if it can actually process 2 cameras.</SPAN></P> <P class="c1">&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_16-1628800057061.jpeg" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302872iFD4CC70E8608E0E6/image-size/medium?v=v2&amp;px=400" role="button" title="RonDagdag_16-1628800057061.jpeg" alt="RonDagdag_16-1628800057061.jpeg" /></span></P> <P class="c1">&nbsp;</P> <P class="c4"><SPAN class="c0">After assembling it all back, I rebooted the machine. I got back to the Azure Percept Studio and checked the device status. 
Then checked the device in the IoT Hub Portal, azureeyemodule is running again.</SPAN></P> <P>&nbsp;</P> <P class="c4"><SPAN class="c0">I verified the webstream video is receiving video frames, and also live telemetry data.</SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RonDagdag_14-1628799766444.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302869i81998CAFFD0EEC2D/image-size/medium?v=v2&amp;px=400" role="button" title="RonDagdag_14-1628799766444.png" alt="RonDagdag_14-1628799766444.png" /></span></P> <P>&nbsp;</P> <P class="c1"><FONT size="5">Conclusion</FONT></P> <P class="c1">&nbsp;</P> <P class="c4"><SPAN class="c0">As a cloud engineer, my troubleshooting process is a little bit different. I started from the Azure cloud side, checked data communications between cloud and device first in Azure Percept Studio and IoT Hub; I logged into the edge device and checked the software module if it is working and then checked camera drivers. Then I checked the camera driver, then the camera itself. If I was a hardware engineer, most likely I would have checked the camera first and would have solved it sooner.</SPAN></P> <P class="c1">&nbsp;</P> <H2 id="h.yio8bb5hvk9t" class="c11 c15"><SPAN class="c10 c16">Resources</SPAN></H2> <UL> <LI class="c4 c11"><SPAN class="c7">Learn more about Azure Percept development:<SPAN>&nbsp;</SPAN></SPAN><SPAN class="c6"><A class="c3" href="#" target="_blank" rel="noopener">http://aka.ms/getazurepercept</A></SPAN><SPAN class="c7 c10">.</SPAN></LI> <LI class="c4 c11"><SPAN class="c7">Quick Start Guide:<SPAN>&nbsp;</SPAN></SPAN><SPAN class="c8"><A class="c3" href="#" target="_blank" rel="noopener">https://docs.microsoft.com/en-us/azure/azure-percept/quickstart-percept-dk-unboxing</A></SPAN></LI> <LI class="c4 c11"><SPAN class="c7">Setup Guide:<SPAN>&nbsp;</SPAN></SPAN><SPAN class="c8"><A class="c3" href="#" target="_blank" rel="noopener">https://docs.microsoft.com/en-us/azure/azure-percept/quickstart-percept-dk-set-up</A></SPAN><SPAN class="c7">&nbsp;</SPAN></LI> <LI class="c4 c11"><SPAN class="c7">Azure Percept DK setup experience troubleshooting guide:<SPAN>&nbsp;</SPAN></SPAN><SPAN class="c8"><A class="c3" href="#" target="_blank" rel="noopener">https://docs.microsoft.com/en-us/azure/azure-percept/how-to-troubleshoot-setup</A></SPAN></LI> <LI class="c4"><SPAN class="c7">General Azure Percept DK Troubleshooting guide:<SPAN>&nbsp;</SPAN></SPAN><SPAN class="c8"><A class="c3" href="#" target="_blank" rel="noopener">https://docs.microsoft.com/en-us/azure/azure-percept/troubleshoot-dev-kit</A></SPAN></LI> <LI class="c4"><SPAN>Vision solution troubleshooting:&nbsp;</SPAN><SPAN class="c9"><A class="c3" href="#" target="_blank" rel="noopener">https://docs.microsoft.com/en-us/azure/azure-percept/vision-solution-troubleshooting</A><BR /><BR /></SPAN></LI> </UL> <H2><FONT size="4">Ron Dagdag</FONT></H2> <H4><FONT size="3">Lead Software Engineer</FONT></H4> <P><FONT size="2">During the day, Ron Dagdag is a Lead Software Engineer with 20+ years of experience working on a number of business applications using a diverse set of frameworks and languages. He currently support developers at Spacee with their IoT, Cloud and ML development. On the side, Ron Dagdag is active participant in the community as a Microsoft MVP, speaker, maker and blogger. 
He is passionate about Augmented Intelligence, studying the convergence of Augmented Reality/Virtual Reality, Machine Learning and the Internet of Things.</FONT><BR /><FONT size="2"><LI-USER uid="335815"></LI-USER></FONT></P> Thu, 12 Aug 2021 21:41:07 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/troubleshooting-blank-frames-on-azure-percept-device-kit/ba-p/2642760 Ron Dagdag 2021-08-12T21:41:07Z Simplifying AI Edge deployment with Azure Percept https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/simplifying-ai-edge-deployment-with-azure-percept/ba-p/2641938 <P>As a technology strategist for West Europe, I have had multiple discussions with different customers on IoT projects and the enablement of AI on the Edge. A big portion of those conversation would be on how to utilize Microsoft’s Azure AI models and how to deploy them on the edge, so I was excited when Azure Percept was announced with the capability to simplify Azure cognitive capabilities to the edge. I was able to get my hands on an Azure Percept Device Kit to play around with and discover it’s potential and it did not disappoint. In this blog post I wanted to give a quick tour of the device, it’s components and how you can start deploying use cases in a quick and simple manner.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_0-1628786580289.jpeg" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302768iB957394453A554AD/image-size/medium?v=v2&amp;px=400" role="button" title="AhmedAssem_0-1628786580289.jpeg" alt="AhmedAssem_0-1628786580289.jpeg" /></span></P> <P>&nbsp;</P> <H1>What is Azure Percept?</H1> <P><A title="Azure Percept" href="#" target="_blank" rel="noopener"><STRONG>Azure Percept</STRONG></A> is a software/hardware stack that allows you to quickly deploy modules on the edge utilizing different cognitive feeds such as Video and Audio. The applications range from Low Code/No Code using Azure cognitive services to more advanced use cases where you can write and deploy your own algorithms.</P> <P>As it stands the stack consists of a software component and a hardware component.</P> <UL> <LI>Azure Percept Studio: is the launch point that allows you to create AI Edge models and solutions. 
It easily integrates your edge AI capable hardware with Azure’s suite of services such as IOT hub, cognitive services and more</LI> <LI>Azure Percept Devkit: is an edge AI development kit that enables you to develop audio and vision AI solutions with Azure Percept studio</LI> </UL> <P>&nbsp;</P> <H1>Components</H1> <P><STRONG>Azure Percept Carrier Board:</STRONG></P> <UL> <LI>NXP iMX8m processor</LI> <LI>Trusted Platform Module (TPM) version 2.0</LI> <LI>Wi-Fi and Bluetooth connectivity</LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_1-1628786580316.jpeg" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302770i30B8981B63FE997F/image-size/medium?v=v2&amp;px=400" role="button" title="AhmedAssem_1-1628786580316.jpeg" alt="AhmedAssem_1-1628786580316.jpeg" /></span></P> <P>&nbsp;</P> <P>You can check more details in <A href="#" target="_blank" rel="noopener">Azure Percept DK datasheet</A></P> <P>&nbsp;</P> <P><STRONG>Azure Percept Vision:</STRONG></P> <UL> <LI>Intel Movidius Myriad X (MA2085) vision processing unit (VPU)</LI> <LI>RGB camera sensor</LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_2-1628786580350.jpeg" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302769iDF0F8E6015126665/image-size/medium?v=v2&amp;px=400" role="button" title="AhmedAssem_2-1628786580350.jpeg" alt="AhmedAssem_2-1628786580350.jpeg" /></span></P> <P>You can check more details in the <A href="#" target="_self">Azure Percept Vision datasheet</A></P> <P>&nbsp;</P> <P><STRONG>Azure Percept Audio:</STRONG></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_3-1628786580394.jpeg" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302772iF1B70DEF2DB5A499/image-size/medium?v=v2&amp;px=400" role="button" title="AhmedAssem_3-1628786580394.jpeg" alt="AhmedAssem_3-1628786580394.jpeg" /></span></P> <P>You can check more details in the <A href="#" target="_blank" rel="noopener">Azure Percept Audio datasheet</A></P> <P>&nbsp;</P> <H1>Connect to Azure Percept Studio:</H1> <P>1 – Open Azure Percept Studio</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_4-1628786580403.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302771i7CFD12C0BE7F9B0A/image-size/medium?v=v2&amp;px=400" role="button" title="AhmedAssem_4-1628786580403.png" alt="AhmedAssem_4-1628786580403.png" /></span></P> <P>2 – Head to Devices and you will be able to view the devices that are currently connected</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_5-1628786580407.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302773i5C426E5C434BC16D/image-size/medium?v=v2&amp;px=400" role="button" title="AhmedAssem_5-1628786580407.png" alt="AhmedAssem_5-1628786580407.png" /></span></P> <P>3 – Choose your device and click on the vision tab to see the different options for the vision module</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_6-1628786580411.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302774i900D63C2BBA472FD/image-size/medium?v=v2&amp;px=400" role="button" 
title="AhmedAssem_6-1628786580411.png" alt="AhmedAssem_6-1628786580411.png" /></span></P> <P>4 – Click on View your device stream and it will start streaming directly from the camera in real time (You need to have the device in the same network as the PC you are connecting from)</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_7-1628786580431.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302775iEC616039895DBD54/image-size/medium?v=v2&amp;px=400" role="button" title="AhmedAssem_7-1628786580431.png" alt="AhmedAssem_7-1628786580431.png" /></span></P> <P>5 - As can be seen it was able to provide an automatic object detection and could recognize the object as my reading chair.</P> <P>&nbsp;</P> <H1>Deploying a Custom Vision Model:</H1> <P>Next, I tried the object detection model on different other objects and found that it didn’t recognise my watch, so I started working on a no code custom vision model to identify watches</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_8-1628786580449.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302776iE391D6AE7E002F41/image-size/medium?v=v2&amp;px=400" role="button" title="AhmedAssem_8-1628786580449.png" alt="AhmedAssem_8-1628786580449.png" /></span></P> <P>1 - Head to Overview and create a vision Prototype</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_9-1628786580455.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302777i7F0421D15D2C1A1C/image-size/medium?v=v2&amp;px=400" role="button" title="AhmedAssem_9-1628786580455.png" alt="AhmedAssem_9-1628786580455.png" /></span></P> <P>2 – Fill in the details and create your prototype</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_10-1628786580460.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302778i94E8B7F91B3F1591/image-size/medium?v=v2&amp;px=400" role="button" title="AhmedAssem_10-1628786580460.png" alt="AhmedAssem_10-1628786580460.png" /></span></P> <P>3 – In the image capture tab I was able to take photos of my watch. You have the option to take the photos manually or using automatic image capturing. 
I ended up taking 15 photos</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_11-1628786580463.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302779iF4B8EEF56661F0F1/image-size/medium?v=v2&amp;px=400" role="button" title="AhmedAssem_11-1628786580463.png" alt="AhmedAssem_11-1628786580463.png" /></span></P> <P>4 – In the next tab open the Open Project in custom vision link and you will be directed towards your project gallery</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_12-1628786580467.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302780iACE34D3841977B90/image-size/medium?v=v2&amp;px=400" role="button" title="AhmedAssem_12-1628786580467.png" alt="AhmedAssem_12-1628786580467.png" /></span></P> <P>5 – Click on the untagged option to find all your saved photos</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_13-1628786580510.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302782i870A647FFE42997A/image-size/medium?v=v2&amp;px=400" role="button" title="AhmedAssem_13-1628786580510.png" alt="AhmedAssem_13-1628786580510.png" /></span></P> <P>6 – Click on the photos and start tagging them</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_14-1628786580539.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302781i6740AE204CB64A14/image-size/medium?v=v2&amp;px=400" role="button" title="AhmedAssem_14-1628786580539.png" alt="AhmedAssem_14-1628786580539.png" /></span></P> <P>7 – Click Train to start training your Custom Vision Model</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_15-1628786580594.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302785i7DE43D200BD8E1D4/image-size/medium?v=v2&amp;px=400" role="button" title="AhmedAssem_15-1628786580594.png" alt="AhmedAssem_15-1628786580594.png" /></span></P> <P>8 – Once the training has been done you will be able to review the results of your training. 
If satisfied you can go back towards the Azure portal</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_16-1628786580599.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302784iD3B565E63295C6D2/image-size/medium?v=v2&amp;px=400" role="button" title="AhmedAssem_16-1628786580599.png" alt="AhmedAssem_16-1628786580599.png" /></span></P> <P>9 – Go to the final tab within the custom vision setup prototype and deploy your model</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_17-1628786580601.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302783i9CC426764F3590D4/image-size/medium?v=v2&amp;px=400" role="button" title="AhmedAssem_17-1628786580601.png" alt="AhmedAssem_17-1628786580601.png" /></span></P> <P>10 – Azure Percept was able to identify my watch as well as a different watch that wasn’t provided in the training data set</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_18-1628786580622.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302786i6E272C0839097184/image-size/medium?v=v2&amp;px=400" role="button" title="AhmedAssem_18-1628786580622.png" alt="AhmedAssem_18-1628786580622.png" /></span></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_19-1628786580650.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302787i2398912A94A30F6D/image-size/medium?v=v2&amp;px=400" role="button" title="AhmedAssem_19-1628786580650.png" alt="AhmedAssem_19-1628786580650.png" /></span></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AhmedAssem_20-1628786580707.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/302788i05487C8104868C1C/image-size/medium?v=v2&amp;px=400" role="button" title="AhmedAssem_20-1628786580707.png" alt="AhmedAssem_20-1628786580707.png" /></span></P> <P>&nbsp;</P> <H1>Impressions and next steps:</H1> <P>All in all, Azure Percept has been very easy to operate and to connect to Azure. Deploying the custom vision model was very streamlined and I am looking forward to discovering more and dive deeper into the world of Azure Percept. 
We are already as well in discussion with multiple customers on different use cases within the domain of Critical Infrastructure where Edge AI is playing a huge role in combination with other components such as Azure Digital Twins and Azure Maps which I am excited to explore over the next period.</P> <P>&nbsp;</P> <H1>Learn more about Azure Percept:</H1> <UL> <LI>You can always head to the <A href="#" target="_blank" rel="noopener">official Azure Percept Page</A></LI> <LI>For more details on the Azure Percept Devkit you can check the <A href="#" target="_blank" rel="noopener">Azure Percept DK overview</A></LI> <LI>For the different tutorials on Azure Percept you can check the <A href="#" target="_blank" rel="noopener">Microsoft documentation</A></LI> <LI>If you would like to check videos you can check the <A href="#" target="_blank" rel="noopener">IOT Show</A> by Olivier Bloch or check Azure Percept page on <A href="#" target="_blank" rel="noopener">YouTube</A>.</LI> </UL> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> Thu, 12 Aug 2021 17:53:09 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/simplifying-ai-edge-deployment-with-azure-percept/ba-p/2641938 Ahmed Assem 2021-08-12T17:53:09Z General availability: Azure Sphere OS version 21.08 expected on Aug 25 https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/general-availability-azure-sphere-os-version-21-08-expected-on/ba-p/2638308 <P><SPAN>Azure Sphere OS version 21.08 is now available for evaluation in the <STRONG>Retail Eval</STRONG> feed</SPAN><SPAN>.&nbsp;The retail evaluation period provides 14 days for backwards compatibility testing. During this time, please verify that your applications and devices operate properly with this release&nbsp;before it is deployed broadly to devices in the Retail feed. The Retail feed will continue to deliver OS version&nbsp;21.07&nbsp;until we publish 21.08&nbsp;in two weeks.&nbsp;</SPAN></P> <P>&nbsp;</P> <P>This evaluation release of 21.08 includes enhancements and bug fixes for the OS only; it does not include an updated SDK.</P> <P>&nbsp;</P> <P><SPAN>Areas of special focus for compatibility testing with the 21.08 release should include:</SPAN></P> <UL> <LI>Apps and functionality utilizing cURL</LI> <LI>Apps and functionality utilizing SPI</LI> <LI>Apps and functionality utilizing I2C</LI> </UL> <P>&nbsp;</P> <P>In addition, 21.08 includes updates to mitigate against the following Common Vulnerabilities and Exposures (CVEs):</P> <UL> <LI>CVE-2021-22924</LI> </UL> <P>&nbsp;</P> <P><SPAN>For more information on Azure Sphere OS feeds and setting up an evaluation device group,&nbsp;see&nbsp;</SPAN><A href="#" target="_self"><SPAN>Azure Sphere OS feeds</SPAN></A><SPAN> and </SPAN><A href="#" target="_self">Set up devices for OS evaluation</A><SPAN>.</SPAN></P> <P>&nbsp;</P> <P>For self-help technical inquiries, please visit <A href="#" target="_self">Microsoft Q&amp;A</A><SPAN> o</SPAN>r <A href="#" target="_self">Stack Overflow</A>. If you require technical support and have a support plan, please submit a support ticket in <A href="#" target="_self">Microsoft Azure Support</A> or work with your Microsoft Technical Account Manager. 
If you would like to purchase a support plan, please explore the <A href="#" target="_self">Azure support plans</A>.</P> Thu, 12 Aug 2021 00:40:00 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/general-availability-azure-sphere-os-version-21-08-expected-on/ba-p/2638308 AzureSphereTeam 2021-08-12T00:40:00Z Detecting room occupancy with Azure Percept https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/detecting-room-occupancy-with-azure-percept/ba-p/2588263 <P>If you recently bought an&nbsp;<A href="#" target="_blank" rel="noopener">Azure Percept Development Kit</A>, or you are about to buy one, you probably wonder what kind of solutions you can build out of the box. In this blog post I explain how to use the standard people detection model and how to build a room occupancy detection solution with it.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Detection-PowerBI-AzurePercept.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/298483i965FE8DE90772D51/image-size/large?v=v2&amp;px=999" role="button" title="Detection-PowerBI-AzurePercept.png" alt="Detection-PowerBI-AzurePercept.png" /></span></P> <P>&nbsp;</P> <P>With the Azure Percept DK and its camera I can detect people in a room, upload data to Azure IoT Hub and stream it using an Azure Stream Analytics job into a Power BI dataset to offer a nice visualization in a Power BI report. The architecture looks like this:</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="percept-architecture.png" style="width: 800px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300962i43BC4E2B8DC0EECF/image-size/large?v=v2&amp;px=999" role="button" title="percept-architecture.png" alt="percept-architecture.png" /></span></P> <P>&nbsp;</P> <P><SPAN style="font-family: inherit;">In <A href="#" target="_blank" rel="noopener">Azure Percept Studio</A> you can easily deploy a sample model to your device. For this project I am using the "People detection" model.</SPAN></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="azure-percept-sample-models.png" style="width: 530px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300963iC36148240B605A92/image-size/large?v=v2&amp;px=999" role="button" title="azure-percept-sample-models.png" alt="azure-percept-sample-models.png" /></span></P> <P>&nbsp;</P> <P><SPAN style="font-family: inherit;">Azure Percept will send detection data to your IoT hub, and in order to consume this data from another service, you need to create a view into the data stream which is called "consumer group". On the Azure Portal, in the IoT Hub page, under the option "Built-in endpoints", you can add a new consumer group by typing in a new name.</SPAN></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="azure-percept-consumer-group.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300964iAC0F914AD77FA577/image-size/large?v=v2&amp;px=999" role="button" title="azure-percept-consumer-group.png" alt="azure-percept-consumer-group.png" /></span></P> <P>&nbsp;</P> <P>You can then go to your instance of Azure Stream Analytics and create a new job that will stream data from the IoT Hub into a Power BI dataset. 
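<P>&nbsp;</P> <P>As an aside, the consumer group does not have to be created through the portal. Below is a minimal sketch of the same step with the Azure CLI; it is simply an alternative to the portal flow described above, and the hub and group names are placeholders.</P> <P>&nbsp;</P> <LI-CODE lang="bash"># Create a dedicated consumer group on the IoT Hub's built-in endpoint
# (placeholders: replace the hub name and consumer group name with your own)
az iot hub consumer-group create \
  --hub-name &lt;your-iot-hub&gt; \
  --name &lt;your-consumer-group&gt;</LI-CODE> <P>&nbsp;</P>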
Define the job name, resource group and location, and click the "Create" button.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="azure-percept-new-stream-analytics-job.png" style="width: 780px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300967iEC9F6BA45E295FD7/image-size/large?v=v2&amp;px=999" role="button" title="azure-percept-new-stream-analytics-job.png" alt="azure-percept-new-stream-analytics-job.png" /></span></P> <P>&nbsp;</P> <P>For the job you need to define the input (where data is coming from), the output (where we want to send the data), and write a query defining how the data will be transformed between input and output.</P> <P>First add a new input by clicking on "Inputs" under "Job topology" and click "Add stream input". Set the name for your input, select the IoT Hub you are using, select the consumer group we just created and use "service" for "Shared access policy name". The IoT Hub "service" policy is created by default, you can learn more about it <A href="#" target="_blank" rel="noopener">here</A>.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="azure-percept-stream-input.png" style="width: 371px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300965i70783165CBAEA0FE/image-size/large?v=v2&amp;px=999" role="button" title="azure-percept-stream-input.png" alt="azure-percept-stream-input.png" /></span></P> <P>&nbsp;</P> <P>In the left pane select "Outputs" and in a similar way, add output by defining a name, a Power BI group workspace, a database and table name. Click "Authorize" to authorize the connection between Stream Analytics and Power BI.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="azure-percept-stream-output.png" style="width: 418px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300966i11F91E5FF6FA4FBC/image-size/large?v=v2&amp;px=999" role="button" title="azure-percept-stream-output.png" alt="azure-percept-stream-output.png" /></span></P> <P>&nbsp;</P> <P>Next, you need to configure a query for the job. A simple query we can use for this solution would select the number of detections and time when the event was processed:</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="sql">SELECT NEURAL_NETWORK AS Detections, GetArrayLength (NEURAL_NETWORK) AS PersonCount, EventProcessedUtcTime AS Time INTO perceptstreamoutput FROM perceptstreaminput WHERE NEURAL_NETWORK IS NOT NULL AND GetArrayLength (NEURAL_NETWORK) &gt; 0</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>Save the query and start the Stream Analytics job. Sign in to Power BI and go to the workspace you defined as the output for the job. 
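</P> <P>&nbsp;</P> <P>Before building the report, it can be useful to confirm that detection messages are actually arriving at the IoT Hub. One way to do that from your workstation is the Azure CLI event monitor; this is a hedged sketch that requires the azure-iot CLI extension, and the hub and device names below are placeholders:</P> <P>&nbsp;</P> <LI-CODE lang="bash"># One-time: install the Azure IoT extension for the Azure CLI.
az extension add --name azure-iot

# Stream device-to-cloud messages arriving at the hub; stop with Ctrl+C.
az iot hub monitor-events --hub-name &lt;your-iot-hub-name&gt; --device-id &lt;your-percept-device-id&gt;</LI-CODE> <P>&nbsp;</P> <P>Once you can see messages with a NEURAL_NETWORK payload flowing through, you are ready for the report. 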
Create a new report and pick the dataset where data is being streamed.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="azure-percept-powerbi-report.png" style="width: 550px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300968i1DA2ECE2FD5FF547/image-size/large?v=v2&amp;px=999" role="button" title="azure-percept-powerbi-report.png" alt="azure-percept-powerbi-report.png" /></span></P> <P>&nbsp;</P> <P><SPAN style="font-family: inherit;">In your report add the visualization you prefer and add the fields "Person Count" and "Time" to it. If you choose a "Stacked column chart" you should be able to visualize your detections as shown below:</SPAN></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="azure-percept-powerbi-chart.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300970i9028CE2C08A71A58/image-size/large?v=v2&amp;px=999" role="button" title="azure-percept-powerbi-chart.png" alt="azure-percept-powerbi-chart.png" /></span></P> <P>&nbsp;</P> <P>You can see the full solution I built in the video below, in which you can also hear a bit about the use case of this solution in the Real Estate business:</P> <P>&nbsp;</P> <DIV style="position: relative; padding-bottom: 56.25%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px;"><IFRAME src="https://www.youtube-nocookie.com/embed/HshBvMExI5o?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner?autoplay=false" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title=""></IFRAME></DIV> <P>&nbsp;</P> <P><SPAN>If you'd like to learn more about the use case of this solution, feel free to check out the blog post "<A href="#" target="_blank" rel="noopener">Using Azure Percept to build the next Smart Building</A>".</SPAN></P> <P>&nbsp;</P> <H2>Resources</H2> <P>Information about the Azure Percept development kit can be found here: <A href="#" target="_blank" rel="noopener">http://aka.ms/getazurepercept</A>.</P> <P>Azure Percept Studio, which allows you to connect, build, customize and manage your Azure Percept edge solutions, is described here: <A href="#" target="_blank" rel="noopener">https://aka.ms/azureperceptbuild</A>.</P> <P>Azure Percept devices come with built-in security on every device and information about it can be found here: <A href="#" target="_blank" rel="noopener">https://aka.ms/azureperceptsecure</A>.</P> <P>Azure Percept library of AI models is available here: <A href="#" target="_blank" rel="noopener">https://aka.ms/azureperceptexplore</A>.</P> <P>&nbsp;</P> Thu, 12 Aug 2021 21:42:43 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/detecting-room-occupancy-with-azure-percept/ba-p/2588263 goranvuksic 2021-08-12T21:42:43Z WSL and EFLOW for IoT Edge Development https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/wsl-and-eflow-for-iot-edge-development/ba-p/2593485 <P>Since Microsoft CEO <A href="#" target="_blank" rel="noopener">Satya Nadella proclaimed</A> that “Microsoft ♥ Linux” in 2014, Microsoft has accelerated its investment in Linux and Open Source. Now, Linux amounts to a sizable commitment seen across on-premises datacenters, Azure public cloud, and edge devices. 
We recognize that the world is heterogeneous, and aim to meet customers where they are, whether that means running workloads on Windows or on Linux.<BR /><BR />As a result of this investment, Microsoft has released two offerings that make it easier for Windows customers to run Linux workloads: <A href="#" target="_blank" rel="noopener">Windows Subsystem for Linux</A> (WSL), and more recently, <A href="#" target="_blank" rel="noopener">Azure IoT Edge for Linux on Windows</A> (EFLOW).</P> <P>&nbsp;</P> <P>The Windows Subsystem for Linux lets developers run a GNU/Linux environment -- including most command-line tools, utilities, and applications -- directly on Windows, unmodified, without the overhead of a traditional virtual machine or dual-boot setup. WSL was introduced in 2016 and WSL 2, which introduces a full Linux kernel and additional features, was introduced in 2019.</P> <P>&nbsp;</P> <P>Similarly, Azure IoT Edge for Linux on Windows allows you to run containerized Linux workloads by running a curated Linux virtual machine on a Windows device. EFLOW was made generally available earlier this year.&nbsp;So what are the key differences between the two? WSL and EFLOW target different audiences and use cases.</P> <P>&nbsp;</P> <P>In general, WSL is designed to enable Linux tools on Windows. It lets developers run Bash and core Linux command-line tools, which may be used for running short-cycle processes as part of an inner development loop. Developers may, for example, be executing code on a PC using WSL to program something else. EFLOW is meant for running production services, primarily in the context of an IoT deployment. This may include 24/7 processes running on devices with multi-year lifespans.</P> <P>&nbsp;</P> <P>As a result, WSL and EFLOW provide vastly different approaches to flexibility and supportability. As a flexible and general-purpose tool for developer use, WSL allows users to select their own user-space. In other words, developers can select their favorite GNU/Linux distributions, such as Ubuntu, from the Microsoft Store and customize their OS by adding packages with a package manager. In contrast, the EFLOW Linux environment has a fixed user-space composition provided by Microsoft and uses <A href="#" target="_blank" rel="noopener">Azure IoT Edge</A>, exclusively, as its deployment mechanism. This means that software must be containerized as an Azure IoT Edge module to run on EFLOW.</P> <P>&nbsp;</P> <P>EFLOW's approach prioritizes supportability, especially as a platform for <A href="#" target="_blank" rel="noopener">IoT</A>. The EFLOW Linux virtual machine comes pre-installed with the Azure IoT Edge runtime with moby-engine, and is validated as a <A href="#" target="_blank" rel="noopener">Tier 1 supported environment</A> for Azure IoT Edge workloads. Furthermore, because EFLOW’s Linux VM is based on Microsoft’s first party <A href="#" target="_blank" rel="noopener">CBL-Mariner</A> operating system, Microsoft is able to keep both the Linux environment and the Azure IoT Edge runtime up to date for you through Windows Update, and aligns the maintenance of the OS with the cadence of Azure IoT Edge for curated compatibility. Lastly, EFLOW is supported on Windows, with industry-leading long-term servicing. Thus, with EFLOW the entire solution stack is supported by Microsoft.</P> <P>&nbsp;</P> <P>WSL may be used in early development where greater flexibility in user-space is needed. 
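</P> <P>&nbsp;</P> <P>Either way, the workload that eventually runs in production on EFLOW is a standard Azure IoT Edge module, and the EFLOW virtual machine exposes the same standard IoT Edge tooling as any other Azure IoT Edge device. The commands below are a minimal sketch and assume you have already opened a shell inside the EFLOW Linux VM (for example via the Connect-EflowVm PowerShell cmdlet on the Windows host); the module name is a placeholder:</P> <P>&nbsp;</P> <LI-CODE lang="bash"># Standard Azure IoT Edge runtime tooling, run inside the EFLOW Linux VM.
sudo iotedge check                  # configuration and connectivity diagnostics
sudo iotedge list                   # running modules (edgeAgent, edgeHub, your modules)
sudo iotedge logs &lt;module-name&gt;     # tail the logs of a specific module (placeholder name)</LI-CODE> <P>&nbsp;</P> <P>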
A user may, then, transition to EFLOW when deploying Azure IoT Edge modules to devices for commercial or productized purposes. This transition is possible because they both use the same underlying technology and type of containerized workloads.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="95twr_23-1627433774417.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/298852iC6DEE39B18B74324/image-size/large?v=v2&amp;px=999" role="button" title="95twr_23-1627433774417.png" alt="95twr_23-1627433774417.png" /></span></P> <P>&nbsp;</P> <P>Overall, WSL and EFLOW give customers greater flexibility to run their Linux workloads, covering the full-range of target audiences and use cases. WSL has proven to be an extremely valuable tool for developers to access Linux tools on a Windows device, while EFLOW is a fully Microsoft supported IoT production solution for Azure IoT Edge Linux Modules.</P> <P>&nbsp;</P> <P>IoT Show Video</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/GylJBrLAEN4?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="IoT Show episode about when to use WSL and EFLOW when developing for IoT Edge"></IFRAME></DIV> <P>&nbsp;</P> <P>To learn more visit:</P> <P><A href="#" target="_blank" rel="noopener">What is Windows Subsystem for Linux | Microsoft Docs</A></P> <P><A href="#" target="_blank" rel="noopener">What is Azure IoT Edge for Linux on Windows | Microsoft Docs</A></P> Wed, 04 Aug 2021 16:00:00 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/wsl-and-eflow-for-iot-edge-development/ba-p/2593485 95twr 2021-08-04T16:00:00Z Azure Sphere Security Service Disruption August 3, 2021 [resolved] https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-sphere-security-service-disruption-august-3-2021-resolved/ba-p/2608258 <P style="margin: 0in; font-family: Calibri; font-size: 11.0pt;">At 1:55 PM PDT on August 3, 2021 a partial Azure Sphere Security Service outage may have affected customers. Azure Sphere devices were not affected. The disruption was limited to the public API (PAPI) service. 
The Azure Sphere Team resolved the issue at 3:07 PM PDT on August 3, 2021.</P> Wed, 04 Aug 2021 05:20:00 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-sphere-security-service-disruption-august-3-2021-resolved/ba-p/2608258 Kirsten Soelling 2021-08-04T05:20:00Z Building a Traffic Monitoring AI application for a Smart City with Azure Percept https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/building-a-traffic-monitoring-ai-application-for-a-smart-city/ba-p/2596644 <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Screenshot 2021-08-03 100359.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300239i9A731362DA0BA2EF/image-size/large?v=v2&amp;px=999" role="button" title="Screenshot 2021-08-03 100359.jpg" alt="Screenshot 2021-08-03 100359.jpg" /></span></P> <P>&nbsp;</P> <P>Many smart cities are thinking about generating traffic insights using edge AI and video as a sensor.&nbsp; These traffic insights can range from simpler insights such as vehicle counting and traffic pattern distribution over time to more advanced insights such as detecting stalled vehicles and alerting the authorities.</P> <P>&nbsp;</P> <P>In this blog, I show how I am using an Azure Percept dev kit to build a sample traffic monitoring application using the reference sources and samples in GitHub provided by Microsoft along with the Azure IoT and Percept ecosystem.</P> <P>&nbsp;</P> <P>I wanted to build a traffic monitoring application that would classify vehicles into cars, trucks, bicycles etc. and count each vehicle category to generate insights such as traffic density and vehicle type distribution over time. I wanted the traffic monitoring AI application to show me the traffic pattern distribution in a dashboard updated in real-time.&nbsp; I also wanted to generate alerts and visualize a short video clip whenever an interesting event occurs (for example number of trucks exceed a threshold value).&nbsp; In addition, a smart city manager would be able to pull up a live video stream when heavy traffic congestion is detected.</P> <P>&nbsp;</P> <H2>Here’s what I needed to get started</H2> <P>&nbsp;</P> <UL> <LI>An <A href="#" target="_blank" rel="noopener">Azure subscription</A> (with full access to Azure services)</LI> <LI>An <A href="#" target="_blank" rel="noopener">Azure Percept DK</A> (Edge Vision/AI device with Azure IoT Edge)</LI> </UL> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Azure-Percept.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300211i0C49AFAE59375BF8/image-size/large?v=v2&amp;px=999" role="button" title="Azure-Percept.jpg" alt="Azure-Percept.jpg" /></span></P> <P>&nbsp;</P> <P>Azure Percept ($349 in the Microsoft store):&nbsp;<A href="#" target="_blank" rel="noopener">https://www.microsoft.com/store/build/azure-percept/8v2qxmzbz9vc</A></P> <P>HOST: NXP iMX8m processor</P> <P>Vision AI: Intel Movidius Myriad X (MA2085) vision processing unit (VPU)</P> <P>&nbsp;</P> <UL> <LI>Inseego 5G MiFi ® M2000 mobile hotspot (reliable cloud connection for uploading events and videos)</LI> </UL> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="inseego_5g_mifi_m2000_2_.png" style="width: 625px;"><img 
src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300212i749272375A22DA5A/image-size/large?v=v2&amp;px=999" role="button" title="inseego_5g_mifi_m2000_2_.png" alt="inseego_5g_mifi_m2000_2_.png" /></span></P> <P>&nbsp;</P> <P>Radio: Qualcomm ® snapdragon ™ x55 modem</P> <P>Carrier/plan: T-Mobile 5g Magenta plan</P> <P><A href="#" target="_blank" rel="noopener">https://www.t-mobile.com/tablet/inseego-5g-mifi-m2000</A></P> <P>&nbsp;</P> <H2>Key Azure Services/Technologies used</H2> <UL> <LI><A href="#" target="_blank" rel="noopener">Azure IoT Hub</A></LI> <LI><A href="#" target="_blank" rel="noopener">Azure IoT Edge</A> (edgeAgent, edgeHub docker containers)</LI> <LI><A href="#" target="_blank" rel="noopener">Azure Percept Eye Module</A>&nbsp; (Azure Percept Module)</LI> <LI><A href="#" target="_blank" rel="noopener">Azure Video Analyzer</A> (AVA)</LI> <LI><A href="#" target="_blank" rel="noopener">Azure Blob Storage</A></LI> <LI><A href="#" target="_blank" rel="noopener">Azure Container Registry</A> (ACR)</LI> <LI><A href="#" target="_blank" rel="noopener">Azure Media Services</A> (AMS)</LI> </UL> <P>&nbsp;</P> <H2>Overall setup and description</H2> <P>&nbsp;</P> <H3>Step 1: Unbox and setup the Azure Percept</H3> <P>&nbsp;</P> <P>This step takes about 5-10 minutes when all goes well.&nbsp; You can find the setup instructions here <A href="#" target="_blank" rel="noopener">https://docs.microsoft.com/azure/azure-percept/quickstart-percept-dk-set-up.</A></P> <P>Here are some screenshots that I captured as I went through my Azure Percept device setup process.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2021-08-03 093740.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300227i40CF414DAB9D1C59/image-size/large?v=v2&amp;px=999" role="button" title="Screenshot 2021-08-03 093740.jpg" alt="Screenshot 2021-08-03 093740.jpg" /></span></P> <P>&nbsp;</P> <P>Key points to remember during the device setup are to make sure you note down the IP address of the Azure Percept and setup your ssh username and password so you can ssh into the Azure Percept from your host machine.</P> <P>During the setup, you can create a new Azure IoT Hub instance in the Cloud or you can use an existing Azure IoT hub that you may already have in your Azure subscription.</P> <P>&nbsp;</P> <H3>Step 2: Ensure good cloud connectivity (uplink/downlink speed for events, videos and live streaming)</H3> <P>&nbsp;</P> <P>The traffic monitoring AI application I am building is intended for outdoor environments where wired connections are not always feasible or available.&nbsp; Video connectivity is necessary for live streaming or uploading video clips when network connectivity is available.&nbsp; For this demo, the Azure Percept device will be connecting to the cloud using a 5G device to upload events and video clips.&nbsp; Make sure that the video uplink speeds over 5G are good enough for video clip uploads as well as live streaming.&nbsp; Here is a screenshot of the speed test for the Inseego 5G MiFi ® M2000 mobile hotspot from T-Mobile that I am using for my setup.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Picture4.jpg" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300220i2F361AA357039A3A/image-size/medium?v=v2&amp;px=400" role="button" title="Picture4.jpg" alt="Picture4.jpg" 
/></span></P> <P>&nbsp;</P> <H3>Step 3: Reference architecture</H3> <P>&nbsp;</P> <P>Here is a high-level architecture diagram of a traffic monitoring application built with Azure Percept and Azure services.&nbsp; For this project, I used the Azure Percept dev kit with the single USB-connected camera (as opposed to external IP cameras) and Azure Video Analyzer.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2021-08-03 094021.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300225i575FC0A526309224/image-size/large?v=v2&amp;px=999" role="button" title="Screenshot 2021-08-03 094021.jpg" alt="Screenshot 2021-08-03 094021.jpg" /></span></P> <P>&nbsp;</P> <H3>Step 4: Build the Azure Eye Module docker container for ARM64</H3> <P>&nbsp;</P> <P>You will want to make a few customizations to the Azure Eye Module C++ source code tailored to your traffic monitoring application (for example, you can make customizations to only send vehicle detection events to IoT Hub or you can build your own custom parser class for custom vehicle detection models).&nbsp; For this project, I am using the SSD parser class with the default SSD object detection model in the Azure Eye Module.</P> <P>To build a customized Azure Eye Module, first download the Azure Eye Module reference source code from GitHub. On your host machine, clone the following repo:</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="bash">git clone https://github.com/microsoft/azure-percept-advanced-development.git</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>On your host machine, open a command shell and use the following command to build the Azure Eye Module docker container.&nbsp; Note that you will need Docker Desktop running prior to running this command (I am using a Windows host):</P> <P>&nbsp;</P> <LI-CODE lang="bash"># Run from the folder that contains Dockerfile.arm64v8; the trailing "." is the build context.
docker buildx build --platform linux/arm64 --tag azureeyemodule-xc -f Dockerfile.arm64v8 --load .</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>Once the docker image is built, tag it and push it to your ACR.</P> <P>&nbsp;</P> <H3>Step 5: Build the ObjectCounter docker container for arm64</H3> <P>&nbsp;</P> <P>Download the Object Counter reference source code from GitHub. On your host machine, clone the following repo:</P> <P>&nbsp;</P> <LI-CODE lang="bash">git clone https://github.com/Azure-Samples/live-video-analytics-iot-edge-python</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>Navigate to the folder live-video-analytics-iot-edge-python\src\edge\modules\objectCounter</P> <P>Build the docker container and push it to your ACR:</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="bash"># Build the image and tag it with your ACR name so it can be pushed directly.
docker build -f docker/Dockerfile.arm64 --no-cache -t &lt;your_acr_name&gt;.azurecr.io/objectcounter:0.0.1-arm64v8 .
docker login -u &lt;your_acr_name&gt; -p &lt;your_acr_password&gt; &lt;your_acr_name&gt;.azurecr.io
docker push &lt;your_acr_name&gt;.azurecr.io/objectcounter:0.0.1-arm64v8</LI-CODE> <P>&nbsp;</P> <P>I made several source code changes to main.py in the objectCounter module to customize my own objectCounter docker container.&nbsp; For example, I only send a video event trigger to the signal gate processor (to capture video recording of a few seconds around an event) when a certain vehicle category exceeds a threshold count. 
I also made customizations so that object counter can understand inference events from SSD (in-built detection engine that comes with AzureEye Module) or a custom YOLOv3 model that is external to the AzureEye module (You can read about how to run an external YOLOv3 model in my previous blog post here</P> <P><A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/set-up-your-own-end-to-end-package-delivery-monitoring-ai/ba-p/2323165" target="_blank" rel="noopener">https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/set-up-your-own-end-to-end-package-delivery-monitoring-ai/ba-p/2323165</A>)</P> <P>&nbsp;</P> <H3>Step 6: Azure Video Analyzer For Edge Devices</H3> <P>&nbsp;</P> <P>To be able to save video recordings around interesting event detections, you will need the Azure Video Analyzer module.</P> <P>You may choose to build your own custom AVA docker container from here:</P> <P><A href="#" target="_blank" rel="noopener">https://github.com/Azure/video-analyzer.git</A></P> <P>You can read more about the AVA and how to deploy it to an edge device here</P> <P><A href="#" target="_blank" rel="noopener">https://docs.microsoft.com/en-us/azure/azure-video-analyzer/video-analyzer-docs/deploy-iot-edge-device</A></P> <P>&nbsp;</P> <H3>Step 7: Configure message routes between the Azure IoT edge modules</H3> <P>&nbsp;</P> <P>The different modules (Azure Percept Module, ObjectCounter Module and AVA Module) interact with each other through MQTT messages.&nbsp;</P> <P>&nbsp;</P> <P>Summary of the routes:</P> <UL> <LI>Azure Percept module sends the inference detection events to IoT hub which is configured to further route the messages either to blob storage or a database (for dashboards and analytics in the cloud).&nbsp;</LI> <LI>Azure Percept module sends the detection events to objectCounter module that implements business logic (such as object counts and aggregations which are used to trigger video recordings via the AVA module)</LI> <LI>ObjectCounter module sends the aggregations and triggers to IoT hub which is configured to further route the messages either to blob storage or a database (for dashboards and analytics in the cloud).&nbsp;</LI> <LI>ObjectCounter module sends the event triggers to AVA so that AVA can start recording event clips</LI> </UL> <P><STRONG>&nbsp;</STRONG></P> <P>Here are a couple of screenshots to show how to route messages from IoT Hub to an endpoint:</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2021-08-03 095308.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300229iE809E9191C4E48A3/image-size/large?v=v2&amp;px=999" role="button" title="Screenshot 2021-08-03 095308.jpg" alt="Screenshot 2021-08-03 095308.jpg" /></span></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2021-08-03 095340.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300228i29F680EEF3896805/image-size/large?v=v2&amp;px=999" role="button" title="Screenshot 2021-08-03 095340.jpg" alt="Screenshot 2021-08-03 095340.jpg" /></span></P> <P>&nbsp;</P> <P>Here is a sample inference detection event that IoT hub receives from the Azure Percept Module</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="json">Body":{ "timestamp": 145831293577504, "inferences": [ { "type": "entity", "entity": { "tag": { "value": "person", "confidence": 
0.62337005 }, "box": { "l": 0.38108632, "t": 0.4768717, "w": 0.19651619, "h": 0.30027097 } } } ]</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <H3>Step 8: Set up the graph topology for AVA</H3> <P>&nbsp;</P> <P>There are multiple ways to build your own custom graph topology based on the use cases and application requirements.&nbsp; Here is how I configured the graph topology for my sample traffic monitoring AI application.</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="json"> "sources": [ { "@type": "#Microsoft.Media.MediaGraphRtspSource", "name": "rtspSource", "endpoint": { "@type": "#Microsoft.Media.MediaGraphUnsecuredEndpoint", "url": "${rtspUrl}", "credentials": { "@type": "#Microsoft.Media.MediaGraphUsernamePasswordCredentials", "username": "${rtspUserName}", "password": "${rtspPassword}" } } }, { "@type": "#Microsoft.Media.MediaGraphIoTHubMessageSource", "name": "iotMessageSource", "hubInputName": "${hubSourceInput}" } ], "processors": [ { "@type": "#Microsoft.Media.MediaGraphSignalGateProcessor", "name": "signalGateProcessor", "inputs": [ { "nodeName": "iotMessageSource" }, { "nodeName": "rtspSource" } ], "activationEvaluationWindow": "PT3S", "activationSignalOffset": "-PT1S", "minimumActivationTime": "PT3S", "maximumActivationTime": "PT4S" } ], "sinks": [ { "@type": "#Microsoft.Media.MediaGraphFileSink", "name": "fileSink", "inputs": [ { "nodeName": "signalGateProcessor", "outputSelectors": [ { "property": "mediaType", "operator": "is", "value": "video" } ] } ], "fileNamePattern": "MP4-StreetViewAssetFromEVR-AVAEdge-${System.DateTime}", "maximumSizeMiB":"512", "baseDirectoryPath":"/var/media" } ] } }</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>If you are using a pre-recorded input video file (.mkv or .mp4) instead of live frames from the USB-connected camera module, then update the rtspUrl to grab frames via the RTSPsim module:</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="json">"name": "rtspUrl", "value": "rtsp://rtspsim:554/media/inv.mkv"</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>I use the following RTSPSim container module provided by Microsoft to stream a pre-recorded video file:</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="json">mcr.microsoft.com/lva-utilities/rtspsim-live555:1.2</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>If you are using live frames from the USB-connected camera, then grab the live rtsp stream from Azure Percept Module:</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="json">"name": "rtspUrl", "value": "rtsp://AzurePerceptModule:8554/h264"</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>Here is a brief explanation of the media graph topology that I use:</P> <UL> <LI>There are two source nodes in the graph.&nbsp; <UL> <LI>First source node is the RTSP source (the RTSP source can either serve live video frames from the Percept camera module or pre-recorded video frames served via the RTSPsim)</LI> <LI>Second source node is the IoT message source (this is the output of the Object Counter Trigger)</LI> </UL> </LI> <LI>There is one Processor node which is the signal gate processor.&nbsp; This node takes the IoT message source and RTSP source as inputs and based on the object counter trigger, the signal gate requests the AVA module to create a 5 second video recording of the detected event (-PT1S to +PT4S)</LI> <LI>There is one Sink node, which is the fileSink.&nbsp; This could also be an AMS asset sink.&nbsp; However, currently, AMS asset sink has a limitation of minimum 30 seconds video clip duration.&nbsp; Hence, I used a fileSink to save a 5 second clip and then used an external 
thread to upload the locally saved .mp4 files to Azure blob storage.&nbsp; Note that for on-demand live streaming, I use Azure AMS.</LI> </UL> <P>&nbsp;</P> <P>You can learn more about Azure Media Graphs here:</P> <P><A href="#" target="_blank" rel="noopener">https://docs.microsoft.com/azure/media-services/live-video-analytics-edge/media-graph-concept</A></P> <P>You can learn more about how to configure signal gates for event based video recording here:</P> <P><A href="#" target="_blank" rel="noopener">https://docs.microsoft.com/azure/media-services/live-video-analytics-edge/configure-signal-gate-how-to</A></P> <P>&nbsp;</P> <H3>Step 9: Dashboard to view events, videos and insights</H3> <P>&nbsp;</P> <P>You can use any web app (e.g. react.js based) and create APIs to build a traffic monitoring dashboard that shows real-time detections and video recordings from Azure IoT hub and Azure blob storage. Here is an example of a dashboard:</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Screenshot 2021-08-03 095922.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300231i4E4F4EE916EC390F/image-size/large?v=v2&amp;px=999" role="button" title="Screenshot 2021-08-03 095922.jpg" alt="Screenshot 2021-08-03 095922.jpg" /></span></P> <P>&nbsp;</P> <P>Here are some examples of what the Azure Percept detected for a few live and pre-recorded videos:</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Screenshot 2021-08-03 100050.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300232i0A3A260050C08A86/image-size/large?v=v2&amp;px=999" role="button" title="Screenshot 2021-08-03 100050.jpg" alt="Screenshot 2021-08-03 100050.jpg" /></span></P> <P>&nbsp;</P> <P>(Media Credit: NVIDIA DeepStream SDK)</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Screenshot 2021-08-03 100031.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300234i5536168B449A8566/image-size/large?v=v2&amp;px=999" role="button" title="Screenshot 2021-08-03 100031.jpg" alt="Screenshot 2021-08-03 100031.jpg" /></span></P> <P>&nbsp;</P> <P>(Media Credit: NVIDIA DeepStream SDK)</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Screenshot 2021-08-03 100008.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300235iD3D69994AEB4FE07/image-size/large?v=v2&amp;px=999" role="button" title="Screenshot 2021-08-03 100008.jpg" alt="Screenshot 2021-08-03 100008.jpg" /></span></P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Screenshot 2021-08-03 095946.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300233i9F188CF441F0BD83/image-size/large?v=v2&amp;px=999" role="button" title="Screenshot 2021-08-03 095946.jpg" alt="Screenshot 2021-08-03 095946.jpg" /></span></P> <P>&nbsp;</P> <P>In conclusion, in just a few days, I was able to set up a quick Proof of Concept of a sample traffic monitoring AI application using Azure Percept, Azure services and Inseego 5G MiFi ® M2000 mobile hotspot!&nbsp;</P> <P>&nbsp;</P> <P>Learn more about the Azure Percept at <A href="#" target="_blank" 
rel="noopener">https://azure.microsoft.com/services/azure-percept/</A></P> <P>&nbsp;</P> <P><EM>Note: The views and opinions expressed in this article are those of the author and do not necessarily reflect an official position of Inseego Corp.</EM></P> <P>&nbsp;</P> Fri, 06 Aug 2021 22:27:50 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/building-a-traffic-monitoring-ai-application-for-a-smart-city/ba-p/2596644 amitmarathe 2021-08-06T22:27:50Z Azure IoT Product Feature Updates https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-iot-product-feature-updates/ba-p/2599072 <P><SPAN data-contrast="auto">Azure IoT services&nbsp;can simplify&nbsp;your IoT development process and empower your organization to achieve more through Digital Transformation.&nbsp;That’s a bold statement to make</SPAN><SPAN data-contrast="none">—</SPAN><SPAN data-contrast="auto">what exactly does it mean?&nbsp;&nbsp;</SPAN><SPAN>&nbsp;<BR /></SPAN><SPAN>&nbsp;<BR /></SPAN><SPAN data-contrast="auto">Microsoft&nbsp;services can help you&nbsp;to&nbsp;fast track&nbsp;any portion of your IoT Solution.&nbsp;That may mean&nbsp;maintaining health and stability through&nbsp;IoT Edge&nbsp;metrics,&nbsp;centralizing&nbsp;management of your devices&nbsp;with&nbsp;IoT Central,&nbsp;analyzing data from digital representations of real-world environments&nbsp;using&nbsp;Digital Twins, or simulating solutions&nbsp;with&nbsp;rapid prototyping tools, turning your&nbsp;cell phone into a live telemetry producing IoT device.&nbsp;&nbsp;</SPAN><SPAN>&nbsp;<BR /></SPAN><SPAN>&nbsp;<BR /></SPAN><SPAN data-contrast="auto">These&nbsp;features&nbsp;are here today and ready to&nbsp;service your&nbsp;immediate&nbsp;needs.&nbsp;In fact,&nbsp;many&nbsp;of&nbsp;our product&nbsp;feature&nbsp;updates are driven specifically from&nbsp;feature&nbsp;requests from our communities,&nbsp;which&nbsp;includes&nbsp;users with&nbsp;the&nbsp;need to&nbsp;manage millions of devices.&nbsp;No matter the size of your project, if you are looking for&nbsp;trusted reliable services that can scale to meet your demands, Azure IoT Services are here for you.&nbsp;&nbsp;Let’s take a&nbsp;closer look at some of the latest feature updates from our product teams&nbsp;and how they can benefit your digital transformation journey!&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><STRONG><SPAN data-contrast="auto"><BR />Public preview: IoT Edge Metrics Collector module 1.0.1 release</SPAN></STRONG><SPAN>&nbsp;<BR /></SPAN><SPAN data-contrast="auto">Published July 19, 2021</SPAN><SPAN>&nbsp;</SPAN></P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Azure Updates IoT.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/299971i52A9B10B19EC79F7/image-size/large?v=v2&amp;px=999" role="button" title="Azure Updates IoT.jpg" alt="Azure Updates IoT.jpg" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="auto">You can remotely monitor your IoT Edge fleet using Azure Monitor and built-in metrics integration. To enable this capability on your device, add the metrics-collector module to your deployment and configure it to collect and transport module metrics to Azure Monitor. 
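</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="auto">As a rough sketch of what "add the metrics-collector module to your deployment" can look like in practice: the collector is referenced in your deployment manifest like any other module (pointing at the mcr.microsoft.com/azureiotedge-metrics-collector image) and the manifest is then pushed to a device with the Azure CLI. The hub name, device ID, and manifest file name below are placeholders, and the manifest is assumed to already contain the edgeAgent and edgeHub system modules:</SPAN></P> <P>&nbsp;</P> <LI-CODE lang="bash"># Push an updated deployment manifest (which includes the metrics-collector module) to one device.
# Placeholder names throughout; for fleet-wide rollout an at-scale IoT Edge deployment is usually a better fit.
az iot edge set-modules --hub-name &lt;your-iot-hub-name&gt; --device-id &lt;your-edge-device-id&gt; --content ./deployment.with-metrics-collector.json</LI-CODE> <P>&nbsp;</P> <P><SPAN data-contrast="auto">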
This can allow you to collect insights into your devices and allow you the capability to monitor health status&nbsp;of in-production field-deployed devices.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="none">IoT Edge Metrics Collector module release 1.0.1 is now available in the Microsoft container registry.&nbsp;</SPAN><SPAN>&nbsp;<BR /></SPAN><SPAN>&nbsp;<BR /></SPAN><SPAN data-contrast="auto">Why you’ll want it:</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <UL> <LI data-leveltext="" data-font="Wingdings" data-listid="3" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><SPAN data-contrast="auto">It contains .NET June security and reliability updates</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Wingdings" data-listid="3" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="auto">Recommended update for all customers</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Wingdings" data-listid="3" aria-setsize="-1" data-aria-posinset="3" data-aria-level="1"><SPAN data-contrast="auto">Ability to specify Azure Domain to enable sending metrics to Azure government and regional clouds</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> </UL> <UL> <LI data-leveltext="" data-font="Wingdings" data-listid="3" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><SPAN data-contrast="auto">Tip: Rolling tags '1.0' and 'latest' have been updated to point to the new version</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> </UL> <P><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Get the update</SPAN></A><SPAN data-contrast="none">&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><STRONG><SPAN data-contrast="auto">General availability: Azure IoT Central new and updated features</SPAN></STRONG><SPAN>&nbsp;<BR /></SPAN><SPAN data-contrast="auto">Published July 15, 2021</SPAN><SPAN>&nbsp;<BR /></SPAN><SPAN>&nbsp;</SPAN></P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="eb2ddd06-e1c5-4758-adf8-1077e15011de.png" style="width: 736px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/299972i5FDD8E8F54B78050/image-size/large?v=v2&amp;px=999" role="button" title="eb2ddd06-e1c5-4758-adf8-1077e15011de.png" alt="eb2ddd06-e1c5-4758-adf8-1077e15011de.png" /></span></SPAN></P> <P><SPAN><BR /></SPAN><SPAN data-contrast="auto">Azure IoT Central is an IoT application platform that simplifies the creation&nbsp;and initial setup&nbsp;of IoT solutions, reducing&nbsp;the&nbsp;management burden and operational cost of developing, managing, and maintaining&nbsp;a typical IoT project.</SPAN><SPAN 
data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="auto">Azure IoT Central connects your IoT devices to the cloud quickly and easily, offering centralized management to reconfigure and update your devices.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="auto">A few of the reasons&nbsp;you’ll want&nbsp;to check out the latest&nbsp;updates:</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <UL> <LI data-leveltext="" data-font="Wingdings" data-listid="5" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><SPAN data-contrast="auto">You can add tiles to your dashboard in a new way,&nbsp;with tile configuration automatically showing&nbsp;only telemetry and property types&nbsp;a&nbsp;selected tile type supports</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Wingdings" data-listid="5" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="auto">IoT Central pages now have responsive UI</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Wingdings" data-listid="5" aria-setsize="-1" data-aria-posinset="3" data-aria-level="1"><SPAN data-contrast="auto">You can now select ‘ALL’ or ‘ANY’ when defining multiple telemetry conditions in the same rule</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Wingdings" data-listid="5" aria-setsize="-1" data-aria-posinset="4" data-aria-level="1"><SPAN data-contrast="auto">Tip: Try the refreshed&nbsp;quickstarts&nbsp;to use the new smartphone app</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> </UL> <P><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Get the update</SPAN></A><SPAN data-contrast="auto">&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><STRONG><SPAN data-contrast="auto">General availability: Azure Digital Twins plugin for Azure Data Explorer&nbsp;</SPAN></STRONG><SPAN>&nbsp;<BR /></SPAN><SPAN data-contrast="auto">Published June 24, 2021</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="t_anderson2330_13-1623954754864.png" style="width: 754px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/299973i75EA441C97B3488C/image-size/large?v=v2&amp;px=999" role="button" title="t_anderson2330_13-1623954754864.png" alt="t_anderson2330_13-1623954754864.png" /></span></SPAN></P> <P>&nbsp;</P> 
<P><SPAN data-contrast="auto">Azure Digital Twins enables you to create digital models of assets, places, people, and processes;&nbsp;and then relate these items based on their real-world relationships.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="auto">The Azure Digital Twins plugin for Azure Data Explorer is now available, enabling you to combine digital models of your environment with time series data from your devices. Use the plugin to contextualize disparate time series data in Azure Data Explorer by reasoning across&nbsp;Digital&nbsp;Twins and their relationships to gain insights into the behavior of your modeled environments.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="auto">Tip:&nbsp;Check out this&nbsp;</SPAN><A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/adding-context-to-iot-data-just-became-easier/ba-p/2459987/?ocid=AID3037591&amp;WT.mc_id=iot-36790-cxa" target="_blank" rel="noopener"><SPAN data-contrast="none">blog post</SPAN></A><SPAN data-contrast="auto">&nbsp;to see how you can use the plugin to analyze the historical behavior of various portions of a smart energy grid.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Get the plugin</SPAN></A><SPAN data-contrast="auto">&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><STRONG><SPAN data-contrast="auto">General availability: Turn your phone into an IoT device with the new IoT Plug and Play mobile app</SPAN></STRONG><SPAN>&nbsp;<BR /></SPAN><SPAN data-contrast="auto">Published June 24, 2021</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="7ba3c654-273c-41d6-9567-3f9ac29ef1ca.png" style="width: 424px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/299974iA516156E2D7992E0/image-size/large?v=v2&amp;px=999" role="button" title="7ba3c654-273c-41d6-9567-3f9ac29ef1ca.png" alt="7ba3c654-273c-41d6-9567-3f9ac29ef1ca.png" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="auto">An Azure IoT solution lets you connect your IoT devices to a&nbsp;cloud-based&nbsp;IoT service. Devices send telemetry, such as temperature and humidity and respond to commands such as reboot and change&nbsp;delivery interval. 
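</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="auto">To make the commands part concrete, here is a hedged sketch of triggering such a command from the service side with the Azure CLI. The hub name and device ID are placeholders, and it assumes the device actually implements a direct method named "reboot":</SPAN></P> <P>&nbsp;</P> <LI-CODE lang="bash"># Invoke a direct method (a hypothetical "reboot" command) on a connected device.
az iot hub invoke-device-method --hub-name &lt;your-iot-hub-name&gt; --device-id &lt;your-device-id&gt; --method-name reboot --method-payload '{}'</LI-CODE> <P>&nbsp;</P> <P><SPAN data-contrast="auto">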
Devices can also synchronize their internal state with the service, sharing properties such as device&nbsp;model&nbsp;and operating&nbsp;system.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="auto">Turn your iOS or Android device into an IoT device that seamlessly connects to Azure IoT Central or IoT Hub, showcasing the power and simplicity of IoT Plug and Play.&nbsp;No device modeling experience required.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="auto">With this app you’ll be able to:</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <UL> <LI data-leveltext="" data-font="Wingdings" data-listid="1" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><SPAN data-contrast="auto">Send telemetry from device sensors such as the accelerometer and barometer</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Wingdings" data-listid="1" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="auto">See device properties such as the manufacturer and model number, and update device twins from the cloud</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Wingdings" data-listid="1" aria-setsize="-1" data-aria-posinset="3" data-aria-level="1"><SPAN data-contrast="auto">Execute commands on your device, such as flashing your phone's flashlight, from Azure IoT</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> </UL> <UL> <LI data-leveltext="" data-font="Wingdings" data-listid="1" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><SPAN data-contrast="auto">Upload media to Azure Storage, using File Upload, from your device</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> </UL> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Learn more and download</SPAN></A><SPAN data-contrast="auto">&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><STRONG><SPAN data-contrast="auto">Learn&nbsp;more:&nbsp;Microsoft Developers featured resources repository on GitHub</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">This month the Microsoft Developers&nbsp;Featured&nbsp;Resources repository on GitHub is&nbsp;spotlighting&nbsp;what’s new and what’s next in IoT and&nbsp;DevSecOps.&nbsp;Whether you’re an experienced cloud developer or just getting started with IoT, these demos, projects, and training tools will help you develop next-generation IoT solutions today.&nbsp;</SPAN><SPAN 
data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="none">Watch and&nbsp;star&nbsp;this&nbsp;resource&nbsp;repo&nbsp;from Microsoft Developers to keep up with&nbsp;the&nbsp;latest&nbsp;versions:&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Visit the GitHub repo</SPAN></A><SPAN data-contrast="none">&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><EM><SPAN class="TextRun SCXW191717812 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW191717812 BCX8">Paul DeCarlo is a Principal Cloud Developer Advocate for Microsoft and Professor for the Bauer College of Business at the University of Houston. His current technology interests focus on Internet of Things, Cloud Applications, and App Development.&nbsp;</SPAN><SPAN class="NormalTextRun SCXW191717812 BCX8">Additionally,</SPAN><SPAN class="NormalTextRun SCXW191717812 BCX8">&nbsp;he is an experienced startup founder and on weekends Paul performs as lead vocalist in a Houston&nbsp;</SPAN><SPAN class="NormalTextRun SCXW191717812 BCX8">area&nbsp;</SPAN><SPAN class="NormalTextRun SCXW191717812 BCX8">rock band.</SPAN></SPAN><SPAN class="EOP SCXW191717812 BCX8" data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></EM></P> Wed, 04 Aug 2021 00:43:51 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-iot-product-feature-updates/ba-p/2599072 pdecarlo 2021-08-04T00:43:51Z Introducing Azure Video Analyzer Preview https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/introducing-azure-video-analyzer-preview/ba-p/2511971 <P>Last year we launched the preview of <A href="#" target="_self" rel="noopener noreferrer">Live Video Analytics</A> (LVA) to enable you to build video analytics solutions at the edge. Based on feedback from several of you we worked on enhancing LVA and are pleased to announce the availability of <A href="#" target="_self" rel="noopener noreferrer">Azure Video Analyzer</A> as an <A href="#" target="_self" rel="noopener noreferrer">Azure Applied AI Service</A>. As an Azure Applied AI service, AVA provides a platform for solution builders to build human centric semi-autonomous business environments with video analytics. AVA enables businesses to reduce the cost of their business operations using existing video cameras that are already deployed in their business environments.</P> <P>&nbsp;</P> <P>Many businesses take a manual approach to monitoring mundane business operations. This is prone to a higher degree of error and miss-rate due to human error and the fact that assessing the criticality of a visual event sometimes requires corelating multiple signals in real-time. AVA, in conjunction with other Azure services, addresses this challenge by providing a mechanism for businesses to automate mundane visual observations of operations and enable employees to focus on the critical tasks at hand.</P> <P>&nbsp;</P> <P>AVA is a hybrid service spanning the edge and the cloud. The edge component of AVA is available via the <A href="#" target="_blank" rel="noopener noreferrer">Azure marketplace</A> and is referred to as “Azure Video Analyzer Edge,” and it can be used on any X64 or ARM 64 Linux device. 
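</P> <P>&nbsp;</P> <P>A trivial but useful first check when choosing a host is confirming that the device's CPU architecture matches one of those two targets. This is just a generic Linux command, not anything AVA-specific:</P> <P>&nbsp;</P> <LI-CODE lang="bash"># On the candidate edge device: expect "x86_64" (X64) or "aarch64" (ARM64).
uname -m</LI-CODE> <P>&nbsp;</P> <P>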
AVA Edge enables capturing live video from any <A href="#" target="_blank" rel="noopener nofollow noreferrer">RTSP</A> enabled video sensor, analysis of live video using AI of choice, and publishing of results to edge or cloud applications. AVA Cloud Service is a managed cloud platform that offers REST APIs for secure management of AVA Edge module and video management functionality. This provides flexibility to build video analytics enabled IoT applications that can be operated locally from within a business environment as well as from a centralized remote corporate location.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="MilanGada_0-1625256750059.png" style="width: 597px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293190i7F47C538986BA803/image-dimensions/597x288?v=v2" width="597" height="288" role="button" title="MilanGada_0-1625256750059.png" alt="MilanGada_0-1625256750059.png" /></span></P> <P>&nbsp;</P> <P>AVA can be used with Microsoft provided AI services such as Cognitive Service <A href="#" target="_blank" rel="noopener noreferrer">Custom Vision</A> and <A href="#" target="_blank" rel="noopener noreferrer">Spatial Analysis</A> as well as AI services provided by companies such as <A href="#" target="_blank" rel="noopener noreferrer">Intel</A>. It is also possible to build and integrate a custom AI service that incorporates <A href="#" target="_blank" rel="noopener noreferrer">open-source</A> custom AI models. Business logic that corelates AI insights from AVA with signals from other IoT sensors can be integrated to drive custom business workflows.</P> <P>&nbsp;</P> <P>AVA has been used by companies such as <A href="#" target="_blank" rel="noopener noreferrer">DOW Inc.</A>, <A href="#" target="_blank" rel="noopener noreferrer">Lufthansa CityLine</A>, and <A href="#" target="_blank" rel="noopener noreferrer">Telstra</A> to solve problems such as chemical leak detection, optimizing aircraft turn-around, and traffic analytics. AVA has enabled DOW get closer than ever to reaching their safety goal of zero safety-related incidents. In the case of Lufthansa CityLine, AVA has enabled human coordinators to make better data-driven decisions to reduce aircraft turnaround time, thus reducing costs and improving customer satisfaction significantly. Telstra has been able to unlock new 5G business opportunities with the combination of Azure Video Analyzer, Azure Percept, and Azure Stack Edge.</P> <P>&nbsp;</P> <P>AVA offers all capabilities that were available in LVA and more. A short summary of some of the prominent capabilities are as follows:</P> <P>&nbsp;</P> <H2 id="toc-hId--464256612">Process live video at the Edge</H2> <P>AVA Edge can be deployed on any Azure IoT Edge enabled X64 or AMD64 Linux device. Live video from existing cameras can be directed to AVA and processed on the edge device within the business environment’s network. 
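</P> <P>&nbsp;</P> <P>Before pointing a camera at AVA, it is worth confirming that the edge device can actually reach the camera's RTSP stream. Below is a hypothetical sketch using ffprobe; the camera address and stream path are placeholders, and ffmpeg/ffprobe is assumed to be installed on the device:</P> <P>&nbsp;</P> <LI-CODE lang="bash"># From the edge device: probe the camera's RTSP stream over TCP and print its stream details.
ffprobe -hide_banner -rtsp_transport tcp -i "rtsp://&lt;camera-ip&gt;:554/&lt;stream-path&gt;"</LI-CODE> <P>&nbsp;</P> <P>The video processing itself still happens entirely on that local device. 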
In turn, this provides organizations with a solution that considers limited bandwidth and potential internet connectivity issues, all while maintaining the privacy of their environments.</P> <P>&nbsp;</P> <H2 id="toc-hId-2023256221">Analyze video with AI of choice</H2> <P>As mentioned earlier, AVA can be integrated with Microsoft provided AI or with AI that is custom built to solve a niche problem, such as Intel’s <A href="#" target="_blank" rel="noopener noreferrer">OpenVINO™ DL Streamer</A> that provides capabilities such as detecting objects (a person, a vehicle, or a bike), object classification (vehicle attributions) and object tracking (person, vehicle and bike).</P> <P>&nbsp;</P> <P>Azure AI is based on Microsoft’s <A href="#" target="_blank" rel="noopener noreferrer">Responsible AI Principles</A> and the <A href="#" target="_blank" rel="noopener noreferrer">Transparency Note</A> provides additional guidance on designing response AI integrations.</P> <P>&nbsp;</P> <H2 id="toc-hId-215801758">Flexible video workflows</H2> <P>In addition to the <A href="#" target="_blank" rel="noopener noreferrer">existing building blocks</A> provided by LVA, the current release of AVA enables a variety of live video workflows via new building blocks such as <A href="#" target="_blank" rel="noopener noreferrer">Object Tracker</A>, <A href="#" target="_blank" rel="noopener noreferrer">Line Crossing</A>, <A href="#" target="_blank" rel="noopener noreferrer">Cognitive Services</A> extension processor and <A href="#" target="_blank" rel="noopener noreferrer">Video Sink.</A></P> <P>&nbsp;</P> <H2 id="toc-hId--1591652705">Simplify app development with easy-to-use widgets</H2> <P>AVA provides a <A href="#" target="_blank" rel="noopener noreferrer">video playback widget</A> which simplifies app development with a secure video player and enables visualization of AI metadata overlaid on video.</P> <P>&nbsp;</P> <P>Azure Video Analyzer offers all these capabilities and more. Take a look at Azure Video Analyzer <A href="#" target="_blank" rel="noopener noreferrer">product page</A> and <A href="#" target="_blank" rel="noopener noreferrer">documentation page</A> to learn more.</P> Thu, 29 Jul 2021 15:00:00 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/introducing-azure-video-analyzer-preview/ba-p/2511971 Milan Gada 2021-07-29T15:00:00Z Azure Sphere SDK version 21.07 Update 1 is now available https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-sphere-sdk-version-21-07-update-1-is-now-available/ba-p/2594510 <P>Azure Sphere SDK version 21.07 Update 1 for Windows and for Linux is now published and available for download. The previous 21.07 SDK released earlier this month incorrectly removed some deprecated parameters from the Azure Sphere CLI. The 21.07 Update 1 SDK reinstates these deprecated parameters to the CLI for backward compatibility purposes. For more information, see <A href="#" target="_blank" rel="noopener">Important changes (deprecations) in Azure Sphere CLI</A>.</P> <P>&nbsp;</P> <P>If you had installed the previous release of the 21.07 SDK, you can re-install to get the updated version. 
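</P> <P>&nbsp;</P> <P>If you are not sure which version you currently have installed, the CLI can tell you; a minimal check (the exact output format depends on the installed version):</P> <P>&nbsp;</P> <LI-CODE lang="bash"># Print the version of the installed Azure Sphere CLI/SDK.
azsphere show-version</LI-CODE> <P>&nbsp;</P> <P>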
To install the latest SDK, see the installation Quickstart for Windows or Linux:</P> <UL> <LI><A href="#" target="_blank" rel="noopener">Quickstart: Install the Azure Sphere SDK for Windows</A></LI> <LI><A href="#" target="_blank" rel="noopener" data-linktype="relative-path">Quickstart: Install the Azure Sphere SDK for Linux</A></LI> </UL> <P>&nbsp;</P> <P><SPAN>For more information on Azure Sphere OS feeds and setting up an evaluation device group,&nbsp;see&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN>Azure Sphere OS feeds</SPAN></A><SPAN> and </SPAN><A href="#" target="_blank" rel="noopener">Set up devices for OS evaluation</A><SPAN>.</SPAN></P> <P>&nbsp;</P> <P>For self-help technical inquiries, please visit <A href="#" target="_blank" rel="noopener">Microsoft Q&amp;A</A><SPAN> o</SPAN>r <A href="#" target="_blank" rel="noopener">Stack Overflow</A>. If you require technical support and have a support plan, please submit a support ticket in <A href="#" target="_blank" rel="noopener">Microsoft Azure Support</A> or work with your Microsoft Technical Account Manager. If you would like to purchase a support plan, please explore the <A href="#" target="_blank" rel="noopener">Azure support plans</A>.</P> Wed, 28 Jul 2021 23:26:55 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-sphere-sdk-version-21-07-update-1-is-now-available/ba-p/2594510 AzureSphereTeam 2021-07-28T23:26:55Z Deriving real-time intelligence about the physical world with Azure Percept and Azure Digital Twins https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/deriving-real-time-intelligence-about-the-physical-world-with/ba-p/2574104 <P>Imagine that you represent a company that manages facilities for various conferences and exhibitions. Part of your job is to ensure that products exhibited by participants (or customers) are being displayed at the right location assigned to the participant. Not only would you need a view of the facilities, but you would also require real-time intelligence about the displayed products and their locations. Now this is possible through the real-time intelligence of Azure Percept and real-world view of Azure Digital Twins.</P> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener">Azure Percept</A> is a comprehensive platform for creating edge AI solutions. It is great at running AI at the edge and communicating back to Azure IoT Hub.</P> <P><A href="#" target="_blank" rel="noopener">Azure Digital Twins</A> is a comprehensive platform for creating digital representation of real-world things. 
It is an amazing technology for creating a digital representation of the physical world.</P> <P>&nbsp;</P> <P>When you combine the real-time intelligence of Azure Percept with real-world view of Azure Digital Twins, you get an intelligent digital representation of the real world.</P> <P>&nbsp;</P> <P>In this post we will see how we can collect real-time AI on the edge through Azure Percept and send it to Azure Digital Twins, adding real-time intelligence to the digital representation of real-world entities such as places, devices, and people.</P> <P>&nbsp;</P> <P>The following figure shows how real-time AI on the Edge can be combined with Azure Digital Twins to yield real-time data visualization:</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mnabeel_48-1626935892048.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297571iACF257528777AB5B/image-size/large?v=v2&amp;px=999" role="button" title="mnabeel_48-1626935892048.png" alt="mnabeel_48-1626935892048.png" /></span></P> <P>The goal of this post is to show how to build an end-to-end solution that leverages Azure Percept and Azure Digital Twins to build a real-time intelligent data visualization.</P> <P>&nbsp;</P> <H2>Architecture</H2> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mnabeel_0-1626942360023.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297601i9AE5735A7F42C48E/image-size/large?v=v2&amp;px=999" role="button" title="mnabeel_0-1626942360023.png" alt="mnabeel_0-1626942360023.png" /></span></P> <P>The main components for the architecture are:</P> <UL> <LI>The <A href="#" target="_blank" rel="noopener">Azure Percept</A> DK that is configured to run AI on the Edge.</LI> <LI><A href="#" target="_blank" rel="noopener">Azure IoT Hub</A> that is configured to receive messages from the Azure Percept device.</LI> <LI>An <A href="#" target="_blank" rel="noopener">Azure Digital Twins</A> instance deployed with <A href="#" target="_blank" rel="noopener">DTDL models</A> representing the physical entities monitored by the solution.</LI> <LI><A href="#" target="_blank" rel="noopener">Azure Event Hub</A> setup to receive events/updates from the Azure Digital Twins instance. Azure Event Hub allows telemetry and event data to be made available to various stream-processing infrastructures and can receive millions of events per second which is exactly what we need to process a constant stream of events from the Azure Percept DK.</LI> <LI>Two <A href="#" target="_blank" rel="noopener">Azure Functions</A>, one which serves as a bridge between Azure Percept and the Azure Digital Twins instance, and another one that listens to the Azure Event Hubs to receive updates from the Azure Digital Twins instance.</LI> <LI>A front-end Dashboard application to receive the updates from Azure Digital Twins through EventHub. Any front-end platform can be used to receive these updates and provide data visualization. 
In our example, we are using a <A href="#" target="_blank" rel="noopener">SignalR</A> based Dashboard application.</LI> </UL> <P>&nbsp;</P> <H2>Setup</H2> <P>&nbsp;</P> <H3>Azure Percept DK Setup</H3> <P>If you do not have an Azure Percept DK, you can use any IoT device (edge or leaf) that sends inference data to Azure IoT Hub.</P> <P>To learn more about how to configure Azure Percept devices running AI, visit <A href="#" target="_blank" rel="noopener">Create a no-code vision solution in Azure Percept</A>.</P> <P>The Azure Percept device’s job is to look for objects of interest using an AI model that detects various objects such as people, bottles, etc. For this post we are using an example with two use cases:</P> <UL> <LI>Detect objects that are compliant: the presence of a “bottle” is expected.</LI> <LI>Detect objects that are not compliant: the presence of a “person” is not expected as this would raise safety concerns.</LI> </UL> <P>The Azure Percept DK is constantly looking for compliant objects (example: bottle) and non-compliant objects (example: person).</P> <P>Here is how Azure Percept is detecting a “bottle”:</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mnabeel_51-1626936536300.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297574i9CFA3F04C0D130B8/image-size/large?v=v2&amp;px=999" role="button" title="mnabeel_51-1626936536300.png" alt="mnabeel_51-1626936536300.png" /></span></P> <P>Azure Percept is constantly sending the inference data (information about detected objects such as bottle, person) to Azure IoT Hub as <A href="#" target="_blank" rel="noopener">device-to-cloud</A> messages.</P> <P>Here is an example of data sent from the Azure Percept DK to Azure IoT Hub:</P> <P>Data coming from Azure Percept</P> <P>&nbsp;</P> <LI-CODE lang="json">{
  "body": {
    "NEURAL_NETWORK": [
      {
        "bbox": [ 0.32, 0.084, 0.636, 1 ],
        "label": "person",
        "confidence": "0.889648",
        "timestamp": "1622658821393258467"
      }
    ]
  },
  "enqueuedTime": "2021-06-02T18:33:41.866Z"
}</LI-CODE> <P>&nbsp;</P> <P>This data will be paired up with Azure Digital Twins to yield a real-time AI data visualization of the real-world digital representation.</P> <P>&nbsp;</P> <H3>Azure Digital Twins Setup</H3> <P>Azure Digital Twins setup is divided into two distinct parts.</P> <UL> <LI>The first part deals with the setup of a model for Azure Digital Twins. The Digital Twins Definition Language (DTDL) models are used to define the digital representation of real-world entities indicating their properties and relationships. The digital entities can represent people, places, and things. For more details on DTDL models visit the <A href="#" target="_blank" rel="noopener">Azure Digital Twins documentation.</A></LI> <LI>The second part is the provisioning of the Azure Digital Twins instance.</LI> </UL> <P>In our example scenario we developed a model that has two components. One component of the model will be the site (mentioned as “PerceptSite” in the model) where the exhibition is taking place. The second component is the model for the floor (“PerceptSiteFloor”) which is assigned to a particular exhibition participant.</P> <P>&nbsp;</P> <H3>Azure Digital Twins Instance Setup</H3> <P>In the following steps, we are using the CLI to set up the Azure Digital Twins instance.</P>
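<P>&nbsp;</P> <P>One small prerequisite worth calling out: the az dt command group used below ships with the Azure IoT extension for the Azure CLI, so make sure it is installed before running the script:</P> <P>&nbsp;</P> <LI-CODE lang="powershell"># Sign in and add the Azure IoT CLI extension that provides the "az dt" commands
az login
az extension add --name azure-iot</LI-CODE> <P>&nbsp;</P> <P>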
You can find details for this at: <A href="#" target="_blank" rel="noopener">Set up an instance and authentication (CLI) - Azure Digital Twins | Microsoft Docs.</A></P> <P>For steps using the Azure Portal, visit <A href="#" target="_blank" rel="noopener">Set up an instance and authentication (portal) - Azure Digital Twins | Microsoft Docs</A>.</P> <P>&nbsp;</P> <LI-CODE lang="powershell">$rgname = "&lt;your-prefix&gt;"
$random = "&lt;your-prefix&gt;" + $(get-random -maximum 10000)
$dtname = $random + "-digitaltwin"
$location = "westus2"
$username = "&lt;your-username&gt;@&lt;your-domain&gt;"
$functionstorage = $random + "storage"
$telemetryfunctionname = $random + "-telemetryfunction"
$twinupdatefunctionname = $random + "-twinupdatefunction"

# Create resource group
az group create -n $rgname -l $location

# Create Azure Digital Twins instance
az dt create --dt-name $dtname -g $rgname -l $location

# Create role assignment for user needed to access Azure Digital Twins instance
az dt role-assignment create -n $dtname -g $rgname --role "Azure Digital Twins Data Owner" --assignee $username -o json</LI-CODE> <P>&nbsp;</P> <P><EM>Model Setup </EM></P> <P>&nbsp;</P> <LI-CODE lang="powershell">$sitemodelid = "dtmi:percept:DigitalTwins:Site;1"

# Creating Azure Digital Twins model for Site
az dt model delete --dt-name $dtname --dtmi $sitemodelid
$sitemodelid = $(az dt model create -n $dtname --models .\SiteInterface.json --query [].id -o tsv)

$sitefloormodelid = "dtmi:percept:DigitalTwins:SiteFloor;1"

# Creating Azure Digital Twins model for Site floor
$sitefloormodelid = $(az dt model create -n $dtname --models .\SiteFloorInterface.json --query [].id -o tsv)

# Creating twin: PerceptSite
az dt twin create -n $dtname --dtmi $sitemodelid --twin-id "PerceptSite"

# Creating twin: PerceptSiteFloor
az dt twin create -n $dtname --dtmi $sitefloormodelid --twin-id "PerceptSiteFloor"

$relname = "rel_has_floors"

# Creating relationships
az dt twin relationship create -n $dtname --relationship $relname --twin-id "PerceptSite" --target "PerceptSiteFloor" --relationship-id "Site has floors"</LI-CODE> <P>&nbsp;</P> <P>Here is what this basic model looks like once it is created in Azure Digital Twins:</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mnabeel_0-1626939036437.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297587i34AAB34945235363/image-size/large?v=v2&amp;px=999" role="button" title="mnabeel_0-1626939036437.png" alt="mnabeel_0-1626939036437.png" /></span></P> <P>&nbsp;</P> <P>The above screenshot is taken from the <A href="#" target="_blank" rel="noreferrer noopener">Azure Digital Twins Explorer</A> view. Azure Digital Twins Explorer is a developer tool for the Azure Digital Twins service.</P>
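<P>&nbsp;</P> <P>The post does not include the contents of <EM>SiteInterface.json</EM> and <EM>SiteFloorInterface.json</EM>, so here is a minimal, hedged sketch of what the floor interface could look like. The property names (FloorId, FloorName, Label, Confidence, timestamp) are assumptions inferred from the twin updates made by the Azure Functions later in this post:</P> <P>&nbsp;</P> <LI-CODE lang="powershell"># Hypothetical DTDL v2 interface for the floor twin, written to the file
# referenced by the model-creation commands above
$siteFloorInterface = @'
{
  "@id": "dtmi:percept:DigitalTwins:SiteFloor;1",
  "@type": "Interface",
  "@context": "dtmi:dtdl:context;2",
  "displayName": "PerceptSiteFloor",
  "contents": [
    { "@type": "Property", "name": "FloorId", "schema": "string" },
    { "@type": "Property", "name": "FloorName", "schema": "string" },
    { "@type": "Property", "name": "Label", "schema": "string" },
    { "@type": "Property", "name": "Confidence", "schema": "string" },
    { "@type": "Property", "name": "timestamp", "schema": "string" }
  ]
}
'@
Set-Content -Path .\SiteFloorInterface.json -Value $siteFloorInterface</LI-CODE> <P>&nbsp;</P> <P>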
The Explorer lets you connect to an Azure Digital Twins instance to understand, visualize, and modify your digital twin data.</P> <H2>Functions Apps Setup</H2> <P>&nbsp;</P> <H3>Azure Digital Twins Ingestion App</H3> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mnabeel_1-1626939132912.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297588i1460142833BB13FB/image-size/large?v=v2&amp;px=999" role="button" title="mnabeel_1-1626939132912.png" alt="mnabeel_1-1626939132912.png" /></span></P> <P>&nbsp;</P> <P>The Azure Digital Twins Ingestion Function app will receive updates from Azure IoT Hub and forward those updates to Azure Digital Twins. The functions used in the code can be found in the Azure.DigitalTwins.Core library.</P> <P>&nbsp;</P> <P>Here is the source code for the Azure Digital Twins Ingestion App:</P> <P>&nbsp;</P> <LI-CODE lang="csharp">namespace TwinIngestionFunctionApp
{
    using Azure;
    using Azure.Core.Pipeline;
    using Azure.DigitalTwins.Core;
    using Azure.Identity;
    using Microsoft.Azure.EventHubs;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Extensions.Logging;
    using Newtonsoft.Json;
    using Newtonsoft.Json.Linq;
    using System;
    using System.Net.Http;
    using System.Text;
    using IoTHubTrigger = Microsoft.Azure.WebJobs.EventHubTriggerAttribute;

    public class TwinsFunction
    {
        private static readonly string adtInstanceUrl = Environment.GetEnvironmentVariable("ADT_SERVICE_URL");
        private static HttpClient httpClient = new HttpClient();

        [FunctionName("TwinsFunction")]
        public async void Run([IoTHubTrigger("messages/events", Connection = "EventHubConnectionString")] EventData message, ILogger log)
        {
            if (adtInstanceUrl == null)
            {
                log.LogError("Application setting \"ADT_SERVICE_URL\" not set");
                return;
            }

            try
            {
                // Authenticate with Digital Twins
                ManagedIdentityCredential cred = new ManagedIdentityCredential("https://digitaltwins.azure.net");
                DigitalTwinsClient client = new DigitalTwinsClient(new Uri(adtInstanceUrl), cred, new DigitalTwinsClientOptions { Transport = new HttpClientTransport(httpClient) });

                if (message != null &amp;&amp; message.Body != null)
                {
                    log.LogInformation(Encoding.UTF8.GetString(message.Body.Array));

                    // Reading the AI inference data from the IoT Hub message JSON
                    JObject deviceMessage = (JObject)JsonConvert.DeserializeObject(Encoding.UTF8.GetString(message.Body.Array));
                    string label = deviceMessage["NEURAL_NETWORK"][0]["label"].ToString();
                    string confidence = deviceMessage["NEURAL_NETWORK"][0]["confidence"].ToString();
                    string timestamp = deviceMessage["NEURAL_NETWORK"][0]["timestamp"].ToString();

                    if (!(string.IsNullOrEmpty(label) &amp;&amp; string.IsNullOrEmpty(confidence) &amp;&amp; string.IsNullOrEmpty(timestamp)))
                    {
                        // Patch the PerceptSiteFloor twin with the latest inference result
                        var updateTwinData = new JsonPatchDocument();
                        updateTwinData.AppendAdd("/Label", label);
                        updateTwinData.AppendAdd("/Confidence", confidence);
                        updateTwinData.AppendAdd("/timestamp", timestamp);
                        await client.UpdateDigitalTwinAsync("PerceptSiteFloor", updateTwinData);
                        log.LogInformation($"Updated Device: PerceptSiteFloor with {updateTwinData} at: {DateTime.Now.ToString()}");
                    }
                }
            }
            catch (Exception e)
            {
                log.LogError(e.Message);
            }
        }
    }
}</LI-CODE> <P>&nbsp;</P> <P>Here is the video that shows how the inference data from Azure Percept is being received by the Azure Digital Twins instance:</P> <P>&nbsp;</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width:
75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/xCfVnBlp9uE?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="Video showing how the inference data from Azure Percept is being received by Azure Digital Twins"></IFRAME></DIV> <P>&nbsp;</P> <H2>Publishing messages from Azure Digital Twins to Event Hub</H2> <P>&nbsp;</P> <P>In the previous sections we have seen how the messages from Azure Percept are forwarded to Azure Digital Twins through Azure IoT Hub using an Azure Function. In this section we will see how to prepare the Azure Digital Twins instance to send messages to Event Hub. This involves the following:</P> <UL> <LI>Setting up Event Hub</LI> <LI>Setting up Azure Digital Twins to send messages to Event Hub.</LI> </UL> <H3>Setting up Event Hub</H3> <P>The setting up of the Event Hub involves creating an Event Hub Namespace and creating an Event Hub.</P> <P>To create Event Hub Namespace refer to details mentioned at <A href="#" target="_blank" rel="noopener">Azure Quickstart - Create an event hub using the Azure portal - Azure Event Hubs | Microsoft Docs</A></P> <P>To create Event Hub refer to details mentioned at <A href="#" target="_blank" rel="noopener">Azure Quickstart - Create an event hub using the Azure portal - Azure Event Hubs | Microsoft Docs</A></P> <P>Once the Event Hub is created, we can proceed to set up the route from Azure Digital Twins.</P> <P>&nbsp;</P> <H3>Setting up Azure Digital Twins to send messages to Event Hub</H3> <P>Use the following steps to setup Azure Digital Twins to send messages to Event Hub:</P> <OL> <LI>Create Azure Digital Twins Endpoint</LI> <LI>Create Azure Digital Twins Event Route</LI> </OL> <P>&nbsp;</P> <P><I>Create Azure Digital Twins Endpoint</I></P> <P>For this step will be using the Event Hub Namespace and Event Hub that was created in previous steps.</P> <P>Following image illustrates the details of creating an Azure Digital Twins Endpoint:</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mnabeel_2-1626939490703.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297589i7FE5E91D43DA71E5/image-size/large?v=v2&amp;px=999" role="button" title="mnabeel_2-1626939490703.png" alt="mnabeel_2-1626939490703.png" /></span></P> <P>&nbsp;</P> <P><I>Create Azure Digital Twins Event Route</I></P> <P>The following image illustrates the details of creating an Azure Digital Twins Event Route:</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mnabeel_3-1626939550331.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297591iED1A28CA64F5A986/image-size/large?v=v2&amp;px=999" role="button" title="mnabeel_3-1626939550331.png" alt="mnabeel_3-1626939550331.png" /></span></P> <P>The above steps will result in our Azure Digital Twins instance routing messages to our Event hub.</P> <P>&nbsp;</P> <H2>Azure Digital Twins Update App</H2> <P>&nbsp;</P> <P>The previous sections explained the steps to receive messages from Azure Percept to Event Hub using Azure IoT Hub and Azure Digital Twins. In this section we focus on the Azure function that will be receiving events from Event Hub. 
These events are showing the updates that Azure Digital Twins instance is receiving.</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mnabeel_4-1626939622770.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297592i4E71AB672EE0FCCA/image-size/large?v=v2&amp;px=999" role="button" title="mnabeel_4-1626939622770.png" alt="mnabeel_4-1626939622770.png" /></span></P> <P>The purpose of Azure Function for receiving Azure Digital Twins updates through Event Hub is to create a source for any further processing. For example, this can be a source for 3d modeling platform or an API backend.</P> <P>&nbsp;</P> <P>Here is the source code for the Azure Function that processes the Event Hub events coming from Azure Digital Twins:</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="csharp">namespace TwinsUpdateFunctionApp { using System; using System.Collections.Generic; using System.Linq; using System.Net.Http; using System.Text; using System.Threading.Tasks; using Microsoft.Azure.EventHubs; using Microsoft.Azure.WebJobs; using Microsoft.Extensions.Logging; using Newtonsoft.Json; using Newtonsoft.Json.Linq; using TwinsUpdateFunctionApp.model; public static class TwinsUpdateFunction { private static readonly string twinReceiverUrl = Environment.GetEnvironmentVariable("TWINS_RECEIVER_URL"); [FunctionName("TwinsUpdateFunction")] public static async Task Run([EventHubTrigger("[your-digitaltwin-eventhub]", Connection = "EventHubConnectionString")] EventData[] events, ILogger log) { var exceptions = new List&lt;Exception&gt;(); List&lt;TwinUpdate&gt; twinUpdates = new List&lt;TwinUpdate&gt;(); foreach (EventData eventData in events) { try { string messageBody = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count); JObject twinMessage = (JObject)JsonConvert.DeserializeObject(messageBody); if (twinMessage["patch"] != null) { TwinUpdate twinUpdate = new TwinUpdate(); twinUpdate.ModelId = twinMessage["modelId"].ToString(); foreach (JToken jToken in twinMessage["patch"]) { if (jToken["path"].ToString().Equals("/FloorId", StringComparison.InvariantCultureIgnoreCase)) { twinUpdate.Floor = jToken["value"].ToString(); } if (jToken["path"].ToString().Equals("/FloorName", StringComparison.InvariantCultureIgnoreCase)) { twinUpdate.FloorName = jToken["value"].ToString(); } if (jToken["path"].ToString().Equals("/Label", StringComparison.InvariantCultureIgnoreCase)) { twinUpdate.Label = jToken["value"].ToString(); } if (jToken["path"].ToString().Equals("/Confidence", StringComparison.InvariantCultureIgnoreCase)) { twinUpdate.Confidence = jToken["value"].ToString(); } if (jToken["path"].ToString().Equals("/timestamp", StringComparison.InvariantCultureIgnoreCase)) { twinUpdate.Timestamp = jToken["value"].ToString(); } } using (HttpClient httpClient = new HttpClient()) { var requestURl = new Uri($"{twinReceiverUrl}?label={twinUpdate.Label}&amp;confidence={twinUpdate.Confidence}&amp;timestamp={twinUpdate.Timestamp}&amp;floorId={twinUpdate.Floor}&amp;floorName={twinUpdate.FloorName}"); StringContent queryString = new StringContent(messageBody); var response = httpClient.PostAsync(requestURl, queryString).Result; } twinUpdates.Add(twinUpdate); } await Task.Yield(); } catch (Exception e) { exceptions.Add(e); } } if (exceptions.Count &gt; 1) throw new AggregateException(exceptions); if (exceptions.Count == 1) throw exceptions.Single(); } } } </LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> 
<P>&nbsp;</P> <P>The above-mentioned code represents an Azure Function that is triggered by an EventHubTrigger. After deserializing the data received from the Digital Twins update, the Azure Function posts the update to a <A class="Hyperlink SCXW203015753 BCX8" href="#" target="_blank" rel="noreferrer noopener">SignalR</A> front-end application using the following code:</P> <P>&nbsp;</P> <LI-CODE lang="csharp">using (HttpClient httpClient = new HttpClient())
{
    // Forward the flattened twin update to the dashboard's receiver endpoint
    var requestURl = new Uri($"{twinReceiverUrl}?label={twinUpdate.Label}&amp;confidence={twinUpdate.Confidence}&amp;timestamp={twinUpdate.Timestamp}&amp;floorId={twinUpdate.Floor}&amp;floorName={twinUpdate.FloorName}");
    StringContent queryString = new StringContent(messageBody);
    var response = httpClient.PostAsync(requestURl, queryString).Result;
}</LI-CODE> <P>&nbsp;</P> <H2>Front-end Dashboard App</H2> <P>&nbsp;</P> <P>The last and most important piece of our architecture is the front-end Dashboard app that will be used to visualize the Azure Digital Twins updates. This is where the value of combining Azure Percept and Azure Digital Twins is revealed. Any front-end platform can be used for this part depending on the requirements and constraints. To demonstrate the Front-end Dashboard App, we are using a SignalR based application with a single page. SignalR makes web pages more dynamic by offering an eventing mechanism from server to client. Here is a link to a great tutorial on SignalR: <A href="#" target="_blank" rel="noopener">Get started with ASP.NET Core SignalR | Microsoft Docs</A></P> <P>&nbsp;</P> <P>Once the Dashboard App receives the data it automatically updates its view. The view contains a top section showing the “Data visualization through Azure Digital Twins updates” and a bottom section showing the “Raw Azure Digital Twins updates”. The top section is an example of how you can present the data visualization on a view using the raw data mentioned in the bottom section.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mnabeel_0-1626940171467.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297593iB1C61A2948A6F4FC/image-size/large?v=v2&amp;px=999" role="button" title="mnabeel_0-1626940171467.png" alt="mnabeel_0-1626940171467.png" /></span></P> <P>&nbsp;</P> <P>For our example we had two use cases representing compliant and non-compliant object detection.
Here is how the data visualization on the front-end app will look like for compliant and non-compliant objects:</P> <P>&nbsp;</P> <H3>Compliant object detection:</H3> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mnabeel_1-1626940252929.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297594i3B919436115E1E88/image-size/large?v=v2&amp;px=999" role="button" title="mnabeel_1-1626940252929.png" alt="mnabeel_1-1626940252929.png" /></span></P> <P>&nbsp;</P> <H3>Non-compliant object detection:</H3> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mnabeel_2-1626940316304.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297595i11CED0517433CF10/image-size/large?v=v2&amp;px=999" role="button" title="mnabeel_2-1626940316304.png" alt="mnabeel_2-1626940316304.png" /></span></P> <P>The source code for the example that we have used for data visualization can be found at: <A class="Hyperlink SCXW105434683 BCX8" href="#" target="_blank" rel="noreferrer noopener">https://aka.ms/AAd96fy</A></P> <P>&nbsp;</P> <H2>Conclusion</H2> <P>Azure Percept and Azure Digital Twins are two distinct Azure services that are built to provide unique values. In this post we have seen how real-time AI with Azure Percept can be combined with the real-word digital representations of Azure Digital Twins to derive pertinent data visualization using real-time intelligence. This post has shown how simple it is to combine both technologies to achieve data visualization that we need.</P> <P>Here is what the final application looks like:</P> <H3>Compliant use case</H3> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mnabeel_3-1626940471402.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297596i81D8AEC953009B65/image-size/large?v=v2&amp;px=999" role="button" title="mnabeel_3-1626940471402.png" alt="mnabeel_3-1626940471402.png" /></span></P> <P>&nbsp;</P> <H3>Non-compliant use case</H3> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mnabeel_4-1626940546176.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297597i36820FBB9F6B7920/image-size/large?v=v2&amp;px=999" role="button" title="mnabeel_4-1626940546176.png" alt="mnabeel_4-1626940546176.png" /></span></P> <P>Following video shows how Azure Percept is running AI on edge to identify objects and then how the Front-end Dashboard App displays the update flowing from Azure Digital Twins:</P> <P>&nbsp;</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/dPw9xI35AA0?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="Video showing how Azure Percept is running AI on the edge to identify objects and then how the Front-end Dashboard App displays the update flowing from Azure Digital Twins"></IFRAME></DIV> <P>&nbsp;</P> <P>Let us know what you think commenting below and to stay informed, subscribe to our post here and follow us on Twitter <A href="#" target="_blank" 
rel="noopener">@MicrosoftIoT</A> .</P> Tue, 27 Jul 2021 15:00:00 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/deriving-real-time-intelligence-about-the-physical-world-with/ba-p/2574104 mnabeel 2021-07-27T15:00:00Z Azure Percept DK : The one about AI Model possibilities https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-dk-the-one-about-ai-model-possibilities/ba-p/2579617 <P class="lia-align-justify">Do you have great ideas for AI solutions, but lack knowledge of Data science, Machine Learning or how to build edge AI solutions?&nbsp; When <A href="#" target="_blank" rel="noopener">Azure Percept</A> was first released I was so excited to get it set up and start training models that I recorded a neat unboxing YouTube <A href="#" target="_blank" rel="noopener">video</A> that shows you how to do that in under 8 minutes. Right out of the box it could do some pretty cool stuff like object detection/image classification that could be used for things like people counting and object recognition and it even includes a progammable voice assistant!&nbsp;</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Percept.jpg" style="width: 975px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/298009i001C8396AD66F843/image-size/large?v=v2&amp;px=999" role="button" title="Percept.jpg" alt="The Azure Percept DK" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">The Azure Percept DK</span></span></P> <P>Today I want to focus on what you get by way of models supported out of the box and what you can bring to the party.</P> <P>&nbsp;</P> <H2><U>1.&nbsp;Out-of-the-box AI Models:</U></H2> <P class="lia-align-justify">The Azure Percept DK's <EM>azureeeyemodule</EM> supports a few AI models out of the box. The default model that runs is <A href="#" target="_blank" rel="noopener">Single Shot Detector (SSD)</A>, trained for general object detection on the <A href="#" target="_blank" rel="noopener">COCO</A> dataset. Azure Percept Studiom also contains sample models for the following applications:</P> <UL class="lia-align-justify"> <LI>people detection</LI> <LI>vehicle detection</LI> <LI>general object detection</LI> <LI>products-on-shelf detection</LI> </UL> <P class="lia-align-justify">With pre-trained models, no coding or training data collection is required, simply&nbsp;<A href="#" target="_blank" rel="noopener">deploy your desired model</A>&nbsp;to your Azure Percept DK from the portal and open your devkit’s&nbsp;<A href="#" target="_blank" rel="noopener">video stream</A>&nbsp;to see the model inferencing in action.</P> <P class="lia-align-justify">&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="OOBmodels.png" style="width: 308px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/298010i71A4311FA2D3C99B/image-size/medium?v=v2&amp;px=400" role="button" title="OOBmodels.png" alt="The out-of-the-box AI models included in Azure Percept Studio" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">The out-of-the-box AI models included in Azure Percept Studio</span></span></P> <P align="center">&nbsp;</P> <H2><U>2. Other officially supported models:</U></H2> <P class="lia-align-justify">Not all models are found inside the Azure Percept Studio. 
Here are the <A href="#" target="_blank" rel="noopener">links</A> for the models that we officially guarantee (because we host them and test them on every release).&nbsp; Below are some of the fun ones you may want to try out:</P> <P class="lia-align-justify">&nbsp;</P> <UL> <LI> <H3>OpenPose - &nbsp;multi-person 2D pose estimation network (based on the OpenPose approach)&nbsp;</H3> </LI> </UL> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Pose.png" style="width: 591px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/298023iBFA132EE342A156E/image-size/large?v=v2&amp;px=999" role="button" title="Pose.png" alt="Pose.png" /></span></P> <P>&nbsp;</P> <UL> <LI> <H3>OCR - Text detector based on&nbsp;<A href="#" target="_blank" rel="noopener">PixelLink</A>&nbsp;architecture with&nbsp;<A href="#" target="_blank" rel="noopener">MobileNetV2</A></H3> </LI> </UL> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="OCR.png" style="width: 591px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/298021i4010FC2B719DFCD3/image-size/large?v=v2&amp;px=999" role="button" title="OCR.png" alt="OCR.png" /></span>&nbsp;</P> <P>&nbsp;</P> <P class="lia-align-justify">You will notice that many come from our collaboration with Intel and their <A href="#" target="_blank" rel="noopener">Open Model Zoo</A> through support for the <A href="#" target="_blank" rel="noopener">OpenVINO™ toolkit</A>.&nbsp; We’ve made it really easy to use these models, since all you have to do is paste the URLs into your IOT Hub <A href="#" target="_blank" rel="noopener">Module Identity Twin</A> associated with your Percept Device as the value for "ModelZipUrl" or download them and host them in say something like container storage and point to that.&nbsp;</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="DeviceTwin.png" style="width: 676px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/298022iA14DDDF8944243ED/image-size/large?v=v2&amp;px=999" role="button" title="DeviceTwin.png" alt="Azure Percept Device Twin file found in the associated IOT Hub" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Azure Percept Device Twin file found in the associated IOT Hub</span></span></P> <P align="center">&nbsp;</P> <P>If you are worried about the security of your AI model, Azure Percept provides protection&nbsp;<A href="#" target="_blank" rel="noopener">at rest</A>&nbsp;and&nbsp;<A href="#" target="_blank" rel="noopener">in transit</A>, so check out <A href="#" target="_blank" rel="noopener">Protecting your AI model and sensor data</A>.</P> <P>&nbsp;</P> <H2><U>3.&nbsp;Build and train your own custom vision model:</U></H2> <P class="lia-align-justify">Azure Percept Studio enables you to build and deploy your own custom computer vision solutions with no coding required. 
This typically involves the following:</P> <UL class="lia-align-justify"> <LI>Create a vision project in&nbsp;<A href="#" target="_blank" rel="noopener">Azure Percept Studio</A></LI> <LI>Collect training images with your devkit</LI> <LI>Label your training images in&nbsp;<A href="#" target="_blank" rel="noopener">Custom Vision</A></LI> <LI>Train your custom object detection or classification model</LI> <LI>Deploy your model to your devkit</LI> <LI>Improve your model by setting up retraining</LI> </UL> <P class="lia-align-justify">A full <A href="#" target="_blank" rel="noopener">article</A> detailing all the steps you need to train a model so you can do something like sign language recognition.&nbsp; I did in fact record a video a while back that shows you not only how to do this, but also transcribe it into text using PowerApps all with no-code.</P> <P class="lia-align-justify">&nbsp;</P> <P class="lia-align-justify">&nbsp;</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/2jEHcBWHL3I?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="Using Azure Percept DK to transcribe American Sign Language without writting code - Teaser"></IFRAME></DIV> <H2>&nbsp;</H2> <H2><U>4. Bring your own model:</U></H2> <P class="lia-align-justify">If you do have data science skills and/or you have already built out your own solution, trained your model and want to deploy it on Azure Percept, that too is possible.&nbsp; There is a great <A href="#" target="_blank" rel="noopener">tutorial/Lab</A> that will take you though some Jupyter Notebooks to capture images, label and train your model and package and deploy it to Azure Percept without using the Azure Percept Studio or custom vision projects.&nbsp; This material leverages the Azure Machine Learning Service we have developed to do the heavy lifting for compute, labelling and training but for the best part mostly orchestrated right out of the Jupyter notebook.&nbsp;</P> <P class="lia-align-justify">&nbsp;</P> <P class="lia-align-justify"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Jupyter.png" style="width: 975px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/298029iB06E66D99FA6CC91/image-size/large?v=v2&amp;px=999" role="button" title="Jupyter.png" alt="Using Jupyter notebooks to capture, train and build models for Azure Percept using Azure ML" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Using Jupyter notebooks to capture, train and build models for Azure Percept using Azure ML</span></span></P> <P>&nbsp;</P> <P class="lia-align-justify">There is another tutorial that will walk you through the<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">banana U-Net semantic segmentation notebook</A>, which will train a semantic segmentation model from scratch using a small dataset of bananas, and then convert the resulting ONNX model to OpenVINO IR, then to OpenVINO Myriad X .blob format, and finally deploy the model to the device.</P> <P class="lia-align-justify">&nbsp;</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME 
src="https://www.youtube-nocookie.com/embed/qnF1QyiF4p0?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="Azure Percept: AI Volumetric Detection of Fresh Produce in a Grocery Store"></IFRAME></DIV> <P class="lia-align-justify">&nbsp;</P> <H2>Conclusion/Resources:</H2> <P>Hopefully it is now clear what the various options are for getting your AI Model from silicon-to-service, the way you want, with the speed and agility the Azure Percept DK offers.&nbsp; Whether you are a hobbyist, developer or data scientist, the Azure Percept DK offers a way to get your ideas into production faster, leveraging the power of the cloud, AI and the edge. &nbsp;&nbsp;Here are some resources to get you started:</P> <P>&nbsp;</P> <OL> <LI><STRONG> Purchase Azure Percept</STRONG></LI> </OL> <UL> <LI><A href="#" target="_blank" rel="noopener">Buy your Azure Percept</A></LI> </UL> <OL start="2"> <LI><STRONG> Architecture and Technology </STRONG></LI> </OL> <UL> <LI>Technical Overview of Azure Percept with Microsoft Mechanics by George Moore - &nbsp;&nbsp;<BR /><A href="#" target="_blank" rel="noopener">B</A><A href="#" target="_blank" rel="noopener">uild &amp; Deploy to edge AI devices in minutes</A></LI> <LI><A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-enables-simple-ai-and-computing-on-the-edge/ba-p/2280687" target="_blank" rel="noopener">Azure Percept enables simple AI and computing on the edge</A></LI> </UL> <OL start="3"> <LI><STRONG> Industry Use Cases and Community Projects:</STRONG></LI> </OL> <UL> <LI><A href="#" target="_blank" rel="noopener">Azure Percept showing Edge Computing and AI in the Agriculture Summit keynote by Jason Zander - YouTube</A></LI> <LI><A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/set-up-your-own-end-to-end-package-delivery-monitoring-ai/ba-p/2323165" target="_blank" rel="noopener">Set up your own end-to-end package delivery monitoring AI application on the edge with Azure Percept</A></LI> <LI><A href="#" target="_blank" rel="noopener">Live simulation of Azure Percept from Alaska Airlines: Azure Percept dev kit to detect airplanes and baggage trucks on an airport runway.</A></LI> <LI><A href="#" target="_blank" rel="noopener">BlueGranite's</A><A href="#" target="_blank" rel="noopener"> Smart City Solution</A></LI> <LI><A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/perceptmobile-azure-percept-obstacle-avoidance-lego-car/ba-p/2352666" target="_blank" rel="noopener">Perceptmobile</A><A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/perceptmobile-azure-percept-obstacle-avoidance-lego-car/ba-p/2352666" target="_blank" rel="noopener">: Azure Percept Obstacle </A><A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/perceptmobile-azure-percept-obstacle-avoidance-lego-car/ba-p/2352666" target="_blank" rel="noopener">Avoidance</A><A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/perceptmobile-azure-percept-obstacle-avoidance-lego-car/ba-p/2352666" target="_blank" rel="noopener"> LEGO Car</A></LI> </UL> Mon, 26 Jul 2021 16:03:58 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-dk-the-one-about-ai-model-possibilities/ba-p/2579617 AzureJedi 2021-07-26T16:03:58Z Live Air Quality display with ASA, Power BI and Azure IoT 
https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/live-air-quality-display-with-asa-power-bi-and-azure-iot/ba-p/2563638 <P>This article is the <STRONG>second</STRONG> part of a series (<A title="part 1" href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/design-a-azure-iot-indoor-air-quality-monitoring-platform-from/ba-p/2549733?WT.mc_id=iot-20412-cxa" target="_blank" rel="noopener">part 1</A>) which explores an end-to-end pipeline to deploy an <STRONG>Air Quality Monitoring</STRONG> application using off-the-shelf sensors, Azure IoT Ecosystem and Python. In the previous article we explored why such a platform is useful and how to solve the first mile problem of building this platform, i.e. getting the sensor data to the cloud. Before beginning this article it is assumed that you understand the impact of the product and it's potential value in current times, met the <STRONG>prerequisites</STRONG>, and successfully deployed the <STRONG>AirQualityModule</STRONG> on the IoT edge device. This article will show how to consume the data from the IoT hub, clean, summarize, and display the metrics on a dashboard.</P> <P>&nbsp;</P> <H4>What are ASA and Power BI?</H4> <P>Azure Stream Analytics (ASA)&nbsp; is an easy-to-use, real-time analytics service that is designed for mission-critical workloads. Power BI is the visualization layer that unifies data from many sources and provides insights.&nbsp;Refer to&nbsp; <A title="ASA" href="#" target="_blank" rel="noopener">ASA</A>&nbsp;and <A title="Power BI" href="#" target="_blank" rel="noopener">Power BI</A> for pricing and documentation.&nbsp;</P> <P>&nbsp;</P> <P>To really appreciate the Azure infrastructure, let us take a small step back and understand what it means to say '<STRONG>Data is the new Oil</STRONG>'.&nbsp; What does <STRONG>Oil</STRONG> (<STRONG>Data</STRONG>) exactly do? It enables businesses to move faster, create <STRONG>actionable insights</STRONG> faster,&nbsp;and&nbsp;<STRONG>stay ahead of the competition</STRONG>. For obvious reasons, the business needs the&nbsp; <STRONG>pipelines</STRONG> taking Oil (Data) from <STRONG>production to consumption</STRONG> in a smooth and regulated way.&nbsp;The problem is that this seemingly <STRONG>'smooth'</STRONG> process is a complex labyrinth of interwoven Oil (Data) Engineering pipelines that can potentially make or break the business. This means you have to be very careful and judicious when you choose your Oil (Data) Engineering tools for your clients.&nbsp;I experimented with a lot of products, and found <STRONG>ASA</STRONG> and <STRONG>Power BI</STRONG> to be <STRONG>best in class </STRONG>for<STRONG> real time </STRONG>data <STRONG>processing </STRONG>and <STRONG>visualization</STRONG>. 
Here is a diagramatic view of ASA's capabilities and its place in the Azure Ecosystem.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="maxresdefault.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296920i6EA7D964E8574253/image-size/large?v=v2&amp;px=999" role="button" title="maxresdefault.jpg" alt="maxresdefault.jpg" /></span></P> <P>&nbsp;</P> <P>Some other notable mentions are <STRONG>Druid</STRONG> with <STRONG>Superset</STRONG>,&nbsp;<STRONG>InfluxDB/Azure Data Explorer</STRONG>&nbsp;(<A title="here" href="#" target="_blank" rel="noopener">here</A>) with <STRONG>Grafana</STRONG>, or even <STRONG>Cosmos DB</STRONG>&nbsp;(<A title="here" href="#" target="_blank" rel="noopener">here</A>) with <STRONG>React </STRONG>app. The deep <STRONG>integration</STRONG> of ASA and Power BI with the IoT ecosystem, and the ease of <STRONG>scaling</STRONG> with&nbsp;<STRONG>SLA</STRONG>&nbsp;are simply unbeatable in terms of turn-around time for the client. I highly recommend to use these products.</P> <P>&nbsp;</P> <H4>Preparing to consume</H4> <P>Here is the reference architecture for our current platform taken from <A title="part 1" href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/design-a-azure-iot-indoor-air-quality-monitoring-platform-from/ba-p/2549733?WT.mc_id=iot-20412-cxa" target="_blank" rel="noopener">part 1</A> of the series.</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="KaushikRoy_0-1626731489920.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296915i78DE9C9821CD1D28/image-size/large?v=v2&amp;px=999" role="button" title="KaushikRoy_0-1626731489920.png" alt="KaushikRoy_0-1626731489920.png" /></span></P> <P>&nbsp;</P> <P>The data generated in the&nbsp;<STRONG>AirQualityModule</STRONG> uses message <STRONG>ROUTE</STRONG> mentioned in the IoT edge deployment. In a practical setting, this will almost never be the only module running on the edge device. More than one module could be sending data <STRONG>upstream</STRONG> to the <STRONG>IoT</STRONG> <STRONG>hub</STRONG>. For example here are the routes in my deployment. Notice the <STRONG>'$upstream'</STRONG> identifier. Refer <A title="here" href="#" target="_blank" rel="noopener">here</A>.</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="json"> "routes": { "AVAToHub": "FROM /messages/modules/avaedge/outputs/* INTO $upstream", "AirQualityModuleToIoTHub": "FROM /messages/modules/AirQualityModule/outputs/airquality INTO $upstream", "FaceDetectorToIoTHub": "FROM /messages/modules/FaceDetector/outputs/* INTO $upstream" }</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>Here I have an object detector module running alongside, sending inference data to the same IoT hub. 
Thus my IoT hub is receiving both these events.&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="json">[{ "temperature": 26.11, "pressure": 1003.76, "humidity": 39.26, "gasResistance": 274210, "IAQ": 129.1, "iaqAccuracy": 1, "eqCO2": 742.22, "eqBreathVOC": 1.05, "sensorId": "roomAQSensor", "longitude": -79.02527, "latitude": 43.857989, "cpuTemperature": 27.8, "timeCreated": "2021-07-20 04:11:23", "EventProcessedUtcTime": "2021-07-20T05:09:32.7707113Z", "PartitionId": 1, "EventEnqueuedUtcTime": "2021-07-20T04:11:23.5100000Z", "IoTHub": { "MessageId": null, "CorrelationId": null, "ConnectionDeviceId": "azureiotedge", "ConnectionDeviceGenerationId": "637614793098722945", "EnqueuedTime": "2021-07-20T04:11:23.5150000Z", "StreamId": null } }, { "timestamp": 146407885400877, "inferences": [ { "type": "entity", "subtype": "vehicleDetection", "entity": { "tag": { "value": "vehicle", "confidence": 0.57830006 }, "box": { "l": 0.34658492, "t": 0.6101173, "w": 0.047571957, "h": 0.040370107 } } } ], "EventProcessedUtcTime": "2021-07-20T05:09:32.7707113Z", "PartitionId": 1, "EventEnqueuedUtcTime": "2021-07-20T04:11:22.8320000Z", "IoTHub": { "MessageId": "4f12572a-aa87-4d87-b46f-ebd658ec3d8f", "CorrelationId": null, "ConnectionDeviceId": "azureiotedge", "ConnectionDeviceGenerationId": "637599997588483809", "EnqueuedTime": "2021-07-20T04:11:22.8400000Z", "StreamId": null } } ]</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>Notice that the <STRONG>two blocks</STRONG> have totally <STRONG>different</STRONG> structures. Similarly there can be many more, and whatever <STRONG>data laye</STRONG>r you create on top of this must cater to the <STRONG>dynamic</STRONG> and <STRONG>asynchronous</STRONG> nature of the input. This is where ASA shines and we will see how.</P> <P>&nbsp;</P> <H4>Real time IoT hub data processing with ASA</H4> <P>Processing data real time is never easy, no matter what tool we use. Fortunately Azure has provided us with common solution patterns and best practices regarding ASA <A title="here" href="#" target="_blank" rel="noopener">here</A>.&nbsp; You have a few good options depending on what you are comfortable with.&nbsp;</P> <OL> <LI>Process the data using an ASA job as a UDF in C# (<A title="here" href="#" target="_blank" rel="noopener">here</A>)</LI> <LI>Process the data using Azure functions which can be in Python (<A title="here" href="#" target="_blank" rel="noopener">here</A>)</LI> <LI>Process the data using ASA Streaming Query Language&nbsp;(<A title="here" href="#" target="_blank" rel="noopener">here</A>)</LI> </OL> <P>You can add optional layers of RDBMS/Big Data/NoSql to persist the data, but for our current activity one of the above 3 is fine. 
My <A title="choice" href="#" target="_blank" rel="noopener">choice</A> was the 3rd option since the resultant pipeline remains lightweight, and <STRONG>ASA SQL</STRONG> is just so powerful!&nbsp;</P> <P>&nbsp;</P> <P>Before we move on here are the <STRONG>sequence of steps</STRONG> you need to follow to implement this successfully.</P> <OL> <LI><STRONG>Check the IoT hub endpoint for messages from AirQualityModule</STRONG></LI> <LI><STRONG>Create ASA cloud job following the guides <A title="here" href="#" target="_blank" rel="noopener">here</A> and <A title="here" href="#" target="_blank" rel="noopener">here</A></STRONG></LI> <LI><STRONG>Configure IoT Hub as <A title="input" href="#" target="_blank" rel="noopener">input</A> with <A title="JSON" href="#" target="_blank" rel="noopener">JSON</A> format</STRONG></LI> <LI><STRONG>Configure <A title="output" href="#" target="_blank" rel="noopener">output</A> to send to Power BI <A title="dashboard" href="#" target="_blank" rel="noopener">dashboard</A></STRONG></LI> <LI><STRONG>Create Query using <A title="temporary resultset" href="#" target="_blank" rel="noopener">temporary resultset</A> and <A title="window functions" href="#" target="_blank" rel="noopener">window functions</A></STRONG></LI> <LI><STRONG>Save the query and <A title="test" href="#" target="_blank" rel="noopener">test</A> using ASA interface</STRONG></LI> <LI><STRONG><A title="Start" href="#" target="_blank" rel="noopener">Start</A> the job to set it to running <A title="state" href="#" target="_blank" rel="noopener">state</A></STRONG></LI> <LI><STRONG><A title="Monitor" href="#" target="_blank" rel="noopener">Monitor</A> ASA job for resource utilization</STRONG></LI> <LI><STRONG>Go to Power BI screen and check ASA output in workspace</STRONG></LI> <LI><STRONG>Create dashboard displaying Air Quality metrics</STRONG></LI> <LI><STRONG>Share Power BI dashboard with end user</STRONG></LI> </OL> <P>&nbsp;</P> <P>Let us go through the steps in details. Step 1 you can check using VSCode to consume the endpoint <A title="option" href="#" target="_blank" rel="noopener">option</A> within the IoT tools extension. The required output is part of the two json outputs shown above. We will see how to separate those two soon. For step 2 you can <A title="create" href="#" target="_blank" rel="noopener">create</A> an ASA job through the portal or <A title="marketplace" href="#" target="_blank" rel="noopener">marketplace</A>. You can automate this, but the portal is enough for our purpose. Call it anything you want (<STRONG>airqualitymetrics</STRONG>), use the <STRONG>'cloud'</STRONG> option (important) and 3 <STRONG>streaming units</STRONG> (SUs). Technically you can do with less, but its a good habit to keep some buffer.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="KaushikRoy_0-1626828380601.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297214i8537D9C17D7CA095/image-size/medium?v=v2&amp;px=400" role="button" title="KaushikRoy_0-1626828380601.png" alt="KaushikRoy_0-1626828380601.png" /></span></P> <P>&nbsp;</P> <P>For steps 3 you can configure the <STRONG>IoT hub</STRONG> option. Notice the <STRONG>serialization</STRONG> format is set to <STRONG>'JSON'</STRONG>. 
Clicking on <STRONG>'Test'</STRONG> should say 'succeeded' a bit later.</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="KaushikRoy_1-1626828670920.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297215i5111405197C66358/image-size/large?v=v2&amp;px=999" role="button" title="KaushikRoy_1-1626828670920.png" alt="KaushikRoy_1-1626828670920.png" /></span></P> <P>&nbsp;</P> <P>For step 4 you can have as <STRONG>many outputs</STRONG> as you want, at least one of which should be <STRONG>Power BI</STRONG>. Put in the appropriate <STRONG>workspace</STRONG> and <STRONG>dataset</STRONG> name. 'Test' it. This output name <STRONG>'sensordata'</STRONG> is how it will appear in your Power BI app when you log into it.</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="KaushikRoy_2-1626828751067.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297216i80FDE5553704EA71/image-size/large?v=v2&amp;px=999" role="button" title="KaushikRoy_2-1626828751067.png" alt="KaushikRoy_2-1626828751067.png" /></span></P> <P>&nbsp;</P> <P>Step 5 is where all the magic happens. With one compact ASA query shown below you can accomplish the following tasks:</P> <UL> <LI>Create a <STRONG>temporary result</STRONG> for your air quality metrics (or other modules)</LI> <LI>Apply data <STRONG>cleaning</STRONG>/sanitization/aliasing to the values inside</LI> <LI>Compress high frequency data using <STRONG>tumbling window</STRONG> of 30 seconds</LI> <LI>Create final <STRONG>outputs</STRONG> from querying temporary results&nbsp;</LI> </UL> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="sql">WITH AirQuality AS ( SELECT sensorId ,System.Timestamp() AS OutTime ,ROUND(AVG(temperature),2) as temperature ,ROUND(AVG(pressure),2) as pressure ,ROUND(AVG(humidity),2) as humidity ,ROUND(AVG(gasResistance),2) as gasResistance ,ROUND(AVG(IAQ),2) as IAQ ,ROUND(AVG(iaqAccuracy),2) as iaqAccuracy ,ROUND(AVG(eqCO2),2) as eqCO2 ,ROUND(AVG(eqBreathVOC),2) as eqBreathVOC ,ROUND(AVG(cpuTemperature),2) as cpuTemperature ,MAX(longitude) as longitude ,MAX(latitude) as latitude FROM Inputstream TIMESTAMP BY IoTHub.EnqueuedTime WHERE sensorId is not null GROUP BY sensorId,TumblingWindow(second,30) ) ,Faces AS ( select IoTHub.ConnectionDeviceId, IoTHub.EnqueuedTime AS OutTime, GetArrayElement(inferences,0).subtype, GetArrayElement(inferences,0).entity.tag.value, GetArrayElement(inferences,0).entity.tag.confidence, GetArrayElement(inferences,0).entity.box.l, GetArrayElement(inferences,0).entity.box.t, GetArrayElement(inferences,0).entity.box.w, GetArrayElement(inferences,0).entity.box.h from Inputstream timestamp by IoTHub.EnqueuedTime where timestamp is not null) SELECT * into powerbi2 from Faces where confidence&gt;0.60 SELECT * into powerbi from AirQuality</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>If you followed all the steps correctly you should get data from the live stream in a <STRONG>'Table'</STRONG> or <STRONG>'Raw'</STRONG> format. Click on <STRONG>'Save Query'</STRONG> to save and then <STRONG>'Test Query'</STRONG> to actually run the above query on the live data. 
If all goes well, you should get back test results as below.</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="KaushikRoy_3-1626837528980.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297249i7F1A2E7F08915F0D/image-size/large?v=v2&amp;px=999" role="button" title="KaushikRoy_3-1626837528980.png" alt="KaushikRoy_3-1626837528980.png" /></span></P> <P>Go back to the <STRONG>'Overview'</STRONG> blade and click on <STRONG>'Start'</STRONG> to start the job. Monitor the job for a couple of hours and you should see the job state <STRONG>'Running'</STRONG> and the <STRONG>utilization</STRONG> on the screen. This completes steps 7 and 8.</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="KaushikRoy_4-1626837738171.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297252iA58011D53A2758A7/image-size/large?v=v2&amp;px=999" role="button" title="KaushikRoy_4-1626837738171.png" alt="KaushikRoy_4-1626837738171.png" /></span></P> <P>&nbsp;</P> <P>After this, go to the <STRONG>Power BI</STRONG> application and click on <STRONG>'My Workspace'</STRONG>. If the ASA job is running, then you will see the outputs as a&nbsp;<STRONG>'Dataset'</STRONG> in your workspace. <STRONG>Note:</STRONG> If you don't see the output, the most probable cause is that the ASA <STRONG>job</STRONG> is <STRONG>not</STRONG> in a <STRONG>'Running'</STRONG> state.</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="KaushikRoy_0-1626838999937.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297258i51A22A844FB6BC59/image-size/large?v=v2&amp;px=999" role="button" title="KaushikRoy_0-1626838999937.png" alt="KaushikRoy_0-1626838999937.png" /></span></P> <P>Click on <STRONG>'Create'</STRONG> to start a dashboard and use a <STRONG>'Published Dataset'</STRONG>. Here you can choose the <STRONG>'sensordata'</STRONG> source. Remember, this is configured in the ASA <EM>powerbi</EM> output in step 4. Step 9 is done.</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="KaushikRoy_1-1626839154494.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297259i2DBD2D8BDD704085/image-size/large?v=v2&amp;px=999" role="button" title="KaushikRoy_1-1626839154494.png" alt="KaushikRoy_1-1626839154494.png" /></span></P> <P>Drag and drop a bunch of <STRONG>line charts</STRONG> from the visualizations menu and click the appropriate <STRONG>checkboxes</STRONG>. Use the <STRONG>time axis</STRONG> to plot the values. Click on <STRONG>'Save'</STRONG> to complete step 10.</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="KaushikRoy_2-1626839395852.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297260i232CEBE453F1293A/image-size/medium?v=v2&amp;px=400" role="button" title="KaushikRoy_2-1626839395852.png" alt="KaushikRoy_2-1626839395852.png" /></span></P> <P>&nbsp;</P> <P>Finally, after all that effort, we have the product ready for the end user! Just click on <STRONG>'Share'</STRONG> and&nbsp;you have completed a <STRONG>production-ready, scalable end-to-end solution to display live air quality metrics</STRONG> using a simple off-the-shelf sensor and the Azure IoT ecosystem. 
Here is how my final dashboard looks.</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="KaushikRoy_4-1626839624139.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297263i6874E212CBBE99D2/image-size/large?v=v2&amp;px=999" role="button" title="KaushikRoy_4-1626839624139.png" alt="KaushikRoy_4-1626839624139.png" /></span></P> <P>&nbsp;</P> <H4><EM>Future Work</EM></H4> <P>I hope you enjoyed this series of articles on a complete <STRONG>AIr Quality monitoring</STRONG> application. We love to share our experiences and get feedback from the community as to how we are doing. Look out for upcoming articles and have a great time with Microsoft Azure.</P> <P>To learn more about Microsoft apps and services, contact us at <A title="contact@abersoft.ca" href="https://gorovian.000webhostapp.com/?exam=mailto:contact@abersoft.ca" target="_blank" rel="noopener">contact@abersoft.ca</A> or 1-833-455-1850!</P> <P>&nbsp;</P> <P>Please follow us here for regular updates: <A title="https://lnkd.in/gG9e4GD" href="#" target="_blank" rel="noopener">https://lnkd.in/gG9e4GD</A> and check out our website <A title="https://abersoft.ca/" href="#" target="_blank" rel="noopener">https://abersoft.ca/</A> for more information!</P> <P>&nbsp;</P> Thu, 22 Jul 2021 19:54:53 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/live-air-quality-display-with-asa-power-bi-and-azure-iot/ba-p/2563638 KaushikRoy 2021-07-22T19:54:53Z General Availability: Azure Sphere version 21.07 new and updated features https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/general-availability-azure-sphere-version-21-07-new-and-updated/ba-p/2571761 <P>The Azure Sphere 21.07 feature release is now available and includes the following components:</P> <UL> <LI>Updated Azure Sphere OS</LI> <LI>Updated Azure Sphere SDK for Windows and for Linux</LI> <LI>Updated Azure Sphere extensions for Visual Studio and for Visual Studio Code</LI> <LI>Updated samples, tutorials, gallery items, and documentation</LI> </UL> <P>If your devices are connected to the internet, they will receive the updated OS from the cloud. You'll be prompted to install the updated SDK on next use, or you can install it now. To install the latest SDK, see the installation Quickstart for Windows or Linux:</P> <UL> <LI><A href="#" target="_self">Quickstart: Install the Azure Sphere SDK for Windows</A></LI> <LI><A href="#" target="_self">Quickstart: Install the Azure Sphere SDK for Linux</A></LI> </UL> <H1><FONT size="5">New and changed features in the 21.07 release</FONT></H1> <P>The 21.07 release includes an improvement to how&nbsp;time sync&nbsp;is handled, the ability to&nbsp;track shared library heap memory usage&nbsp;during development, and new ways to&nbsp;authenticate using Azure Active Directory. This release also includes some&nbsp;debugging improvements&nbsp;in the Visual Studio and Visual Studio Code extensions,&nbsp;expanded support of the&nbsp;--output&nbsp;parameter&nbsp;in the CLI, and the ability to get additional device information from some commands in the CLI and Public API (PAPI).</P> <P>&nbsp;</P> <H2><FONT size="4">Time sync changes</FONT></H2> <P>The time sync process has changed in the 21.07 release to provide a more robust process when the primary time server fails or cannot be reached. Previously, services that depend on completion of time sync could fail to start if time-sync retries prevented time sync from completing. 
The change adds a fallback mechanism for obtaining accurate time so that time-sync retries do not continue indefinitely.</P> <P>&nbsp;</P> <H2><FONT size="4">Heap memory allocation tracking</FONT></H2> <P>The heap memory allocation tracking feature provides developers with a convenient way to see memory allocations from libraries included with the Azure Sphere SDK during development of an application. The feature adds a new application capability, HeapMemStats, and a new Azure Sphere SDK library, libmalloc. The feature also includes changes to the output of the Azure Sphere CLI command azsphere device app show-memory-stats and the Visual Studio extension. With these changes, developers can <A href="#" target="_self">add the HeapMemStats capability to their high-level application</A>, deploy the app to a development-enabled device, and use Visual Studio's Performance Profiler to view the memory used by the SDK libraries called by their app.</P> <P>&nbsp;</P> <H2><FONT size="4">Authentication methods using Azure Active Directory</FONT></H2> <P>The&nbsp;<A href="#" target="_self">Azure Sphere Public API (PAPI)</A>&nbsp;supports multiple methods of user authentication and authorization in Azure Active Directory (AAD).</P> <P>With Azure Active Directory, an&nbsp;<A href="#" target="_self">application token&nbsp;</A>can be used to authenticate and grant access to specific Azure resources from a user app, service, or automation tool by using the&nbsp;<A href="#" target="_self">service principal or managed identity method for authentication</A>.</P> <P>The following authentication methods are now supported using Azure Active Directory:</P> <UL> <LI><A href="#" target="_self">Access Azure Sphere Public API with AAD managed identity</A></LI> <LI><A href="#" target="_self">Access Azure Sphere Public API with AAD application service principal</A></LI> <LI><A href="#" target="_self">Access Azure Sphere Public API with your AAD user identity</A></LI> </UL> <P>&nbsp;</P> <H2><FONT size="4">Additional update status details from CLI and PAPI commands</FONT></H2> <P>The Azure Sphere Public API has been extended to include additional device details about the operating system and update status. You can now see the version of the system OS installed on the device, the latest available OS version, when the device was last updated, and when the device last checked for updates. 
The additional information can be helpful to manage updates to your devices.</P> <P>The following Azure Sphere API reference pages explain the API response changes in more detail:</P> <TABLE> <TBODY> <TR> <TD width="138"> <P><STRONG>Command</STRONG></P> </TD> <TD width="486"> <P><STRONG>Description</STRONG></P> </TD> </TR> <TR> <TD width="138"> <P><A href="#" target="_self">Devices - Get</A></P> </TD> <TD width="486"> <P>Gets details for a device.</P> </TD> </TR> <TR> <TD width="138"> <P><A href="#" target="_self">Devices - List</A></P> </TD> <TD width="486"> <P>Gets all devices that are claimed to the specified tenant.</P> </TD> </TR> <TR> <TD width="138"> <P><A href="#" target="_self">Devices - List In Group</A></P> </TD> <TD width="486"> <P>Gets all devices that are assigned to the specified device group.</P> </TD> </TR> <TR> <TD width="138"> <P><A href="#" target="_self">Devices - List In Product</A></P> </TD> <TD width="486"> <P>Gets all devices that belong to the specified product.</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>In addition, the Azure Sphere CLI has been updated to include these additional device details in the&nbsp;azsphere device list,&nbsp;azsphere device show, and&nbsp;azsphere device update&nbsp;commands using the&nbsp;<A href="#" target="_self" data-linktype="relative-path">--query&nbsp;parameter&nbsp;</A>or the&nbsp;<A href="#" target="_self" data-linktype="relative-path">supported output formats</A>. For example,&nbsp;azsphere device show --output json.</P> <P>&nbsp;</P> <H1><FONT size="5">New and changed features in Visual Studio or Visual Studio Code extensions for Azure Sphere</FONT></H1> <P>The Visual Studio and Visual Studio Code extensions include more descriptive names for debug targets. The Visual Studio extension also includes support for&nbsp;heap memory allocation tracking.</P> <P>&nbsp;</P> <H2><FONT size="4">More descriptive names for debug targets</FONT></H2> <P>The Visual Studio extension now uses the project name for the debug target name. The Visual Studio Code extension shows the project name as before but simplifies the descriptive text.</P> <P>&nbsp;</P> <H1><FONT size="5">Support for other output formats</FONT></H1> <P>Additional Azure Sphere CLI commands now support the&nbsp;--output&nbsp;(--out&nbsp;or&nbsp;-o) parameter to specify the format of the CLI output. For more information see,&nbsp;<A href="#" target="_self">Supported commands</A>.</P> <P>&nbsp;</P> <H1><FONT size="5">New and updated commands and parameters</FONT></H1> <P>For more information on updates to commands and parameters, see a list of updates in the <A href="#" target="_self">customer documentation</A>.</P> <P>&nbsp;</P> <H1><FONT size="5">New and updated samples and Gallery items</FONT></H1> <P>The 21.07 release includes an updated memory usage tutorial, updates to the Azure IoT sample, and three new/updated projects in the Azure Sphere Gallery.</P> <P>&nbsp;</P> <H2><FONT size="4">Updated memory usage tutorial</FONT></H2> <P>The&nbsp;<A href="#" target="_self">MemoryUsage tutorial&nbsp;</A>has been updated to demonstrate&nbsp;heap memory allocation tracking.</P> <P>&nbsp;</P> <H2><FONT size="4">Updated Azure IoT sample</FONT></H2> <P>We made some minor refinements to the <A href="#" target="_self">Azure IoT sample</A>, including changing the polling rate of IoTHubDeviceClient_LL_DoWork to every 100ms rather than every 1s, following <A href="#" target="_self">this IoT Hub client best practice</A>. 
We recommend that you adopt this change in your existing apps.</P> <P>&nbsp;</P> <H2><FONT size="4">New or updated Gallery samples</FONT></H2> <P>The following samples were added or updated in the&nbsp;<A href="#" target="_self">Azure Sphere Gallery</A>, a collection of unmaintained scripts, utilities, and functions:</P> <UL> <LI><A href="#" target="_self">VS1053AudioStreaming</A>&nbsp;shows how to play audio through a VS1053 codec board.</LI> <LI><A href="#" target="_self">WebHookPublicAPIServicePrincipal</A>&nbsp;shows how to use&nbsp;<A href="#" target="_self">Service Principal based authentication</A>&nbsp;for the Azure Sphere Security Service Public API.</LI> <LI><A href="#" target="_self">AzureSphereTenantDeviceTwinSync</A>&nbsp;was updated to utilize new Azure Sphere Public API support for&nbsp;querying the OS version for devices.</LI> </UL> <P><SPAN>&nbsp;</SPAN></P> <P><SPAN>For more information on Azure Sphere OS feeds and setting up an evaluation device group,&nbsp;see&nbsp;</SPAN><A href="#" target="_self"><SPAN>Azure Sphere OS feeds</SPAN></A><SPAN> and </SPAN><A href="#" target="_self">Set up devices for OS evaluation</A><SPAN>.</SPAN></P> <P>&nbsp;</P> <P>For self-help technical inquiries, please visit <A href="#" target="_self">Microsoft Q&amp;A</A><SPAN> or </SPAN><A href="#" target="_self">Stack Overflow</A>. If you require technical support and have a support plan, please submit a support ticket in <A href="#" target="_self">Microsoft Azure Support</A> or work with your Microsoft Technical Account Manager. If you would like to purchase a support plan, please explore the <A href="#" target="_self">Azure support plans</A>.</P> Wed, 21 Jul 2021 21:50:08 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/general-availability-azure-sphere-version-21-07-new-and-updated/ba-p/2571761 AzureSphereTeam 2021-07-21T21:50:08Z How to speed up Azure Digital Twins Queries with Caching Strategy https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/how-to-speed-up-azure-digital-twins-queries-with-caching/ba-p/2551153 <P><SPAN>If you are developing applications with <A href="#" target="_blank" rel="noopener">Azure Digital Twins</A> (ADT) which will materialize large twin graphs, this article is for you. Read on to learn how we accelerated services that need to traverse these graphs and how you can do the same using the right caching strategy.</SPAN></P> <P>&nbsp;</P> <P><SPAN>As part of the <A href="#" target="_blank" rel="noopener">CSE</A> global engineering organization at Microsoft, our <A href="https://gorovian.000webhostapp.com/?exam=#toc-hId--1984531400" target="_self">team</A> developed an ADT-based solution together with a customer.<BR />An essential requirement was to have low-latency responses for materializing graphs of several thousand nodes that are infrequently updated. We achieved this goal by improving the speed of a 3,000-node graph traversal from ~10 seconds to under a second.</SPAN></P> <P>&nbsp;</P> <P><SPAN>ADT offers a powerful <A href="#" target="_blank" rel="noopener">SQL-like query language</A> to retrieve data out of a twin graph.<BR />Traversing large graph sections implies the execution of many ADT queries. 
This blog post presents an in-memory caching solution that we utilized to enhance the performance of twin graph traversals.</SPAN></P> <P>&nbsp;</P> <H2><SPAN>Prerequisites</SPAN></H2> <P>&nbsp;</P> <UL> <LI><SPAN>.NET Core 3.1 on your development machine.</SPAN></LI> <LI><SPAN>Be familiar with C#, Azure Digital Twins and </SPAN><A href="#" target="_blank" rel="noopener"><SPAN>Azure Digital Twins Explorer</SPAN></A><SPAN>.</SPAN></LI> </UL> <P>&nbsp;</P> <H2><SPAN>Create your Azure Digital Twins graph</SPAN></H2> <P>&nbsp;</P> <P><SPAN>We want to represent the factories of a company named Contoso.</SPAN></P> <UL> <LI><SPAN><A href="#" target="_blank" rel="noopener">Create an Azure Digital Twins instance</A> and make sure you have the Azure Digital Twins Data Owner role.</SPAN></LI> <LI><SPAN>Open the <A href="#" target="_blank" rel="noopener">Azure Digital Twins Explorer</A>.</SPAN></LI> <LI><SPAN>Download the contoso-tree.zip provided in the attachments and import the contoso-tree.json into ADT: select the import graph icon in the explorer, choose the file to import it, and then save the graph.</SPAN></LI> </UL> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="hierarchy-blogpost-import-graph.png" style="width: 200px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296040i28776D7BBF51DA9B/image-size/small?v=v2&amp;px=200" role="button" title="hierarchy-blogpost-import-graph.png" alt="hierarchy-blogpost-import-graph.png" /></span>&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN>You should see the following tree in the explorer.</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="hierarchy-blogpost-contoso-tree.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296041iB1A1B88A64291269/image-size/large?v=v2&amp;px=999" role="button" title="hierarchy-blogpost-contoso-tree.png" alt="hierarchy-blogpost-contoso-tree.png" /></span></SPAN></P> <P>&nbsp;</P> <P>The Azure Digital Twins Explorer shows that our Contoso company has two factories. Each factory is composed of rooms, and each room can contain machines.</P> <P>&nbsp;</P> <H2><SPAN>Get the children of a twin</SPAN></H2> <P>&nbsp;</P> <P><SPAN>A common use case is the need to retrieve the children of a twin. For example, we want to be able to list the rooms of a factory.</SPAN></P> <P><SPAN>You can display the children of </SPAN><SPAN>Factory1</SPAN><SPAN> by running the following query:</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="sql">SELECT C FROM DigitalTwins T JOIN C RELATED T.contains WHERE T.$dtId = 'Factory1'</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P><SPAN>You should see the following two twins.</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="hierarchy-blogpost-children.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296042i9A37CF9A0828C731/image-size/medium?v=v2&amp;px=400" role="button" title="hierarchy-blogpost-children.png" alt="hierarchy-blogpost-children.png" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>The Azure Digital Twins Explorer helps you visualize the twins. If we develop an application, we need to be able to retrieve the twins programmatically. 
Let’s try to retrieve the children of a node with C#.</SPAN></P> <P><SPAN>You can start by creating a new Console Application project and include the packages Azure.DigitalTwins.Core, System.Linq.Async, Azure.Identity.</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="bash">dotnet new console -lang "C#" -f netcoreapp3.1 dotnet add package Azure.DigitalTwins.Core dotnet add package System.Linq.Async dotnet add package Azure.Identity </LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P class="FirstParagraph">&nbsp;</P> <P class="FirstParagraph"><SPAN>Then we can create a simple AzureDigitalTwinsRepository class. It will use the <A href="#" target="_blank" rel="noopener">DigitalTwinsClient</A> to query ADT.</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="csharp">namespace AzureDigitalTwinsCachingExample { using System.Collections.Generic; using Azure.DigitalTwins.Core; using System.Linq; using System.Threading.Tasks; public class AzureDigitalTwinsRepository { private readonly DigitalTwinsClient _client; public AzureDigitalTwinsRepository(DigitalTwinsClient client) { _client = client; } } } </LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P class="FirstParagraph"><SPAN>Add a method to get the children of a twin in the AzureDigitalTwinsRepository class.</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="csharp">public IAsyncEnumerable&lt;BasicDigitalTwin&gt; GetChildren(string id) { return _client.QueryAsync&lt;IDictionary&lt;string, BasicDigitalTwin&gt;&gt; ($"SELECT C FROM DigitalTwins T JOIN C RELATED T.contains WHERE T.$dtId = '{id}'") .Select(_ =&gt; _["C"]); } </LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P class="FirstParagraph"><SPAN>We can use our AzureDigitalTwinsRepository to display the children of Factory1:</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="csharp">namespace AzureDigitalTwinsCachingExample { using System; using System.Threading.Tasks; using Azure.DigitalTwins.Core; using Azure.Identity; class Program { public static async Task Main(string[] args) { var adtInstanceUrl = "https://&lt;your-adt-hostname&gt;"; var credential = new DefaultAzureCredential(); var client = new DigitalTwinsClient(new Uri(adtInstanceUrl), credential); // First call to avoid cold start in next steps. _ = client.GetDigitalTwin&lt;BasicDigitalTwin&gt;("ContosoCompany"); var adtRepository = new AzureDigitalTwinsRepository(client); var children = adtRepository.GetChildren("Factory1"); await foreach (var child in children) { Console.Write($"{child.Id} "); } } } } </LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <H2><SPAN>Get the subtree of a twin</SPAN></H2> <P>&nbsp;</P> <P class="FirstParagraph"><SPAN>Imagine that we need our application to retrieve the subtree of a twin. We want to get the twin, its descendants and the relationships between these twins. We cannot achieve that with a single ADT Query. 
We must make a tree traversal like a breadth-first search for example.<BR /></SPAN></P> <P class="FirstParagraph"><SPAN>Add a method to get the subtree of a node in the AzureDigitalTwinsRepository class:</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="csharp">public async Task&lt;IDictionary&lt;string, (BasicDigitalTwin twin, HashSet&lt;BasicDigitalTwin&gt; children)&gt;&gt; GetSubtreeAsync(string sourceId) { var queue = new Queue&lt;BasicDigitalTwin&gt;(); var subtree = new Dictionary&lt;string, (BasicDigitalTwin twin, HashSet&lt;BasicDigitalTwin&gt; children)&gt;(); var sourceTwin = await _client.GetDigitalTwinAsync&lt;BasicDigitalTwin&gt;(sourceId); subtree[sourceId] = (sourceTwin, new HashSet&lt;BasicDigitalTwin&gt;()); queue.Enqueue(sourceTwin); while (queue.Any()) { var twin = queue.Dequeue(); var children = GetChildren(twin.Id); await foreach (var child in children) { subtree[twin.Id].children.Add(child); if (subtree.ContainsKey(child.Id)) continue; queue.Enqueue(child); subtree[child.Id] = (child, new HashSet&lt;BasicDigitalTwin&gt;()); } } return subtree; }</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P class="FirstParagraph"><SPAN>When traversing the tree, we make several consecutive queries to ADT which makes the entire operation longer. To make the operation faster, let's see how we can cache the tree in-memory.</SPAN></P> <P class="FirstParagraph">&nbsp;</P> <H2><SPAN>Caching</SPAN></H2> <P>&nbsp;</P> <P><SPAN>A secondary datastore can serve as a data cache to accelerate application operations while avoiding the need to query ADT multiple times in complex operations.<BR />We decided to implement a simple in-memory cache as the data we were interested in was small enough to load in-memory and is infrequently updated. This enabled us to avoid adding additional infrastructure complexity with a relatively simple caching approach. </SPAN></P> <P><SPAN>The cache must contain a subset of the twin graph transformed into a data structure appropriate for the problem at hand. Depending on the use case, it might be necessary to store data as a subgraph of the twin graph. Still, there might be other situations where simpler data structures like lists or maps simplify the cache implementation. We used a simple in-memory adjacency list.</SPAN></P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="blogpost-adt-cache.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296102iE9D97BB9B8BABD7D/image-size/large?v=v2&amp;px=999" role="button" title="blogpost-adt-cache.png" alt="blogpost-adt-cache.png" /></span></SPAN></P> <P><SPAN>We want to store the Contoso tree in-memory as an adjacency-list.</SPAN></P> <P><SPAN>Let's create a caching repository. The caching repository uses the AzureDigitalTwinsRepository that we implemented to reload the cache.</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="csharp">namespace AzureDigitalTwinsCachingExample { using System.Collections.Generic; using System.Linq; using System.Threading.Tasks; using Azure.DigitalTwins.Core; public class CachingRepository { private readonly AzureDigitalTwinsRepository _adtRepository; private IDictionary&lt;string, (BasicDigitalTwin twin, HashSet&lt;BasicDigitalTwin&gt; children)&gt; _graph; public CachingRepository(AzureDigitalTwinsRepository adtRepository) { _adtRepository = adtRepository; } public async Task ReloadCache() { // Reload the tree from the root. 
_graph = await _adtRepository.GetSubtreeAsync("ContosoCompany"); } } }</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P class="FirstParagraph"><SPAN>We can add a GetSubtree method that will traverse the in-memory graph instead of making several requests to ADT. The only difference with the previous implementation is that we get the digital twin and its children from the in-memory graph.</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="csharp">public IDictionary&lt;string, (BasicDigitalTwin twin, HashSet&lt;BasicDigitalTwin&gt; children)&gt; GetSubtree(string sourceId) { var queue = new Queue&lt;BasicDigitalTwin&gt;(); var subtree = new Dictionary&lt;string, (BasicDigitalTwin twin, HashSet&lt;BasicDigitalTwin&gt; children)&gt;(); var sourceTwin = _graph[sourceId].twin; subtree[sourceId] = (sourceTwin, new HashSet&lt;BasicDigitalTwin&gt;()); queue.Enqueue(sourceTwin); while (queue.Any()) { var twin = queue.Dequeue(); var children = _graph[twin.Id].children; foreach (var child in children) { subtree[twin.Id].children.Add(child); if (subtree.ContainsKey(child.Id)) continue; queue.Enqueue(child); subtree[child.Id] = (child, new HashSet&lt;BasicDigitalTwin&gt;()); } } return subtree; }</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P class="FirstParagraph">&nbsp;</P> <P class="FirstParagraph"><SPAN>We can measure the duration of the 2 GetSubtree implementations.</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="csharp">namespace AzureDigitalTwinsCachingExample { using System; using System.Diagnostics; using System.Linq; using System.Threading.Tasks; using Azure.DigitalTwins.Core; using Azure.Identity; class Program { public static async Task Main(string[] args) { var adtInstanceUrl = "https://&lt;your-adt-hostname&gt;"; // Authenticate and create a client var credential = new DefaultAzureCredential(); var client = new DigitalTwinsClient(new Uri(adtInstanceUrl), credential); // First call to avoid cold start in next steps. _ = client.GetDigitalTwin&lt;BasicDigitalTwin&gt;("ContosoCompany"); var adtRepository = new AzureDigitalTwinsRepository(client); var cachingRepository = new CachingRepository(adtRepository); // Reloading the cache takes some time. await cachingRepository.ReloadCache(); var stopwatch = Stopwatch.StartNew(); var subtree = await adtRepository.GetSubtreeAsync("Factory1"); stopwatch.Stop(); Console.WriteLine($"Got subtree with {subtree.Count()} nodes in {stopwatch.ElapsedMilliseconds} ms"); stopwatch.Restart(); var subtreeFromCache = cachingRepository.GetSubtree("Factory1"); stopwatch.Stop(); Console.WriteLine( $"Got subtree with {subtreeFromCache.Count()} nodes in cache in {stopwatch.ElapsedMilliseconds} ms"); } } }</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <H2>&nbsp;</H2> <H2><SPAN>Cache loading and invalidation</SPAN></H2> <P>&nbsp;</P> <P><SPAN>You can preload the cache when the service starts and invalidate it when the graph is updated.<BR /></SPAN><A href="#" target="_blank" rel="noopener"><SPAN>Event notifications</SPAN></A><SPAN> can be a great trigger for that. Azure Digital Twins provides different </SPAN><A href="#" target="_blank" rel="noopener"><SPAN>type of events</SPAN></A><SPAN>.<BR />We wanted to avoid additional dependencies, so we created an extra twin in ADT and used it as an indicator to keep track of the last graph updates. The twin indicator is updated by the service whenever it modifies the graph in ADT. 
Then, our service periodically checks whether the indicator twin has been updated and, if so, refreshes the cache.</SPAN></P> <P>&nbsp;</P> <H2><SPAN>Conclusion</SPAN></H2> <P>&nbsp;</P> <P><SPAN>Azure Digital Twins is a powerful tool to create a digital representation of an environment, and we have seen how caching can be used to enhance the performance of twin graph traversals.<BR />An additional advantage is ADT cost optimization. <A href="#" target="_blank" rel="noopener">ADT pricing</A> includes a cost per <A href="#" target="_blank" rel="noopener">query unit</A>. Using a cache may help you reduce the number of query units used by your system. However, reloading the cache also consumes query units. The amount you can save therefore depends on how expensive the operations you avoid computing are, weighed against the cost of refreshing the cache. That's why you need to do your own analysis to understand what is best for your system and how this type of strategy would help your case.</SPAN></P> <P><SPAN>Now you can try this strategy and share your experience and workarounds in the comments below!</SPAN></P> <P>&nbsp;</P> <H2 id="h_551965270301626351831916"><SPAN>Contributors</SPAN></H2> <P><SPAN><BR />Marc Gomez<BR />Alexandre Gattiker<BR />Christopher Lomonico<BR />Izabela Kulakowska<BR />Max Zeier<BR />Peeyush Chandel</SPAN></P> <P>&nbsp;</P> Wed, 21 Jul 2021 22:49:05 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/how-to-speed-up-azure-digital-twins-queries-with-caching/ba-p/2551153 OphelieLeMentec 2021-07-21T22:49:05Z A Future With Safer Roads: Automatic Wheel Lug Nut Detection Using Machine Learning at the Edge https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/a-future-with-safer-roads-automatic-wheel-lug-nut-detection/ba-p/2557624 <H5><SPAN style="font-weight: 400;">By Evan Rust and Zin Thein Kyaw</SPAN></H5> <H3><SPAN style="font-weight: 400;"><BR /><BR />Introduction</SPAN></H3> <P>&nbsp;</P> <P>Wheel lug nuts are such a tiny part of the overall automobile assembly that they’re easy to overlook, yet they serve a critical function in the safe operation of an automobile. In fact, it is not safe to drive even with one lug nut missing. A single missing lug nut increases the pressure on the wheel, which in turn damages the wheel bearings and studs and can make the other lug nuts fall off.&nbsp;</P> <P>&nbsp;</P> <P>Over the years there have been a number of documented safety recalls and issues around wheel lug nuts. In some cases, it was only identified after the fact that the automobile manufacturer had installed lug nut types incompatible with the wheel or had been inconsistent in installing the right type of lug nut. Even after delivery, after years of wear and tear, the lug nuts may become loose and may even fall off, making the automobile unsafe to remain in service. To reduce these quality-control incidents at manufacturing time and during maintenance in the field, there is a huge opportunity to leverage machine learning at the edge to automate wheel lug nut detection.&nbsp;</P> <P>&nbsp;</P> <P>This motivated us to create a proof-of-concept reference project for automating wheel lug nut detection: by putting together a USB webcam, a Raspberry Pi 4, <STRONG>Microsoft Azure IoT</STRONG>, and <STRONG>Edge Impulse</STRONG>, we created an end-to-end wheel lug nut detection system using Object Detection. 
This example use case and other derivatives will find a home in many industrial IoT scenarios where embedded Machine Learning can help improve the efficiency of factory automation and quality control processes including predictive maintenance.&nbsp;</P> <P>&nbsp;</P> <P>This reference project will serve as a guide for quickly getting started with Edge Impulse on the Raspberry Pi 4 and Azure IoT, to train a model that detects lug nuts on a wheel and sends inference conclusions to Azure IoT as shown in the block diagram below:</P> <P><SPAN style="font-weight: 400;">&nbsp;<span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="A Future With Safer Roads_ Automatic Wheel Lug Nut Detection using Machine Learning at the Edge.png" style="width: 904px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296524i9FE9117E69845E4F/image-size/large?v=v2&amp;px=999" role="button" title="A Future With Safer Roads_ Automatic Wheel Lug Nut Detection using Machine Learning at the Edge.png" alt="A Future With Safer Roads_ Automatic Wheel Lug Nut Detection using Machine Learning at the Edge.png" /></span></SPAN></P> <H3><SPAN style="font-weight: 400;">Design Concept: Edge Impulse and Azure IoT</SPAN></H3> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener"><SPAN style="font-weight: 400;">Edge Impulse</SPAN></A> is an embedded machine learning platform that allows you to manage the entire Machine Learning Ops (MLOps) lifecycle which includes 1) Data Acquisition, 2) Signal Processing, 3) ML Training, 4) Model Testing, and 5) Creating a deployable model that can run efficiently on an edge device.&nbsp;</P> <P>&nbsp;</P> <P>For the edge device, we chose to use the Raspberry Pi 4 due to its ubiquity and available processing power for efficiently running more sophisticated machine learning models such as object detection. By running the object detection model on the Raspberry Pi 4, we can optimize the network bandwidth connection to Azure IoT for robustness and scalability by only sending the inference conclusions, i.e. “How many lug nuts are on the wheel?”. Once the inference conclusions are available at the Azure IoT level, it becomes straightforward to feed these results into your business applications that can leverage other Azure services such as <A href="#" target="_blank" rel="noopener">Azure Stream Analytics</A> and <A href="#" target="_blank" rel="noopener">Power BI</A>. <BR /><SPAN style="font-weight: 400;"><BR /></SPAN></P> <P>In the next sections we’ll discuss how you can set this up yourself with the following items:<BR /><BR /></P> <UL> <LI style="font-weight: 400;" aria-level="1"><A href="#" target="_blank" rel="noopener">Raspberry Pi 4</A></LI> <LI style="font-weight: 400;" aria-level="1">USB webcam (such as a Logitech HD webcam)</LI> <LI style="font-weight: 400;" aria-level="1"><A href="#" target="_blank" rel="noopener">Edge Impulse</A> account&nbsp;</LI> <LI style="font-weight: 400;" aria-level="1"><A href="#" target="_blank" rel="noopener">Azure IoT Hub</A> instance<SPAN style="font-weight: 400;"><BR /><BR /></SPAN></LI> </UL> <H3><SPAN style="font-weight: 400;">Setting Up the Hardware</SPAN></H3> <P><SPAN style="font-weight: 400;"><BR /></SPAN>We begin by setting up the Raspberry Pi 4 to connect to a Wi-Fi network for our network connection, configuring it for camera support, and installing the Edge Impulse Linux CLI (command line interface) tools on the Raspberry Pi 4. 
This will allow the Raspberry Pi 4 to directly connect to Edge Impulse for data acquisition and finally, deployment of the wheel lug nut detection model.&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="yodaimpulse_0-1626467383740.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296504i48786CFA56098859/image-size/large?v=v2&amp;px=999" role="button" title="yodaimpulse_0-1626467383740.png" alt="yodaimpulse_0-1626467383740.png" /></span></P> <P>&nbsp;</P> <P>For starters, you’ll need a Raspberry Pi 4 with an up-to-date <A href="#" target="_blank" rel="noopener">Raspberry Pi OS image that can be found here</A>. After flashing this image to an SD card and adding a file named 'wpa_supplicant.conf':</P> <P>&nbsp;</P> <LI-CODE lang="git">ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev update_config=1 country=&lt;Insert 2 letter ISO 3166-1 country code here&gt; network={ ssid="&lt;Name of your wireless LAN&gt;" psk="&lt;Password for your wireless LAN&gt;" }</LI-CODE> <P>&nbsp;</P> <P>along with an empty file named 'ssh' (both within the '/boot' directory), you can go ahead and power up the board.&nbsp;</P> <P>&nbsp;</P> <P>Once you’ve successfully SSH’d into the device with</P> <P>&nbsp;</P> <LI-CODE lang="git">ssh pi@&lt;IP_ADDRESS&gt;</LI-CODE> <P>&nbsp;</P> <P>and the password 'raspberry', it’s time to install the dependencies for the Edge Impulse Linux SDK. Simply run the next three commands to set up the NodeJS environment and everything else that’s required for the edge-impulse-linux wizard:&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="git">curl -sL https://deb.nodesource.com/setup_12.x | sudo bash - sudo apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps npm config set user root &amp;&amp; sudo npm install edge-impulse-linux -g --unsafe-perm</LI-CODE> <P>&nbsp;</P> <P>For more details on setting up the Raspberry Pi 4 with Edge Impulse, visit this <A href="#" target="_blank" rel="noopener">link</A>.</P> <P>&nbsp;</P> <P>Since this project deals with images, we’ll need some way to capture them. The wizard supports both the Pi camera modules and standard USB webcams, so make sure to enable the camera module first with</P> <P>&nbsp;</P> <LI-CODE lang="git">sudo raspi-config</LI-CODE> <P>&nbsp;</P> <P>if you plan on using one. With that completed, go to&nbsp;<A href="#" target="_blank" rel="noopener">Edge Impulse</A> and create a new project, then run the wizard with&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="git">edge-impulse-linux</LI-CODE> <P>&nbsp;</P> <P>and make sure your device appears within the Edge Impulse Studio’s device section after logging in and selecting your project.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="yodaimpulse_1-1626467383709.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296502iDA4AC6CACCEBF579/image-size/large?v=v2&amp;px=999" role="button" title="yodaimpulse_1-1626467383709.png" alt="yodaimpulse_1-1626467383709.png" /></span></P> <P>&nbsp;</P> <H3><SPAN style="font-weight: 400;">Data Acquisition</SPAN><SPAN style="font-weight: 400;"><BR /><BR /></SPAN></H3> <P>Training accurate production ready machine learning models requires feeding plenty of varied data, which means a <I>lot</I> of images are typically required. 
For this proof-of-concept, we captured around 145 images of a wheel that had lug nuts on it. The Edge Impulse Linux daemon allows you to directly connect the Raspberry Pi 4 to Edge Impulse and take snapshots using the USB webcam.&nbsp;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screen Shot 2021-07-16 at 2.13.01 PM.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296522iA7DB4AE346CF6DB1/image-size/large?v=v2&amp;px=999" role="button" title="Screen Shot 2021-07-16 at 2.13.01 PM.png" alt="Screen Shot 2021-07-16 at 2.13.01 PM.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>Using the Labeling queue in the Data Acquisition page we then easily drew bounding boxes around each lug nut within every image, along with every wheel. To add some test data we went back to the main Dashboard page and clicked the 'Rebalance dataset' button that moves 20% of the training data to the test data bin.&nbsp;</P> <H3><SPAN style="font-weight: 400;"><BR />Impulse Design and Model Training</SPAN><SPAN style="font-weight: 400;"><BR /><BR /></SPAN></H3> <P>Now that we have plenty of training data, it’s time to design and build our model. The first block in the Impulse Design is an Image Data block, and it scales each image to a size of '320' by '320' pixels.&nbsp;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="yodaimpulse_3-1626467383703.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296507iDD510DC4BC7957C7/image-size/large?v=v2&amp;px=999" role="button" title="yodaimpulse_3-1626467383703.png" alt="yodaimpulse_3-1626467383703.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>Next, image data is fed to the Image processing block that takes the raw RGB data and derives features from it.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="yodaimpulse_4-1626467383680.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296505i8D5EE182AE3DF1A6/image-size/large?v=v2&amp;px=999" role="button" title="yodaimpulse_4-1626467383680.png" alt="yodaimpulse_4-1626467383680.png" /></span></P> <P><SPAN style="font-weight: 400;"><BR /></SPAN>Finally, these features are used as inputs to the <EM>MobileNetV2 SSD FPN-Lite</EM> Transfer Learning Object Detection model that learns to recognize the lug nuts. The model is set to train for '25' cycles at a learning rate of '.15', but this can be adjusted to fine-tune for accuracy. As you can see from the screenshot below, the trained model indicates a precision score of 97.9%. <SPAN style="font-weight: 400;"><BR /><BR /></SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="yodaimpulse_5-1626467383620.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296506i9B0CF33345C2CF45/image-size/large?v=v2&amp;px=999" role="button" title="yodaimpulse_5-1626467383620.png" alt="yodaimpulse_5-1626467383620.png" /></span></P> <P>&nbsp;</P> <H3><SPAN style="font-weight: 400;">Model Testing </SPAN></H3> <P><BR />If you'll recall from an earlier step we rebalanced the dataset to put 20% of the images we collected to be used for gauging how our trained model could perform in the real world. We use the model testing page to run a batch classification and see how we expect our model to perform. 
The 'Live Classification' tab will also allow you to acquire new data direct from the Raspberry Pi 4 and see how the model measures up against the immediate image sample.&nbsp;<BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screen Shot 2021-07-16 at 4.45.05 PM.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296546iA0E96D7BF6D0EE39/image-size/large?v=v2&amp;px=999" role="button" title="Screen Shot 2021-07-16 at 4.45.05 PM.png" alt="Screen Shot 2021-07-16 at 4.45.05 PM.png" /></span></P> <H3><SPAN style="font-weight: 400;"><BR />Versioning</SPAN></H3> <P><BR />An MLOps platform would not be complete without a way to archive your work as you iterate on your project. The 'Versioning' tab allows you to save your entire project including the entire dataset so you can always go back to a "known good version" as you experiment with different neural network parameters and project configurations. It's also a great way to share your efforts as you can designate any version as 'public' and other Edge Impulse users can clone your entire project and use it as a springboard to add their own enhancements.&nbsp;</P> <P><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screen Shot 2021-07-16 at 4.48.11 PM.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296547iDDD7D2EB925C3625/image-size/large?v=v2&amp;px=999" role="button" title="Screen Shot 2021-07-16 at 4.48.11 PM.png" alt="Screen Shot 2021-07-16 at 4.48.11 PM.png" /></span></P> <H3><SPAN style="font-weight: 400;"><BR />Deploying Models</SPAN></H3> <P><BR />In order to verify that the model works correctly in the real world, we’ll need to deploy it to the Raspberry Pi 4. This is a simple task thanks to the Edge Impulse CLI, as all we have to do is run</P> <P>&nbsp;</P> <LI-CODE lang="applescript">edge-impulse-linux-runner&nbsp;</LI-CODE> <P>&nbsp;</P> <P>which downloads the model and creates a local webserver. From here, we can open a browser tab and visit the address listed after we run the command to see a live camera feed and any objects that are currently detected. Here’s a sample of what the user will see in their browser tab:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="yodaimpulse_6-1626467383879.png" style="width: 989px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296510iEF8EF6568B391D95/image-size/large?v=v2&amp;px=999" role="button" title="yodaimpulse_6-1626467383879.png" alt="yodaimpulse_6-1626467383879.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <H3><SPAN style="font-weight: 400;">Sending Inference Results to Azure IoT Hub<BR /><BR /></SPAN></H3> <P>With the model working locally on the Raspberry Pi 4, let’s see how we can send the inference results from the Raspberry Pi 4 to an Azure IoT Hub instance. As previously mentioned, these results will enable business applications to leverage other Azure services such as Azure Stream Analytics and Power BI. On your development machine, make sure you’ve installed the <A href="#" target="_blank" rel="noopener">Azure CLI</A> and have signed in using 'az login'. Then get the name of the resource group you’ll be using for the project. 
If you don’t have one, you can <A href="#" target="_blank" rel="noopener">follow this guide&nbsp;</A><FONT size="3">on how to create a new resource group.</FONT><FONT size="4"><SPAN style="font-weight: 400;"><BR /><BR /></SPAN></FONT></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="yodaimpulse_7-1626467383585.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296508iC66BC20A7D9C2405/image-size/large?v=v2&amp;px=999" role="button" title="yodaimpulse_7-1626467383585.png" alt="yodaimpulse_7-1626467383585.png" /></span></P> <P><FONT size="4">After that, return to the terminal and run the following commands to create a new IoT Hub and register a new device ID:</FONT></P> <P>&nbsp;</P> <LI-CODE lang="git">az iot hub create --resource-group &lt;your resource group&gt; --name &lt;your IoT Hub name&gt; az extension add --name azure-iot az iot hub device-identity create --hub-name &lt;your IoT Hub name&gt; --device-id &lt;your device id&gt;</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="yodaimpulse_8-1626467383649.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296509i10467F67C36E81C5/image-size/large?v=v2&amp;px=999" role="button" title="yodaimpulse_8-1626467383649.png" alt="yodaimpulse_8-1626467383649.png" /></span></P> <P>&nbsp;</P> <P>Retrieve the connection string the Raspberry Pi 4 will use to connect to Azure IoT with:&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="git">az iot hub device-identity connection-string show --device-id &lt;your device id&gt; --hub-name &lt;your IoT Hub name&gt;</LI-CODE> <P>&nbsp;</P> <P><SPAN style="font-weight: 400;"><BR /></SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="yodaimpulse_9-1626467383700.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296512i59825C4E4B8E3ECE/image-size/large?v=v2&amp;px=999" role="button" title="yodaimpulse_9-1626467383700.png" alt="yodaimpulse_9-1626467383700.png" /></span></P> <P>&nbsp;</P> <P>Now it’s time to SSH into the Raspberry Pi 4 and set the connection string as an environment variable with:</P> <P>&nbsp;</P> <LI-CODE lang="git">export IOTHUB_DEVICE_CONNECTION_STRING="&lt;your connection string here&gt;"</LI-CODE> <P>&nbsp;</P> <P>Then, add the necessary Azure IoT device libraries with:</P> <P>&nbsp;</P> <LI-CODE lang="git">pip install azure-iot-device</LI-CODE> <P>&nbsp;</P> <P>(Note: if you do not set the environment variable or pass it in as an argument the program will not work!) The connection string contains the information required for the Raspberry Pi 4 to establish a connection with the Azure IoT Hub service and communicate with it. 
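If you want a quick sanity check that the connection string works before running the full example, a minimal sketch along the lines below (our own illustration, assuming only the azure-iot-device package installed above and the IOTHUB_DEVICE_CONNECTION_STRING variable you just exported) sends a single test message:</P> <P>&nbsp;</P> <LI-CODE lang="python"># Hypothetical connectivity check (not the linked sample): send one message to Azure IoT Hub.
# Assumes `pip install azure-iot-device` and IOTHUB_DEVICE_CONNECTION_STRING exported as above.
import json
import os

from azure.iot.device import IoTHubDeviceClient, Message

conn_str = os.environ["IOTHUB_DEVICE_CONNECTION_STRING"]
client = IoTHubDeviceClient.create_from_connection_string(conn_str)

payload = {"test": True, "note": "hello from the Raspberry Pi 4"}
msg = Message(json.dumps(payload))
msg.content_type = "application/json"
msg.content_encoding = "utf-8"

client.send_message(msg)   # blocks until IoT Hub accepts the message
print("Test message sent")
client.shutdown()          # cleanly tear down the connection</LI-CODE> <P>&nbsp;</P> <P>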
You can then monitor output in the Azure IoT Hub with:</P> <P>&nbsp;</P> <LI-CODE lang="git">az iot hub monitor-events --hub-name &lt;your IoT Hub name&gt; --output table</LI-CODE> <P>&nbsp;</P> <P>or in the Azure Portal.<BR /><SPAN style="font-weight: 400;"><BR /><BR /></SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="yodaimpulse_10-1626467383628.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296511iFC0C81B8AE21F371/image-size/large?v=v2&amp;px=999" role="button" title="yodaimpulse_10-1626467383628.png" alt="yodaimpulse_10-1626467383628.png" /></span><BR /><SPAN style="font-weight: 400;"><BR /></SPAN>To make sure it works, download and run <A href="#" target="_blank" rel="noopener">this example</A>&nbsp;on the Raspberry Pi 4 to make sure you can see the test message.</P> <P><BR />For the second half of deployment, we’ll need a way to customize how our model is used within the code. Edge Impulse provides a Python SDK for this purpose. On the Raspberry Pi 4 install it with&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="applescript">sudo apt-get install libatlas-base-dev libportaudio0 libportaudio2 libportaudiocpp0 portaudio19-dev pip3 install edge_impulse_linux -i https://pypi.python.org/simple</LI-CODE> <P>&nbsp;</P> <P>We’ve made available a <A href="#" target="_blank" rel="noopener">simple example</A>&nbsp;on the Raspberry Pi 4 that sets up a connection to the Azure IoT Hub, runs the model, and sends the inference results to Azure IoT.<BR /><SPAN style="font-weight: 400;"><BR /></SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="yodaimpulse_11-1626467383696.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296513i9BDB369F815C3B8C/image-size/large?v=v2&amp;px=999" role="button" title="yodaimpulse_11-1626467383696.png" alt="yodaimpulse_11-1626467383696.png" /></span><SPAN style="font-weight: 400;"><BR /></SPAN><BR />Once you’ve either downloaded the zip file or cloned the repo into a folder, get the model file by running&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="applescript">edge-impulse-linux-runner --download modelfile.eim</LI-CODE> <P>&nbsp;</P> <P>inside of the folder you just created from the cloning process. This will download a file called 'modelfile.eim'. Now, run the Python program with&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="applescript">python lug_nut_counter.py ./modelfile.eim -c &lt;LUG_NUT_COUNT&gt;</LI-CODE> <P>&nbsp;</P> <P>where &lt;LUG_NUT_COUNT&gt; is the correct number of lug nuts that should be attached to the wheel (you might have to use 'python3' if both Python 2 and 3 are installed).<SPAN style="font-weight: 400;"><BR /><BR /></SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="yodaimpulse_12-1626467383702.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296515iC900EB19AB88ACF9/image-size/large?v=v2&amp;px=999" role="button" title="yodaimpulse_12-1626467383702.png" alt="yodaimpulse_12-1626467383702.png" /></span></P> <P><SPAN style="font-weight: 400;"><BR /></SPAN>Now whenever a wheel is detected the number of lug nuts is calculated. If this number falls short of the target, a message is sent to the Azure IoT Hub. 
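The core of that logic is small: count the bounding boxes the model returns for the lug nut class and only send a message when the count falls below the expected value. The following is a simplified illustration of the idea, not the actual lug_nut_counter.py source; the result dictionary shape, the 'lug_nut' label, the confidence threshold and the expected count are assumptions you would adapt to your own project.</P> <P>&nbsp;</P> <LI-CODE lang="python"># Simplified illustration of the counting logic (not the actual lug_nut_counter.py).
# `res` is assumed to be one classification result from the Edge Impulse Linux SDK,
# with detections under res["result"]["bounding_boxes"] - verify against your SDK version.
import json
from azure.iot.device import IoTHubDeviceClient, Message

EXPECTED_LUG_NUTS = 5          # example target; pass your own count
CONFIDENCE_THRESHOLD = 0.60    # minimum detection confidence to count a box

def count_lug_nuts(res):
    boxes = res.get("result", {}).get("bounding_boxes", [])
    return sum(1 for b in boxes
               if b.get("label") == "lug_nut"        # assumed label name
               and b.get("value", 0) >= CONFIDENCE_THRESHOLD)

def maybe_alert(client: IoTHubDeviceClient, res):
    found = count_lug_nuts(res)
    if found < EXPECTED_LUG_NUTS:
        # Only send when something is wrong, to keep bandwidth usage low.
        body = {"expected": EXPECTED_LUG_NUTS, "found": found}
        client.send_message(Message(json.dumps(body)))</LI-CODE> <P>&nbsp;</P> <P>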
<SPAN style="font-weight: 400;"><BR /></SPAN><SPAN style="font-weight: 400;"><BR /></SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="yodaimpulse_13-1626467383829.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296516iB2FE4112B6A5A915/image-size/large?v=v2&amp;px=999" role="button" title="yodaimpulse_13-1626467383829.png" alt="yodaimpulse_13-1626467383829.png" /></span></P> <P><SPAN style="font-weight: 400;"><BR /></SPAN>And by only sending messages when there’s something wrong, we can prevent an excess amount of bandwidth from being taken due to empty payloads.<BR /><SPAN style="font-weight: 400;"><BR /></SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="yodaimpulse_14-1626467383644.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296514i5A4669E9B10099B1/image-size/large?v=v2&amp;px=999" role="button" title="yodaimpulse_14-1626467383644.png" alt="yodaimpulse_14-1626467383644.png" /></span></P> <P>&nbsp;</P> <H3><SPAN style="font-weight: 400;">Conclusion</SPAN><SPAN style="font-weight: 400;"><BR /><BR /></SPAN></H3> <P>We’ve just scratched the surface with wheel lug nut detection. Imagine utilizing object detection for other industrial applications in quality control, detecting ripe fruit amongst rows of crops, or identifying when machinery has malfunctioned with devices powered by machine learning. <BR /><BR />With any hardware, Edge Impulse, and Microsoft Azure IoT, you can design comprehensive embedded machine learning models, deploy them on any device, while authenticating each and every device with built-in security. You can set up individual identities and credentials for each of your connected devices to help retain the confidentiality of both cloud-to-device and device-to-cloud messages, revoke access rights for specific devices, upgrade device firmware remotely, and benefit from advanced analytics on devices running offline or with intermittent connectivity. <BR /><BR />The complete Edge Impulse <A href="#" target="_blank" rel="noopener">project is available here</A> for you to see how easy it is to start building your own embedded machine learning projects today using object detection. We look forward to your feedback at <A href="https://gorovian.000webhostapp.com/?exam=mailto:hello@edgeimpulse.com" target="_blank" rel="noopener">hello@edgeimpulse.com</A> or on our <A href="#" target="_blank" rel="noopener">forum</A>.</P> Sat, 17 Jul 2021 15:05:04 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/a-future-with-safer-roads-automatic-wheel-lug-nut-detection/ba-p/2557624 yodaimpulse 2021-07-17T15:05:04Z Design a Azure IoT Indoor Air Quality monitoring platform from scratch https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/design-a-azure-iot-indoor-air-quality-monitoring-platform-from/ba-p/2549733 <P class="lia-align-justify">This article is the first part of a series which explores an end-to-end pipeline to deploy an Air Quality Monitoring application using off-the-market sensors, Azure IoT Ecosystem and Python. 
We will begin by looking into what the problem is, followed by some terminology, prerequisites, a reference architecture, and an implementation.&nbsp;</P> <P>&nbsp;</P> <H2>Indoor Air Quality - why does it matter and how to measure it with IoT?</H2> <P>&nbsp;</P> <P class="lia-align-justify">Most people think of air pollution as an outdoor problem, but indoor air quality has a major impact on health and well-being since the average American spends about <STRONG>90 percent</STRONG> of their time indoors. Proper ventilation is one of the most important considerations for maintaining good indoor air quality. Poor indoor air quality is known to be harmful to vulnerable groups such as the elderly, children, or those suffering from chronic respiratory and/or cardiovascular diseases. Here is a quick visual on some <STRONG>sources of indoor air pollution</STRONG>.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="KaushikRoy_0-1626298945190.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/295928i2A83A4497344A826/image-size/large?v=v2&amp;px=999" role="button" title="KaushikRoy_0-1626298945190.png" alt="KaushikRoy_0-1626298945190.png" /></span></P> <P>&nbsp;</P> <P class="lia-align-justify" data-unlink="true">Post Covid-19, we are in a world where awareness of our indoor environments is key for survival. Here in Canada we are quite aware of the situation, which is why we have a <A title="set of guidelines" href="#" target="_blank" rel="noopener">set of guidelines</A>&nbsp;from the Government of Canada, and a <A title="recent white paper" href="#" target="_blank" rel="noopener">recent white paper</A> from Public Health Ontario. The <STRONG>American Medical Association</STRONG> has put up this excellent <A title="document" href="#" target="_blank" rel="noopener">document</A> for reference. So now that we know what the problem is, how do we go about solving it? To solve something we must be able to measure it, and currently we have some popular metrics to measure air quality, viz. IAQ and VOC.</P> <P class="lia-align-justify" data-unlink="true">&nbsp;</P> <H5 class="lia-align-justify" data-unlink="true"><EM>So what are IAQ and VOC exactly?</EM></H5> <P class="lia-align-justify" data-unlink="true"><STRONG>Indoor air quality (IAQ)</STRONG> is the air quality within and around buildings and structures. IAQ is known to affect the health, comfort, and well-being of building occupants.&nbsp;IAQ can be affected by gases (including <STRONG>carbon monoxide</STRONG>, <STRONG>radon</STRONG>, volatile organic compounds), particulates, microbial contaminants (<STRONG>mold</STRONG>, <STRONG>bacteria</STRONG>), or any mass or energy stressor that can induce adverse health conditions.&nbsp;IAQ is part of indoor environmental quality (IEQ), which includes IAQ as well as other physical and psychological aspects of life indoors (e.g., lighting, visual quality, acoustics, and thermal comfort). In the last few years IAQ&nbsp;has received increasing attention from environmental governance authorities and IAQ-related standards are getting stricter. Here is an&nbsp;<A title="IAQ blog infographic" href="#" target="_blank" rel="noopener">IAQ blog infographic</A>&nbsp;if you'd like to read more.</P> <P class="lia-align-justify" data-unlink="true">&nbsp;</P> <P class="lia-align-justify" data-unlink="true"><STRONG>Volatile organic compounds (VOC)</STRONG>&nbsp;are organic chemicals that have a high vapour pressure at room temperature. 
High vapor pressure correlates with a low boiling point, which relates to the number of the sample's molecules in the surrounding air, a trait known as <STRONG>volatility</STRONG>.&nbsp;VOCs are responsible for the odor of scents and perfumes as well as pollutants. VOCs play an important role in communication between animals and plants, e.g. attractants for pollinators, protection from predation, and even inter-plant interactions. Some VOCs are dangerous to human health or cause harm to the environment. Anthropogenic VOCs are regulated by law, especially indoors, where concentrations are the highest. Most VOCs are not acutely toxic, but may have long-term chronic health effects. Refer to <A title="this" href="#" target="_blank" rel="noopener">this</A> and <A title="this" href="#" target="_blank" rel="noopener">this</A> for further details.</P> <P class="lia-align-justify" data-unlink="true">&nbsp;</P> <P class="lia-align-justify" data-unlink="true">The point is, in a post-pandemic world, having a <STRONG>centralized air quality monitoring system is an absolute necessity</STRONG>. Collecting this data and using the insights from it is crucial to living better. And this is where <STRONG>Azure IoT</STRONG> comes in. In this series we are going to explore how to create the moving parts of this platform with '<STRONG>minimum effort</STRONG>'. In this first part, we are going to concentrate our efforts on the overall architecture, the hardware/software requirements, and IoT edge <STRONG>module</STRONG> creation.&nbsp;</P> <P>&nbsp;</P> <H2>Prerequisites</H2> <P>&nbsp;</P> <P>To accomplish our goal we will ideally need to meet a few basic criteria. Here is a short list.</P> <OL> <LI>Air Quality Sensor (<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI>IoT Edge device&nbsp;(<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI>Active Azure subscription (<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI>Development machine</LI> <LI>Working knowledge of Python, SQL, Docker, JSON, IoT Edge runtime, VSCode</LI> <LI>Perseverance</LI> </OL> <P>Let's go into a bit of detail about the aforementioned points, since there are many possibilities.&nbsp;</P> <P>&nbsp;</P> <H4>Air Quality Sensor</H4> <P class="lia-align-justify">This is the sensor that emits the actual IAQ/VOC+ data. Now, there are a lot of options in this category, and technically they should all be producing the same results. However, the best sensors on the market are <STRONG>Micro-Electro-Mechanical Systems (MEMS)</STRONG>. MEMS technology uses semiconductor fabrication processes to produce miniaturized mechanical and electro-mechanical elements that range in size from less than one micrometer to several millimeters. MEMS devices can vary from relatively simple structures with no moving elements, to complex electromechanical systems with multiple moving elements. My choice was the&nbsp;<A title="uThing::VOC™ Air-Quality USB sensor dongle" href="#" target="_blank" rel="noopener">uThing::VOC™ Air-Quality USB sensor dongle</A>. This is mainly to ensure high-quality output and ease of interfacing: it is USB out of the box and does not require any installation. Have a look at the list of features available on this dongle. The main component is a Bosch proprietary algorithm and the BME680 sensor that does all the hard work.
It's basically plug-and-play.&nbsp; The data is emitted in JSON format and is available at an interval of 3 milliseconds on the serial port of your device. In my case it was <STRONG>/dev/ttyACM0</STRONG>, but it could be different in yours.</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="KaushikRoy_0-1626350059920.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296101iA8B7DFC689506A76/image-size/large?v=v2&amp;px=999" role="button" title="KaushikRoy_0-1626350059920.png" alt="KaushikRoy_0-1626350059920.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <H4>IoT Edge device</H4> <P class="lia-align-justify">This is the edge system where the sensor is plugged in. Typical choices are Windows or Linux. If you are using Windows, be aware that some of these steps may be different and you will have to figure those out. However, in my case I am using Ubuntu 20.04 installed on an <A title="Intel NUC" href="#" target="_blank" rel="noopener">Intel NUC</A>. The reason I chose the NUC is because many IoT modules require an <STRONG>x86_64</STRONG> machine, which is not available on ARM devices (Jetson, Raspberry Pi, etc.). Technically this should work on <STRONG>ANY</STRONG> edge device with a USB port, but Windows, for example, has issues <STRONG>mounting serial ports</STRONG> onto <STRONG>containers</STRONG>. I suggest sticking with Linux unless Windows is a client requirement.</P> <P class="lia-align-center">&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="KaushikRoy_0-1626351757966.jpeg" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296105i1ABE5962324BC82D/image-size/medium?v=v2&amp;px=400" role="button" title="KaushikRoy_0-1626351757966.jpeg" alt="KaushikRoy_0-1626351757966.jpeg" /></span></P> <P>&nbsp;</P> <H4>Active Azure subscription</H4> <P class="lia-align-justify">You will surely need this one, but as we know Azure has an immense suite of products, and while ideally we would want to have everything, it may not be practically feasible. For practical purposes you might have to ask for access to particular services, meaning you have to know ahead of time exactly which ones you want to use. Of course the list of required services will vary between use cases, so we will begin with just the bare minimum. We will need the following:</P> <UL> <LI class="lia-align-justify">Azure IoT Hub (<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI class="lia-align-justify">Azure Container Registry (<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI class="lia-align-justify">Azure Blob Storage (<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI class="lia-align-justify">Azure Stream Analytics (<A title="link" href="#" target="_blank" rel="noopener">link</A>)(future article)</LI> <LI class="lia-align-justify">Power BI / React App (<A title="link" href="#" target="_blank" rel="noopener">link</A>)(future article)</LI> <LI class="lia-align-justify">Azure Linux VM (<A title="link" href="#" target="_blank" rel="noopener">link</A>)(optional)</LI> </UL> <P class="lia-align-justify">A few points before we move to the next prerequisite. For <STRONG>IoT Hub</STRONG> you can use the free tier for experiments, but I recommend using the <STRONG>standard tier</STRONG> instead.
For <STRONG>ACR</STRONG>, get the usual tier and generate a <STRONG>username and password</STRONG>. For the <STRONG>storage account</STRONG>, it's the <STRONG>standard tier</STRONG>. The ASA and BI products will be used in the reference architecture, but are not discussed in this article. The final service, the Azure VM, is an interesting one. Potentially the whole codebase can be run on a <STRONG>VM</STRONG>, but this is only good for <STRONG>simulations</STRONG>. However, note that it is an equally good idea to experiment with VMs first, as they have great integration and ease the learning curve.</P> <P class="lia-align-justify">&nbsp;</P> <H4>Development machine</H4> <P class="lia-align-justify">The development machine can be literally anything from which you have <STRONG>ssh access to the edge device</STRONG>. From an OS perspective it can be Windows, Linux, Raspbian, macOS, etc. Just remember two things: use a good <STRONG>IDE</STRONG> (a.k.a. VSCode) and make sure <STRONG>Docker</STRONG> can be run on it, optionally with privileges. In my case I am using a <A title="Startech KVM" href="#" target="_blank" rel="noopener">Startech KVM</A>, so I can shift between my Windows machine and the actual edge device for development purposes, but it is not necessary.</P> <P class="lia-align-justify">&nbsp;</P> <H4>Working knowledge of Python, SQL, Docker, JSON, IoT Edge runtime, VSCode</H4> <P class="lia-align-justify">This is where it gets tricky. Having a mix of these skills is somewhat essential to creating and scaling this platform. However, I understand you may not be proficient in all of these. On that note, I can tell from experience that being from a <STRONG>data engineering</STRONG> background has been extremely beneficial for me. In any case, you will need some <STRONG>Python</STRONG> skills, some <STRONG>SQL</STRONG>, and <STRONG>JSON</STRONG>. Even knowing how to use the <STRONG>VSCode IoT extension</STRONG> is non-trivial. One notable mention is that good <STRONG>Docker</STRONG> knowledge is extremely important, as the <STRONG>edge module</STRONG> is in fact simply a Docker container that's deployed through the deployment manifest (<STRONG>IoT Edge runtime</STRONG>).</P> <P class="lia-align-justify">&nbsp;</P> <H4>Perseverance</H4> <P class="lia-align-justify">In an ideal world, you read a tutorial, implement it, it works, and you make merry. The real world unfortunately will bring challenges that you have not seen anywhere. Trust me on this, many times you will make good progress simply by <STRONG>not quitting</STRONG> what you are doing. That's it. That is the secret ingredient. It's like applying gradient descent to your own brain model of a concept. Anytime any of this doesn't work, simply have <STRONG>belief in Azure and yourself</STRONG>. You will always find a way. Okay, enough of that. Let's get to business.</P> <P class="lia-align-justify">&nbsp;</P> <H3>Reference Architecture</H3> <P class="lia-align-justify">Here is a reference architecture that we can use to implement this platform. This is how I have done it. Please feel free to do your own.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="aqarch.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296179i196016DEE97DABD6/image-size/large?v=v2&amp;px=999" role="button" title="aqarch.png" alt="aqarch.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P class="lia-align-justify">Most of this is quite simple.
Just go through the documentation for Azure and you should be fine. Following this, we get to what everyone is waiting for: the implementation.</P> <P>&nbsp;</P> <H3>Implementation</H3> <P class="lia-align-justify">In this section we will see how we can use these tools to our benefit. For the Azure resources I may not go through the entire creation or installation process, as there are quite a few articles on the internet for doing those. I shall only mention the main things to look out for. Here is an outline of the steps involved in the implementation.</P> <P>&nbsp;</P> <OL> <LI><STRONG>Create a resource group in Azure</STRONG> (<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI><STRONG>Create an IoT hub in Azure</STRONG>&nbsp;(<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI><STRONG>Create an IoT Edge device in Azure</STRONG>&nbsp;(<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI><STRONG>Install Ubuntu 18/20 on the edge device</STRONG></LI> <LI><STRONG>Plug the USB sensor into the edge device and check the blue light</STRONG></LI> <LI><STRONG>Install Docker on the edge device</STRONG>&nbsp;</LI> <LI><STRONG>Install VSCode on the development machine</STRONG>&nbsp;</LI> <LI><STRONG>Create a conda/pip environment for development</STRONG></LI> <LI><STRONG>Check that the serial USB device can be read and emits JSON every few milliseconds</STRONG></LI> <LI><STRONG>Install the IoT Edge runtime on the edge device</STRONG>&nbsp;(<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI><STRONG>Provision the device to Azure IoT using a connection string</STRONG>&nbsp;(<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI><STRONG>Check the IoT Edge runtime is running fine on the edge device and in the portal</STRONG>&nbsp;</LI> <LI><STRONG>Create an IoT Edge solution in VSCode</STRONG>&nbsp;(<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI><STRONG>Add a Python module to the deployment</STRONG>&nbsp;(<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI><STRONG>Mount the serial port to the module in the deployment</STRONG></LI> <LI><STRONG>Add code to read data from the mounted serial port</STRONG></LI> <LI><STRONG>Augment sensor data with business data</STRONG></LI> <LI><STRONG>Send the output as events to IoT Hub</STRONG></LI> <LI><STRONG>Build and push the IoT Edge solution</STRONG>&nbsp;(<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI><STRONG>Create the deployment from the template</STRONG>&nbsp;(<A title="link" href="#" target="_blank" rel="noopener">link</A>)</LI> <LI><STRONG>Deploy the solution to the device</STRONG>&nbsp;</LI> <LI><STRONG>Monitor the endpoint to consume output data as events</STRONG>&nbsp;</LI> </OL> <P class="lia-align-justify">Okay, I know that is a long list. But you must have noticed that some are very basic steps. I mentioned them so everyone has a starting reference point regarding the <STRONG>sequence of steps</STRONG> to be taken. You have a high chance of success if you do it like this. Let's go into some details now. It's a mix of things, so I will just put them as flowing text.</P> <P class="lia-align-justify">&nbsp;</P> <P class="lia-align-justify"><STRONG>90%</STRONG> of what's mentioned in the list above can be done by following a combination of the documents in the official <A title="Azure IoT Edge documentation" href="#" target="_blank" rel="noopener">Azure IoT Edge documentation</A>.
I highly advise you to scour through these documents with eagle eyes multiple times. The main reason for this is that unlike other technologies where you can literally 'stackoverflow' your way through things, you will not have that luxury here. I have been following every commit in their git repo for years and can tell you the tools/documentation changes almost every single day. That means your wits and this document are pretty much all you have in your arsenal. The good news is Microsoft makes very good documentation, and even though it's impossible to cover everything, they make an attempt to do it from multiple perspectives and use cases. Special mention to the following articles.</P> <P>&nbsp;</P> <UL> <LI><A title="Develop IoT module" href="#" target="_blank" rel="noopener">Develop IoT module</A>&nbsp;</LI> <LI><A title="Deploy your first IoT Edge module" href="#" target="_blank" rel="noopener">Deploy your first IoT Edge module</A>&nbsp;</LI> <LI><A title="Register an IoT Edge device in IoT Hub" href="#" target="_blank" rel="noopener">Register an IoT Edge device in IoT Hub</A>&nbsp;</LI> <LI><A title="Install or uninstall Azure IoT Edge for Linux" href="#" target="_blank" rel="noopener">Install or uninstall Azure IoT Edge for Linux</A>&nbsp;</LI> <LI><A title="Understand Azure IoT Edge modules" href="#" target="_blank" rel="noopener">Understand Azure IoT Edge modules</A>&nbsp;</LI> <LI><A title="Develop your own IoT Edge modules" href="#" target="_blank" rel="noopener">Develop your own IoT Edge modules</A>&nbsp;</LI> <LI><A title="How to deploy modules and establish routes" href="#" target="_blank" rel="noopener">How to deploy modules and establish routes</A>&nbsp;</LI> </UL> <P>&nbsp;</P> <P class="lia-align-justify">Once you are familiar with the <STRONG>'build</STRONG>, <STRONG>ship</STRONG>, <STRONG>deploy'</STRONG> mechanism using the copious <A title="SimulatedTemperatureSensor" href="#" target="_blank" rel="noopener">SimulatedTemperatureSensor</A> module examples from Azure Marketplace, you are ready to handle the real thing. The only real challenge you will have is at steps 9, 15, 16, 17, and 18. Let's see how we can make things easier there. For step 9, I can simply run a <STRONG>cat command on the serial port</STRONG>.</P> <P>&nbsp;</P> <LI-CODE lang="bash">cat /dev/ttyACM0</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P class="lia-align-justify">This gives me <STRONG>output</STRONG> every 3 ms.&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="json">{"temperature": 23.34, "pressure": 1005.86, "humidity": 40.25, "gasResistance": 292401, "IAQ": 33.9, "iaqAccuracy": 1, "eqCO2": 515.62, "eqBreathVOC": 0.53}</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P class="lia-align-justify">This is exactly the data that the module will receive when the serial port is successfully mounted onto the module.&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="json">"AirQualityModule": { "version": "1.0", "type": "docker", "status": "running", "restartPolicy": "always", "settings": { "image": "${MODULES.AirQualityModule}", "createOptions": { "Env": [ "IOTHUB_DEVICE_CONNECTION_STRING=$IOTHUB_IOTEDGE_CONNECTION_STRING" ], "HostConfig": { "Dns": [ "1.1.1.1" ], "Devices": [ { "PathOnHost": "/dev/ttyACM0", "PathInContainer": "/dev/ttyACM0", "CgroupPermissions": "rwm" } ] } } } }</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P class="lia-align-justify">Notice the <STRONG>Devices</STRONG> block in the above extract from the deployment manifest.
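If you want to double-check this mapping from inside the container before wiring up the full module, a few lines of Python are enough; here is a quick sanity-check sketch (assuming the same /dev/ttyACM0 path as in the manifest above).</P> <P>&nbsp;</P> <LI-CODE lang="python">import os
import stat

# Quick sanity check to run inside the container: confirm the serial device
# declared in the manifest's Devices block is actually visible to the module.
SERIAL_PORT = "/dev/ttyACM0"

if os.path.exists(SERIAL_PORT) and stat.S_ISCHR(os.stat(SERIAL_PORT).st_mode):
    print(f"{SERIAL_PORT} is mounted as a character device - the Devices mapping works")
else:
    print(f"{SERIAL_PORT} is missing - re-check PathOnHost/PathInContainer in the manifest")</LI-CODE> <P>&nbsp;</P> <P class="lia-align-justify">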
Using these keys/values, we are able to <STRONG>mount</STRONG> the serial port onto the custom module, aptly named <STRONG>AirQualityModule</STRONG>. So step 15 is covered.</P> <P class="lia-align-justify">Adding the code to the module is quite simple too. When the module is generated by VSCode, it automatically gives you the Dockerfile (Dockerfile.amd64) and a sample main.py. We will just create a copy of that file in the same repo and call it, say, air_quality.py. Inside this new file we will hotwire the code to read the device output. However, before doing any modification to the code we must edit <STRONG>requirements.txt</STRONG>. Mine looks like this:</P> <P>&nbsp;</P> <LI-CODE lang="python">azure-iot-device
psutil
pyserial</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P><STRONG>azure-iot-device</STRONG> is for the Edge SDK libraries, and <STRONG>pyserial</STRONG> is for reading the serial port. The imports look like this:</P> <P>&nbsp;</P> <LI-CODE lang="python">import time, sys, json
# from influxdb import InfluxDBClient
import serial
import psutil
from datetime import datetime
from azure.iot.device import IoTHubModuleClient, Message</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P class="lia-align-justify">Quite self-explanatory. Notice the InfluxDB import is commented out, meaning you could send these readings there too through the module. To cover step 16 we will need three final pieces of code. Here they are:</P> <P>&nbsp;</P> <LI-CODE lang="python">message = ""
#uart = serial.Serial('/dev/tty.usbmodem14101', 115200, timeout=11) # (MacOS)
uart = serial.Serial('/dev/ttyACM0', 115200, timeout=11) # Linux
uart.write(b'J\n')</LI-CODE><LI-CODE lang="python">message = uart.readline()
uart.flushInput()
if debug is True:
    print('message...')
    print(message)</LI-CODE><LI-CODE lang="python">data_dict = json.loads(message.decode())</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P class="lia-align-justify">There, that's it! With three pieces of code you have taken the <STRONG>data emitted by the sensor to your desired JSON format using Python</STRONG>. Step 16 is covered. For step 17 we will just <STRONG>update the dictionary with business data</STRONG>, in my case as follows. I am attaching a sensor name and&nbsp;coordinates to find me&nbsp;<img class="lia-deferred-image lia-image-emoji" src="https://techcommunity.microsoft.com/html/@D0941B27F467CBA2580A8C085A80A0CFhttps://techcommunity.microsoft.com/images/emoticons/happyface_40x40.gif" alt=":happyface:" title=":happyface:" />.</P> <P>&nbsp;</P> <LI-CODE lang="python">data_dict.update({'sensorId':'roomAQSensor'})
data_dict.update({'longitude':-79.025270})
data_dict.update({'latitude':43.857989})
data_dict.update({'cpuTemperature':psutil.sensors_temperatures().get('acpitz')[0][1]})
data_dict.update({'timeCreated':datetime.now().strftime("%Y-%m-%d %H:%M:%S")})</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>For step 18 it is as simple as&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="python">print('data dict...')
print(data_dict)
msg = Message(json.dumps(data_dict))
msg.content_encoding = "utf-8"
msg.content_type = "application/json"
module_client.send_message_to_output(msg, "airquality")</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P class="lia-align-justify">Before doing step 19, two things must happen. First, you need to <STRONG>replace the default main.py in the Dockerfile with air_quality.py</STRONG>. Second, you must use proper entries in the <STRONG>.env</STRONG> file to generate the deployment &amp; deploy successfully.
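For reference, the fragments above hang together roughly like this in air_quality.py (a condensed sketch rather than the full file: here I create the module client from the IoT Edge environment, though the connection-string variable from the manifest would work just as well with create_from_connection_string, and I assume the output name airquality and the port/baud settings shown earlier).</P> <P>&nbsp;</P> <LI-CODE lang="python">import json
import time
from datetime import datetime

import psutil
import serial
from azure.iot.device import IoTHubModuleClient, Message

SERIAL_PORT = "/dev/ttyACM0"  # matches PathInContainer in the deployment manifest
BAUD_RATE = 115200
OUTPUT_NAME = "airquality"    # output the messages are routed from


def main():
    # The IoT Edge runtime injects the module identity, so no explicit
    # connection string is needed here.
    module_client = IoTHubModuleClient.create_from_edge_environment()
    module_client.connect()

    uart = serial.Serial(SERIAL_PORT, BAUD_RATE, timeout=11)
    uart.write(b'J\n')  # ask the dongle for JSON output

    try:
        while True:
            line = uart.readline()
            uart.flushInput()
            if not line:
                continue

            # Parse the sensor payload; skip anything that is not valid JSON.
            try:
                data_dict = json.loads(line.decode())
            except (ValueError, UnicodeDecodeError):
                continue

            # Augment the reading with business data.
            data_dict.update({
                'sensorId': 'roomAQSensor',
                'longitude': -79.025270,
                'latitude': 43.857989,
                'cpuTemperature': psutil.sensors_temperatures().get('acpitz')[0][1],
                'timeCreated': datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
            })

            # Wrap the dictionary in an IoT Hub message and route it upstream.
            msg = Message(json.dumps(data_dict))
            msg.content_encoding = "utf-8"
            msg.content_type = "application/json"
            module_client.send_message_to_output(msg, OUTPUT_NAME)

            time.sleep(1)  # throttle; the dongle emits far more often than needed
    finally:
        module_client.disconnect()


if __name__ == "__main__":
    main()</LI-CODE> <P>&nbsp;</P> <P class="lia-align-justify">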
We can quickly check the docker image exists before actual deployment.</P> <P>&nbsp;</P> <LI-CODE lang="bash">docker images iotregistry.azurecr.io/airqualitymodule 0.0.1-amd64 030b11fce8af 4 days ago 129MB</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>Now you are good to deploy. Use <A title="this" href="#" target="_blank" rel="noopener">this</A> tutorial to help deploy successfully. At the end of step 22 this is what it looks like upon <STRONG>consuming the endpoint through VSCode</STRONG>.</P> <P>&nbsp;</P> <LI-CODE lang="bash">[IoTHubMonitor] Created partition receiver [0] for consumerGroup [$Default] [IoTHubMonitor] Created partition receiver [1] for consumerGroup [$Default] [IoTHubMonitor] [2:33:28 PM] Message received from [azureiotedge/AirQualityModule]: { "temperature": 28.87, "pressure": 1001.15, "humidity": 38.36, "gasResistance": 249952, "IAQ": 117.3, "iaqAccuracy": 1, "eqCO2": 661.26, "eqBreathVOC": 0.92, "sensorId": "roomAQSensor", "longitude": -79.02527, "latitude": 43.857989, "cpuTemperature": 27.8, "timeCreated": "2021-07-15 18:33:28" } [IoTHubMonitor] [2:33:31 PM] Message received from [azureiotedge/AirQualityModule]: { "temperature": 28.88, "pressure": 1001.19, "humidity": 38.35, "gasResistance": 250141, "IAQ": 115.8, "iaqAccuracy": 1, "eqCO2": 658.74, "eqBreathVOC": 0.91, "sensorId": "roomAQSensor", "longitude": -79.02527, "latitude": 43.857989, "cpuTemperature": 27.8, "timeCreated": "2021-07-15 18:33:31" } [IoTHubMonitor] Stopping built-in event endpoint monitoring... [IoTHubMonitor] Built-in event endpoint monitoring stopped.</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>Congratulations! You have successfully deployed the most vital step in creating a scalable air quality monitoring platform from scratch using Azure IoT.</P> <P>&nbsp;</P> <H3>Future Work</H3> <P class="lia-align-justify">Keep an eye out for a follow up of this article where I shall be discussing how to continue the end-to-end pipeline and actually visualize it on Power BI.</P> Tue, 14 Sep 2021 17:11:08 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/design-a-azure-iot-indoor-air-quality-monitoring-platform-from/ba-p/2549733 KaushikRoy 2021-09-14T17:11:08Z Community tools to kick start your Azure Sphere projects. https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/community-tools-to-kick-start-your-azure-sphere-projects/ba-p/2554654 <P>Azure Sphere is a unique highly secure IoT platform. You focus on your solution, Azure Sphere deals with security, identity, certificates, reporting, tracking emerging attack vectors, mitigating, updating the platform, and application distribution to protect your solutions, customers, and reputations.</P> <P>&nbsp;</P> <P>I started my Azure Sphere journey 2 years ago. I’d done plenty of embedded development, but I quickly realized there was a lot to learn about Azure Sphere. If this sounds like your journey, then do check out the “<A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/combining-azure-sphere-iot-security-with-azure-rtos-real-time/ba-p/1992869" target="_blank" rel="noopener">Combining Azure Sphere IoT security with Azure RTOS real-time capabilities</A>” article. 
There are links to the Azure Sphere developer Learning paths for IoT Hub and IoT Central.</P> <P>&nbsp;</P> <P>This article covers three community driven tools that may help kick start your Azure Sphere projects.</P> <OL> <LI><A href="#" target="_blank" rel="noopener">Azure Sphere DevX</A></LI> <LI><A href="#" target="_blank" rel="noopener">Azure Sphere GenX</A></LI> <LI><A href="#" target="_blank" rel="noopener">Azure Sphere Hardware Definition Extension for VS Code and Visual Studio 2019</A></LI> </OL> <P>&nbsp;</P> <H2>Azure Sphere DevX</H2> <P>&nbsp;</P> <P>Azure Sphere DevX is the library that underpins the Azure Sphere Developer Learning Paths on Microsoft Learn. The library has been split out from the Learning path to make it easier to use and it’s now used in several projects including the <A href="#" target="_blank" rel="noopener">Altair 8800 on Azure Sphere</A>, the Azure Sphere GenX projects, and several customer projects.</P> <P>&nbsp;</P> <P>Azure Sphere DevX is an Open-Source <STRONG>community-driven</STRONG> library that is based on the <A href="#" target="_blank" rel="noopener">Azure Sphere samples on GitHub</A> and from real-life experiences building Azure Sphere applications. The emphasis here is on <STRONG>community-driven</STRONG>, the library is not an official Azure Sphere library, and community contributions are very welcome.</P> <P>&nbsp;</P> <P>The library consists of convenience functions and data structures that simplify and reduce the amount of code you write, read, debug, and maintain and allows you to focus on the problem you are trying to solve rather than the underlying infrastructure code. The Azure Sphere DevX convenience functions are <A href="#" target="_blank" rel="noopener">callback</A> centric, the library looks after the infrastructure, and you write the code for the callback handlers. 
You have full access to the source code so you can learn how the library works.</P> <P>&nbsp;</P> <P>The DevX library addresses many common Azure Sphere scenarios including the following:</P> <OL> <LI>Azure IoT Messaging:<BR />Implements connection management and simplifies sending messages along with application and content properties metadata.</LI> </OL> <OL start="2"> <LI>Azure IoT Hub Device Twins:<BR />Handles Device Twin JSON serialization and deserialization along with a type system to validate data types received and sent.</LI> </OL> <OL start="3"> <LI>Direct methods:<BR />Simplifies in-bound direct methods message processing and passes direct method payload to the associated direct method handler.</LI> </OL> <OL start="4"> <LI>Intercore messaging:<BR />Provides a context model to simplify the passing of messages between high-level and real-time application cores.</LI> </OL> <OL start="5"> <LI>Event timers:<BR />Simplified API for all common Event Timer scenarios.</LI> </OL> <OL start="6"> <LI>Deferred updates:<BR />You focus on when you want application and OS updates to occur rather than how to defer updates.</LI> </OL> <P>&nbsp;</P> <P>Visit the <A href="#" target="_blank" rel="noopener">Azure Sphere DevX library Wiki</A> to learn more.</P> <P>&nbsp;</P> <P>Check out this <A href="#" target="_blank" rel="noopener">video introduction to Azure Sphere DevX</A>.</P> <P>&nbsp;</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/rfXwEa-gMG8?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="Azure Sphere DevX project demo video"></IFRAME></DIV> <P>&nbsp;</P> <H2>Azure Sphere GenX</H2> <P>&nbsp;</P> <P>Getting started with any embedded project always involves a reasonable degree of effort, and this is true of Azure Sphere too. If you checked out the Azure Sphere Developer Learning Paths and Azure Sphere DevX, then you will have noticed the library is very pattern-based and declarative, which made it a great candidate for a code generator.</P> <P>&nbsp;</P> <P>The goal of the <A href="#" target="_blank" rel="noopener">Azure Sphere GenX</A> code generator is to make it easy to create a project that has all the basic elements in place, ready for you to start implementing your code. So, for example, a best practice for an Azure Sphere project is to include an application <A href="#" target="_blank" rel="noopener">watchdog</A> timer, and control the timing of application and OS updates with Deferred Update support. You will also invariably want your device to connect and communicate with the Cloud, and Azure Sphere GenX makes connecting and communicating with Azure IoT a breeze.</P> <P>&nbsp;</P> <P>To use the Azure Sphere GenX generator, you declare an application model JSON file with all the base features you want your application to have; you save the file, and the generator runs to create your application. You can update the application model as your ideas evolve, and then you start your development with a lot of code already in place. The generator doesn’t do everything for you, but it will help you get up and running fast.</P> <P>&nbsp;</P> <P>Azure Sphere GenX is a <STRONG>community-driven</STRONG> project; you can create or extend custom “recipes” that are more focused on your project requirements.
Community contributions to the project are most welcome.</P> <P>Learn more from the <A href="#" target="_blank" rel="noopener">Azure Sphere GenX wiki</A>.</P> <P>&nbsp;</P> <P>Check out this <A href="#" target="_blank" rel="noopener">video introduction to Azure Sphere GenX</A>.</P> <P>&nbsp;</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/2WntSBWpCSI?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="Azure Sphere GenX project demo video"></IFRAME></DIV> <P>&nbsp;</P> <H2>Azure Sphere Hardware Definition Extension for VS Code and Visual Studio</H2> <P>&nbsp;</P> <P>Along with the focus on IoT security, one of the core strengths of Azure Sphere is the <A href="#" target="_blank" rel="noopener">Hardware definition files</A>. Azure Sphere hardware is available from multiple vendors, and each vendor may expose features of the underlying chip in different ways. The Hardware definition file provides an abstraction of the microcontroller peripherals. They also allow you to create more meaningful peripheral names.</P> <P>&nbsp;</P> <P>The process of defining a Hardware Definition file can be error prone and time consuming. So, I’m excited to share a preview of a community initiative to build extensions for both Visual Studio Code and Visual Studio 2019 to simplify the process.</P> <P>&nbsp;</P> <P>The extensions are being developed by a very talented team of university students at UCL (<A href="#" target="_blank" rel="noopener">University College London</A>). The extensions are under development and now is a great time to test out and provide feedback.</P> <P>&nbsp;</P> <P><LI-VIDEO vid="https://www.youtube.com/watch?v=pRqvE6K3AMs" align="center" size="small" width="200" height="113" uploading="false" thumbnail="https://i.ytimg.com/vi/pRqvE6K3AMs/hqdefault.jpg" external="url"></LI-VIDEO></P> <P>&nbsp;</P> <P>Head to the <A href="#" target="_blank" rel="noopener">Azure Sphere Hardware Definition Tools</A> GitHub repo to download and try out and be sure to raise any issues you find on the GitHub repo.</P> <P>&nbsp;</P> <P>Be sure to leave comments here or on the related GitHub repos.</P> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener">Dave Glover</A>, Cloud Developer Advocate based in Sydney Australia, with a focus on IoT and Azure Sphere. 
I publish samples on my GitHub <A href="#" target="_blank" rel="noopener">http://github.com/gloveboxes</A>.</P> <P>&nbsp;</P> Wed, 21 Jul 2021 07:11:19 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/community-tools-to-kick-start-your-azure-sphere-projects/ba-p/2554654 glovebox 2021-07-21T07:11:19Z Bringing new life to the Altair 8800 on Azure Sphere https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/bringing-new-life-to-the-altair-8800-on-azure-sphere/ba-p/2554337 <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="glovebox_6-1626386598847.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296251iB9FD1335EC54DD9D/image-size/medium?v=v2&amp;px=400" role="button" title="glovebox_6-1626386598847.png" alt="glovebox_6-1626386598847.png" /></span></P> <P>I love embedded IoT development and when a colleague asked if I’d be interested in working on cloud-enabling an Altair 8800 emulator running on Azure Sphere then I seized the opportunity. I’m fascinated by tech and retro computing and this project combined both interests and hopefully, it will inspire you to fire up this project, learn about the past and connect to the future.</P> <P>&nbsp;</P> <P>The <A href="#" target="_blank" rel="noopener">MITS Altair 8800</A> was built on the <A href="#" target="_blank" rel="noopener">Intel 8080</A>. The Intel 8080 was the second 8-bit microprocessor manufactured by Intel, clocked at 2MHz, with a 16-bit address bus to access a whopping 64 KB of RAM. This was back in 1974, yes, that’s 47 years ago. The Altair 8800 is considered to be the computer that sparked the PC revolution and kick-started Microsoft and Apple.</P> <P>&nbsp;</P> <P>You can learn more about the Altair at&nbsp;<A href="#" target="_blank" rel="noopener">https://en.wikipedia.org/wiki/Altair_8800</A>.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="glovebox_0-1626385909332.jpeg" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296242iC119927ED1B28B9F/image-size/medium?v=v2&amp;px=400" role="button" title="glovebox_0-1626385909332.jpeg" alt="glovebox_0-1626385909332.jpeg" /></span></P> <P>&nbsp;</P> <P>Altair 8800 image attribution: <A href="#" target="_blank" rel="noopener">File:Altair 8800, Smithsonian Museum.jpg - Wikimedia Commons</A></P> <P>&nbsp;</P> <P>The first release of the Altair 8800 was programmed by flipping switches, then came paper tape readers to load apps, monitors and keyboards, and floppy disk drive storage, revolutionary at the time. The first programming language for the machine was Altair BASIC, the program written by Bill Gates and Paul Allen, and Microsoft’s first product.</P> <P>&nbsp;</P> <P>Interest in the Altair 8800 is not new and there are several implementations of the Open-Source Altair 8800 emulator running on various platforms, and if you are keen, you can even buy an Altair 8800 clone. The Altair 8800 running on Azure Sphere builds on Open-Source projects and brings with it something unique, which is to modernize and cloud-enable the Altair. Bringing 21<SUP>st</SUP> century cloud technologies to a computer generation that predates the internet.</P> <P>&nbsp;</P> <P>The focus of Azure Sphere is secure IoT by design and by default. The cloud-enabled Altair 8800 emulator running on Azure Sphere inherits all the platform security goodness. 
Not only was security key to the project, but the goal was to push the bounds of Azure Sphere, squeezing out all the platforms’ capabilities to generate interesting relevant patterns applicable for more day-to-day IoT scenarios.</P> <P>&nbsp;</P> <H2>Introducing the Cloud-Enabled Altair 8800 on Azure Sphere</H2> <P>&nbsp;</P> <P>Say hello to Altair 8800 on Azure Sphere, you can get started with just an Azure Sphere. Or, for a more authentic Altair experience with the Avnet Azure Sphere Starter Kit, you can use MikroE Click boards along with their soon-to-be-available “MikroE Altair 8800 Retro” click board. For the more adventurous, you can build your own Altair 8800 Front Panel, the Open-Source Hardware design builds on the design by <A href="#" target="_blank" rel="noopener">Daniel Karling</A>.</P> <P>&nbsp;</P> <P>The image below is the Altair Front Panel connected to the Seeed Studio Azure Sphere RDB, the Front Panel can also be connected to the Avnet Azure Sphere Starter Kit.</P> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="glovebox_1-1626385909381.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296244i6B19FE475336B057/image-size/medium?v=v2&amp;px=400" role="button" title="glovebox_1-1626385909381.png" alt="glovebox_1-1626385909381.png" /></span></P> <P>&nbsp;</P> <P>There are two Altair Click Panel configurations for the Avnet Starter Kit.</P> <P>&nbsp;</P> <TABLE width="100%"> <TBODY> <TR> <TD width="50%"> <P>The MikroE Altair 8800 Retro Click</P> </TD> <TD width="50%"> <P>The Mikro 4x4 key and 8x8 LED Clicks</P> </TD> </TR> <TR> <TD width="50%"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="glovebox_2-1626385909401.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296243iDEF965445F7DAC3B/image-size/medium?v=v2&amp;px=400" role="button" title="glovebox_2-1626385909401.png" alt="glovebox_2-1626385909401.png" /></span> <P>&nbsp;</P> </TD> <TD width="50%"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="glovebox_3-1626385909419.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296246i40975040557F3168/image-size/medium?v=v2&amp;px=400" role="button" title="glovebox_3-1626385909419.png" alt="glovebox_3-1626385909419.png" /></span> <P>&nbsp;</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <H2>Solution architecture</H2> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="glovebox_4-1626385909423.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296245i37CA75968413668A/image-size/medium?v=v2&amp;px=400" role="button" title="glovebox_4-1626385909423.png" alt="glovebox_4-1626385909423.png" /></span></P> <P>&nbsp;</P> <P>The Azure Sphere is running an Open-Source Intel 8080 emulator, and on top of the emulator, we are layering Altair BASIC and CP/M. Yes, we are running the original Intel 8080 binaries on top of the Intel 8080 CPU emulator, now that is taking backward compatibility to the extreme. On CPM there is support for Assembler, BASIC, and C. 
You can edit Assembler and C source code with the Word-Master text editor.</P> <P>&nbsp;</P> <P>The Altair 8800 is integrated with Azure IoT Central, Static Web Apps, and an Azure Virtual Machine running the Mosquitto MQTT broker plus the Altair 8800 Remote Virtual File System.</P> <P>&nbsp;</P> <P>The solution is connected using MQTT services. The Intel 8080 CPU emulator terminal IO is redirected over MQTT, as is the Virtual Disk File System. You access the Altair 8800 emulator via the MQTT browser-based “Web Terminal”, so you can access the Altair emulator securely from anywhere given the right credentials.</P> <P>&nbsp;</P> <H2>The Altair 8800 emulator</H2> <P>&nbsp;</P> <P>The Altair 8800 emulator runs across all three of the Azure Sphere custom app cores. Running on the Cortex A7 is the Altair emulator, plus the communications stack (MQTT and Azure IoT C SDK).</P> <P>&nbsp;</P> <P>Running on one of the Cortex M4 real-time cores is a “Least Recently Used” Virtual Disk cache, which significantly improves the performance of the remote virtual disk file system and is a pattern applicable to more day-to-day IoT apps requiring data caches.</P> <P>&nbsp;</P> <P>On the second M4 core runs the “Environment sensor” app providing real-time temperature and pressure readings that can be read by Altair BASIC applications. So not only is the emulator backward compatible, but it is also forward-looking and able to access IoT sensors never dreamed of when the Altair 8800 was invented.</P> <P>&nbsp;</P> <H2>Build your own Cloud-Enabled Altair 8800</H2> <P>&nbsp;</P> <P>The project is Open Source; the software and hardware design are located on GitHub at <A href="#" target="_blank" rel="noopener">Azure Sphere Cloud Enabled Altair 8800</A>, and feedback and contributions are very welcome.</P> <P>&nbsp;</P> <P>The project is fully documented on the project <A href="#" target="_blank" rel="noopener">wiki</A>; again, contributions to the documentation are welcome.</P> <P>&nbsp;</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/nUMOpEOx6LI?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="Video demo of the Altair 8800 emulator running on an Azure Sphere device"></IFRAME></DIV> <P>&nbsp;</P> <P>We hope you enjoy this project as much as we enjoyed creating it. The project brings together the old and the new, offering the opportunity to learn more about Azure Sphere, Azure Cloud Services, and IoT patterns, and it will hopefully stir creative juices.</P> <P>Please leave comments; project contributions are most welcome.</P> <P>&nbsp;</P> <P>Be sure to leave comments here or on the Altair 8800 on Azure Sphere GitHub repo.</P> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener">Dave Glover</A>, Cloud Developer Advocate based in Sydney, Australia, with a focus on IoT and Azure Sphere.
I publish samples on my GitHub <A href="#" target="_blank" rel="noopener">http://github.com/gloveboxes</A>.</P> <P>&nbsp;</P> Wed, 21 Jul 2021 04:18:39 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/bringing-new-life-to-the-altair-8800-on-azure-sphere/ba-p/2554337 glovebox 2021-07-21T04:18:39Z Building the next smart city with Azure Percept https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/building-the-next-smart-city-with-azure-percept/ba-p/2547992 <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="city.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296248i332CE06CBECAE2AD/image-size/large?v=v2&amp;px=999" role="button" title="city.png" alt="city.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>Cities range from the large to the small, the old to the new, and the well-known to the hardly-ever-heard-of, but the one thing they have in common is an appetite to meet the needs of their population. Smart cities are hubs of cutting-edge technology that help make municipalities work bett<SPAN style="font-family: inherit;">er for everyone, from giving residents better lives to&nbsp;</SPAN><SPAN style="font-family: inherit;">enabling thriving businesses.</SPAN></P> <P>&nbsp;</P> <P>Smart cities are reshaping global economics, the relationship of people to their physical spaces, and the needs for new talent, skills, and attitudes to embrace the future. By leveraging cloud and edge computing powered by 5G (and LPWA, or Low Power Wireless Access), cities have new opportunities to engage residents, increase safety, and promote efficient operations at low-cost. The use of this advanced technology, including the intelligent edge, artificial intelligence (AI) and 5G, will truly transform how we live, work, and develop new applications and solutions through collaborations across government and industries.</P> <P>&nbsp;</P> <H1>Proximus + Microsoft deliver on Kortrijk’s vision as a smart city&nbsp;</H1> <P align="center">&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="rack.png" style="width: 300px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296247iE5B5B9ECD1B89FDB/image-size/medium?v=v2&amp;px=400" role="button" title="rack.png" alt="rack.png" /></span></P> <P>&nbsp;</P> <P>Kortrijk is a smart city in Belgium that used technology to access pedestrian count to promote safety during the COVID-19 pandemic. For a city, knowing the number of people on the streets in relation to a venues’ capacity is an important element to consider when looking for innovative ways to make the municipalities safer.&nbsp; Beyond the pandemic, though, learning how people move in a particular area can have a great impact on a city’s activities, so having the right tools to access this data can lead to smoother operations and better engagement.</P> <P>&nbsp;</P> <H2>Using edge AI and IoT technology to quickly solve for city challenges</H2> <P>&nbsp;</P> <P>Kortrijk originally relied on webcam barometric <A href="#" target="_blank" rel="noopener">pressure readings</A> to gather pedestrian counts, but after several tests, it became clear that live counts were not accurate, prompting city officials to look for alternative solutions. 
After exploring the market for a suitable solution, <A href="#" target="_blank" rel="noopener">Proximus</A>, one of the largest mobile telecommunications companies in Belgium, turned to <A href="#" target="_self">Azure Percept</A> to propose new alternatives to Kortrijk.</P> <P>Azure Percept is a comprehensive end-to-end edge AI platform with pre-built models and solution management, as well as Zero Trust security measures, to safeguard models and data. It offers the capabilities to start a proof of concept in minutes with hardware accelerators designed to integrate seamlessly with <A href="#" target="_blank" rel="noopener">Azure AI</A> and <A href="#" target="_blank" rel="noopener">Azure IoT services</A>.</P> <P>With this, Microsoft partnered with Proximus, which is the leading telecom provider in Belgium, to develop an innovative, proof of concept solution that leverages Proximus’s cellular network to address the live-count limitations the city had encountered with previous technologies. The strategy was to set-up a test system which included:</P> <UL> <LI>Video analysis for pedestrian count</LI> <LI>Video analysis with 3D cameras</LI> <LI>Point cloud analysis relying on millimeter wave radars</LI> <LI>Sound analysis based on sound sensors powered by AI</LI> </UL> <P>Sensors were placed in key locations and tests , which did not require an initial investment from the city, were conducted in late May.&nbsp; By leveraging <A href="#" target="_blank" rel="noopener">Azure Cognitive Services</A> and <A href="#" target="_blank" rel="noopener">ML</A>, Proximus was able to deliver vision and audio insights in real time. The results served to decide which model was best suited for the city by estimating the ideal technology and placement options to obtain optimal pedestrian counts—becoming one of the first operators in Europe to directly integrate Microsoft edge capabilities into the heart of its network.</P> <P>&nbsp;</P> <H1>Intelligent edge for smart cities</H1> <P align="center">&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Picture2.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/295851i4AC8E995FCE5B589/image-size/large?v=v2&amp;px=999" role="button" title="Picture2.jpg" alt="Picture2.jpg" /></span></P> <P>&nbsp;</P> <P>Innovation does not come without challenges though.&nbsp;With new applications, services, and workloads, smart cities need solutions and architecture built to support their demands. Enter the intelligent edge, a continually expanding set of connected systems and devices that gather and analyze data. Users get real-time insights and experiences, delivered by highly responsive and contextually aware apps, and when combined with the limitless computing power of the cloud, the possibilities for innovation are endless.&nbsp;</P> <P>Bringing enterprise applications closer to data sources, such as <A href="#" target="_blank" rel="noopener">IoT</A> devices or local edge servers, results in faster insights, improved response times, and better bandwidth availability. Cloud flexibility and scalability allows for easy integration and deployment, and with low maintenance costs, processes can be managed centrally while still allowing deployment of software depending on the user's needs—resulting in accelerated value, reduced operating costs, and increased efficiency. 
With this, smart cities can invent and innovate to meet the demands of the future.&nbsp;</P> <P>&nbsp;</P> <H1>The 5G power wave—fueling edge computing&nbsp;</H1> <P>&nbsp;</P> <P>The importance of 5G and LPWA stems from its potential to accelerate value and improve efficiency as it provides a new set of latencies and features that did not exist in the 4G environment and previous generations. This is particularly relevant in smart cities where the convergence of industry, public service, and other enterprises requires high density, speed and bandwidth and low power networks that—when paired with technology—opens a new world of possibilities.</P> <P>For Kortrijk and other cities, this powerful combination offers accurate insights as well as fast and cost-efficient solutions to address their particular needs. The data gathered through computer vision—which detects objects and movement in real-time—is robust and configurable to support different scenarios, allowing city officials to evaluate their options and make decisions&nbsp;that lead to optimal solutions.</P> <P>&nbsp;</P> <H1>Full throttle toward 5G and smart cities</H1> <P>&nbsp;</P> <P align="center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Picture3.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/295852iAA7E3E55B1F5C956/image-size/large?v=v2&amp;px=999" role="button" title="Picture3.jpg" alt="Picture3.jpg" /></span></P> <P class="lia-align-left" align="center">&nbsp;</P> <P class="lia-align-left" align="center">Thanks to advanced technology like the intelligent edge, AI, and 5G, we have the power to make&nbsp;smart cities a reality more easily. More and more cities around the world are welcoming this approach and solutions are being created and deployed to address the most pressing concerns that decision makers face every day.&nbsp;</P> <P>Azure Percept, along with the entire portfolio of Azure services for smart cities, is designed to speed the development and deployment of secure and comprehensive edge AI solutions from partners like Proximus that can leverage a range of edge endpoints – cameras, gateways, environmental sensors, all leveraging telecom infrastructure including 5G and LPWA.</P> <P>Ultimately, the goal is to create fully integrated smart processes that use data, technology and creativity to shape how people and goods move—making smart cities not only innovative but also safer and more reliable to support economic growth and meet the challenges of the future.&nbsp;</P> <P>&nbsp;</P> <P>Learn more about<A href="#" target="_self"> Azure Percept</A>.</P> <P>&nbsp;</P> Thu, 15 Jul 2021 23:17:44 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/building-the-next-smart-city-with-azure-percept/ba-p/2547992 Pete Bernard 2021-07-15T23:17:44Z Azure IoT services enable top partners to build innovative solutions https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-iot-services-enable-top-partners-to-build-innovative/ba-p/2549445 <P>Want to see how our partners use Azure IoT services to build comprehensive end-to-end IoT solutions? 
&nbsp;Curious about how digital transformation is unfolding around the world and across various industries?&nbsp; Watch our recently launched <STRONG>Partner Innovation Video Series</STRONG>!&nbsp; In it, you’ll learn how partner innovation combined with awesome IoT services solves real-world problems and meets customer needs.</P> <P>&nbsp;</P> <P>I’m Daria Huellen, Sr Program Manager on the Azure IoT Business Acceleration Team, and I’m here to introduce you to the amazing solutions our partners built using Azure IoT.</P> <P>Each episode of the Partner Innovation series features two videos.&nbsp; In the first one, Tony Shakib, leader of the IoT Business Acceleration team, talks with partner executives about their solution and its business-related impact. The second video provides a technical deep dive into how the solution works. One of our Azure IoT technical PMs talks with the partner technical leads&nbsp;about the Azure services they used to build the solution,&nbsp;along with a short demo of the solution.</P> <P>I hope these videos inspire you and provide more clarity on how Azure services can be integrated into an end-to-end customer-ready solution. &nbsp;As you watch, you will learn about the challenges our partners faced and how to avoid potential pitfalls in your engagements. &nbsp;If you’re a developer already working on a project, these videos will provide insight into the many programs and opportunities that we can successfully create together.</P> <P>&nbsp;</P> <P>The first video features our partner <STRONG><A href="#" target="_self">Capgemini</A> </STRONG>and their <A href="#" target="_self">Reflect IoD</A> platform. Capgemini, a global leader in consulting, digital transformation, technology, and engineering services, used <A href="#" target="_self"><STRONG>Azure Digital Twins</STRONG></A> to build a digital twin platform to federate data from multiple sources and create a 360° view of the assets. Reflect IoD is a unique platform that integrates 1D data into a 3D model, geographical information, maintenance, and operational data to enable more efficient and data-centric collaboration in smart buildings.</P> <P>&nbsp;</P> <P><STRONG><A href="#" target="_self">Watch the interview between Tony Shakib and Jean-Pierre Petit</A>:</STRONG></P> <P>&nbsp;</P> <P><LI-VIDEO vid="https://www.youtube.com/watch?v=GPAsPUs_yDI" align="center" size="medium" width="400" height="225" uploading="false" thumbnail="https://i.ytimg.com/vi/GPAsPUs_yDI/hqdefault.jpg" external="url"></LI-VIDEO></P> <P>&nbsp;</P> <P><A href="#" target="_self"><STRONG>See Reflect IoD in action with a technical deep-dive:</STRONG></A></P> <P>&nbsp;</P> <P><LI-VIDEO vid="https://www.youtube.com/watch?v=KavgsneeYdo" align="center" size="medium" width="400" height="225" uploading="false" thumbnail="https://i.ytimg.com/vi/KavgsneeYdo/hqdefault.jpg" external="url"></LI-VIDEO></P> <P>&nbsp;</P> <P>The second video features our partner <A href="#" target="_self"><STRONG>Cosmo Tech</STRONG></A> and their 360° Simulation Digital Twin software, which helps solve the most complex industrial&nbsp;problems and guide enterprise decision making; it has been built using our <A href="#" target="_self"><STRONG>Azure Digital Twins</STRONG></A>.
Industrial companies rely on Cosmo Tech to predict the evolution of their organization to better understand the impact of their decisions and optimize all levels of enterprise planning; ensuring a future that is robust,&nbsp;resilient,&nbsp;and sustainable.</P> <P>&nbsp;</P> <P><A href="#" target="_self"><STRONG>Watch the interview between Tony Shakib and Michel Morvan:</STRONG></A></P> <P>&nbsp;</P> <P><LI-VIDEO vid="https://youtu.be/oTG7KbiP0Lw" align="center" size="medium" width="400" height="225" uploading="false" thumbnail="https://i.ytimg.com/vi/oTG7KbiP0Lw/hqdefault.jpg" external="url"></LI-VIDEO></P> <P>&nbsp;</P> <P><A href="#" target="_self"><STRONG>See 360° Simulation Digital Twin in action with a technical deep-dive:</STRONG></A></P> <P>&nbsp;</P> <P><LI-VIDEO vid="https://youtu.be/pQSNLmbyxlI" align="center" size="medium" width="400" height="225" uploading="false" thumbnail="http://i.ytimg.com/vi/pQSNLmbyxlI/hqdefault.jpg" external="url"></LI-VIDEO></P> <P>&nbsp;</P> <P>If you’re looking for more content like this, follow <A href="#" target="_self">Tony Shakib on Linkedin</A>. &nbsp;He posts regularly about partner innovation and Azure IoT, and these videos are first released there every other Wednesday.</P> <P>Like, Share and Stay tuned for the new videos! And of course, let us know what you think in the comments.&nbsp; We are looking forward to our conversation with you.</P> <P>&nbsp;</P> <P>Learn, grow, innovate!</P> <P>Daria</P> Thu, 15 Jul 2021 22:46:27 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-iot-services-enable-top-partners-to-build-innovative/ba-p/2549445 Daria_Huellen 2021-07-15T22:46:27Z IoT for Beginners, curriculum https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/iot-for-beginners-curriculum/ba-p/2532793 <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="JimBennett_0-1625848731849.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294683i7850BB2BD7F9E0B6/image-size/large?v=v2&amp;px=999" role="button" title="JimBennett_0-1625848731849.png" alt="JimBennett_0-1625848731849.png" /></span></P> <P>&nbsp;</P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN><SPAN data-contrast="none"> </SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">It is our very great pleasure to announce the release of a new, free, MIT-licensed open-source curriculum all about the Internet of Things: </SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">IoT for Beginners</SPAN></A><SPAN data-contrast="none">. Brought to you by a team of Azure Cloud Advocates, Program Managers, and </SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Microsoft Learn Student Ambassadors</SPAN></A><SPAN data-contrast="none">, we hope to empower students of all ages to learn the basics of IoT. 
Presuming no knowledge of IoT, we offer a free 12-week, 24-lesson curriculum to help you dive into this amazing field.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none"> </SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">If you liked our first two curricula, </SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Web Dev for Beginners</SPAN></A><SPAN data-contrast="none"> and </SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Machine Learning for beginners</SPAN></A><SPAN data-contrast="none">, you will love IoT for Beginners!</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none"> </SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <H4><SPAN data-contrast="none">Join us on the journey of your food, from farm to table!</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></H4> <P><SPAN data-contrast="none"> </SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">Join us as we take the same journey as your food as it travels from farm to table, taking advantage of IoT on the way to improve farming, transport, manufacturing and food processing, retail and smart homes.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none"> </SPAN></P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/bccEMm8gRuc?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="IoT for Beginners courseware on the IoT Show"></IFRAME></DIV> <P>&nbsp;</P> <P><SPAN data-contrast="none">Our curricula are structured with a modified Project-Based pedagogy and include:</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none"> </SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <UL> <LI data-leveltext="" data-font="Symbol" data-listid="8" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><SPAN data-contrast="none">a pre-lesson warmup quiz</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="8" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="none">a written lesson</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="8" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN 
data-contrast="none">an optional video</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="8" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="none">knowledge checks</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="8" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="none">a project to build</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="8" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="none">infographics, sketchnotes, and visuals</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="8" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="none">a challenge</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="8" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="none">an assignment</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="8" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="none">a post-lesson quiz</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="8" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="none">opportunities to deepen your knowledge on Microsoft Learn</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></LI> </UL> <P><SPAN data-contrast="none">&nbsp;</SPAN></P> <H3><SPAN data-contrast="none">What will you learn?</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></H3> <P><SPAN data-contrast="none"> </SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="JimBennett_1-1625848731849.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294684iA6E8B50D106FC238/image-size/large?v=v2&amp;px=999" role="button" title="JimBennett_1-1625848731849.png" alt="JimBennett_1-1625848731849.png" /></span></P> <P><SPAN data-contrast="none"> </SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">The lessons are grouped so that you can deep-dive into use cases of IoT. 
We start with an introduction to IoT, covering devices, sensors, actuators and cloud connectivity, where you will build an internet-connected version of the "Hello world" of IoT, an LED. We then move on to farming, learning about digital agriculture and feedback loops to control automated watering systems. Your food then leaves the farm on trucks, and you learn how to track vehicles using GPS, visualize their journeys and get alerts when a truck approaches a processing plant. Once in the plant, we move to AIoT, learning how to distinguish between ripe and unripe fruit using AI models called from IoT devices and running on the edge. Next we move to the supermarket, using IoT to manage stock levels. Finally we take the food home to cook, and learn about consumer smart devices, building a voice-controlled smart timer that can even speak multiple languages.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none"> </SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <H3><SPAN data-contrast="none">Hardware</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></H3> <P><SPAN data-contrast="none"> </SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">The hard part (pun intended) for IoT is hardware, so we've designed this curriculum to be as accessible as possible. We want you to Learn IoT, not learn how to solder, know how to read resistor color codes, or know what a microfarad is, so we've made hardware choices to make things easier.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none"> </SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">You can choose to learn using microcontrollers running Arduino with a </SPAN><A href="#" target="_blank" rel="noopener"><SPAN>Wio Terminal</SPAN></A><SPAN data-contrast="none">, or single board computers using a </SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Raspberry Pi</SPAN></A><SPAN data-contrast="none">. 
We've also added a virtual hardware option so you can learn without having to purchase anything!</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none"> </SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">For sensors and actuators, we've gone with the </SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Grove kit</SPAN></A><SPAN data-contrast="none"> from </SPAN><A href="#" target="_blank" rel="noopener"><SPAN>Seeed Studio</SPAN></A><SPAN data-contrast="none">, with easy to connect sensors and actuators.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none"> </SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="none"> </SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="JimBennett_2-1625848731850.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294682i0464F63CC3EC55A6/image-size/medium?v=v2&amp;px=400" role="button" title="JimBennett_2-1625848731850.png" alt="JimBennett_2-1625848731850.png" /></span></P> <P>&nbsp;</P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">Our friends at Seeed have made it easy to buy the hardware, with packages containing all of the kit you need.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <UL> <LI data-leveltext="" data-font="Symbol" data-listid="9" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">IoT for beginners with Seeed and Microsoft - Wio Terminal Starter Kit</SPAN></A><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="9" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">IoT for beginners with Seeed and Microsoft - Raspberry Pi 4 Starter Kit</SPAN></A><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></LI> </UL> <P><SPAN data-contrast="none">If you are interested in learning using virtual hardware, you can write IoT code locally as if you were using a Raspberry Pi, then simulate sensors and actuators using </SPAN><A href="#" target="_blank" rel="noopener"><SPAN>CounterFit</SPAN></A><SPAN data-contrast="none">, a free, open source tool for simulating hardware.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none"> </SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <H3><SPAN data-contrast="none">A sneak peek</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></H3> <P><SPAN 
data-contrast="none"> </SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">This curriculum is filled with a lot of art, created by our team. Take a look at this cool sketchnote created by </SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">@nitya</SPAN></A><SPAN data-contrast="none"> .</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="JimBennett_3-1625848731847.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294685i138A28E67CB177FE/image-size/large?v=v2&amp;px=999" role="button" title="JimBennett_3-1625848731847.png" alt="JimBennett_3-1625848731847.png" /></span></P> <H3><SPAN data-contrast="none">Without further ado, please meet </SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">IoT For Beginners: A Curriculum</SPAN></A><SPAN data-contrast="none">!</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></H3> Fri, 09 Jul 2021 18:59:27 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/iot-for-beginners-curriculum/ba-p/2532793 Jim Bennett 2021-07-09T18:59:27Z Azure Percept Audio - Home Automation with Azure Functions, Azure IoT Hub and Raspberry Pi https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-audio-home-automation-with-azure-functions-azure/ba-p/2528048 <P><SPAN>In the&nbsp;</SPAN><A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-audio-custom-keywords-and-commands/ba-p/2485199" target="_self">previous post</A><SPAN>&nbsp;we created our own custom command in Speech Studio to turn music on and off using the Azure Percept.</SPAN></P> <P>&nbsp;</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/JqfceNUPqBo?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="Custom command for the Azure Percept DK in action"></IFRAME></DIV> <P>&nbsp;</P> <P>In this post, we’ll take this further by delving into the world of Home Automation.</P> <P>&nbsp;</P> <P>We’ll go through how to connect our command up to an Azure Function via the Command’s WebHook. From the Azure Function, we’ll connect to an Azure IoT Hub. We can then invoke a Direct Method on a Connected Raspberry Pi. This Direct Method will instruct the Raspberry Pi to turn on and off a Desk Lamp using a Relay.</P> <P>&nbsp;</P> <H2>Previously - A Recap of Custom Commands</H2> <P>&nbsp;</P> <P>Custom Commands allow us to create a command set that the Azure Percept can react to. In the previous post, we created a custom command that allowed the Percept to react to a command to turn music on or off.</P> <P>&nbsp;</P> <P>Custom Commands are primarily comprised of a set of example sentences and a completion operation. 
In the previous post our completion operation was to simply respond to the user in voice that the music was turned on or off.</P> <P>&nbsp;</P> <P>We made good use of Parameters so we could have a single command turn the music both on and off again.</P> <P>&nbsp;</P> <H2>What we’ll be doing</H2> <P>&nbsp;</P> <P><SPAN>The following diagram gives us an overview of the major component parts of the solution:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="PerceptIoT" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294282i42CC53AC4C422169/image-size/large?v=v2&amp;px=999" role="button" title="PerceptIoT" alt="PerceptIoT" /></span></SPAN></P> <P>&nbsp;</P> <P>We have the Azure Percept Audio in the top left. This of course is connected to the Carrier Board which I’ve not shown here.</P> <P>&nbsp;</P> <P>Azure Percept makes use of the Speech Service to process spoken word detected through the Percept Audio SoM.</P> <P>&nbsp;</P> <P>Making use of a Webhook, the Speech Service then triggers an Azure Function using HTTP.</P> <P>&nbsp;</P> <P>This Azure function takes any parameters passed by the Speech Service and then connects to an Azure IoT Hub using a Service Policy.</P> <P>&nbsp;</P> <P>The IoT Hub has a Device registered to it which in my case is a Raspberry Pi running .NET 5 and C#9.</P> <P>&nbsp;</P> <P>The Azure Function invokes a Direct Method on the Raspberry Pi through the IoT Hub when it’s triggered, passing in any parameters which have been passed in from the Speech Service.</P> <P>&nbsp;</P> <P>The Pi then actions the Direct Method and sets a GPIO pin connected to a relay which turns the mains power on to a Desk Lamp.</P> <P>&nbsp;</P> <H2>Creating an Azure Function App</H2> <P>&nbsp;</P> <P>Once a command has been recognised by the Percept we can configure a done action.</P> <P>&nbsp;</P> <P>In the previous post we configured this “Done” action as a spoken response to the user. However, we can also configure the command to call a webhook before then returning the result of that webhook back to the user.</P> <P>&nbsp;</P> <P>Before we can configure the webhook however, we need the address for the webhook to call into.</P> <P>&nbsp;</P> <P>For this we're going to use an Azure Function. 
Azure Functions are a serverless compute offering which allows us to host and run our code without needing to worry about all the typical architecture around a traditional compute resource like a virtual machine.</P> <P>&nbsp;</P> <P>On top of this, Azure Functions are super cheap to run, which is great for our use case.&nbsp;</P> <P>&nbsp;</P> <P>We’ll use the Azure Function to receive the Web Endpoint call from our Speech Command, process it and call into the Azure IoT Hub.</P> <P>&nbsp;</P> <P>If we head to the portal and<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noreferrer noopener">create a new Azure Function App:</A></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-80" style="width: 703px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294283iC4BDB99112AB46F6/image-size/large?v=v2&amp;px=999" role="button" title="image-80" alt="image-80" /></span></P> <P>Hitting the blue create button, we’re taken to the first page of the Create Function App process:</P> <DIV class="wp-block-image has-lightbox lightbox-2 ">&nbsp;</DIV> <DIV class="wp-block-image has-lightbox lightbox-2 "><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="image-81-1024x975" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294284i31EF25C573FF16BA/image-size/large?v=v2&amp;px=999" role="button" title="image-81-1024x975" alt="image-81-1024x975" /></span></DIV> <DIV class="wp-block-image has-lightbox lightbox-2 ">&nbsp;</DIV> <DIV class="wp-block-image has-lightbox lightbox-2 "> <P>Here we can choose our Subscription, Resource group and give our Function a unique Name. We can leave the Publish type as “Code”, set the Runtime Stack to “.NET”, the Version to “3.1” and the region to either “West US” or “West Europe” to match that of our Percept Resources.</P> <P>&nbsp;</P> <P>We can leave the options on the remaining tabs at their defaults, so we can just hit the “Review + Create” button. Check the options you’ve selected before hitting the blue “Create” button:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-82" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294288iFE17366B7F61C59B/image-size/large?v=v2&amp;px=999" role="button" title="image-82" alt="image-82" /></span></P> <P>&nbsp;</P> <P><SPAN>The Function App will then begin creating, completing a short time later:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-84" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294290i7A1591EF039DB101/image-size/large?v=v2&amp;px=999" role="button" title="image-84" alt="image-84" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>Once the Function has finished creating, we can navigate to our new Azure Function by hitting the blue “Go to resource” button:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-85" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294291iCCF6353FA7D850B5/image-size/large?v=v2&amp;px=999" role="button" title="image-85" alt="image-85" /></span></SPAN></P> <P>&nbsp;</P> <H2>Creating an Azure Function using VS Code</H2> <P>&nbsp;</P> <P>We’ll now use VS Code to create our Azure Function. 
It’s also possible to complete most of these actions in the Portal, but I find that VS Code offers a great experience for creating Azure Functions.</P> <P>&nbsp;</P> <P>Before we do this, we need to create a directory for our Azure Function to live in, so go ahead and find a nice location for your Azure Function. We can then open this folder in VS Code, either by right clicking in empty space and clicking “Open with VS Code”, or opening VS Code and opening the folder directly:</P> </DIV> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-88" style="width: 664px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294293i7F9A26739F696322/image-size/large?v=v2&amp;px=999" role="button" title="image-88" alt="image-88" /></span></P> <P>&nbsp;</P> <P><SPAN>Next we need to install the&nbsp;</SPAN><A href="#" target="_blank" rel="noreferrer noopener">Azure plugin for VS Code</A><SPAN>&nbsp;if you don’t have it already:</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-86" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294294i5C49676D55DAE0AD/image-size/large?v=v2&amp;px=999" role="button" title="image-86" alt="image-86" /></span></SPAN></P> <P>&nbsp;</P> <P>You will need to sign in to Azure if you haven’t already.</P> <P>&nbsp;</P> <P>Once you’re signed in to Azure, all of your resources will be listed in the left pane when you click the Azure Icon:</P> <DIV class="wp-block-image has-lightbox lightbox-6 ">&nbsp;</DIV> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-87" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294295i7230FAB1DC8D65A1/image-size/large?v=v2&amp;px=999" role="button" title="image-87" alt="image-87" /></span></P> <P>&nbsp;</P> <P><SPAN>Next we need to create our Azure Function. We can do that from the VS Code Command Palette by pressing&nbsp;</SPAN><STRONG><EM>ctrl+shift+p</EM></STRONG><SPAN>, then typing Azure Function in the search box:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-89" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294296i240F4DF1292E3AFD/image-size/large?v=v2&amp;px=999" role="button" title="image-89" alt="image-89" /></span></SPAN></P> <P>&nbsp;</P> <P>We can then select the first item in the list “<STRONG><EM>Azure Functions: Create Function</EM></STRONG>”.</P> <P>&nbsp;</P> <P>We’ll then see the following prompt: “The selected folder is not a function project. 
Create new project?”</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-90" style="width: 617px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294297i44D2A1DCD1A5D0E5/image-size/large?v=v2&amp;px=999" role="button" title="image-90" alt="image-90" /></span></P> <P>&nbsp;</P> <P><SPAN>We will then be prompted for which language we’d like to use:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-91" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294298i2E283620B3796612/image-size/large?v=v2&amp;px=999" role="button" title="image-91" alt="image-91" /></span></SPAN></P> <P>&nbsp;</P> <P>We’ll choose “C#”. Next we’ll be asked which .NET runtime we want to use:</P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-92" style="width: 977px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294299iDC808CA2C2887408/image-size/large?v=v2&amp;px=999" role="button" title="image-92" alt="image-92" /></span></SPAN></P> <P>&nbsp;</P> <P>We’ll select “.NET Core 3 LTS”. We’ll then have a choice of Trigger for the Function:</P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-93" style="width: 954px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294300i92DCEFDCFDF68290/image-size/large?v=v2&amp;px=999" role="button" title="image-93" alt="image-93" /></span></SPAN></P> <P>&nbsp;</P> <P>We’ll select “HTTP trigger”. We’ll then have a chance to name our Function:</P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-94" style="width: 947px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294302iE8C5C752856D7EBF/image-size/large?v=v2&amp;px=999" role="button" title="image-94" alt="image-94" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>We’ll choose “<STRONG><EM>PerceptTrigger</EM></STRONG>” for our Trigger. 
Next, we can give our function a namespace:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-95" style="width: 931px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294303i7291E361E34FB3A6/image-size/large?v=v2&amp;px=999" role="button" title="image-95" alt="image-95" /></span></SPAN></P> <P>&nbsp;</P> <P>We’ll use “<STRONG><EM>Percept.Function</EM></STRONG>“.</P> <P>&nbsp;</P> <P>Finally we can choose the type of Authentication we want to use for our function:</P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-96" style="width: 951px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294304i17483546E9240752/image-size/large?v=v2&amp;px=999" role="button" title="image-96" alt="image-96" /></span></SPAN></P> <P>&nbsp;</P> <P>We’ll choose “<STRONG><EM>Function</EM></STRONG>“, which will allow us to pass a token to the function to authenticate.</P> <P>&nbsp;</P> <P>The Function will then be created:</P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-97" style="width: 695px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294305iEA218A03ECB20FC8/image-size/large?v=v2&amp;px=999" role="button" title="image-97" alt="image-97" /></span></SPAN></P> <P>&nbsp;</P> <P>With that, the basics of our function has been created:</P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-98" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294306i1ACF2DE08A9AE060/image-size/large?v=v2&amp;px=999" role="button" title="image-98" alt="image-98" /></span></SPAN></P> <P>&nbsp;</P> <H2>Adding the Azure IoT Hub SDK</H2> <P>&nbsp;</P> <P>Next, as we’re going to be interacting with an IoT Hub, we need to add the IoT Hub SDK.</P> <P>&nbsp;</P> <P>We can do this from the terminal. 
Pressing ” ctrl+shift+’ ” (Control + Shift + Apostrophe) will launch the terminal:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-99" style="width: 893px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294307iF2064AC67D8C4CE5/image-size/large?v=v2&amp;px=999" role="button" title="image-99" alt="image-99" /></span></P> <P>&nbsp;</P> <P><SPAN>We can install the IoT Hub SDK by running:</SPAN></P> <P>&nbsp;</P> <LI-CODE lang="bash">dotnet add package Microsoft.Azure.Devices</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-100" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294309i20F3B871ADA4786B/image-size/large?v=v2&amp;px=999" role="button" title="image-100" alt="image-100" /></span></P> <P>&nbsp;</P> <H2>Building the Azure Function Code</H2> <P>&nbsp;</P> <P><SPAN>Replace the contents of the PerceptTrigger.cs file with the following block of code:</SPAN></P> <P>&nbsp;</P> <LI-CODE lang="csharp">using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using Microsoft.Azure.Devices;

namespace PerceptIoT.Function
{
    public static class PerceptTrigger
    {
        private static ServiceClient serviceClient;

        // Connection string for your IoT Hub
        private static string connectionString = System.Environment.GetEnvironmentVariable("IotHubConnectionString");

        // Device ID for Raspberry Pi
        private static string deviceID = System.Environment.GetEnvironmentVariable("DeviceID");

        [FunctionName("PerceptTrigger")]
        public static async Task&lt;IActionResult&gt; Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
            ILogger log)
        {
            log.LogInformation("C# HTTP trigger function processed a request.");

            // Read the JSON body sent by the Speech Service webhook, e.g. { "state": "on" }
            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            log.LogInformation(requestBody);
            dynamic data = JsonConvert.DeserializeObject(requestBody);
            log.LogInformation($"The State was: {data.state} ");

            string responseMessage = $"This HTTP triggered function executed successfully. The State was {data.state}";

            // Connect to the IoT Hub and pass the state on to the Raspberry Pi
            serviceClient = ServiceClient.CreateFromConnectionString(connectionString);
            await InvokeMethodAsync(Convert.ToString(data.state));
            serviceClient.Dispose();

            return new OkObjectResult(responseMessage);
        }

        // Invoke the direct method on the device, passing the payload
        private static async Task InvokeMethodAsync(string state)
        {
            var methodInvocation = new CloudToDeviceMethod("ControlLight")
            {
                ResponseTimeout = TimeSpan.FromSeconds(30),
            };
            methodInvocation.SetPayloadJson("{\"state\": \"" + state + "\"}");

            // Invoke the direct method asynchronously and get the response from the device.
            var response = await serviceClient.InvokeDeviceMethodAsync(deviceID, methodInvocation);
            Console.WriteLine($"\nResponse status: {response.Status}, payload:\n\t{response.GetPayloadAsJson()}");
        }
    }
}</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>Remember to change the “PerceptTrigger” references in the file to whatever you chose for your Function name.</P> <P>&nbsp;</P> <P>Next we need to add a local debug setting for our IoT Hub Connection string. 
Replace the contents of your local.settings.json file with the following:</P> <P>&nbsp;</P> <LI-CODE lang="json">{
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "",
        "FUNCTIONS_WORKER_RUNTIME": "dotnet",
        "IotHubConnectionString": "ENTER YOUR IOT HUB SERVICE LEVEL ACCESS POLICY CONNECTION STRING",
        "DeviceID": "ENTER YOUR RASPBERRY PI DEVICE ID"
    }
}</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <H2>Deploying the Function App</H2> <P>&nbsp;</P> <P><SPAN>We’re now ready to deploy our Function. Using the Azure Tools extension, in the “Functions” section, if we expand the subscription we created our Function App within and right click it, we’ll be given a set of options:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-102" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294312i438536751FA54D34/image-size/large?v=v2&amp;px=999" role="button" title="image-102" alt="image-102" /></span></SPAN></P> <P>&nbsp;</P> <P>If we select the “Deploy to Function App” option, we can begin deploying our new Azure Function.</P> <P>&nbsp;</P> <P>We’ll be asked to confirm if we want to deploy the Function App:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-103" style="width: 835px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294313iBCED2B4AA74A31CB/image-size/large?v=v2&amp;px=999" role="button" title="image-103" alt="image-103" /></span></P> <P><SPAN>Hitting “Deploy” will begin the process of deploying the Function:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-104" style="width: 715px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294315i7D86F1C031642E22/image-size/large?v=v2&amp;px=999" role="button" title="image-104" alt="image-104" /></span></SPAN></P> <P>&nbsp;</P> <P>Once the process is complete, our new Azure Function will appear in the functions list:</P> <P>&nbsp;</P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-105" style="width: 384px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294317iE5A9688921C373C1/image-size/large?v=v2&amp;px=999" role="button" title="image-105" alt="image-105" /></span></SPAN></P> <P>&nbsp;</P> <H2><SPAN style="color: inherit; font-family: inherit;">Adding an IoT Device to the IoT Hub</SPAN></H2> <P>&nbsp;</P> <P>We’ll be deploying code to a Raspberry Pi, which will connect to our IoT Hub and, in turn, allow the Azure Function to invoke a Direct Method on it.</P> <P>&nbsp;</P> <P>You will have created an IoT Hub as part of the Percept setup experience. 
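</P> <P>&nbsp;</P> <P>If you prefer code to portal clicks, the device registration covered below can also be scripted with the same <STRONG>Microsoft.Azure.Devices</STRONG> package our Function uses. The following is only a rough sketch: the connection string placeholder needs to come from a policy with registry write permission (for example the built-in “registryReadWrite” policy), and the portal steps that follow achieve exactly the same result.</P> <P>&nbsp;</P> <LI-CODE lang="csharp">using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices;

class RegisterDevice
{
    static async Task Main()
    {
        // Placeholder: a connection string from a policy with "Registry Write" permission
        string registryConnectionString = "ENTER A REGISTRY WRITE CONNECTION STRING";

        using var registryManager = RegistryManager.CreateFromConnectionString(registryConnectionString);

        // Create the device identity we'll use for the Raspberry Pi
        Device device = await registryManager.AddDeviceAsync(new Device("devicecontrol"));

        // The primary key is what the device connection string is built from
        Console.WriteLine($"Registered '{device.Id}', primary key: {device.Authentication.SymmetricKey.PrimaryKey}");
    }
}</LI-CODE> <P>&nbsp;</P> <P>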
We can<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noreferrer noopener">find all of the IoT Hubs we’ve created using the portal</A>.</P> <P>Navigating to the Overview Page of our IoT Hub, we can click the IoT Devices Item:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-112" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294319i4AFA8B2FDC932B61/image-size/large?v=v2&amp;px=999" role="button" title="image-112" alt="image-112" /></span></P> <P>&nbsp;</P> <P><SPAN>We’ll then be shown the list of IoT Devices we currently have registered to our IoT Hub:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-113" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294324i693315986F30368D/image-size/large?v=v2&amp;px=999" role="button" title="image-113" alt="image-113" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>We can add a new IoT Device by hitting the “+ New” item in the toolbar:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-114" style="width: 889px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294325iA80877B24C53FEDA/image-size/large?v=v2&amp;px=999" role="button" title="image-114" alt="image-114" /></span></SPAN></P> <P>&nbsp;</P> <P>We can name our device “devicecontrol”, leave all the options at their defaults and hit the blue “<STRONG><EM>Save</EM></STRONG>” button:</P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-115" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294326i76404C9D8F1B8514/image-size/large?v=v2&amp;px=999" role="button" title="image-115" alt="image-115" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>We’ll then see our new device listed in the Device Registry. You may need to hit the “Refresh” button in the toolbar.</SPAN></P> <P>&nbsp;</P> <H2>Adding an IoT Hub Shared Access Policy</H2> <P>&nbsp;</P> <P>We now need a way for our Azure Function to connect to our Percept IoT Hub. IoT Hub has the concept of Shared Access Policies to control access to devices and services connecting to and interacting with IoT Hub.</P> <P>&nbsp;</P> <P>If we return to the overview page of our IoT Hub, we can add a new Shared Access Policy for our Function App to connect to:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-106" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294327iC0FD684FC22718BB/image-size/large?v=v2&amp;px=999" role="button" title="image-106" alt="image-106" /></span></P> <P>&nbsp;</P> <P><SPAN>We’ll now be shown all of the existing Built-In Shared Access Policies for our IoT Hub:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-108" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294328i7C8EF0BDD80AD592/image-size/large?v=v2&amp;px=999" role="button" title="image-108" alt="image-108" /></span></SPAN></P> <P>&nbsp;</P> <P>Our Function will need a policy with “<STRONG><EM>Service Connect</EM></STRONG>” access. 
We could use the built in “Service” policy, but I prefer to create a new policy for services to use.</P> <P>&nbsp;</P> <P>If we click the “<STRONG><EM>+ Add shared access policy</EM></STRONG>” in the toolbar, we can add our own policy:</P> <DIV class="wp-block-image has-lightbox lightbox-27 ">&nbsp;</DIV> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="image-109" style="width: 876px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294329i5968868CC0C497B7/image-size/large?v=v2&amp;px=999" role="button" title="image-109" alt="image-109" /></span></P> <P>&nbsp;</P> <P>We can name our Shared Access Policy whatever makes sense, I’ve chose “<STRONG><EM>perceptfunctionpolicy</EM></STRONG>” here.</P> <P>&nbsp;</P> <P>We can then select the “Service Connect” Permission and hit the blue “Add” button to create the Policy.</P> <P>&nbsp;</P> <P>With the Policy created, it will appear in the list of Shared Access Policies:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-110" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294330i57670599A3083416/image-size/large?v=v2&amp;px=999" role="button" title="image-110" alt="image-110" /></span></P> <P>&nbsp;</P> <P><SPAN>If we click on our new Shared Access Policy, we can grab the Primary Connection string we need to allow our Azure Function to connect to the IoT Hub:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-111" style="width: 897px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294331i679A49031E88498E/image-size/large?v=v2&amp;px=999" role="button" title="image-111" alt="image-111" /></span></SPAN></P> <P>&nbsp;</P> <H2>Adding a Function App Key</H2> <P>&nbsp;</P> <P>Now that we’ve grabbed our Shared Access Policy, we can add this along with the IoT Device ID of the Raspberry Pi IoT Device we added to the Function Keys for our Azure Function.</P> <P>&nbsp;</P> <P>Azure Function Function Keys are a way for us to store secrets that our Function can use. 
This allows us to store our secrets, like IoT Hub Connection Strings, outside of our code, keeping them safe from prying eyes.</P> <P>&nbsp;</P> <P>Beyond secrets, we can also store variables that change the way our code works or configure it to connect with certain parameters.</P> <P>&nbsp;</P> <P>Note that the Function code above reads these values with <EM>Environment.GetEnvironmentVariable</EM>, so the same <EM>IotHubConnectionString</EM> and <EM>DeviceID</EM> entries should also be added as Application settings under the Function App’s <EM>Configuration</EM> page so the deployed Function can pick them up.</P> <P>&nbsp;</P> <P>If we navigate to the overview page of our Function App:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-116" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294332i1FA6F0D682A76534/image-size/large?v=v2&amp;px=999" role="button" title="image-116" alt="image-116" /></span></P> <P>&nbsp;</P> <P><SPAN>We can navigate to our list of Azure Functions by clicking the “Functions” menu item on the left:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-117" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294333i4D0E4DFB52BF7E52/image-size/large?v=v2&amp;px=999" role="button" title="image-117" alt="image-117" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>We’ll now see the Azure Function we created from VS Code, clicking on this will take us to the details page:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-118" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294334i6F345F0C5775BC16/image-size/large?v=v2&amp;px=999" role="button" title="image-118" alt="image-118" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>We can then click the Function Keys menu item so we can add our required information:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-119" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294336iD9962C9DD754BBC8/image-size/large?v=v2&amp;px=999" role="button" title="image-119" alt="image-119" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>We can add a new Function Key by pressing the “<STRONG><EM>+ New function key</EM></STRONG>” button at the top of the page:</SPAN></P> <P>&nbsp;</P> <P><SPAN><SPAN><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-120" style="width: 881px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294337i173A8818636CDEEF/image-size/large?v=v2&amp;px=999" role="button" title="image-120" alt="image-120" /></span></SPAN></SPAN></SPAN></P> <P>&nbsp;</P> <P>We can add a Function Key for the Service Shared Access Policy Connection String we added earlier and press the blue “OK” button.</P> <P>&nbsp;</P> <P>We can repeat that for our IoT Device ID:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-121" style="width: 879px;"><img 
src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294340iC79F33E8D23432D3/image-size/large?v=v2&amp;px=999" role="button" title="image-121" alt="image-121" /></span></P> <P>&nbsp;</P> <P><SPAN>Our new Function Keys should now be shown in the list:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-123" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294342i25DC42B5B2EC7AAE/image-size/large?v=v2&amp;px=999" role="button" title="image-123" alt="image-123" /></span></SPAN></P> <P>&nbsp;</P> <H2>Finding the Azure Function URL</H2> <P>&nbsp;</P> <P>We’ll be invoking our function using an HTTP trigger. As such, we’ll need the Azure Function URL.</P> <P>&nbsp;</P> <P>If we return to the overview page of our function:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-135" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294344iD19A43E6055F8A4B/image-size/large?v=v2&amp;px=999" role="button" title="image-135" alt="image-135" /></span></P> <P>&nbsp;</P> <P><SPAN>We can grab the Azure Function URL by hitting the “Get Function URL” button:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-136" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294345i2FC937E37CF043A6/image-size/large?v=v2&amp;px=999" role="button" title="image-136" alt="image-136" /></span></SPAN></P> <P>&nbsp;</P> <P>We can press the Copy button to the right of the URL.</P> <P>&nbsp;</P> <P>Keep this URL safe somewhere for now as we’ll need it when we create our Web Endpoint next.</P> <P>&nbsp;</P> <H2>Adding a Web Endpoint to the Custom Command</H2> <P>&nbsp;</P> <P>We can now return to our Custom Command project. 
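</P> <P>&nbsp;</P> <P>Before we do, it can be worth a quick smoke test of the deployed Function by posting the same JSON body the Speech Service will eventually send. The snippet below is only a sketch with placeholder values for the Function URL and key; bear in mind it will only return a success response once the Raspberry Pi code later in this post is running and connected, because the Function immediately invokes a Direct Method on that device.</P> <P>&nbsp;</P> <LI-CODE lang="csharp">using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class FunctionSmokeTest
{
    static async Task Main()
    {
        // Placeholders: substitute your own Function URL and function key
        string url = "https://YOUR-FUNCTION-APP.azurewebsites.net/api/PerceptTrigger?code=YOUR-FUNCTION-KEY";

        using var client = new HttpClient();
        var body = new StringContent("{\"state\": \"on\"}", Encoding.UTF8, "application/json");

        // The Function reads the "state" value from the body, just as it will from the Speech Service webhook
        HttpResponseMessage response = await client.PostAsync(url, body);
        Console.WriteLine($"{(int)response.StatusCode}: {await response.Content.ReadAsStringAsync()}");
    }
}</LI-CODE> <P>&nbsp;</P> <P>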
The easiest way to do this is via Percept Studio:</P> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener">https://portal.azure.com/#blade/AzureEdgeDevices/Main/overview</A></P> <P>&nbsp;</P> <P>We can navigate to Speech and click on the Commands tab:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-128" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294348iEB75A8DF05BFD5DB/image-size/large?v=v2&amp;px=999" role="button" title="image-128" alt="image-128" /></span></P> <P>I’ve gone ahead and created a new Custom Command for this example, but you can absolutely use the PlayMusic-speech project we created in the<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noreferrer noopener">previous post</A>.</P> <P>&nbsp;</P> <P>Clicking on our custom command will take us to the Speech Project:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-129" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294350iA0C8D038F8E0F74C/image-size/large?v=v2&amp;px=999" role="button" title="image-129" alt="image-129" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P><SPAN>I’ve created a new Command named “TurnLightOnOff” with the following Example Sentences:</SPAN></P> <P>&nbsp;</P> <LI-CODE lang="markdown">Turn the light {OnOff}
Turn the lights {OnOff}
Turn the lamp {OnOff}
Turn {OnOff} the light
Turn {OnOff} the lights
Turn {OnOff} the lamp
Lights {OnOff}
Lamp {OnOff}
Switch {OnOff} the light
Switch {OnOff} the lights
Switch {OnOff} the lamp
Switch the light {OnOff}
Switch the lights {OnOff}
Switch the lamp {OnOff}</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>What we need to do now is to create a new “Web endpoint” for our command to call when it’s executed:</P> <DIV class="wp-block-image has-lightbox lightbox-40 ">&nbsp;</DIV> <DIV class="wp-block-image has-lightbox lightbox-40 "><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-130" style="width: 496px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294353i12838C85E3909C8B/image-size/large?v=v2&amp;px=999" role="button" title="image-130" alt="image-130" /></span> <P>&nbsp;</P> </DIV> <DIV class="wp-block-image has-lightbox lightbox-40 "><SPAN>Clicking on the “Web endpoints” menu item on the left will take us to the Web Endpoints page:</SPAN></DIV> <DIV class="wp-block-image has-lightbox lightbox-40 ">&nbsp;</DIV> <DIV class="wp-block-image has-lightbox lightbox-40 "><SPAN><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-134" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294354iF3C6269B30A64EE8/image-size/large?v=v2&amp;px=999" role="button" title="image-134" alt="image-134" /></span></SPAN></SPAN> <P>&nbsp;</P> </DIV> <DIV class="wp-block-image has-lightbox lightbox-40 "><SPAN>We can now click on the “+ New web endpoint” button at the top of the centre panel to add a new endpoint:</SPAN></DIV> <DIV class="wp-block-image has-lightbox lightbox-40 ">&nbsp;</DIV> <DIV class="wp-block-image has-lightbox lightbox-40 "><SPAN><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-137" style="width: 905px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294355i4E333DF87C5A081C/image-size/large?v=v2&amp;px=999" 
role="button" title="image-137" alt="image-137" /></span></SPAN></SPAN> <P>&nbsp;</P> <SPAN>We’ll name our Endpoint “LightControl” and hit the blue “Create” button:</SPAN></DIV> <DIV class="wp-block-image has-lightbox lightbox-40 ">&nbsp;</DIV> <DIV class="wp-block-image has-lightbox lightbox-40 "><SPAN><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-139" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294356i861EC1CD55337A43/image-size/large?v=v2&amp;px=999" role="button" title="image-139" alt="image-139" /></span></SPAN></SPAN> <P>&nbsp;</P> <SPAN>We can now paste in our Function URL from the previous step into the “URL” text box. We can cut off the section beginning “?code=…” as we’ll be passing that in as a query parameter.</SPAN></DIV> <DIV class="wp-block-image has-lightbox lightbox-40 ">&nbsp;</DIV> <DIV class="wp-block-image has-lightbox lightbox-40 "><SPAN><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-142" style="width: 943px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294359i161B9E1161321EBC/image-size/large?v=v2&amp;px=999" role="button" title="image-142" alt="image-142" /></span></SPAN></SPAN> <P>&nbsp;</P> </DIV> <DIV class="wp-block-image has-lightbox lightbox-40 "> <P>We need to set the “Method” option to “POST” here, as we’ll be passing the value of our “OnOff” parameter to our Azure Function in the body of the request.</P> <P>&nbsp;</P> <P>We can now add the authentication code we cut from the end of the Function URL as a query parameter:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-143" style="width: 942px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294360iEBAAD6DC458E85B8/image-size/large?v=v2&amp;px=999" role="button" title="image-143" alt="image-143" /></span></P> <P>&nbsp;</P> </DIV> <DIV class="wp-block-image has-lightbox lightbox-40 "><SPAN>We can hit the “+ Add a query Parameter” button at the bottom of the right hand panel:</SPAN></DIV> <DIV class="wp-block-image has-lightbox lightbox-40 ">&nbsp;</DIV> <DIV class="wp-block-image has-lightbox lightbox-40 "><SPAN><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-144" style="width: 647px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294362iFE30FC6B4CBE3413/image-size/large?v=v2&amp;px=999" role="button" title="image-144" alt="image-144" /></span></SPAN></SPAN> <P>&nbsp;</P> </DIV> <DIV class="wp-block-image has-lightbox lightbox-40 "> <P>We enter “code” as the “Key” and we can paste in the Authentication Code in the “Value” box.</P> <P>&nbsp;</P> <P>Remember to remove the “?code=” part in front of the Authentication code, so it’s just the random collection to letters and numbers.</P> <P>&nbsp;</P> <P>We can save the parameter with the blue “Save” button.</P> </DIV> <DIV class="wp-block-image has-lightbox lightbox-40 ">&nbsp;</DIV> <DIV class="wp-block-image has-lightbox lightbox-40 "> <H2>Using the new Web Endpoint</H2> </DIV> <DIV class="wp-block-image has-lightbox lightbox-40 ">&nbsp;</DIV> <DIV class="wp-block-image has-lightbox lightbox-40 "> <P>We can now make use of our new Web Endpoint within our TurnLightOnOff Custom Command.</P> <P>&nbsp;</P> <P>If we navigate to our TurnLightOnOff Custom command and to the “Done” section:</P> <P>&nbsp;</P> <P><span 
class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-145" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294365iBBF91D72C829505D/image-size/large?v=v2&amp;px=999" role="button" title="image-145" alt="image-145" /></span></P> <P>&nbsp;</P> <P>Make sure to remove any existing actions if you’re reusing anything from a previous post.</P> <P>&nbsp;</P> <P>We can add an action for our Web Endpoint by hitting the “+ Add an action” button at the bottom of the right hand pane:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-146" style="width: 925px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294367i3C17297878B1238F/image-size/large?v=v2&amp;px=999" role="button" title="image-146" alt="image-146" /></span></P> <P>&nbsp;</P> <P><SPAN>Selecting the “Call web endpoint” option will show the Call Web Endpoint dialog:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-147" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294369i4F25054E98F9384D/image-size/large?v=v2&amp;px=999" role="button" title="image-147" alt="image-147" /></span></SPAN></P> <P>&nbsp;</P> <P>We can select our new “LightControl” Web Endpoint from the “Endpoint” dropdown.</P> <P>&nbsp;</P> <P>Next we need to pass in the value for our “OnOff” parameter so that the Azure Function can pass this to the Raspberry Pi via the IoT Hub and a Direct Method. We can do this by replacing the “Body content (JSON)” section with the following:</P> <LI-CODE lang="json">{
    "state": "{OnOff}"
}</LI-CODE> <P>&nbsp;<span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-148" style="width: 968px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294370i5C86E257E373A336/image-size/large?v=v2&amp;px=999" role="button" title="image-148" alt="image-148" /></span></P> <P>&nbsp;</P> <P><SPAN>We now need to feed back a spoken response to the user once the Web Endpoint has been called. 
If we hit the “On Success (Required)” tab at the top:</SPAN></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-149" style="width: 995px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294371iA0AA24A422990D61/image-size/large?v=v2&amp;px=999" role="button" title="image-149" alt="image-149" /></span></P> <P>&nbsp;</P> <P><SPAN>We can select the “Send speech response” option from the “Action to execute” dropdown:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-150" style="width: 972px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294372iF2F55D20EDCB60E8/image-size/large?v=v2&amp;px=999" role="button" title="image-150" alt="image-150" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>We can return the following response:</SPAN></P> <LI-CODE lang="markdown">The light has been turned {OnOff}</LI-CODE> <P>&nbsp;</P> <P><SPAN>We can then repeat this for the “On Failure (Required)” tab:</SPAN></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-152" style="width: 972px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294373i297472A3D54C9C00/image-size/large?v=v2&amp;px=999" role="button" title="image-152" alt="image-152" /></span></P> <P>&nbsp;</P> <P><SPAN>Where we can enter the following response in the “Simple Response Editor”:</SPAN></P> <LI-CODE lang="markdown">Failed to turn the light {OnOff}</LI-CODE> <P>&nbsp;</P> <P><SPAN>We can now save our Done Action by hitting the blue “Save” button:</SPAN></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-153" style="width: 994px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294374iD2485611ECBF8512/image-size/large?v=v2&amp;px=999" role="button" title="image-153" alt="image-153" /></span></P> <P><SPAN>Finally, we can hit the “Save” button, followed by the “Train” button and finally the Publish Button:</SPAN></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-154" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294375iB26DB0F4E6612911/image-size/large?v=v2&amp;px=999" role="button" title="image-154" alt="image-154" /></span></P> <P><SPAN>We can then return to Azure Percept Studio and make sure our custom Command is assigned to our Percept:</SPAN></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-155" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294378i459B20467FCF6EA4/image-size/large?v=v2&amp;px=999" role="button" title="image-155" alt="image-155" /></span></P> <P>&nbsp;</P> <P><SPAN>Select your Custom Command and hit the “Assign” button in the toolbar:</SPAN></P> <P>&nbsp;</P> <span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-156" style="width: 878px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294380i179EC03E5C49DDDB/image-size/large?v=v2&amp;px=999" role="button" title="image-156" alt="image-156" /></span> <P>&nbsp;</P> </DIV> <P><SPAN>Select your IoT Hub and Percept Device and hit the blue “Save” button to Deploy to your Device:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper 
lia-image-align-center" image-alt="image-157" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294381iB11139EB2CF4B6D8/image-size/large?v=v2&amp;px=999" role="button" title="image-157" alt="image-157" /></span></SPAN></P> <P>&nbsp;</P> <H2>Raspberry Pi Code</H2> <P>&nbsp;</P> <P>Next we’ll get the Raspberry Pi set up ready to connect to the IoT Hub and receive the Direct Method Invocation from our Azure Function.</P> <P>&nbsp;</P> <P>The first thing you’ll need to do is get your Raspberry Pi set up to run .NET 5. I’ve created a post around on how to that:</P> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener">https://www.petecodes.co.uk/install-and-use-microsoft-dot-net-5-with-the-raspberry-pi/</A></P> <P>&nbsp;</P> <P>Once you’ve got the Pi Setup, we can create the project. I’d do this either in your home directory, or if you’ve set up a SAMBA share, then that directory will be best.</P> <P>&nbsp;</P> <P>We create a new .NET 5 console application with:</P> <P>&nbsp;</P> <LI-CODE lang="bash">dotnet new console -o device_code cd device_code</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P><SPAN>We can now add the nuget packages for both GPIO Control and Azure IoT Hub Client:</SPAN></P> <P>&nbsp;</P> <LI-CODE lang="bash">dotnet add package System.Device.GPIO dotnet add package Microsoft.Azure.Devices.Client</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P><SPAN>Next paste the following code over your Program.cs file:</SPAN></P> <P>&nbsp;</P> <LI-CODE lang="csharp">using System; using System.IO; using System.Text; using Newtonsoft.Json; using System.Threading.Tasks; using System.Device.Gpio; using Microsoft.Azure.Devices.Client; namespace device_code { class Program { static string DeviceConnectionString = "ENTER YOU DEVICE CONNECTION STRING"; static int lightPin = 32; GpioController controller; static async Task Main(string[] args) { GpioController controller = new GpioController(PinNumberingScheme.Board); controller.OpenPin(lightPin, PinMode.Output); controller.Write(lightPin, PinValue.High); DeviceClient Client = DeviceClient.CreateFromConnectionString(DeviceConnectionString, Microsoft.Azure.Devices.Client.TransportType.Mqtt); await Client.SetMethodHandlerAsync("ControlLight", (MethodRequest methodRequest, object userContext) =&gt; { Console.WriteLine("IoT Hub invoked the 'ControlLight' method."); Console.WriteLine("Payload:"); Console.WriteLine(methodRequest.DataAsJson); dynamic data = JsonConvert.DeserializeObject(methodRequest.DataAsJson); if (data.state == "on") { controller.Write(lightPin, PinValue.Low); } else { controller.Write(lightPin, PinValue.High); } var responseMessage = "{\"response\": \"OK\"}"; return Task.FromResult(new MethodResponse(Encoding.ASCII.GetBytes(responseMessage), 200)); }, null); while (true) { } } } }</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>We now need to replace the placeholder for the IoT Device Connection String with the one from the device we created earlier.</P> <P>&nbsp;</P> <P>If we return to the portal, to our IoT Hub IoT Device Registry Page and to the Device Details page for our “devicecontrol” device:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-126" style="width: 904px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294382i408D906A8CB432BA/image-size/large?v=v2&amp;px=999" role="button" title="image-126" alt="image-126" /></span></P> <P>&nbsp;</P> <P><SPAN>We can copy the Primary Connection string and then paste that 
into the Program.cs file where we have “ENTER YOU DEVICE CONNECTION STRING”.</SPAN></P> <P>&nbsp;</P> <H2>Raspberry Pi Circuit</H2> <P>&nbsp;</P> <P><SPAN>The hardware side of this solution centres around a Raspberry Pi. We make use of a Relay to turn the mains supply to a desk lamp on and off.</SPAN></P> <P class="lia-align-center">&nbsp;</P> <P class="lia-align-center"><FONT size="5" color="#FF0000"><SPAN><STRONG>DISCLAIMER</STRONG><BR /><BR /><STRONG>As a word of warning. The mains relay part of solution should only be attempted if you are experienced in handling mains voltages, as this can be very dangerous if the correct safety precautions aren’t followed.</STRONG><BR /><BR /><STRONG>Needless to say, I can’t take any responsibility for damage or harm as a result of the following instructions!</STRONG></SPAN></FONT></P> <P>&nbsp;</P> <P>&nbsp;</P> <P><SPAN>With that out of the way, the following is the circuit diagram for the Raspberry Pi, Relay and Lamp:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="RaspberryPiCircuit-1" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294383iF510BDC36AA661E3/image-size/large?v=v2&amp;px=999" role="button" title="RaspberryPiCircuit-1" alt="RaspberryPiCircuit-1" /></span></SPAN></P> <P>&nbsp;</P> <P>The Raspberry Pi is connected as follows:</P> <P>&nbsp;</P> <TABLE border="1" width="100%"> <TBODY> <TR> <TD width="25%" height="27px"><STRONG>Pi Pin Number</STRONG></TD> <TD width="25%" height="27px"><STRONG>Function</STRONG></TD> <TD width="25%" height="27px"><STRONG>Relay Pin Label</STRONG></TD> <TD width="25%" height="27px"><STRONG>Colour</STRONG></TD> </TR> <TR> <TD width="25%" height="27px">2</TD> <TD width="25%" height="27px">5V</TD> <TD width="25%" height="27px">VCC</TD> <TD width="25%" height="27px">Red</TD> </TR> <TR> <TD width="25%" height="27px">20</TD> <TD width="25%" height="27px">GND</TD> <TD width="25%" height="27px">GND</TD> <TD width="25%" height="27px">Black</TD> </TR> <TR> <TD width="25%" height="27px">32</TD> <TD width="25%" height="27px"><SPAN>Relay Control (GPIO 12)</SPAN></TD> <TD width="25%" height="27px">IN1</TD> <TD width="25%" height="27px">Green</TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P><SPAN>I’ve actually enclosed my whole circuit (aside from the Pi), in a patress to keep the mains side sealed away:</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="20210628_144851-Large" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294384i7A4961011B9D2AA0/image-size/large?v=v2&amp;px=999" role="button" title="20210628_144851-Large" alt="20210628_144851-Large" /></span></SPAN></P> <P>&nbsp;</P> <P>I’m using an Elegoo 4 way Relay unit… You can find this at Amazon here:&nbsp;<A href="#" target="_blank" rel="noopener">https://amzn.to/2UGX6pT</A></P> <P>&nbsp;</P> <H2>Testing the System</H2> <P>&nbsp;</P> <P>With all of these parts hooked together, you should be in a position to test turning the light on and off!</P> <P>First we need to run the software on the Raspberry Pi.</P> <P>&nbsp;</P> <P>From the directory you created the “device_code” project, run the code with:</P> <P>&nbsp;</P> <LI-CODE lang="bash">dotnet run</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>Once the code is running, you can try turning the light on and off using your Percept.</P> <P>&nbsp;</P> <P>The Raspberry Pi terminal window should feedback 
the instructions it receives:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-158" style="width: 992px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294385i408EEDBA8296DFBC/image-size/large?v=v2&amp;px=999" role="button" title="image-158" alt="image-158" /></span></P> <P>&nbsp;</P> <P><SPAN>If everything has gone according to plan, you should now have a working system!</SPAN></P> <P>&nbsp;</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/b6AuhEjdKS0?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="The project in action"></IFRAME></DIV> <P>&nbsp;</P> <H2>My Plans</H2> <P>&nbsp;</P> <P>I'd love to take this project further and look at controlling more devices around my home. I'd also like to perhaps replace the Raspberry Pi with a far smaller device... Perhaps an ESP32 or a Wilderness Labs Meadow. I could then dot these around the house and control my home with the Percept.</P> <P>&nbsp;</P> <P>I'd also like to improve the feedback of the system, where currently I'm only sending a one way command, with no feedback from the Pi as to whether it actually carried out the instruction.</P> <P>&nbsp;</P> <H2>My Thoughts</H2> <P>&nbsp;</P> <P>I found the process of adding a Web Endpoint really easy to achieve, and worked really well. To begin with I was a little confused with the speech response to the called Web Endpoint, but it made sense once I'd set it up.</P> <P>&nbsp;</P> <P>Event though it's super powerful, It would be awesome if Speech Studio could let us perhaps connect directly to other Azure Services, rather than having to roll our wrapper every time. Perhaps the facility to connect directly to IoT Hub maybe?</P> <P>&nbsp;</P> <P>Obviously, using an Azure Function in the Consumption mode means that it can go to sleep if it's not used. This leads to a fair delay (around 20-30 seconds at times), before the function actually responds to our command hitting the web endpoint... I guess using a different pricing tier would help here. This coupled with the apparent wake delay of the Percept itself can lead to the first wake up taking some time to complete.</P> <P>&nbsp;</P> <P>After that though, it works really well, with only a very slight delay between saying the command at the light turning on or off.</P> <P>&nbsp;</P> <P>Overall, the whole platform works really well, and with the addition of Web Endpoints and Azure Functions, we're able to reach out to any part of Azure we like.</P> <P>&nbsp;</P> <P>&nbsp;</P> Thu, 08 Jul 2021 21:33:13 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-audio-home-automation-with-azure-functions-azure/ba-p/2528048 Peter Gallagher 2021-07-08T21:33:13Z Azure Sphere version 21.07 Update 1 is now available for evaluation https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-sphere-version-21-07-update-1-is-now-available-for/ba-p/2530109 <P>Azure Sphere OS version 21.07 Update 1 is now available via the <STRONG>Retail Eval</STRONG> feed. This update fixes a bug in which an application memory usage statistic was not resetting properly. 
The bug represented a regression from the 21.06 release.</P> <P>&nbsp;</P> <P>The evaluation period for OS version 21.07 Update 1 will continue for 14 days. &nbsp;During this time, please verify that your applications and devices operate properly with this release before it's deployed broadly via the Retail feed. The Retail feed will continue to deliver OS version 21.06 until we publish the full 21.07 release that is expected on July 21, 2021.</P> <P>&nbsp;</P> <P>This evaluation release includes an OS update only; it does not include an updated SDK. When 21.07 is generally available later in July, an updated SDK will be included.</P> <P>&nbsp;</P> <H2>Compatibility testing with version 21.07</H2> <P>The Linux kernel has been upgraded to version 5.10. Areas of special focus for compatibility testing with 21.07 should include apps and functionality using kernel memory allocations and OS dynamically-linked libraries.</P> <P>&nbsp;</P> <H2>Notes about this release</H2> <UL> <LI>The most recent <A href="#" target="_self">RF tools</A> (version 21.01) are expected to be compatible with OS version 21.07.</LI> <LI>Azure Sphere Public API can be accessed programmatically using Service Principal or MSI created in a customer AAD tenant.</LI> </UL> <P>&nbsp;</P> <P><SPAN>For more information on Azure Sphere OS feeds and setting up an evaluation device group,&nbsp;see&nbsp;</SPAN><A href="#" target="_self"><SPAN>Azure Sphere OS feeds</SPAN></A><SPAN> and </SPAN><A href="#" target="_self">Set up devices for OS evaluation</A><SPAN>.</SPAN></P> <P>&nbsp;</P> <P>For self-help technical inquiries, please visit <A href="#" target="_self">Microsoft Q&amp;A</A><SPAN> o</SPAN>r <A href="#" target="_self">Stack Overflow</A>. If you require technical support and have a support plan, please submit a support ticket in <A href="#" target="_self">Microsoft Azure Support</A> or work with your Microsoft Technical Account Manager. If you would like to purchase a support plan, please explore the <A href="#" target="_self">Azure support plans</A>.</P> Thu, 08 Jul 2021 21:20:00 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-sphere-version-21-07-update-1-is-now-available-for/ba-p/2530109 AzureSphereTeam 2021-07-08T21:20:00Z Train smarter with NVIDIA pre-trained models and TAO Transfer Learning Toolkit on Microsoft Azure https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/train-smarter-with-nvidia-pre-trained-models-and-tao-transfer/ba-p/2528400 <P>One of the many challenges of deploying AI on edge is that IoT devices have limited compute and memory resources. So, it becomes extremely important that your model is accurate and compact enough to deliver real-time inference at the edge. Juggling between the accuracy of the model and the size is always a challenge when creating a model; smaller, shallower networks suffer from poor accuracy and deeper networks are not suitable for edge. Additionally, achieving state-of-the-art accuracy requires collecting and annotating large sets of training data and deep domain expertise, which can be cost-prohibitive for many enterprises looking to bring their AI solutions to market faster.&nbsp; NVIDIA’s catalog of pre-trained models and <A href="#" target="_blank" rel="noopener">Transfer Learning Toolkit</A> (TLT) can help you accelerate your model development. 
TLT is a core component of the <A href="#" target="_blank" rel="noopener">NVIDIA TAO, an AI-model-adaptation platform.</A> TLT provides a simplified training workflow geared for the non-experts to quickly get started building AI using pre-trained models and guided Jupyter notebooks. TLT offers several performance optimizations that make the model compact for high throughput and efficiency, accelerating your Computer Vision and Conversational AI applications.</P> <P>&nbsp;</P> <P>Training is compute-intensive, requiring access to powerful GPUs&nbsp; to speed up the time to solution. Microsoft Azure Cloud offers several <A href="#" target="_blank" rel="noopener">GPU optimized Virtual machines </A>(VM)&nbsp; with access to NVIDIA A100, V100 and T4 GPUs.</P> <P>In this blog post, we will walk you through the entire journey of training an AI model starting with provisioning a VM on Azure to training with NVIDIA TLT on Azure cloud.</P> <P>&nbsp;</P> <H2>Pre-trained models and TLT</H2> <P>&nbsp;</P> <P>Transfer Learning is a training technique where you leverage the learned features from one model to another. Start with a pretrained model that has been trained on representative datasets and fine-tuned&nbsp; with weights and biases. These models can be easily retrained with custom data in a fraction of the time it takes to train from scratch.</P> <P><BR /><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="TLT_on_azure.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294406i222AAA60F1D8E85B/image-size/large?v=v2&amp;px=999" role="button" title="TLT_on_azure.png" alt="TLT_on_azure.png" /></span></P> <P>Figure 1 - End-to-end AI workflow</P> <P>&nbsp;</P> <P>The NGC catalog, NVIDIA’s hub of GPU-optimized AI and HPC software contains a diverse collection of pre-trained models for <A href="#" target="_blank" rel="noopener">computer vision</A> and <A href="#" target="_blank" rel="noopener">conversational AI</A> use cases that span industries from manufacturing, to retail to healthcare and more. These models have been trained on images and large sets of text and speech data to provide you with a highly accurate model to start with. For example, People detection and segmentation and body pose estimation models can be used to extract occupancy insights in smart spaces such as retail, hospitals, factories, offices, etc. Vehicle and License plate detection and recognition models can be used for smart infrastructure. Automatic speech recognition (ASR) and Natural language processing (NLP) models can be used for smart speakers, video conferencing, automated chatbots and others. In addition to these highly specific use case models, you also have the flexibility to use the general purpose pre-trained models from popular open model architectures such as ResNet, EfficientNet, YOLO, UNET, and others. These can be used for general use cases in object detection, classification and segmentation.</P> <P>&nbsp;</P> <P>Once you select your pre-trained model, you can fine-tune the model on your dataset using TLT. TLT is a low-code Jupyter notebook based workflow, allowing you to adapt an AI model in hours, rather than months. 
The guided Jupyter notebook and configurable spec files make it easy to get started.</P> <P>&nbsp;</P> <P>Here are few key features of TLT to optimize inference performance:</P> <UL> <LI>Model pruning removes nodes from neural networks while maintaining comparable accuracy, making the model compact and optimal for edge deployment without sacrificing accuracy.</LI> <LI>INT8 quantization enables the model to run inference at lower INT8 precision, which is significantly faster than running in floating point FP16 or FP32</LI> </UL> <P>Pruning and quantization can be achieved with a single command in the TLT workflow. <BR /><BR /></P> <H2>Setup an Azure VM</H2> <P>&nbsp;</P> <P>We start by first setting up an appropriate VM on Azure cloud. You can choose from the following VMs which are powered by NVIDIA GPUs - ND 100, NCv3 and NC T4_v3 series. For this blog, we will use the NCv3 series which comes with V100 GPUs. For the base image on the VM, we will use the NVIDIA provided <A href="#" target="_blank" rel="noopener">GPU-optimized image from Azure marketplace</A>. NVIDIA base image includes all the lower level dependencies which reduces the friction of installing drivers and other prerequisites. Here are the steps to setup Azure VM</P> <P>&nbsp;</P> <P><STRONG>Step 1</STRONG> - Pull the <A href="#" target="_blank" rel="noopener">GPU optimized image</A> from Azure marketplace by clicking on the “Get it Now” button.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="cshah31_1-1625755027598.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294395iB424CB973B146AF6/image-size/medium?v=v2&amp;px=400" role="button" title="cshah31_1-1625755027598.png" alt="cshah31_1-1625755027598.png" /></span></P> <P>Figure 2 - GPU optimized image on Azure Marketplace</P> <P>&nbsp;</P> <P>Select the v21.04.1 version under the Software plan to select the latest version. This will have the latest NVIDIA drivers and CUDA toolkit. Once you select the version, it will direct you to the Azure portal where you will create your VM.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="cshah31_2-1625755027605.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294394iD8EAEC2776A5E985/image-size/medium?v=v2&amp;px=400" role="button" title="cshah31_2-1625755027605.png" alt="cshah31_2-1625755027605.png" /></span></P> <P>Figure 3 - Image version selection window</P> <P>&nbsp;</P> <P><STRONG>Step 2</STRONG> - Configure your VM</P> <P>In the Azure portal, click “Create” to start configuring the VM.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="cshah31_3-1625755027614.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294397i420BD03A979B4D9B/image-size/medium?v=v2&amp;px=400" role="button" title="cshah31_3-1625755027614.png" alt="cshah31_3-1625755027614.png" /></span></P> <P>Figure 4 - Azure Portal</P> <P>&nbsp;</P> <P>This will pull the following page where you can select your subscription method, resource group, region and Hardware configuration. Provide a name for your VM. Once you are done you can click on the “Review + Create” button at the end to do a final review.</P> <P><STRONG>Note</STRONG>: The default disk space is 32GB. 
It is recommended to use &gt;128GB disk for this experiment</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="cshah31_4-1625755027625.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294398iC22A0CB968B433D3/image-size/medium?v=v2&amp;px=400" role="button" title="cshah31_4-1625755027625.png" alt="cshah31_4-1625755027625.png" /></span></P> <P>Figure 5 - Create VM window</P> <P>&nbsp;</P> <P>Make the final review of the offering that you are creating. Once done, hit the “Create” button to spin up your VM in Azure.</P> <P><STRONG>Note: </STRONG>Once you create, you will start incurring cost, so please review the pricing details.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="cshah31_5-1625755027637.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294399iBCC9D65560912FD7/image-size/medium?v=v2&amp;px=400" role="button" title="cshah31_5-1625755027637.png" alt="cshah31_5-1625755027637.png" /></span></P> <P>Figure 6 - VM review</P> <P>&nbsp;</P> <P><STRONG>Step 3</STRONG> - SSH in to your VM</P> <P>Once your VM is created, SSH into your VM using the username and domain name or IP address of your VM.</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="applescript">ssh &lt;username&gt;@&lt;IP address&gt;</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <H2>Training 2D Body Pose with TLT</H2> <P>&nbsp;</P> <P>In this step, we will walk through the steps of training a high performance 2D body pose model with TLT. This is a fully convolutional model and consists of a backbone network, an initial prediction stage which does a pixel-wise prediction of confidence maps (heatmap) and part-affinity fields (PAF) followed by multistage refinement (0 to <EM>N</EM> stages) on the initial predictions. This model is further optimized by pruning and quantization. This allows us to run this in real-time on edge platforms like NVIDIA Jetson.</P> <P>In this blog, we will focus on how to run this model with TLT on Azure but if you would like to learn more about the model architecture and how to optimize the model, check out the two part blog on Training/Optimization 2D body pose with TLT - <A href="#" target="_blank" rel="noopener">Part 1</A> and <A href="#" target="_blank" rel="noopener">Part 2</A>. Additional information about this model can be found in the <A href="#" target="_blank" rel="noopener">NGC Model card</A>.</P> <P>&nbsp;</P> <P><STRONG>Step 1 </STRONG>- Setup TLT</P> <P>For TLT, we require a Python Virtual environment. Setup the Python Virtual Environment. Run the commands below to set up the Virtual environment.</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="applescript">sudo su - root usermod -a -G docker azureuser apt-get -y install python3-pip unzip pip3 install virtualenvwrapper export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3 source /usr/local/bin/virtualenvwrapper.sh mkvirtualenv launcher -p /usr/bin/python3</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>Install Jupyterlab and TLT Python package. TLT uses a Python launcher to launch training runs. The launcher will automatically pull the correct docker image from NGC and run training on it. Alternatively, you can also manually pull the docker container and run it directly inside the docker. 
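<P>&nbsp;</P> <P><SPAN>If you do want the manual route, the container to pull is the computer-vision one listed in the <EM>tlt info</EM> output further down (the tag shown is current as of TLT 3.0; check NGC for the latest), after logging in to the registry:</SPAN></P> <P>&nbsp;</P> <LI-CODE lang="bash">docker login nvcr.io
docker pull nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3</LI-CODE> <P>&nbsp;</P>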
For this blog, we will run it from the launcher.</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="bash">pip3 install jupyterlab pip3 install nvidia-pyindex pip3 install nvidia-tlt</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>Check if TLT is installed properly. Run the command below. This will dump a list of AI tasks that are supported by TLT.</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="bash">tlt info --verbose Configuration of the TLT Instance dockers: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; nvcr.io/nvidia/tlt-streamanalytics: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; docker_tag: v3.0-py3 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; tasks: 1. augment 2. classification 3. detectnet_v2 4. dssd 5. emotionnet 6. faster_rcnn 7. fpenet 8. gazenet 9. gesturenet 10. heartratenet 11. lprnet 12. mask_rcnn 13. retinanet 14. ssd 15. unet 16. yolo_v3 17. yolo_v4 18. tlt-converter &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; nvcr.io/nvidia/tlt-pytorch: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;docker_tag: v3.0-py3 tasks: 1. speech_to_text 2. text_classification 3. question_answering 4. token_classification 5. intent_slot_classification 6. punctuation_and_capitalization format_version: 1.0 tlt_version: 3.0 published_date: mm/dd/yyyy</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>Login to NGC and download Jupyter notebooks from NGC</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="bash">docker login nvcr.io cd /mnt/ sudo chown azureuser:azureuser /mnt/ wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/tlt_cv_samples/versions/v1.1.0/zip -O tlt_cv_samples_v1.1.0.zip unzip -u tlt_cv_samples_v1.1.0.zip&nbsp; -d ./tlt_cv_samples_v1.1.0 &amp;&amp; cd ./tlt_cv_samples_v1.1.0</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>Start your Jupyter notebook and open it in your browser.</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="bash">jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P><STRONG>Step 2 </STRONG>- Open Jupyter notebook and spec file</P> <P>In the browser, you will see all the CV models that are supported by TLT. For this experiment we will train a 2D body pose model. Click on the “bpnet” model in the Jupyter notebook. In this directory, you will also find Jupyter notebooks for popular networks like YOLOV3/V4, FasterRCNN, SSD, UNET and more. You can follow the same steps to train any other models.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="cshah31_6-1625755027642.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294400iE21D8D54E12C8D52/image-size/medium?v=v2&amp;px=400" role="button" title="cshah31_6-1625755027642.png" alt="cshah31_6-1625755027642.png" /></span></P> <P>Figure 7 - Model selection from Jupyter</P> <P>&nbsp;</P> <P>Once you are inside, you will find a few config files and specs directory. Spec directory has all the ‘spec’ files to configure training and evaluation parameters. To learn more about all the parameters, refer to the <A href="#" target="_blank" rel="noopener">2D body pose documentation</A>. 
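<P>&nbsp;</P> <P><SPAN>Every training, pruning or evaluation step in the notebook is just a launcher call that points at one of these spec files. As a rough sketch of the shape of such a call (the spec file, results directory and key below are placeholders; use the exact command given in the bpnet notebook):</SPAN></P> <P>&nbsp;</P> <LI-CODE lang="bash"># Illustrative only - the notebook supplies the real spec path, results dir and NGC key.
tlt bpnet train -e &lt;training spec file&gt; -r &lt;results dir&gt; -k &lt;encryption key&gt;</LI-CODE> <P>&nbsp;</P>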
<BR /><BR /></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="cshah31_7-1625755027647.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294401iF8E1B8A59B69AB3F/image-size/medium?v=v2&amp;px=400" role="button" title="cshah31_7-1625755027647.png" alt="cshah31_7-1625755027647.png" /></span></P> <P>Figure 8 - Body Pose estimation training directory<BR /><BR /></P> <P><STRONG>Step 3</STRONG> - Step thru the guided notebook</P> <P>Open ‘bpnet.ipynb’ and step through the notebook. In the notebook, you will find learning objectives and all the steps to download the dataset and pre-trained model and run training and optimizing the model. For this exercise, we will use the open source COCO dataset but you are welcome to use your custom body pose dataset. Section 3.2 in the notebook talks about using a custom dataset.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="cshah31_8-1625755027655.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294402i99B965F9F2284920/image-size/medium?v=v2&amp;px=400" role="button" title="cshah31_8-1625755027655.png" alt="cshah31_8-1625755027655.png" /></span></P> <P>Figure 9 - Jupyter notebook for training</P> <P>&nbsp;</P> <P>&nbsp;In this blog, we demonstrated a body pose estimation use case with TLT but you can follow the steps to train any Computer Vision or conversational AI model with TLT.&nbsp; NVIDIA pre-trained models, Transfer Learning Toolkit and GPUs in the Azure cloud simplify the journey and reduce the barrier to starting with AI. The availability of GPUs in Microsoft Azure Cloud allows you to quickly start training without investing in your own hardware infrastructure, allowing you to scale the computing resources based on demand.</P> <P>By leveraging the pre-trained models and TLT, you can easily and quickly adapt models for your use-cases and develop high-performance models that can be deployed at the edge for real-time inference.</P> <P>&nbsp;</P> <P><STRONG>Get started today with </STRONG><A href="#" target="_blank" rel="noopener"><STRONG>NVIDIA TAO TLT</STRONG></A><STRONG> on Azure Cloud. 
</STRONG></P> <P>&nbsp;</P> <P><STRONG>Resources:</STRONG></P> <UL> <LI><A href="#" target="_blank" rel="noopener">TLT Product page</A></LI> <LI><A href="#" target="_blank" rel="noopener">CV model collection</A></LI> <LI><A href="#" target="_blank" rel="noopener">Conversational AI model collection</A></LI> <LI><A href="#" target="_blank" rel="noopener">TLT documentation</A></LI> <LI><A href="#" target="_blank" rel="noopener">VM in Azure documentation</A></LI> <LI><A href="#" target="_blank" rel="noopener">Azure VM</A></LI> <LI><A href="#" target="_blank" rel="noopener">2D body pose model card</A></LI> <LI><A href="#" target="_blank" rel="noopener">2D body pose training/optimization blog part 1</A> | <A href="#" target="_blank" rel="noopener">part 2</A></LI> </UL> <P>&nbsp;</P> Thu, 08 Jul 2021 15:59:06 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/train-smarter-with-nvidia-pre-trained-models-and-tao-transfer/ba-p/2528400 cshah31 2021-07-08T15:59:06Z Using Azure Percept to build an Aircraft Part Checker https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/using-azure-percept-to-build-an-aircraft-part-checker/ba-p/2518329 <H2>Idea</H2> <P>&nbsp;</P> <DIV>When not coding and building IoT project I spend time working on a personal project or building an aircraft, and to be more specific a <A href="#" target="_blank" rel="noopener">South African designed Sling TSi.</A></DIV> <P>&nbsp;</P> <DIV>The design of the aircraft is widely regarded in Light Aircraft circles as being one of the best 4-seat aircraft out there for the home builder and one of the main reasons I picked the design, now I know what you're thinking have I opened the wrong blog post here what's all this Airplane speak...</DIV> <P>&nbsp;</P> <DIV>Well, there is an issue with the Factory shipping the kits around the world and that is that they struggle to get the trained staff due to Covid issues and thus kits are being shipped with incorrect parts or 2 left parts when there should be a left and a right for example. My idea is that I have this awesome Percept device I have been loaned from Microsoft to write some blog posts and have a play with and I got to thinking could I train it to recognize the parts and show the tagged name of the part so that an untrained shipping agent in the factory could us it to make sure the kits has all the correct parts?</DIV> <P>&nbsp;</P> <DIV>Let have a play and see shall we...</DIV> <P>&nbsp;</P> <P>Where do we start?</P> <P>&nbsp;</P> <DIV>We start in the <A href="#" target="_blank" rel="noopener">Azure Portal</A>&nbsp;and more specifically the <A href="#" target="_blank" rel="noopener">Azure Percept Studio</A>&nbsp;where we can access the Vision Blade of the Percept Device. 
In here click the `ADD` button at the top to add a new Vision Project.</DIV> <DIV>&nbsp;</DIV> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-15-45-28.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293644i1BF6AB43793B9557/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-15-45-28.png" alt="2021-07-05-15-45-28.png" /></span><BR />In the new blade you can fill in the boxes by giving the new Vision Model a name and a nice description (For when you or a college comes back in a few months and wonder what this is!), then you can make sure you have `Object Detection` selected and `Accurancy` and you can then click `Create` at the bottom of the page.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-15-48-57.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293645i2B398F2BD97AD7A0/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-15-48-57.png" alt="2021-07-05-15-48-57.png" /></span></P> <H2>Image Capture</H2> <P>&nbsp;</P> <DIV>Next we move onto the Image capture that is then used to train the model with our first parts, so make sure you have the correct device selected and tick the `Automatic image capture` checkbox and the drop-down lists will appear where you can select the setting needed. As this is just the first images of the first component I want to capture to test everything is working I have set mine to be `1 Frame every 5 seconds` and `Target` to 25 this means that the Percept will take a photo every 5 seconds until it has taken 25 photo's. These images will then all be loaded into the AI model ready to be tagged and trained.</DIV> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-15-49-55.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293646i1196B6370DF9C72C/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-15-49-55.png" alt="2021-07-05-15-49-55.png" /></span></P> <DIV>Small issue is that you don't really know when the images are being taken and when it has started... So if you click the `View Device Stream` just above the Automatic Image Capture you will see what the Percept-EYE can see and watch as the images are taken.</DIV> <P>&nbsp;</P> <DIV>The alternative if you have enough hands is to NOT tick the `Automatic Image Capture` in which case the button bottom left will say `Take Photo` and this will take a single photo. 
However I find I need more hands than I have, but this would be good if the Percept is right next to you on your desk but not so good it's on the factory floor.</DIV> <P>&nbsp;</P> <H2>Custom Vision</H2> <P>&nbsp;</P> <DIV>Now we have the images and yes I know there is not really any feedback with this method of training it would be nice if the Stream Video in the browser had a border that flashed up with a colour or something when an Image was captured so you knew what was happening but hey ho with work with what we have.</DIV> <P>&nbsp;</P> <DIV>Now if you click the next button you can go to what looks like a pointless page but stick with us there is a reason for this, but click the `Open project in custom vision` link in the centre of the page, this will open the customer vision project and there will be a few agree boxes to check on the way but then you should have your project open.</DIV> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-16-01-03.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293647iAF30822038DE90EC/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-16-01-03.png" alt="2021-07-05-16-01-03.png" /></span></P> <P>&nbsp;</P> <DIV>As you can see there are 2 projects in my Custom Vision and the left one is the new one we just created with me holding one of the Aircraft Horizontal Stabilizer Ribs which goes on the front of the Horizontal Stabilizer. Click the project to open it and lets look at the images we managed to grab.</DIV> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-16-03-50.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293648i3DD9F7E00DF320CA/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-16-03-50.png" alt="2021-07-05-16-03-50.png" /></span></P> <DIV>&nbsp;</DIV> <DIV><A href="#" target="_blank" rel="noopener">Wiki - Stabilizer (aeronautics)</A></DIV> <P>&nbsp;</P> <H2>Tagging the Images</H2> <P>&nbsp;</P> <DIV>You will look and at first (Like me!) wonder where all those images went but don't panic they are just `Untagged` so on the left menu click the Untagged button to view them all.</DIV> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-16-08-00.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293649iE9B2EC9967829349/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-16-08-00.png" alt="2021-07-05-16-08-00.png" /></span></P> <H2>Clean up the images</H2> <P>&nbsp;</P> <DIV>First I like to go through the images and remove all the either poor quality or clearly nothing to see here images, you can do this by hovering over the bottom right of the image, you will see a white tick appear for you to click. Once clicked it turns blue to show it's selected repeat for all the images you want to remove. 
Once complete at the top of the page is a `Delete` button that will delete them all for you.</DIV> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-16-11-49.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293650iF3B3A84C757883AB/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-16-11-49.png" alt="2021-07-05-16-11-49.png" /></span></P> <P>&nbsp;</P> <DIV>The next part is sadly rather laborious and boring so I hope you have a fresh cup of <A href="#" target="_blank" rel="noopener">IoTLive</A>&nbsp;as this can take a while.</DIV> <P>&nbsp;</P> <DIV>So select the image and then using you mouse hover over the part you are interested in within the image and you hopefully should see a bounding box around it to select. Once selected you will see a Text Entry appear so that you can give it a Tag name, this name will be what is shown when the Percept views this part and decides to show the tag name on the screen as part of look what I found bounding box, so pick a good name. As I am tagging aircraft parts I am giving them the Aircraft Component reference from the drawings.</DIV> <P>&nbsp;</P> <DIV>If you don't get a bounding box on the part you want to select just Left mouse click and draw your own box.</DIV> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-16-18-08.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293651i9730E6269F9CF0E7/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-16-18-08.png" alt="2021-07-05-16-18-08.png" /></span></P> <DIV><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-16-19-35.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293652i7466449E1E78A9EC/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-16-19-35.png" alt="2021-07-05-16-19-35.png" /></span></DIV> <P>&nbsp;</P> <DIV>As you move to the next image using the Arrow to the right of the modal box on the screen the next image will appear and it's just a repeat of the process, however when you select the next area to tag the previous Tag names will appear so it's quicker to just click along through the images.</DIV> <P>&nbsp;</P> <DIV>When you have tagged all the images click the close `X` top right, you will see that you now don't have any untagged images so select the `Tagged` button so that you can see them all again.</DIV> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-16-23-01.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293653iB7F38FC1545CBC93/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-16-23-01.png" alt="2021-07-05-16-23-01.png" /></span></P> <H2>Now this is Important</H2> <P>&nbsp;</P> <DIV>You need a minimum of 15 images for each tag, in my case I only managed to capture 12 so I was a few short, so remember when I said before that the Azure Portal seemed to leave you hanging with that pointless page to select `Custom Vision` well this is where you need that.</DIV> <P>&nbsp;</P> <DIV>Go back to that browser tab (You didn't close it did you!) and then you can click the `Previous` button bottom left and again select another `Automatic Image Capture`. 
This seems tedious but it's the quickest and easiest way I have found to grab all the images in the correct format and sizes etc and upload them into the Custom Vision Project.</DIV> <P>&nbsp;</P> <DIV>So take another batch of images of that component and repeat the tagging process, 15 is the minimum number need for the training to take place ideally you want 30-40+ of each part/object from many directions in many lighting levels etc...</DIV> <P>&nbsp;</P> <H2>Training</H2> <P>&nbsp;</P> <DIV>Now you have more than 15 images hopefully closer to if not more than 40 images you can train your model, so there is a nice compelling big green `Train` button at the top of the screen. Give it a click and you will be asked what type of training I normally always select `Quick` and then go refresh that Cup&lt;T&gt; as this part takes a few minutes.</DIV> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-16-36-52.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293654i95541CC6BEAFB64C/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-16-36-52.png" alt="2021-07-05-16-36-52.png" /></span></P> <P>&nbsp;</P> <DIV>Once it's trained you should see a nice page with lots of high percentages like below, but don't be fooled it's not really 100% accurate but we can test it and see how good it really is.</DIV> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-16-44-40.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293655i5A2462615B2F54FC/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-16-44-40.png" alt="2021-07-05-16-44-40.png" /></span></P> <H2>Testing</H2> <P>&nbsp;</P> <DIV>Like all good developers we like to test and this is no different, so at the top of the page click the `Quick Test` button.</DIV> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-16-46-05.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293657iB5DA075DAC097EB5/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-16-46-05.png" alt="2021-07-05-16-46-05.png" /></span></P> <DIV>Sadly you do need to grab an image that is not already used so in this case I just use my Mobile phone to take an image and then copy to my PC using the awesome `Your Phone` feature in Windows or if you still have the browser tab open with the `Webstream` from the Percept you can do a screen clip from that browser. 
Only downside is that the bounding boxes from you as a Person may be over the image hence me preferring to using my mobile phone.</DIV> <P>&nbsp;</P> <DIV>As you can see when you give it an image it will show bounding boxes and the prediction rates for those boxes, you can use the slide to change the `Threshold` value so that you can hide the noise if there is any.</DIV> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-16-54-18.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293658i9EEF9188CF5162F9/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-16-54-18.png" alt="2021-07-05-16-54-18.png" /></span></P> <DIV>For a 2nd attempt with some noise in the background you can see that I had to move the slider all the way down and it was only 9.5% probability that it could identify the Rib, so this means the test has proven that more images are required and more training.</DIV> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-16-57-04.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293659i8A481570DDDEA094/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-16-57-04.png" alt="2021-07-05-16-57-04.png" /></span></P> <H2>Iterate and Improve</H2> <P>&nbsp;</P> <DIV>The process is very simple to set-up and train a customer vision model with the Azure Percept, and as you can see with a component from the aircraft on the very first training run it was fairly good with the white background but poor with all the noise.</DIV> <P>&nbsp;</P> <DIV>So I went on and spent some time training with even more photos and even added in the next RIB along in the build so there were 2 parts similar.</DIV> <P>&nbsp;</P> <DIV>However now that you have a trained model that is improving when you take the images and your tagging them you will see at the top of the Tagging Dialog a slider for `Suggested Objects On` if you turn this on and give it a second or two it should find your object and bounding box it with a big blue `Confirm Suggested Objects` button to click. If this doesn't work repeat the old way of selecting or drawing the bounding box until it has learned enough.</DIV> <P>&nbsp;</P> <DIV>The advantage of using the suggested was is that you can creep the slider up and it's a form of testing for the images and the Model as well, so you can see it improving over time.</DIV> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-18-22-07.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293660iEEB4ECE4713A54EA/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-18-22-07.png" alt="2021-07-05-18-22-07.png" /></span></P> <P>&nbsp;</P> <DIV>When you have tags a lot more images and you are confident you have a good selection you can improve the trained model by giving it more resources and more time to learn. 
You do this by selecting `Advance Training` after clicking the Green train button, this will open the dialog some more and show you a slider where you can allocate the time you wish to train the model for and even have it send you an email when it's done.</DIV> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-21-23-30.png" style="width: 941px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293661i04E52C651D10C854/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-21-23-30.png" alt="2021-07-05-21-23-30.png" /></span></P> <DIV><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-21-24-23.png" style="width: 893px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293662i5FEA0B6EEEA0DE7A/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-21-24-23.png" alt="2021-07-05-21-24-23.png" /></span></DIV> <P>&nbsp;</P> <H2>Final step</H2> <P>&nbsp;</P> <DIV>Now that we have a model that we have Trained, Tested and Iterated with to a point that we feel comfortable sending down to the Edge and using in Production we can go back to the Azure Portal and Percept Studio Page to finish things off.</DIV> <P>&nbsp;</P> <DIV>The last Tab is for `Evaluate and Deploy` and it's here that we send the model to the device so that it can be used without the connection to Azure, yes that's right it can work away at the Edge even with a slow or non-existent connection.</DIV> <P>&nbsp;</P> <DIV>Just select the Device and the Iteration of the Trained model you wish to use and then tap `Deploy` once that is done you can open the Web stream to the device and you will notice that there will be a message for a minute or two on the first load where it shows `Loading Model` after this it will show live tagged images when you hold the parts in front of the camera.</DIV> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-21-19-13.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293663iD69A5BB6D2702359/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-21-19-13.png" alt="2021-07-05-21-19-13.png" /></span></P> <DIV><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-21-21-43.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293664i4EB952F1A74F30D6/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-21-21-43.png" alt="2021-07-05-21-21-43.png" /></span></DIV> <P>&nbsp;</P> <H2>Results</H2> <P>&nbsp;</P> <DIV>You will see that when I am holding a Part in front of the Percept camera it is correctly identifying the part and the last 3 digits are it's confidence that it's found the correct part and as you can see with just 35 and 58 images of the two parts I trained it's already very impressive, but for production you would want more images in different lighting levels etc.</DIV> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-05-21-30-46.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293665i33D8367B38CFD040/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-21-30-46.png" alt="2021-07-05-21-30-46.png" /></span></P> <DIV><span class="lia-inline-image-display-wrapper lia-image-align-inline" 
image-alt="2021-07-05-21-31-45.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293666i20E8E211EB09E6A7/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-05-21-31-45.png" alt="2021-07-05-21-31-45.png" /></span></DIV> <H2>Conclusion</H2> <P>&nbsp;</P> <DIV>Building this blog and training the models took a few hours but most of that was off doing something else while the training system worked away, if I'm honest I probably only spent maybe an hour actually working on it and have some very impressive results.</DIV> <P>&nbsp;</P> <DIV>Also now you have a trained model it's not restricted to the Percept devices, you can download the model and use it elsewhere like maybe a Xamarin/MAUI app on a mobile device so that engineers out in the field can have the Parts Checker with them the uses become endless. If you want to read more about this there is a fantastic guest <A href="#" target="_blank" rel="noopener">Blog Post</A>&nbsp;by Daniel Hindrikes and Jayme Singleton that's well worth a look.</DIV> <P>&nbsp;</P> <DIV>I do hope you enjoyed this long walk through all the set-ups to using the Percept Vision system and enjoy playing with you Vision Models, if you have any questions just reach out on Twitter or LinkedIn.</DIV> <P>&nbsp;</P> <DIV>Happy Coding, I'm off back to building the Sling.</DIV> <P>&nbsp;</P> <DIV>Cliff.</DIV> Wed, 07 Jul 2021 20:27:48 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/using-azure-percept-to-build-an-aircraft-part-checker/ba-p/2518329 Clifford Agius 2021-07-07T20:27:48Z Azure Percept Over-the-Air updates https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-over-the-air-updates/ba-p/2522345 <H2>Why Update?</H2> <P>&nbsp;</P> <P>In the last <A href="#" target="_blank" rel="noopener">Blog Post</A> we looked at the unboxing and first setup of the Azure Percept system and got our very first AI model running. However out of the box my Percept DK showed that it was running an outdated version of the OS Software. This blog explains how you can check your device and then if needed set-up its device twin to automate the updating of your Azure Percept system(s).</P> <P>&nbsp;</P> <P>The idea behind using Azure IoT Hub and device twins is so that we can automate the updates and carry them out remotely. After all the Azure Percept DK is designed to be an Edge device so you don't want to have to schlep to the outer edges of your system/installation to plug in a USB cable now do you? 
No, we all know sitting at your warm comfy desk with a Cup&lt;T&gt; in hand is much nicer.</P> <P>&nbsp;</P> <H2>First Let's Check if your device needs an Update!</H2> <P>&nbsp;</P> <P>Open you Azure Percept Studio in the <A href="#" target="_blank" rel="noopener">Azure Portal</A> and select devices on the left pane and then select your Azure Percept Device.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-28-12-02-28.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293925i9C8B320CA9F88A10/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-28-12-02-28.png" alt="2021-06-28-12-02-28.png" /></span></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-28-12-03-48.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293926i036DA730136DF248/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-28-12-03-48.png" alt="2021-06-28-12-03-48.png" /></span></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-28-12-07-11.png" style="width: 903px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293928i9C61196AEDCBB3D5/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-28-12-07-11.png" alt="2021-06-28-12-07-11.png" /></span></P> <P>Now that you have your device selected you can check the `SW Version` and if you have the `Updates Available` icon and text next to it you can carry out an Update. Clicking the link will take you out to the <A href="#" target="_blank" rel="noopener">Docs.Microsoft.Com</A> page that explains how to carry out an update.</P> <P>&nbsp;</P> <P>Of course you can follow the docs or continue with this blog as I have screen shots. ;)</img> Also there are a few little Gotcha's along the way and I point those out.</P> <P>&nbsp;</P> <H2>What version of the Update Do you Need?</H2> <P>&nbsp;</P> <P>There are 2 ways to check this the easiest as you are already there is to look at the `Model` just above the `Updates Available` link so in my case `APDK-101`</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-28-12-13-45.png" style="width: 857px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293929i1BFA6E9CACD55940/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-28-12-13-45.png" alt="2021-06-28-12-13-45.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>Or you can check the device's twin, which is nice as you can see all the details about your device that it is reporting back to IoT Hub. 
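<P>&nbsp;</P> <P><SPAN>If you'd rather stay in a terminal, the same reported properties can be pulled with the Azure CLI. This is a quick sketch that assumes the azure-iot extension is installed and uses placeholders for the hub and device names:</SPAN></P> <P>&nbsp;</P> <LI-CODE lang="bash"># One-off setup: az extension add --name azure-iot
az iot hub device-twin show --hub-name &lt;your IoT Hub name&gt; --device-id &lt;your Percept device id&gt;</LI-CODE> <P>&nbsp;</P> <P><SPAN>In the portal, the same information is only a couple of clicks away. </SPAN></P>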
To do this click the `Open Device in IoT Hub` at the top of the blade.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-28-12-15-12.png" style="width: 874px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293930i03C835289C7E3102/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-28-12-15-12.png" alt="2021-06-28-12-15-12.png" /></span></P> <P>&nbsp;</P> <P>Click the Device Twin link at the top to open the Device Twin for this device.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-28-12-16-16.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293931i0767B27A7C0FF347/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-28-12-16-16.png" alt="2021-06-28-12-16-16.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>Under the `Properties -&gt; Reported -&gt; Model` you can see the model of your device and also the SWVersion that is currently loaded onto the device.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-28-12-18-07.png" style="width: 721px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293932i02FB0E9175D70F3F/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-28-12-18-07.png" alt="2021-06-28-12-18-07.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <H2>Which version of the Software do we need?</H2> <P>&nbsp;</P> <P>Now that we know the version that is loaded onto our device and what version of the device we have on our system we can look to download the correct update file. Head out to the <A href="#" target="_blank" rel="noopener">documentation</A> page and scroll down until you see the list of available updates. In the left column find the model that matches your device model you found earlier and then from the download links column you need to download 2 files. The first is the OTA Manifest and the second is the OTA Update Package. You don't need the USB version as we are using the Over The Air updates.</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="basic">You can do USB updates but doing via OTA means that it's easier to update later as well...​</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-28-12-20-52.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293933i9D44C6291F8CCA6F/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-28-12-20-52.png" alt="2021-06-28-12-20-52.png" /></span></P> <P>&nbsp;</P> <H2>Setup the Device Update for IoT Hub</H2> <P>&nbsp;</P> <P>Back on the IoT Hub for our devices on the left pane scroll down to the `Automatic Device Management` section and open the `Device Updates` blade.</P> <P>&nbsp;</P> <P>The <A href="#" target="_blank" rel="noopener">OTA Update Docs Page</A> was a little wrong at the time I went through it myself as it said `Select the Updates` tab but first you need to sign up for the updates to work. 
Instead, head to the Getting started tab and click the `Sign up for Device Update for IoT Hub` button.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-28-16-07-49.png" style="width: 909px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293934i803143DA4A9CCF8D/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-28-16-07-49.png" alt="2021-06-28-16-07-49.png" /></span></P> <P>&nbsp;</P> <P>It will ask you to create an account with a Resource Group etc., but it's all free (well, at the time of writing it is anyway...). The important parts here are that 'Location' only has (at the time of writing!) 3 global locations, with more coming soon, and that you make sure you tick `Assign Device Update Administrator Role`, as you will need this role later in the process. Remember to click Review and Create at the bottom.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-28-16-13-35.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293936i47066304F02DF0BD/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-28-16-13-35.png" alt="2021-06-28-16-13-35.png" /></span></P> <P>&nbsp;</P> <P>Go to the new resource you created for your Device Update for IoT Hub, go into the `Instances` blade and then click `Create` at the top.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-11-09-51-22.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293939iB4E29BF00A0E7A38/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-11-09-51-22.png" alt="2021-06-11-09-51-22.png" /></span></P> <P>&nbsp;</P> <P>When creating the instance, give it a meaningful name and then from the drop down select the IoT Hub that you created earlier for your Percept device. In my case the one I created was CAS-Percept-IOTHub. Obviously, don't forget to click Create at the bottom. This will then link the new Device Updates instance to the IoT Hub that has all your devices.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-11-09-54-49.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293940iD90A144A440672BB/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-11-09-54-49.png" alt="2021-06-11-09-54-49.png" /></span></P> <P>&nbsp;</P> <P>This process can take a while (it was over 5 minutes for me, so I had time to go make an IoTEA), but let it whirl away and eventually you will see the new instance appear in the list. The `Provisioning State` may say `Accepted`, and if you click refresh it may say `Creating`; this means it's still working, so give it a bit longer.
What you are looking for, after clicking refresh, is for this state to say `Succeeded`.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-11-10-00-14.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293941i1AA692A074550BDE/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-11-10-00-14.png" alt="2021-06-11-10-00-14.png" /></span></P> <P>&nbsp;</P> <H2>Configure the IoT Hub</H2> <P>&nbsp;</P> <P>Now this bit had me for a moment, as it's not very well explained: you need to select the new instance but not go into it. Instead, tick the checkbox to the left of the Instance Name and then click `Configure IoT Hub` so that we can set up the instance.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-11-10-02-59.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293942iF146B2DF9A28CA8D/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-11-10-02-59.png" alt="2021-06-11-10-02-59.png" /></span></P> <P>&nbsp;</P> <P>You will notice that when the blade opens the info box at the top changes a few times as things update. Check you're happy with the details, then tick `I Agree to make these changes` and click Update at the bottom.</P> <P>&nbsp;</P> <LI-CODE lang="basic">If you are using a Free tier of Azure IoT Hub, the allowed number of message routes is limited to 5. Device Update for IoT Hub needs to configure 4 message routes to work as expected.</LI-CODE> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-11-10-04-58.png" style="width: 850px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293943i13C61E21B977245F/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-11-10-04-58.png" alt="2021-06-11-10-04-58.png" /></span></P> <P>&nbsp;</P> <H2>Update Security</H2> <P>&nbsp;</P> <P>We have now created our Update Account as well as an Update Instance, but we still need to do the security part by giving it the access control it needs.</P>
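<P>&nbsp;</P> <P>If you would rather script this part than click through the portal, the same role assignment can be made with the Azure CLI. This is only a rough sketch with placeholder values: the assignee is your own user (or Azure AD group), the scope is the resource ID of the Device Update account you just created (copy it from the portal if in doubt), and the role is one of the built-in Device Update roles listed below.</P> <P>&nbsp;</P> <LI-CODE lang="basic">az role assignment create --assignee "you@example.com" --role "Device Update Administrator" --scope "/subscriptions/&lt;subscription-id&gt;/resourceGroups/&lt;resource-group&gt;/providers/Microsoft.DeviceUpdate/accounts/&lt;account-name&gt;"</LI-CODE> <P>&nbsp;</P>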
<P>To do it through the portal, select the `Access Control (IAM)` blade near the top of the left pane and then `Role Assignments`.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-28-16-58-41.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293944i6847AF57811AF566/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-28-16-58-41.png" alt="2021-06-28-16-58-41.png" /></span></P> <P>&nbsp;</P> <P>Now click `Add` at the top of the pane and then, from the drop down, `Add Role Assignment`. In the new blade that appears you need to give a role; the docs suggest it can be any of the following:</P> <P>&nbsp;</P> <LI-CODE lang="basic">Device Update Administrator
Device Update Reader
Device Update Content Administrator
Device Update Content Reader
Device Update Deployments Administrator
Device Update Deployments Reader</LI-CODE> <P>&nbsp;</P> <P>I will leave you to pick one, but remember that with security it's always best to assign the least privileges needed to achieve the task and no more. This means that if the account is compromised the damage is limited. I selected the last option, as it's the most limiting and allows read access only.</P> <P>&nbsp;</P> <P>Now you need to assign this access to a User or Azure AD Group. In my case, as this is a demo set-up, I assigned it to myself. Click Save and, hey presto, you are now ready.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-11-10-17-01.png" style="width: 625px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293945i0667B303A8DADF76/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-11-10-17-01.png" alt="2021-06-11-10-17-01.png" /></span></P> <P>&nbsp;</P> <H2>Back to Device Update for IoT Hub and the Docs!</H2> <P>&nbsp;</P> <P>So now we have the Device Updates part set up, we can return to the IoT Hub and actually complete the device update. This time when we go into the Device Updates blade we will not be presented with the `Getting started` pane, as that is all set up and completed, so we can now go to the `Updates` tab and click `Import a new Update`.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-11-10-26-49.png" style="width: 756px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293946i58032DEFD70D19C2/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-11-10-26-49.png" alt="2021-06-11-10-26-49.png" /></span></P> <P>&nbsp;</P> <P>Remember the files we downloaded earlier? Now is the time to use them: in the 'Import a new update' blade, select the files in the appropriate boxes for Manifest and Update Files.
We also need to create a container within Azure to hold these files so that the devices can download them, so click the Storage Container box as well to create a new one.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-11-10-42-00.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293947i0687DB5B1C24511F/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-11-10-42-00.png" alt="2021-06-11-10-42-00.png" /></span></P> <P>&nbsp;</P> <P>Click '+ Storage Account' at the top to create a new storage account and give it a meaningful name. I changed the 'Account Kind' to StorageV2 as well, and set the location to the same region as my IoT Hub so it was all together. Don't forget to click OK at the bottom for it to create.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-11-10-44-02.png" style="width: 993px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293948iE1BF41DF1483A82F/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-11-10-44-02.png" alt="2021-06-11-10-44-02.png" /></span></P> <P>&nbsp;</P> <P>When it's created you will be taken back to the list of your storage accounts, so select the one you just created from the list. Now you need to add a container to actually hold the files within the Storage Account.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-11-10-47-03.png" style="width: 624px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293949i82D98916CA18F0D4/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-11-10-47-03.png" alt="2021-06-11-10-47-03.png" /></span></P> <P>&nbsp;</P> <P>Once it's created and you are shown the list of containers in the new Storage Account, select it and click Select at the bottom of the page. You are then returned to the `Import a new update` blade, where you can click the `Import Update` button at the bottom of the page.</P> <P>&nbsp;</P> <LI-CODE lang="basic">You may be asked to add a Cross Origin Request (CORS) rule to access the selected storage container. Select Add rule and retry to proceed.</LI-CODE> <P>&nbsp;</P> <P>This import again takes a good 3-5 minutes, so that cup of tea might be empty; go stretch your legs and make a new one, it's OK, I'll wait here for you...</P> <P>&nbsp;</P> <P>You will know when it's done as the Status will show `Succeeded` (you will need to click the `Refresh` button a few times), and then you can go into `Ready to deploy` to check it's there and ready.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-28-18-50-50.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293950iB210163BFE53A5D2/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-28-18-50-50.png" alt="2021-06-28-18-50-50.png" /></span></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-28-18-52-15.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293951i443F4E1B3566C623/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-28-18-52-15.png" alt="2021-06-28-18-52-15.png" /></span></P> <P>&nbsp;</P> <H2>Create an Update Group</H2> <P>&nbsp;</P> <P>Now, it's not obvious at first, as you may (like me) only have one device in front of you, but the Azure setup is designed for you to have tens if not hundreds of devices that need updating, hence the OTA system. This means that you may want to create groups so that you can update a specific set of devices within your infrastructure, maybe for testing, or for protection in case the update fails, for example.</P> <P>&nbsp;</P> <P>It's the creation of the update groups that is next, so let's get started...</P> <P>&nbsp;</P> <P>Head over to your IoT Hub, select the `IoT Edge` blade and then your device (yes, you have to do this one by one here for the moment!), and then at the top of the next pane select the `Device Twin`.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-28-18-57-40.png" style="width: 980px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293952i4DB756C0B4E7BE75/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-28-18-57-40.png" alt="2021-06-28-18-57-40.png" /></span></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-28-18-59-22.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293953i5390237C5C1A8C81/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-28-18-59-22.png" alt="2021-06-28-18-59-22.png" /></span></P> <P>&nbsp;</P> <H2>Adding a Device Twin Tag</H2> <P>&nbsp;</P> <P>Now you should be able to see your Device Twin document, and you will need to add a new tag to it. You can read more about twin tags <A href="#" target="_blank" rel="noopener">here</A>.</P> <P>&nbsp;</P> <LI-CODE lang="basic">Tags - A section of the JSON document that the solution backend can read from and write to. Tags are not visible to device apps. 
NOTE - The `tags` section NEEDS to be lowercase for this to work...</LI-CODE> <P>&nbsp;</P> <P>You will need to add a key of `ADUGroup` into this tags section, and for the value the name you want to call your group, so in my example it's `CAS-Percept-Group-1`. As always, don't forget to click the Save button at the top. I have done it more than once and wondered why it's not working, and it's down to me not clicking Save again...</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-06-28-19-13-40.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293954i540DE59C9D213DC3/image-size/large?v=v2&amp;px=999" role="button" title="2021-06-28-19-13-40.png" alt="2021-06-28-19-13-40.png" /></span></P> <P>&nbsp;</P> <P>Now we have the Device Twin tag set, we can go back to the `Device Updates` blade in the IoT Hub and this time select the `Groups` tab at the top. This time you will see a device counted against `Not Yet Grouped`, showing that the Device Twin tag has been picked up by the system and your device is awaiting a group assignment. Click the `Add` button so that we can add a group assignment.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-07-09-20-37.png" style="width: 878px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293955iAB20A320062A1DE7/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-07-09-20-37.png" alt="2021-07-07-09-20-37.png" /></span></P> <P>&nbsp;</P> <P>On the new page, drop down the list and you will see the group name you used in the Device Twin. This shows the power of the twins system: you can add these twin tags when you enrol devices using a service like DPS (Device Provisioning Service), and if the group has already been created the device will add itself to that group, ready for a group update when needed. Again, don't forget the `Create Group` button at the bottom of the page.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-07-09-23-50.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293956i4490427CCD5BFDD3/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-07-09-23-50.png" alt="2021-07-07-09-23-50.png" /></span></P> <P>&nbsp;</P> <H2>Deploying the Update</H2> <P>&nbsp;</P> <P>After creating the group you will be returned to the Groups page, and it will show that the device has been assigned to the group. Also, if the update package you uploaded previously is newer than the package on the device, it will show that there is an update available.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-07-09-27-40.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293957iD0D0154002243D9C/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-07-09-27-40.png" alt="2021-07-07-09-27-40.png" /></span></P> <P>&nbsp;</P> <P>If you click the `Available Updates` link for the group it will take you through to the `Deployments` page so that you can deploy the update.</P>
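<P>&nbsp;</P> <P>One quick aside before we deploy: if you have more than a handful of devices to tag, you don't have to edit each Device Twin by hand in the portal. The same `ADUGroup` tag can be set from code with the azure-iot-hub Python package, using the same registry manager pattern as the earlier sketch. Again, the connection string, device id and group name here are placeholders for your own values:</P> <P>&nbsp;</P> <LI-CODE lang="python"># pip install azure-iot-hub
# Minimal sketch: add the ADUGroup tag to a device twin from code.
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import Twin

# Placeholders - swap in your own hub connection string, device id and group name.
IOTHUB_CONNECTION_STRING = "&lt;your IoT Hub connection string&gt;"
DEVICE_ID = "&lt;your Percept device id&gt;"
GROUP_NAME = "CAS-Percept-Group-1"

registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)

# Fetch the current twin so its etag can be passed along with the patch.
twin = registry_manager.get_twin(DEVICE_ID)

# Note the lowercase 'tags' section, exactly as in the portal editor.
twin_patch = Twin(tags={"ADUGroup": GROUP_NAME})
registry_manager.update_twin(DEVICE_ID, twin_patch, twin.etag)</LI-CODE> <P>&nbsp;</P> <P>Loop that over a list of device ids and you have tagged a whole fleet; the point is simply that group membership is driven entirely by that one twin tag.</P>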
<P>Back on the Deployments page, you can either use `Deploy Now` or schedule a deployment for a date and time of your choosing, maybe an overnight update so that production isn't affected, for example. Click `Create Deployment` at the bottom of the page and you're taken back to the Groups pane after the deployment has been created.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-07-09-29-31.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293958i2E3AE27254C19115/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-07-09-29-31.png" alt="2021-07-07-09-29-31.png" /></span></P> <P>&nbsp;</P> <H2>Monitoring the Deployment</H2> <P>&nbsp;</P> <P>It seems a little anti-climactic: the deployment has been created and it's scheduled, but has it happened? What's happening? How can you monitor it? Well, if you click the `Deployments` tab at the top of the pane you can then click your newly listed deployment (it's not obvious that you're supposed to click it...) and you will be shown a new pane with the details of the deployment for you to monitor. Here you can see that there is 1 in progress, and once it's complete it will show whether it succeeded or failed etc. It takes a while for the update to actually happen, as it needs to not only send the files to the device, but the files are then loaded alongside the currently running OS and the device needs to switch over to them. So, now you have reached this point, go and celebrate with a nice cup of IoTea and maybe that chocolate biscuit you were saving.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2021-07-07-09-34-15.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/293959i6AD6A3408042DE0C/image-size/large?v=v2&amp;px=999" role="button" title="2021-07-07-09-34-15.png" alt="2021-07-07-09-34-15.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <H2>Conclusion</H2> <P>&nbsp;</P> <P>This has been a long post, much longer than the docs, but I like to show screenshots of what the pages look like, and I hope you have seen that I pointed out some of the little gotchas along the way as well. 
Personally, I think being able to use the OTA system to update devices through Azure IoT Hub is a game changer: no longer will a company need to dispatch a tech with a laptop to some remote, far-off part of the system to load the latest firmware from the dev team. It can instead be done by the team from the office, taking advantage of the already established, secure connection between the device and the cloud through Azure IoT Hub.</P> <P>&nbsp;</P> <P>Maybe at some point someone will write an Azure Pipelines deployment target so that the uploading of the packages can be automated as well; now wouldn't that be good?</P> <P>&nbsp;</P> <P>I hope this has helped, and if you have any questions you can comment here, reach out to me on <A href="#" target="_blank" rel="noopener">Twitter</A> or <A href="#" target="_blank" rel="noopener">LinkedIn</A>, or obviously reach out to the Microsoft IoT team as they would love to hear your ideas and feedback.</P> <P>&nbsp;</P> <P>Happy Coding</P> <P>&nbsp;</P> <P>Cliff.</P> Sat, 24 Jul 2021 00:36:21 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-over-the-air-updates/ba-p/2522345 Clifford Agius 2021-07-24T00:36:21Z Azure Percept Audio – Custom Keywords and Commands https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-audio-custom-keywords-and-commands/ba-p/2485199 <P>In the latest in my series of blog posts on the Azure Percept Developer Kit, and specifically around the Audio System on a Module (SoM), we look at how we can create and use Custom Keywords and Commands with the Audio SoM.</P> <P>&nbsp;</P> <H2>Previously</H2> <P>&nbsp;</P> <P>In the previous post - <A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-audio-first-steps/ba-p/2485000" target="_blank" rel="noopener">Azure Percept Audio - First Steps</A> - we got started with the <A href="#" target="_blank" rel="noreferrer noopener">Azure Percept Audio System on a Module</A> (SoM).</P> <P>&nbsp;</P> <P>We deployed a sample Hospitality application and saw how we could issue commands to the Percept DK, how it would respond, and how the Sample Application would simulate the effects of the commands on screen.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Azure-Percept-Screen-Time-0_17_5329-Large-1" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291265i028DB1A974F815D3/image-size/large?v=v2&amp;px=999" role="button" title="Azure-Percept-Screen-Time-0_17_5329-Large-1" alt="Azure-Percept-Screen-Time-0_17_5329-Large-1" /></span></P> <P>&nbsp;</P> <P><SPAN>In this post we’ll take the sample further by training our own Custom Keyword and creating a Custom Command.</SPAN></P> <P>&nbsp;</P> <H2><SPAN>Custom Keywords</SPAN></H2> <P>&nbsp;</P> <P>For the Azure Percept Audio, a Keyword is the word that the Percept DK “listens” for in order to begin listening for commands. 
Sometimes referred to as a “Wake-Word”, this word defaults to “Computer” in the Hospitality Sample.</P> <P>&nbsp;</P> <P>We have access to some pre-trained keywords deployed along with the sample, allowing us to choose from;</P> <P>&nbsp;</P> <UL> <LI>Assistant</LI> <LI>Abigail</LI> <LI>Computer</LI> <LI>Jayden</LI> </UL> <P>Which we can set by pressing the “change” link next to the “Custom Keyword” item below the toolbar area;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="PerceptImages-CustomKeyword-Cropped" style="width: 488px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291266iB06D6ABA7CCA83DE/image-size/large?v=v2&amp;px=999" role="button" title="PerceptImages-CustomKeyword-Cropped" alt="PerceptImages-CustomKeyword-Cropped" /></span></P> <P>&nbsp;</P> <P><SPAN>Hitting the Custom Keyword Change Link shows us the “Change custom keyword” flyout with the various options available to choose from.</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-25" style="width: 537px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291267i4801666437CE3605/image-size/large?v=v2&amp;px=999" role="button" title="image-25" alt="image-25" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>Selecting another custom keyword and pressing the “Save” button at the bottom of the dialog will update the Percept DK, and we can then wake the device with the new Keyword.</SPAN></P> <P>&nbsp;</P> <P><SPAN>Interestingly, I found that using the alternative keywords of Assistant and Abigail, allowed the Percept DK to wake faster than with the default of "Computer", I'm not sure why this is to be fair, but it's possibly due to the number of syllables in the word maybe?</SPAN></P> <P>&nbsp;</P> <H2><SPAN>Training our own Custom Keywords</SPAN></H2> <P>&nbsp;</P> <P><SPAN>In case none of the built in Wake Words are suitable, we can also train our own custom keywords to wake the device. 
We can do this directly from the Sample Application, allowing us to use a word of our own choosing;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="PerceptImages-CreateCustomKeyword-Cropped" style="width: 490px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291269i602E248CEF42C1DD/image-size/large?v=v2&amp;px=999" role="button" title="PerceptImages-CreateCustomKeyword-Cropped" alt="PerceptImages-CreateCustomKeyword-Cropped" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>Pressing the “Create Custom Keyword” button in the toolbar at the top of the page allows us to configure and train our own custom keyword;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-26" style="width: 857px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291270iA601318B1E7F4F72/image-size/large?v=v2&amp;px=999" role="button" title="image-26" alt="image-26" /></span></SPAN></P> <P><SPAN>Here we can enter the keyword of our choosing, select an Azure Speech Resource and a Language, and pressing save will begin the training process.</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-29" style="width: 732px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291271i6CC6430FED3460AB/image-size/large?v=v2&amp;px=999" role="button" title="image-29" alt="image-29" /></span></SPAN></P> <P>&nbsp;</P> <P>Bear in mind however, that the keyword must have between 2 and 40 syllables to be accepted;</P> <DIV class="wp-block-image has-lightbox lightbox-6 ">&nbsp;</DIV> <DIV class="wp-block-image has-lightbox lightbox-6 "><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-28" style="width: 749px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291272i03CCC889933D4B87/image-size/large?v=v2&amp;px=999" role="button" title="image-28" alt="image-28" /></span> <P>&nbsp;</P> &nbsp;<SPAN>Once the training is complete, a process which took my chosen keyword of Clifford (my co host on <A href="#" target="_blank" rel="noopener">IoTeaLive</A>!) 
only a few moments, we’re then able to hit the “Change” button next to the Custom Keyword and select our new Keyword from the list;</SPAN></DIV> <DIV class="wp-block-image has-lightbox lightbox-6 ">&nbsp;</DIV> <DIV class="wp-block-image has-lightbox lightbox-6 "><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-30" style="width: 537px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291274i79204EC6FBB9E1B6/image-size/large?v=v2&amp;px=999" role="button" title="image-30" alt="image-30" /></span></DIV> <DIV class="wp-block-image has-lightbox lightbox-6 "><SPAN>With our new Keyword selected and deployed to the Percept DK, we are then able to wake the device with our new Custom Keyword;</SPAN></DIV> <DIV class="wp-block-image has-lightbox lightbox-6 ">&nbsp;</DIV> <DIV class="wp-block-image has-lightbox lightbox-6 "><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-31" style="width: 732px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291280i5F6F8C726F674E05/image-size/large?v=v2&amp;px=999" role="button" title="image-31" alt="image-31" /></span></SPAN></DIV> <DIV class="wp-block-image has-lightbox lightbox-6 "><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-32" style="width: 359px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291281i1667123141818061/image-size/large?v=v2&amp;px=999" role="button" title="image-32" alt="image-32" /></span></SPAN> <P>&nbsp;</P> <P>As with the "Computer" keyword, I found that this wake word wasn't all that reliable either, but it wasn't too bad.</P> <H2><SPAN>Custom Commands</SPAN></H2> <P>&nbsp;</P> <P>As well as training our own Custom Keywords, we’re also able to create Custom Commands.</P> <P>&nbsp;</P> <P>Azure Percept Audio Commands are a natural language based commanding mechanism, allowing us to control the Percept DK using plain language commands.</P> <P>&nbsp;</P> <P>The sample Hospitality application comes with a set of pre-trained commands;</P> <P>&nbsp;</P> <UL id="block-f950894f-dfdc-4b9e-b3ba-08b40b14e5e0"> <LI>Turn on/off the lights</LI> <LI>Turn on/off the TV.</LI> <LI>Turn on/off the AC.</LI> <LI>Open/close the blinds.</LI> <LI>Set temperature to X degrees. (X is the desired temperature, e.g. 
75.)</LI> </UL> <P>&nbsp;</P> <P>These commands are all part of a Speech project specific to this Sample.</P> <P>&nbsp;</P> <P>We can create our own Custom Commands by clicking the “+ Create Custom Command” Button in the toolbar;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="PerceptImages-NewCommand-Cropped-1" style="width: 492px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291282i0EEF121048187107/image-size/large?v=v2&amp;px=999" role="button" title="PerceptImages-NewCommand-Cropped-1" alt="PerceptImages-NewCommand-Cropped-1" /></span></P> <P>&nbsp;</P> <P>This will show the “Create custom commands” flyout, where we can enter a Name and a Description for our new command.</P> <P>In our case we’ll create a custom command that turns a music playing device on and off.</P> <P>&nbsp;</P> <P>We can then choose the Language, Speech Resource, Luis Resource and Luis Authoring Resource, which I left at their defaults for this sample;</P> <P>&nbsp;</P> <H2><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-34" style="width: 602px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291287iF6004BD8EB1B829D/image-size/large?v=v2&amp;px=999" role="button" title="image-34" alt="image-34" /></span></H2> <P>&nbsp;<SPAN>Once we’re happy, we can hit the blue “Create” button at the bottom of the flyout, and the new Custom Command will be created;</SPAN></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="CustomCommandCreated" style="width: 718px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291289iEF1CFD01E728C53B/image-size/large?v=v2&amp;px=999" role="button" title="CustomCommandCreated" alt="CustomCommandCreated" /></span></P> <H2>Azure Speech Studio</H2> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noreferrer noopener">Azure Speech Studio</A> is an Azure tool that allows us to create and configure a set of Commands and Responses based on Natural Language Processing.</P> <P>&nbsp;</P> <P>Azure Speech is a service which brings together a set of Speech and Language based services from Azure under one hood.</P> <P>Azure Speech supports;</P> <P>&nbsp;</P> <UL> <LI>Speech-to-text</LI> <LI>Text-to-speech</LI> <LI>Speech Translation</LI> <LI>Voice Assistants</LI> <LI>Speaker Recognition</LI> </UL> <P>&nbsp;</P> <P>This service allows us to create a Command based on a plain language sentence. The Speech Service can then interpret the spoken command and return a response.</P> <P>&nbsp;</P> <P>The response can also be converted to speech and spoken back to the user.</P> <P>&nbsp;</P> <H2>Using Azure Speech Studio with Percept Audio</H2> <P>&nbsp;</P> <P>When we created the new custom command earlier, it didn’t add a command to the existing speech project from the sample; instead it created a new, empty Speech Project for us to work with.</P> <P>&nbsp;</P> <P>As such, we can’t simply start using our new Command; we first need to configure the command using Speech Studio. 
I didn't find this particularly obvious, but it is <A href="#" target="_blank" rel="noopener">made clear in the docs</A> to be fair.</P> <P>&nbsp;</P> <P>We can navigate to speech studio by going to<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noreferrer noopener">speech.microsoft.com</A></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-36" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291290iECD664FF334E7356/image-size/large?v=v2&amp;px=999" role="button" title="image-36" alt="image-36" /></span></P> <P>&nbsp;</P> <P><SPAN>After we sign in, we’re presented with the welcome popup;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-35" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291291iF132F6D2CF738F4C/image-size/large?v=v2&amp;px=999" role="button" title="image-35" alt="image-35" /></span></SPAN></P> <P>&nbsp;</P> <P>Closing the popup, we’re shown a selection of options available to create various project types.</P> <P>&nbsp;</P> <P>Beginning with “<STRONG><EM>Speech-to-text</EM></STRONG>” which includes Real-Time Speech-to-text, Custom Speech, Pronunciation Assessment;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-38" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291292i70DFD9EF249EF46F/image-size/large?v=v2&amp;px=999" role="button" title="image-38" alt="image-38" /></span></P> <P>&nbsp;</P> <P><SPAN>We then have “</SPAN><STRONG><EM>Text-to-speech</EM></STRONG><SPAN>” which has Voice Gallery, Custom Voice, Audio Content Creation;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-39" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291293i9BB93414C7D2D22A/image-size/large?v=v2&amp;px=999" role="button" title="image-39" alt="image-39" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>Finally, we have the “<STRONG><EM>Voice Assistant</EM></STRONG>” options of Custom Keyword and Custom Commands;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-40" style="width: 888px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291294i7DA4EBF21E418A75/image-size/large?v=v2&amp;px=999" role="button" title="image-40" alt="image-40" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>At the top of the page should be listed our existing Speech Projects. However, there appears to be an issue with Speech Studio at the time of writing, which means our projects aren’t listed sadly. 
</SPAN></P> <P>&nbsp;</P> <P><SPAN>However, your mileage may vary of course, so I thought it best to include this section so it makes sense later!</SPAN></P> <P>&nbsp;</P> <H2><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-37" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291295i1BA77F6DA86131FA/image-size/large?v=v2&amp;px=999" role="button" title="image-37" alt="image-37" /></span></SPAN></H2> <P>&nbsp;</P> <H2><SPAN>Accessing Speech Projects via Azure Percept Studio</SPAN></H2> <P>&nbsp;</P> <P>Another way to access the Custom Speech Projects associated with our Percept is to use Azure Percept Studio.</P> <P>&nbsp;</P> <P>To get back to Percept studio you can either search for it in the portal, or you can navigate directly to&nbsp;<A href="#" target="_blank" rel="noreferrer noopener">portal.azure.com/#blade/AzureEdgeDevices/Main/overview</A><SPAN>&nbsp;which&nbsp;</SPAN>will take us to the Percept Studio overview page.</P> <P>&nbsp;</P> <P>From there we can click the “<STRONG><EM>Speech</EM></STRONG>” item under “<STRONG><EM>AI Projects</EM></STRONG>” in the menu on the left, before clicking the “<EM><STRONG>Commands</STRONG></EM>” tab to show all of the Custom Commands we’ve created.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-41" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291296i94A3AF216861B0A2/image-size/large?v=v2&amp;px=999" role="button" title="image-41" alt="image-41" /></span></P> <P>&nbsp;</P> <P>You should see at least two Speech Projects listed there. In my case, the first is the project associated with Sample we spun up in the<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noreferrer noopener">previous post</A>, the second is the new Custom Command we just created above.</P> <P>&nbsp;</P> <P>Clicking on the Custom Command Speech Project we want to work with, in our case the “PlayMusic-speech” project, takes us to Speech Studio;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="AzureSpeechStudio" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291297iAABDE7058B465D76/image-size/large?v=v2&amp;px=999" role="button" title="AzureSpeechStudio" alt="AzureSpeechStudio" /></span></P> <P>&nbsp;</P> <P>Speech Studio is split up into three primary areas;</P> <P>&nbsp;</P> <P>On the left is the primary Project Navigation area, with the list of Commands, Web Endpoints and Project Settings.</P> <P>&nbsp;</P> <P>In the centre pane, we have a contextual area, primarily this is where we can select to configure the Example Sentences for our commands, create or edit Parameters, configure Completion Rules and create and configure Interaction Rules.</P> <P>&nbsp;</P> <P>On the far right is where we Enter our Custom Command Sentences and Parameters etc.</P> <P>&nbsp;</P> <H2>Creating a new Custom Command</H2> <P>&nbsp;</P> <P><SPAN>We can now create a Custom Command for our Percept to action by hitting the “+ Command” button;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SpeechStudio-AddCommand-1" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291299i6A055EA74C130AF7/image-size/large?v=v2&amp;px=999" role="button" title="SpeechStudio-AddCommand-1" 
alt="SpeechStudio-AddCommand-1" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>This will show the “New command” dialog;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SpeechStudio-NewCommandDialog" style="width: 909px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291300i15271139A5A662F7/image-size/large?v=v2&amp;px=999" role="button" title="SpeechStudio-NewCommandDialog" alt="SpeechStudio-NewCommandDialog" /></span></SPAN></P> <P>&nbsp;</P> <P>We can enter a name for our command here.. This isn’t the word(s) we’ll be using to invoke the command, but simply a name for the command. As the dialog says, this shouldn’t contain any spaces.</P> <P>&nbsp;</P> <P>I’ve chosen “<STRONG><EM>TurnOnMusic</EM></STRONG>” as the Command Name here, with a view to allowing this command to simply turn music on to begin with.</P> <P>We’ll expand upon this functionality later with parameters.</P> <P>&nbsp;</P> <P>Once we’ve hit the “Create” button, we’re shown the “Example Sentences” pane on the righthand side of the screen;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SpeechStudio-ExampleSentences" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291301i355B9E1DCB6B417E/image-size/large?v=v2&amp;px=999" role="button" title="SpeechStudio-ExampleSentences" alt="SpeechStudio-ExampleSentences" /></span></P> <P>&nbsp;</P> <P>We can now enter the sentences we expect this particular command to react to. Each sentence should be on its own line.</P> <P>&nbsp;</P> <P>We’ll go with the following sentences, all of which are aimed at Playing some Music;</P> <P>&nbsp;</P> <UL> <LI>Turn on the music</LI> <LI>Turn music on</LI> <LI>Turn on music</LI> <LI>Turn the music on</LI> <LI>Turn the stereo on</LI> <LI>Turn on the stereo</LI> <LI>Play some music</LI> <LI>Play me some music</LI> <LI>Play music</LI> <LI>Play the music</LI> </UL> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SpeechStudio-TurnOnSentences" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291302i036BEBA0209A886E/image-size/large?v=v2&amp;px=999" role="button" title="SpeechStudio-TurnOnSentences" alt="SpeechStudio-TurnOnSentences" /></span></P> <P>&nbsp;</P> <H2>Adding a "Done" Response</H2> <P>&nbsp;</P> <P>We can now add a response for when our command completes. 
Without this, if we try to execute the command in the test environment, we won’t get any confirmation that the command was successful.</P> <P>&nbsp;</P> <P>If we click in the “Done” item in the centre pane we’ll be shown the “Completion rules” section;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SpeechStudio-DoneButton" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291304i6478EDED0F757A20/image-size/large?v=v2&amp;px=999" role="button" title="SpeechStudio-DoneButton" alt="SpeechStudio-DoneButton" /></span></P> <P>&nbsp;</P> <P><SPAN>We can now add the Action we want to carry out when the Command is executed, by clicking the “+ Add an action” button;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SpeechStudio-Done-AddAction" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291307i6536AC8BFA3B2FC0/image-size/large?v=v2&amp;px=999" role="button" title="SpeechStudio-Done-AddAction" alt="SpeechStudio-Done-AddAction" /></span></SPAN></P> <P>&nbsp;</P> <P>We’ll be prompted to select the Type of Action we’d like to carry out. We have two options to choose between for actions.</P> <P>&nbsp;</P> <UL> <LI>“<STRONG><EM>Send speech response</EM></STRONG>” will return a spoken word response back to the user who issued the command.</LI> <LI>“<STRONG><EM>Send activity to client</EM></STRONG>” will send an activity payload to a client application via the SDK so that the client can action the command in some way. We saw this in action in the Sample Application when we actioned commands.</LI> </UL> <P>&nbsp;</P> <P>We’ll choose “<STRONG><EM>Send speech response</EM></STRONG>” here as we’re only developing a test project at the moment;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-55" style="width: 900px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291308i423481F08AF416BD/image-size/large?v=v2&amp;px=999" role="button" title="image-55" alt="image-55" /></span></P> <P>&nbsp;</P> <P>Selecting “<STRONG><EM>Send speech response</EM></STRONG>” and hitting the “<STRONG><EM>Create</EM></STRONG>” button will pop up the “Send speech response” editor window.</P> <P>&nbsp;</P> <P>We have a choice of three different response types at this stage.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SpeechStudio-ResponseTypes" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291309i2E635C6DD53F3A2E/image-size/large?v=v2&amp;px=999" role="button" title="SpeechStudio-ResponseTypes" alt="SpeechStudio-ResponseTypes" /></span></P> <P>&nbsp;</P> <UL> <LI>“<STRONG><EM>Simple Response</EM></STRONG>” will respond in plain speech the response we enter in the editor.</LI> <LI>“<STRONG><EM>LG Template</EM></STRONG>” refers to Language Generation Templates which are based on those used in the Bot Framework, and allows us to introduce variance into the responses, meaning we don’t always hear the same response time after time.</LI> <LI>“<STRONG><EM>Earcon</EM></STRONG>” allows us to respond with a brief sound in response to the command. 
This sound can either reflect a Success or a Failure.</LI> </UL> <P>&nbsp;</P> <P>We’ll leave our response at the default of “Simple Response” and have the command respond with “The music was turned on”;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-56" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291310i9505B5D908FB8583/image-size/large?v=v2&amp;px=999" role="button" title="image-56" alt="image-56" /></span></P> <P>&nbsp;</P> <P>Pressing the “Save” button will return us to the Completion Rules page.</P> <P>&nbsp;</P> <P>If we navigate back to the Example Sentences page, we can get ready to test our Custom Command.</P> <P>&nbsp;</P> <H2>Testing the Custom Command</H2> <P>&nbsp;</P> <P>Now that we have the basics of a Custom Command, we can go ahead and test it using the in built test functionality within Speech Studio.</P> <P>&nbsp;</P> <P>If we first save our Example Sentences;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SpeechStudio-SaveSentences" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291311i0089EBDF5E4EECD2/image-size/large?v=v2&amp;px=999" role="button" title="SpeechStudio-SaveSentences" alt="SpeechStudio-SaveSentences" /></span></P> <P>&nbsp;</P> <P><SPAN>We can now prepare our Command for Testing by hitting the “Train” button.</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SpeechStudio-Train" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291312iA2F11A382231C723/image-size/large?v=v2&amp;px=999" role="button" title="SpeechStudio-Train" alt="SpeechStudio-Train" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>With the Custom Command now Trained, we can now hit the Test Button;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SpeechStudio-Test" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291313iBDEE6EF4023DB81F/image-size/large?v=v2&amp;px=999" role="button" title="SpeechStudio-Test" alt="SpeechStudio-Test" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>Our new Command will then be published for testing which takes a few moments;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-52" style="width: 726px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291314iBC58C7975955CC6C/image-size/large?v=v2&amp;px=999" role="button" title="image-52" alt="image-52" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>Once the Publish process is complete, we’ll then be taken to the testing environment;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SpeechStudio-TestApplication-New" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291315iAAAF6E34B990AD8D/image-size/large?v=v2&amp;px=999" role="button" title="SpeechStudio-TestApplication-New" alt="SpeechStudio-TestApplication-New" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>We can expand the details about our interactions (or turns) with our application&nbsp;by clicking the expander in the top right of the screen;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper 
lia-image-align-center" image-alt="SpeechStudio-TestApplication-New-Details" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291316iDE736313344568B9/image-size/large?v=v2&amp;px=999" role="button" title="SpeechStudio-TestApplication-New-Details" alt="SpeechStudio-TestApplication-New-Details" /></span></SPAN></P> <P>&nbsp;</P> <P>The Details area will show us information about the commands we’re issuing, any parameters that are expected and the Rules Executed from our “Done” Completion Rules.</P> <P>&nbsp;</P> <P>We’re now ready to try out our new command… If we type “Turn on music” in the entry box at the bottom of screen;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-58" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291317i6CB87DEC67ECA5F8/image-size/large?v=v2&amp;px=999" role="button" title="image-58" alt="image-58" /></span></P> <P>&nbsp;</P> <P>We can see that we successfully executed the command, and we received the response we were expecting, both in text and spoken word!</P> <P>Our command currently only turns music on however… What if we want to turn it back off?</P> <P>&nbsp;</P> <P>We could of course add a whole new command that turns the music off… However, we can actually add parameters to our existing command.</P> <P>&nbsp;</P> <H2>Adding Parameters</H2> <P>&nbsp;</P> <P>Parameters are a method to add flexibility to commands, allowing commands to accept a variable portion which affects the action that’s carried out in response.</P> <P>&nbsp;</P> <P>We’ll add the ability to turn the music both on and off.</P> <P>&nbsp;</P> <P>We can achieve this by adding a parameter to our command which represents the state we’d like to set. We’ll also need to modify our Example Sentences along with the Completion Response.</P> <P>&nbsp;</P> <P>If we click the “+ Add” button at the top of the right hand panel, we can select the “Parameter” option.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-59" style="width: 379px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291318i9BB3F1A25D287BE4/image-size/large?v=v2&amp;px=999" role="button" title="image-59" alt="image-59" /></span></P> <P>&nbsp;</P> <P><SPAN>We’ll then be prompted for a name for our Parameter;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-60" style="width: 907px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291319i501188863C7C5AEE/image-size/large?v=v2&amp;px=999" role="button" title="image-60" alt="image-60" /></span></SPAN></P> <P>&nbsp;</P> <P>We’ll choose “OnOff” and press the “Create” button.</P> <P>&nbsp;</P> <P>We’ll then be shown the Parameters pane on the right of the screen. 
As we need to know whether to turn the music on or off, we can set that this Parameter is “Required”;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SpeechStudio-RequiredParameter" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291320iEA04AE9118499957/image-size/large?v=v2&amp;px=999" role="button" title="SpeechStudio-RequiredParameter" alt="SpeechStudio-RequiredParameter" /></span></P> <P>&nbsp;</P> <P>We’ll then be prompted to add a response if the user doesn’t supply the Parameter.</P> <P>&nbsp;</P> <P>As with the Completion response, we can either elect to use a Simple Response or an LG Template. There’s no option here for an EarCon as we need to ask the user a specific question of course.</P> <P>&nbsp;</P> <P>We can have the Command respond with “Would you like to turn the music on or off?” and press the “Update” button;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-61" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291321i8A9ACF25A31A5C68/image-size/large?v=v2&amp;px=999" role="button" title="image-61" alt="image-61" /></span></P> <P>&nbsp;</P> <P>We can now select a “<STRONG><EM>Type</EM></STRONG>” for our parameter… We can choose between;</P> <P>&nbsp;</P> <UL> <LI>DateTime</LI> <LI>Number</LI> <LI>Geography</LI> <LI>String</LI> </UL> <P>Our response will be either “On” or “Off”, so we’ll opt for “<STRONG><EM>String</EM></STRONG>” here;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SpeechStudio-ParameterType" style="width: 945px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291323i35FE33F0FAF0BACD/image-size/large?v=v2&amp;px=999" role="button" title="SpeechStudio-ParameterType" alt="SpeechStudio-ParameterType" /></span></P> <P>&nbsp;</P> <P>We can leave the “Default Value” empty here, as we’ve specified that this parameter is required.</P> <P>&nbsp;</P> <P>Next, as we’ve chosen a Type of “<STRONG><EM>String</EM></STRONG>“, we can choose the “<STRONG><EM>Configuration</EM></STRONG>” for this parameter. 
The Configuration sets whether the parameter can accept any response or one from a predefined list of responses.</P> <P>&nbsp;</P> <P>The options here are;</P> <P>&nbsp;</P> <UL> <LI>“<STRONG><EM>None</EM></STRONG>“</LI> <LI>“<STRONG><EM>Accept Full Input</EM></STRONG>” – Simply accept whatever the user says, for instance a postal address</LI> <LI>“<STRONG><EM>Accept predefined input values from internal catalog</EM></STRONG>” – Constrain the parameter value that the user gives to a pre-defined list of allowed values.</LI> </UL> <P>&nbsp;</P> <P>We’ll choose “<STRONG><EM>Accept predefined input values from internal catalog</EM></STRONG>” here are we only want to accept “On” or “Off”;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-63" style="width: 944px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291325i8365873D2E420307/image-size/large?v=v2&amp;px=999" role="button" title="image-63" alt="image-63" /></span></P> <P>&nbsp;</P> <P><SPAN>As we’ve selected “Accept predefined input values from internal catalog” as the Configuration, we can now define the values we expect to receive for our “OnOff” parameter by hitting the “+ Add a predefined input” button at the bottom of the Parameters pane;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SpeechStudio-PredefinedInputValues" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291327iAFF3BB22FA888717/image-size/large?v=v2&amp;px=999" role="button" title="SpeechStudio-PredefinedInputValues" alt="SpeechStudio-PredefinedInputValues" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>We can add a Predefined Value with a “Name” of “On” and another with a “Name” of “Off”, we won’t need any Aliases for our options in this case;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-64" style="width: 904px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291328iB73AE85DC483A135/image-size/large?v=v2&amp;px=999" role="button" title="image-64" alt="image-64" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>We should now have two Predefined Input Values set for both “On” and “Off”</SPAN></P> <P>&nbsp;</P> <H2><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SpeechStudio-PredefinedValuesSet" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291330iFA094621B7B2A44D/image-size/large?v=v2&amp;px=999" role="button" title="SpeechStudio-PredefinedValuesSet" alt="SpeechStudio-PredefinedValuesSet" /></span></SPAN></H2> <P>&nbsp;</P> <H2><SPAN>Rename Command</SPAN></H2> <P>&nbsp;</P> <P>The next thing we need to do is rename our command. 
Currently it’s set to “TurnOnMusic”; however, we can now use this command to turn the music both on and off.</P> <P>&nbsp;</P> <P>Hitting the Pencil Icon next to the “+ Command” button will allow us to rename our command to something which now reflects its operation;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SpeechStudio-RenameCommand" style="width: 716px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291332i3F234E521E4D4795/image-size/large?v=v2&amp;px=999" role="button" title="SpeechStudio-RenameCommand" alt="SpeechStudio-RenameCommand" /></span></P> <P>&nbsp;</P> <P><SPAN>We can then give our Command a name of “TurnMusicOnOff”;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-66" style="width: 908px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291333i5717927EEB1034E9/image-size/large?v=v2&amp;px=999" role="button" title="image-66" alt="image-66" /></span></SPAN></P> <P>&nbsp;</P> <H2>Update Command to use Parameters</H2> <P>&nbsp;</P> <P>We now need to update our Example Sentences to make use of our new Parameter.</P> <P>&nbsp;</P> <P>We can assign where in a sentence the command should expect a parameter to be used by adding the parameter name in curly brackets at the given location in the sentence.</P> <P>&nbsp;</P> <P>A nice feature here is that, as soon as we enter a curly brace, we’re given our available parameters as options, so we don’t have to remember exactly which one we need.</P> <P>&nbsp;</P> <P>With that in mind, we can update our Example Sentence list to;</P> <P>&nbsp;</P> <UL> <LI>Turn&nbsp;{OnOff}&nbsp;the&nbsp;music</LI> <LI>Turn&nbsp;music&nbsp;{OnOff}</LI> <LI>Turn&nbsp;{OnOff}&nbsp;music</LI> <LI>Turn&nbsp;the&nbsp;music&nbsp;{OnOff}</LI> <LI>Turn&nbsp;the&nbsp;stereo&nbsp;{OnOff}</LI> <LI>Turn&nbsp;{OnOff}&nbsp;the&nbsp;stereo</LI> </UL> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-79" style="width: 824px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291334i6F7D0DA957FB5DA2/image-size/large?v=v2&amp;px=999" role="button" title="image-79" alt="image-79" /></span></P> <P>&nbsp;</P> <P>We also need to update the Done response to match the parameter that the user requests.</P> <P>&nbsp;</P> <P>If we click back into the “Done” section, we can click the Pencil Icon next to the Completion Rule Action we created to edit it;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SpeechStudio-EditDone" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291335i02F3338FC7668724/image-size/large?v=v2&amp;px=999" role="button" title="SpeechStudio-EditDone" alt="SpeechStudio-EditDone" /></span></P> <P>&nbsp;</P> <P>As with the example sentences, we can insert the name of our parameter between curly braces where we want its value to be repeated back to the user.</P> <P>&nbsp;</P> <P>We’ll set the Speech Response to “The music was turned {OnOff}”;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-72" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291338iD644A97E18284205/image-size/large?v=v2&amp;px=999" role="button" title="image-72" alt="image-72" /></span></P> <P>&nbsp;</P> 
<P><SPAN>Hitting the blue Save button will update the Completion Response.</SPAN></P> <P>&nbsp;</P> <H2><SPAN>Test the Updated Command</SPAN></H2> <P>&nbsp;</P> <P><SPAN>We’re now ready for testing again, we can hit the “Save” button, followed by the “Train” button and finally the Test button;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SpeechStudio-FinalTrain" style="width: 829px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291339iEEC33AF87A6A8B11/image-size/large?v=v2&amp;px=999" role="button" title="SpeechStudio-FinalTrain" alt="SpeechStudio-FinalTrain" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>We’ll then be returned to the testing environment where we can see our new parameters in action;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-73" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291340iCB7E4A5928C748EB/image-size/large?v=v2&amp;px=999" role="button" title="image-73" alt="image-73" /></span></SPAN></P> <P>&nbsp;</P> <H2><SPAN>Publishing the Command to the Percept DK</SPAN></H2> <P>&nbsp;</P> <P>Now that we’ve tested the updated command, we’re ready to publish it to our percept so we can give it a go on the device itself.</P> <P>&nbsp;</P> <P>If we close the testing environment, we can hit the Publish button to begin publish process;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SpeechStudio-PublishCommandButton" style="width: 956px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291341iE3C398AEA37742B7/image-size/large?v=v2&amp;px=999" role="button" title="SpeechStudio-PublishCommandButton" alt="SpeechStudio-PublishCommandButton" /></span></P> <P>&nbsp;</P> <P><SPAN>Speech Studio will then begin the process of Publishing the command to the Speech Project.</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-74" style="width: 730px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291342i47ABEF995010F67D/image-size/large?v=v2&amp;px=999" role="button" title="image-74" alt="image-74" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>We’ll be notified when the Publish operation has completed successfully. 
As you can see, this process doesn’t take too long, with our example here only taking 22 seconds;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-75" style="width: 507px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291343iF000D716721B42B0/image-size/large?v=v2&amp;px=999" role="button" title="image-75" alt="image-75" /></span></SPAN></P> <P>&nbsp;</P> <P>Next we need to Assign our new command to our Percept.</P> <P>&nbsp;</P> <P>If we return to Azure Percept Studio at<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noreferrer noopener">portal.azure.com/#blade/AzureEdgeDevices/Main/overview</A></P> <P>&nbsp;</P> <P>We can then navigate to Speech using the menu item on the left and select the Commands Tab.</P> <P>&nbsp;</P> <P>If select the row of our new Command, in our case the “PlayMusic-speech” row, we can then hit the “Assign” button;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="AzurePerceptStudio-AssignCommand" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291344iC0AA509069FE9E61/image-size/large?v=v2&amp;px=999" role="button" title="AzurePerceptStudio-AssignCommand" alt="AzurePerceptStudio-AssignCommand" /></span></P> <P>&nbsp;</P> <P><SPAN>We’ll then be shown the “Deploy to device” dialog, where we can select the IoT Hub and Percept device we want to deploy our new command to;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="AzurePerceptStudio-DeployCOmmand" style="width: 792px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291345i2A3B20ADB322EBF3/image-size/large?v=v2&amp;px=999" role="button" title="AzurePerceptStudio-DeployCOmmand" alt="AzurePerceptStudio-DeployCOmmand" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>Hitting the “Save” button begins the process of deploying the new Command to the Percept DK;</SPAN></P> <P>&nbsp;</P> <H2><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-78" style="width: 738px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291346i87C1F3FCF883D959/image-size/large?v=v2&amp;px=999" role="button" title="image-78" alt="image-78" /></span></SPAN></H2> <P>&nbsp;</P> <H2><SPAN>The new Command in Action</SPAN></H2> <P>&nbsp;</P> <P>As part of a special <A href="#" target="_blank" rel="noopener">IoTeaLive show</A>, I create a Video of my experiences working with the Azure Percept Audio Module, which you can find on YouTube here.</P> <P>&nbsp;</P> <P>This is the section where I tested the new Custom Command:</P> <P>&nbsp;</P> <DIV style="position: relative; left: 12.5%; padding-bottom: 42.3%; padding-top: 0px; height: 0; overflow: hidden; min-width: 320px; max-width: 75%;"><IFRAME src="https://www.youtube-nocookie.com/embed/JqfceNUPqBo?controls=0&amp;autoplay=false&amp;WT.mc_id=iot-c9-niner" frameborder="0" allowfullscreen="allowfullscreen" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" class="video-iframe" title="Azure Percept Audio in action"></IFRAME></DIV> <H2>My Plans</H2> <P>&nbsp;</P> <P>Now I've figured out how to make Custom Commands, there's also a facility for the command to call out to a Web Endpoint. 
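</P> <P>&nbsp;</P> <P>I haven’t built that part yet, but as a rough sketch of the sort of endpoint I have in mind – an HTTP-triggered Azure Function (Python) that takes the “OnOff” value from the Custom Command and relays it to a device as an IoT Hub direct method – something like the following should be close. The device ID, method name and app setting name here are just placeholders for illustration:</P> <P>&nbsp;</P> <PRE>
# __init__.py of an HTTP-triggered Azure Function (Python programming model v1).
# Hypothetical sketch only: the Custom Command's web endpoint action would POST
# a small JSON body here, and we forward it on to a device as a direct method.
import json
import os

import azure.functions as func
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import CloudToDeviceMethod


def main(req: func.HttpRequest) -> func.HttpResponse:
    # Expecting a body like {"OnOff": "On"} from the Custom Command action
    body = req.get_json()
    on_off = body.get("OnOff", "Off")

    # IoT Hub service connection string, kept in the Function App settings
    registry = IoTHubRegistryManager(os.environ["IOTHUB_SERVICE_CONNECTION"])

    # "music-pi" and "SetMusic" are made-up names for the target device and method
    method = CloudToDeviceMethod(
        method_name="SetMusic",
        payload={"state": on_off},
        response_timeout_in_seconds=30,
    )
    result = registry.invoke_device_method("music-pi", method)

    return func.HttpResponse(
        json.dumps({"status": result.status}),
        mimetype="application/json",
    )
</PRE> <P>&nbsp;</P> <P>On the device side, a Raspberry Pi registered with the same IoT Hub could handle that “SetMusic” direct method and toggle a relay, which is exactly the kind of thing I want to wire up.</P> <P>&nbsp;</P> <P>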
Next up I want to actually try calling into an Azure Function along those lines, and perhaps on into an Azure IoT Hub.</P> <P>&nbsp;</P> <P>That way, I'll be able to control anything I like. If you know me at all, you'll know I'm a big fan of the Raspberry Pi as a fast, reliable way to prototype IoT solutions. I've got plenty of experience in hooking Raspberry Pis up to an IoT Hub, and I'll have no trouble adding something like a Relay to control home appliances like lamps and so on.&nbsp;</P> <P>&nbsp;</P> <H2>My Thoughts</H2> <P>&nbsp;</P> <P>As I mentioned, I had a few issues trying to get the Percept DK to react to me from time to time. Oddly, sometimes I even had to turn it off and back on before it would react. I've since trained a new keyword of "controller" and this seems to work really well for some reason. I've also tried to see how well the device can hear me when I try to wake it, and it's actually really sensitive, having no issues hearing me from over 15 meters away.</P> <P>&nbsp;</P> <P>I found the process of creating Custom Commands very easy. Having had the experience of creating Alexa Skills, the process was quite similar in some regards.</P> <P>&nbsp;</P> <P>There's some scope for improvement with Speech Studio, where perhaps allowing for a few more preset types for parameters would be good.</P> <P>&nbsp;</P> <P>It would also be useful if we could deploy Custom Commands to the device directly from Speech Studio. That would save having to context switch away while developing, but that's only a small inconvenience.</P> <P>&nbsp;</P> <P>Overall, a fantastic way to interact with the Azure Percept DK and very easy to use... Keep your eyes open for the next post in the series!</P> </DIV> Wed, 07 Jul 2021 23:53:39 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-audio-custom-keywords-and-commands/ba-p/2485199 Peter Gallagher 2021-07-07T23:53:39Z Windows IoT support lifecycle and upcoming releases https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/windows-iot-support-lifecycle-and-upcoming-releases/ba-p/2511888 <P><EM>“Windows has always existed to be a stage for the world’s innovation. It’s been the backbone of global businesses and where scrappy startups became household names. The web was born and grew up on Windows. It’s the place where many of us wrote our first email, played our first PC game, and wrote our first line of code. Windows is the place people go to create, to connect, to learn, and to achieve – a platform over a billion people today rely on.” – </EM>Panos Panay</P> <P>&nbsp;</P> <P><A href="#" target="_blank">These were the words</A> that Panos Panay wrote in the announcement blog for Windows 11 and, just like him, the Windows IoT team is excited about the continued investments and developments around Windows. We want to use this blog to address questions and comments we have received over the past few days regarding Windows and the commitment around the support lifecycle.</P> <P>&nbsp;</P> <P>In <A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/windows-10-iot-long-term-servicing-channel-upcoming-availability/ba-p/2139861" target="_blank">February</A> we announced that there will be a release of Windows 10 Enterprise LTSC and Windows 10 IoT Enterprise LTSC in the second half (H2) of calendar year 2021. &nbsp;In that announcement we communicated that Windows 10 Client LTSC will change from a 10-year to a 5-year lifecycle, aligning with the changes to the next perpetual version of Office. 
We also stated that Windows 10 IoT Enterprise will maintain a 10-year support lifecycle.&nbsp; You can read more about their announcements&nbsp;<A href="#" target="_blank">here</A>.</P> <P>&nbsp;</P> <P>This has not changed with all the announcements around Windows 11, and we are still scheduled to release a LTSC version of Windows 10 IoT Enterprise in the timeframe specified in that announcement.&nbsp; We will also release IoT versions of Windows 11 and Windows Server 2022. The first release of Windows 11 IoT Enterprise will have a servicing timeline of 36 months from the month of release as <A href="#" target="_blank">described in our lifecycle documentation</A>. We will announce more information around these releases in the future.</P> <P>&nbsp;</P> <P>The needs of the IoT industry remain unique and for that reason Microsoft developed&nbsp;<A href="#" target="_blank">Windows 10 IoT Enterprise LTSC</A>&nbsp;and the Long Term Servicing Channel of Windows Server, which today is Windows Server 2019. Each of these products will&nbsp;<STRONG>continue to have a 10-year support lifecycle</STRONG>, as documented on our&nbsp;<A href="#" target="_blank">Lifecycle datasheet</A>.</P> <P>&nbsp;</P> <P>We remain committed to the ongoing success of Windows IoT, which is deployed in millions of intelligent edge solutions around the world. Industries including manufacturing, retail, medical equipment and public safety choose Windows IoT to power their edge devices because it is a rich platform to create locked-down, interactive user experiences with natural input, provides world class security and enterprise grade device management, allowing customers and partners to build solutions that are designed to last.</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> Wed, 07 Jul 2021 16:57:47 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/windows-iot-support-lifecycle-and-upcoming-releases/ba-p/2511888 95twr 2021-07-07T16:57:47Z Azure Percept Audio - First Steps https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-audio-first-steps/ba-p/2485000 <P><SPAN>Welcome back to another blog post about the Azure Percept DK!&nbsp;</SPAN></P> <P>&nbsp;</P> <H2><SPAN>Previously</SPAN></H2> <P>&nbsp;</P> <P><SPAN>In the previous post - <A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-first-look/ba-p/2484867" target="_blank" rel="noopener">Azure Percept - First Look</A> - you'll remember that we had a first look at the percept and what it was all about. 
Well, i</SPAN><SPAN>n this post we'll take a look at the Azure Percept Audio Module, which allows for the recognition of Custom Keywords and Commands (among other things).</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="20210621_200106-Large" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291242i343BDDECBB93537D/image-size/large?v=v2&amp;px=999" role="button" title="20210621_200106-Large" alt="20210621_200106-Large" /></span></SPAN></P> <P>&nbsp;</P> <H2>What is the Percept Audio?</H2> <P>&nbsp;</P> <P>The Azure Percept Audio (sometimes called the Percept Ear) is a "System on a Module" or SoM, which is designed as the Audio Interface for Audio Processing at the edge for the Azure Percept.</P> <P>&nbsp;</P> <P>Along with the <A href="#" target="_blank" rel="noopener">Carrier Board</A>, <A href="#" target="_blank" rel="noopener">Azure Percept Studio</A>, <A href="#" target="_blank" rel="noopener">Microsoft LUIS</A> and <A href="#" target="_blank" rel="noopener">Speech</A>, the system can recognise keywords and commands to control devices using voice at the edge. This works both online and offline with the aid of the Carrier Board.</P> <P>&nbsp;</P> <H2>Azure Percept Audio Specifications</H2> <P>&nbsp;</P> <P>The basic specs for the Azure Percept Audio SoM are;</P> <P>&nbsp;</P> <UL id="block-1d902bc1-8385-453e-87bb-fe0a6c5f49ff"> <LI>Four-microphone linear array and audio processing via XMOS Codec</LI> <LI>2x buttons</LI> <LI>3x LEDs</LI> <LI>Micro USB</LI> <LI>3.5 mm audio jack</LI> </UL> <P>&nbsp;</P> <P>You can find the<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noreferrer noopener">full specifications here</A></P> <P>&nbsp;</P> <H2>Who's it for?</H2> <P>&nbsp;</P> <P>Like the Vision SoM, Microsoft clearly have a set of industries in mind for the Azure Percept Audio SoM;</P> <UL> <LI>Hospitality</LI> <LI>Healthcare</LI> <LI>Smart Buildings</LI> <LI>Automotive</LI> <LI>Retail</LI> <LI>Manufacturing</LI> </UL> <P>With applications such as;</P> <UL> <LI>In-room Virtual Concierge</LI> <LI>Vehicle Voice Assistant and Command/Control</LI> <LI>Point of Sale Services and Quality Control</LI> <LI>Warehouse Task Tracking</LI> </UL> <P>&nbsp;</P> <P>This becomes clear later when we look at the sample applications we can spin up in a minute.</P> <P>&nbsp;</P> <H2>Azure Percept Audio - Required Services</H2> <P>&nbsp;</P> <P><SPAN>The Azure Percept Audio SoM makes use of a couple of Azure Services to process Audio;</SPAN></P> <P>&nbsp;</P> <H3><SPAN>LUIS (Language Understanding Intelligent Service):</SPAN></H3> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener">LUIS</A> is an Azure service which allows interaction with applications and devices using natural language.</P> <P>&nbsp;</P> <P>Using a visual interface, we’re able to train AI models without the need for deep Machine Learning experience of any kind.</P> <P>&nbsp;</P> <P>The Azure Percept uses LUIS to configure Custom Commands, allowing for a contextualised response to a given command.</P> <P>&nbsp;</P> <H3><SPAN>Cognitive Speech:</SPAN></H3> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener">Cognitive Speech</A> is an Azure Service offering Text-to-speech, speech-to-text, speech translation and speaker recognition.</P> <P>&nbsp;</P> <P>Supporting over 92 languages, this service can convert speech to text allowing for interactivity with apps and devices.</P> <P>&nbsp;</P> <P>On the flip 
side, with support for over 215 different voices in 60 languages, the Speech Service can also convert text to speech, improving accessibility and interaction with devices and applications.</P> <P>&nbsp;</P> <P>Finally, the Speech Service can also translate between 30 different languages, allowing for real-time translation using a variety of programming languages, which I think is a really cool use case.</P> <P>&nbsp;</P> <P>The Percept uses this service, amongst other things, to configure a wake word for the device; by default this is the word “<STRONG><EM>computer</EM></STRONG>”. (See Star Trek IV – The Voyage Home!).</P> <P>&nbsp;</P> <H2><SPAN>Azure Percept Audio - Sample Applications</SPAN></H2> <P>&nbsp;</P> <P><SPAN>If we navigate to&nbsp;</SPAN><A href="#" target="_blank" rel="noreferrer noopener">Azure Percept Studio</A><SPAN>, from the Overview Page we can select the “Demos &amp; tutorials” tab at the top;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-17" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291243iA0FA26E131C66173/image-size/large?v=v2&amp;px=999" role="button" title="image-17" alt="image-17" /></span></SPAN></P> <P><SPAN>If we scroll to the bottom of this page, we have some links to some Speech tutorials and demos.</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-16" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291244iAA99BAAA9978BE44/image-size/large?v=v2&amp;px=999" role="button" title="image-16" alt="image-16" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>The first thing we’ll choose is “Try out voice assistant templates”. Clicking this link presents us with a flyout with a selection of templates to choose from;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-18" style="width: 568px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291245i60B91C1DCD8BACDC/image-size/large?v=v2&amp;px=999" role="button" title="image-18" alt="image-18" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>You can see here a selection of Sample Templates that speak to where Microsoft expect the Percept Audio to be used. All of these are limited to voice commands interacting with an environment. I'll speak later about some scenarios outside of this use case that I'd like to see given some thought.</SPAN></P> <P>&nbsp;</P> <P><SPAN>For now, we'll deploy one of these pre-baked samples and see how it works!</SPAN></P> <P>&nbsp;</P> <H2><SPAN>Azure Percept Audio – Hospitality Sample Template Setup</SPAN></H2> <P>&nbsp;</P> <P>Choosing the “Hospitality” option, agreeing to the terms and continuing on, we’re shown the resource creation flyout.</P> <P>&nbsp;</P> <P>Here we can select the subscription and resource group we’d like to deploy the various resources to.</P> <P>&nbsp;</P> <P>We’re also prompted for an Application Prefix. This allows the template to create resources with unique IDs.</P> <P>&nbsp;</P> <P>We can then choose a region close to us. At the time of writing we can choose between West US and West Europe, but I imagine this will grow once the Percept starts getting towards GA. 
I was actually surprised at the choice of regions here, with no East US, no North Europe, and no APAC region at all.</P> <P>&nbsp;</P> <P>Moving on, the last item we need to select is the “LUIS prediction pricing tier”, which we can leave at “Standard”, as the free tier sadly doesn’t support speech requests.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-19" style="width: 989px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291246i9D8A95E2BD963393/image-size/large?v=v2&amp;px=999" role="button" title="image-19" alt="image-19" /></span></P> <P><SPAN>Hitting the “Create” button then begins the process of deploying the speech theme resources.</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-20" style="width: 549px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291247i9A99976FE9524747/image-size/large?v=v2&amp;px=999" role="button" title="image-20" alt="image-20" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>We’re then warned that this process can take between 2 and 4 minutes to complete… though, nicely, it only took a matter of seconds for me.</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-21" style="width: 748px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291252i4B6655185E3A53BC/image-size/large?v=v2&amp;px=999" role="button" title="image-21" alt="image-21" /></span></SPAN></P> <P>&nbsp;</P> <H2><SPAN>Azure Percept Audio – Hospitality Sample Template Demo</SPAN></H2> <P>&nbsp;</P> <P>Once the template has completed deploying, we’re shown a demo Hospitality environment.</P> <P>&nbsp;</P> <P>We should also now have 3 blue LEDs showing on the Percept;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="20210622_102254-Large" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291253iC9DA23FB965AADF2/image-size/large?v=v2&amp;px=999" role="button" title="20210622_102254-Large" alt="20210622_102254-Large" /></span></P> <P>&nbsp;</P> <P><SPAN>I found these LEDs to be super bright, such that I couldn't stare directly at them without then seeing three (or is it 5? Ha) dots like Picard in that episode of Next Gen. 
They light my whole office up at night practically!</SPAN></P> <P>&nbsp;</P> <P><SPAN>The Percept Audio LEDs will indicate different statuses depending upon their colour and flash pattern;</SPAN></P> <P>&nbsp;</P> <TABLE border="1" width="100%"> <TBODY> <TR> <TD width="33.333333333333336%" height="29px"> <H3><STRONG>LED</STRONG></H3> </TD> <TD width="33.333333333333336%" height="29px"> <H3><STRONG>LED State</STRONG></H3> </TD> <TD width="33.333333333333336%" height="29px"> <H3><STRONG>Ear SoM Status</STRONG></H3> </TD> </TR> <TR> <TD width="33.333333333333336%" height="29px">L02</TD> <TD width="33.333333333333336%" height="29px"><SPAN>1x white, static on</SPAN></TD> <TD width="33.333333333333336%" height="29px"><SPAN>Power on</SPAN></TD> </TR> <TR> <TD width="33.333333333333336%" height="29px">L02</TD> <TD width="33.333333333333336%" height="29px">1x white, 0.5 Hz flashing</TD> <TD width="33.333333333333336%" height="29px"><SPAN>Authentication in progress</SPAN></TD> </TR> <TR> <TD width="33.333333333333336%" height="29px"><SPAN>L01 &amp; L02 &amp; L03</SPAN></TD> <TD width="33.333333333333336%" height="29px"><SPAN>3x blue, static on</SPAN></TD> <TD width="33.333333333333336%" height="29px"><SPAN>Waiting for keyword</SPAN></TD> </TR> <TR> <TD width="33.333333333333336%" height="29px"><SPAN>L01 &amp; L02 &amp; L03</SPAN></TD> <TD width="33.333333333333336%" height="29px">LED array flashing, 20fps</TD> <TD width="33.333333333333336%" height="29px"><SPAN>Listening or speaking</SPAN></TD> </TR> <TR> <TD width="33.333333333333336%" height="29px"><SPAN>L01 &amp; L02 &amp; L03</SPAN></TD> <TD width="33.333333333333336%" height="29px"><SPAN>LED array racing, 20fps</SPAN></TD> <TD width="33.333333333333336%" height="29px"><SPAN>Thinking</SPAN></TD> </TR> <TR> <TD width="33.333333333333336%" height="29px"><SPAN>L01 &amp; L02 &amp; L03</SPAN></TD> <TD width="33.333333333333336%">3x red, static on</TD> <TD width="33.333333333333336%" height="29px">Mute</TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P><SPAN>The LEDs are labelled as shown in the following picture, with L01 on the left of the SoM, L02 in the middle and L03 on the far right;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="LEDs" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291254i5414CAB343093F12/image-size/large?v=v2&amp;px=999" role="button" title="LEDs" alt="LEDs" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>Returning to the Hospitality demo environment. The screen is split up into several sections.</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="HospitalityAreas" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291258i17485A37D819DBB4/image-size/large?v=v2&amp;px=999" role="button" title="HospitalityAreas" alt="HospitalityAreas" /></span></SPAN></P> <P>&nbsp;</P> <P>At the top of the demo environment we have an toolbar containing;</P> <P>&nbsp;</P> <UL> <LI>Create Custom Keyword</LI> <LI>Create Custom Command</LI> <LI>Get Started</LI> <LI>Learn More</LI> <LI>Feedback</LI> <LI>Troubleshoot</LI> </UL> <P>Just below that we have the current keyword and command and links to change them should we wish.... 
We'll actually be looking at all of that in another blog post, so keep your eyes peeled!</P> <P>&nbsp;</P> <P>On the left we have an interaction area where we can enter commands for the Percept to action.</P> <P>&nbsp;</P> <P>On the right we have a visual representation of the current environment, which reflects the actions our commands invoke.</P> <P>&nbsp;</P> <H2>Audio Output</H2> <P>&nbsp;</P> <P>Before we try executing any commands, it’s worth noting that the Percept uses the Speech Service to convert its command responses to spoken word.</P> <P>&nbsp;</P> <P>For us to be able to hear that, we’ll need to connect some speakers to the device.</P> <P>&nbsp;</P> <P>The Percept has a 3.5mm audio jack output for exactly that purpose… Hooking up some relatively low-powered portable speakers to the line out jack will allow us to hear the responses to our commands.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="AudioOutput" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291260iCCDB14C5DCFBAFBE/image-size/large?v=v2&amp;px=999" role="button" title="AudioOutput" alt="AudioOutput" /></span></P> <H2>Executing Commands</H2> <P>We can now try executing some commands. The Custom Keyword or Wake Word for the Percept defaults to “Computer” (Where's Scotty when you need him!); we can say that, followed by one of a few commands which are applicable to this particular sample;</P> <P>&nbsp;</P> <UL> <LI>Turn on/off the lights.</LI> <LI>Turn on/off the TV.</LI> <LI>Turn on/off the AC.</LI> <LI>Open/close the blinds.</LI> <LI>Set temperature to X degrees. (X is the desired temperature, e.g. 75.)</LI> </UL> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Azure-Percept-Screen-Time-0_19_1929-Large" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291261iCBF4E4172FC54CDE/image-size/large?v=v2&amp;px=999" role="button" title="Azure-Percept-Screen-Time-0_19_1929-Large" alt="Azure-Percept-Screen-Time-0_19_1929-Large" /></span></P> <P>&nbsp;</P> <P>I noticed, perhaps due to my English accent, that it took a while for the Percept to recognise my pronunciation of “Computer”… I did try pronouncing it with an American Accent (and also asking it for the chemical formula for Plexiglass), but that didn’t seem to help.</P> <P>&nbsp;</P> <P>Eventually it did work, and I quickly learnt how to say the word for a relatively repeatable wake up. I did notice that it would often take quite a while to "Wake Up" the first time I issued the wake word, and after that it would work quite quickly. I also noticed that, if I'd left it idle overnight, then in perfect IT Crowd style I'd actually have to turn it off and back on to get it working again. When I get some time, I'll raise some feedback with the team.</P> <P>&nbsp;</P> <P>Once I’d mastered the wake word, all the other instructions worked pretty well.... 
You can see one of my failed attempts at the top here;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-24" style="width: 601px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291262i07B5466C3C6C4A7B/image-size/large?v=v2&amp;px=999" role="button" title="image-24" alt="image-24" /></span></P> <P>&nbsp;</P> <P><SPAN>By instructing the Percept to turn on the TV, the simulation on the right would show the TV on, and so on through the commands.</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Azure-Percept-Screen-Time-0_17_5329-Large-1" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291263i29A32B3C24E52318/image-size/large?v=v2&amp;px=999" role="button" title="Azure-Percept-Screen-Time-0_17_5329-Large-1" alt="Azure-Percept-Screen-Time-0_17_5329-Large-1" /></span></SPAN></P> <P>&nbsp;</P> <P><SPAN>The only command that didn’t work as intended was the “Set Temperature” command, which didn’t accept the actual temperature as a parameter to the command.&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN>It turns out that this was because I was trying to set the temperature too low, you can only set it within a few degrees of the set temperature, otherwise it just doesn't work.</SPAN></P> <P>&nbsp;</P> <H2><SPAN>My Plans</SPAN></H2> <P>&nbsp;</P> <P><SPAN>The first thing I'm going to try with this is hook it all up to a Raspberry Pi and recreate the Hospitality experience in real life. I think this would make a really cool demo for future talks... (Also, Microsoft, if you need somebody to come and create you a demo room in Seattle, just shout eh! ;)</img> Haha).</SPAN></P> <P>&nbsp;</P> <P><SPAN>I did ask the team about perhaps using the Percept Audio to detect things other than speech. As I mentioned in my previous post, I have a client in the Ecology and Wildlife ecosystem, and I'd love to perhaps train the Percept to recognise the sounds of wildlife maybe?&nbsp;</SPAN></P> <P>&nbsp;</P> <H2><SPAN>My Thoughts</SPAN></H2> <P>&nbsp;</P> <P><SPAN>Having spent time making Alexa Skills, Speech Studio is quite limited in comparison to the tools around Alexa, but it's got everything we need at the moment to make reasonable speech based interaction apps.</SPAN></P> <P>&nbsp;</P> <P><SPAN>I did find it frustrating that it would either not understand me, or be really slow to wake to the wake word... This makes demoing the unit a bit hit and miss and elicits the usual sniggers from attendees.... Ha.</SPAN></P> <P>&nbsp;</P> <P><SPAN>Those points aside, I found the experience worked well, with the Sample Applications being a great example of some of the ideas Microsoft have in mind for this side of the Percept.</SPAN></P> Fri, 02 Jul 2021 15:34:03 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-audio-first-steps/ba-p/2485000 Peter Gallagher 2021-07-02T15:34:03Z General availability: Azure Sphere version 21.07 expected on July 21, 2021 https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/general-availability-azure-sphere-version-21-07-expected-on-july/ba-p/2508794 <P><SPAN>The Azure Sphere OS update&nbsp;21.07&nbsp;is now available for evaluation in the <STRONG>Retail Eval</STRONG> feed</SPAN><SPAN>.&nbsp;The retail evaluation period for 21.07 provides 3 weeks for backwards compatibility testing. 
During this time, please verify that your applications and devices operate properly with this release&nbsp;before it is deployed broadly via the Retail feed. The Retail feed will continue to deliver OS version&nbsp;21.06&nbsp;until we publish 21.07&nbsp;in July.&nbsp;</SPAN></P> <P>&nbsp;</P> <P>The evaluation release of version 21.07 includes an OS update only; it does not include an updated SDK. When 21.07 is generally available later in July, an updated SDK will be included.</P> <P>&nbsp;</P> <H2>Compatibility testing with version 21.07</H2> <P>The Linux kernel has been upgraded to version 5.10. Areas of special focus for compatibility testing with 21.07 should include apps and functionality using kernel memory allocations and OS dynamically-linked libraries.</P> <P>&nbsp;</P> <H2>Notes about this release</H2> <UL> <LI>The most recent <A href="#" target="_self">RF tools</A> (version 21.01) are expected to be compatible with OS version 21.07.</LI> <LI>Azure Sphere Public API can be accessed programmatically using Service Principal or MSI created in a customer AAD tenant.</LI> </UL> <P>&nbsp;</P> <P><SPAN>For more information on Azure Sphere OS feeds and setting up an evaluation device group,&nbsp;see&nbsp;</SPAN><A href="#" target="_self"><SPAN>Azure Sphere OS feeds</SPAN></A><SPAN> and </SPAN><A href="#" target="_self">Set up devices for OS evaluation</A><SPAN>.</SPAN></P> <P>&nbsp;</P> <P>For self-help technical inquiries, please visit <A href="#" target="_self">Microsoft Q&amp;A</A><SPAN> o</SPAN>r <A href="#" target="_self">Stack Overflow</A>. If you require technical support and have a support plan, please submit a support ticket in <A href="#" target="_self">Microsoft Azure Support</A> or work with your Microsoft Technical Account Manager. If you would like to purchase a support plan, please explore the <A href="#" target="_self">Azure support plans</A>.</P> Thu, 01 Jul 2021 21:00:00 GMT https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/general-availability-azure-sphere-version-21-07-expected-on-july/ba-p/2508794 AzureSphereTeam 2021-07-01T21:00:00Z Azure Percept - First Look https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-first-look/ba-p/2484867 <P><SPAN>A few weeks back I was lucky enough to get my hands on the brand new Azure Percept. 
Us UK folks have been waiting patiently for the release date, but thanks to some behind the scenes work from the IoT Team, both Cliff Agius and myself were lucky enough to be given a pair of development systems to play with and write content around.</SPAN></P> <P>&nbsp;</P> <P><SPAN>This post is mainly my first impressions of the kit we received after spending some time unboxing it on our live Twitch Show - IoTeaLive.</SPAN></P> <P>&nbsp;</P> <H2>What is the Percept?</H2> <P>&nbsp;</P> <P><SPAN>The </SPAN><A href="#" target="_blank" rel="noreferrer noopener">Azure Percept</A><SPAN> is a Microsoft Developer Kit designed to fast track development of AI (Artificial Intelligence) applications at the Edge.</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="20210618_144243-Large-1" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291225i476F40B3465683E7/image-size/large?v=v2&amp;px=999" role="button" title="20210618_144243-Large-1" alt="20210618_144243-Large-1" /></span></SPAN></P> <P>&nbsp;</P> <P>After you've unboxed the Percept, you'll see the three primary components of the Carrier Board, the Eye or Vision Module and the optional Ear or Audio Module.</P> <P>&nbsp;</P> <P>This kit looks fantastic in silver, and I believe the production version will be black, which also looks great from the images I've seen.&nbsp;&nbsp;</P> <P>&nbsp;</P> <H2>What are the Specs?</H2> <P>&nbsp;</P> <P><SPAN>At a high level, the Percept Developer kit has the following specs;</SPAN></P> <H3>&nbsp;</H3> <H3><SPAN>Carrier (Processor) Board:</SPAN></H3> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="20210621_124348-Large" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291226iAD841FDE7AF21569/image-size/large?v=v2&amp;px=999" role="button" title="20210621_124348-Large" alt="20210621_124348-Large" /></span></SPAN></P> <P>&nbsp;</P> <UL id="block-3e7be42c-4b20-41c8-a4c0-604540eff2ac" class="block-editor-rich-text__editable block-editor-block-list__block wp-block is-selected rich-text" tabindex="0" role="group" contenteditable="true" aria-multiline="true" aria-label="Block: List" data-block="3e7be42c-4b20-41c8-a4c0-604540eff2ac" data-type="core/list" data-title="List"> <LI>NXP iMX8m processor</LI> <LI>Trusted Platform Module (TPM) version 2.0</LI> <LI>Wi-Fi and Bluetooth connectivity</LI> </UL> <H3>&nbsp;</H3> <H3>Vision SoM:</H3> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="20210621_124405-Large" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291227i2C35C204D1F5E564/image-size/large?v=v2&amp;px=999" role="button" title="20210621_124405-Large" alt="20210621_124405-Large" /></span></P> <P>&nbsp;</P> <UL id="block-ec1443b0-c4e4-460f-8789-9f8123801c18" class="block-editor-rich-text__editable block-editor-block-list__block wp-block is-selected rich-text" tabindex="0" role="group" contenteditable="true" aria-multiline="true" aria-label="Block: List" data-block="ec1443b0-c4e4-460f-8789-9f8123801c18" data-type="core/list" data-title="List"> <LI>Intel Movidius Myriad X (MA2085) vision processing unit (VPU)</LI> <LI>RGB camera sensor</LI> </UL> <P>&nbsp;</P> <H3>Audio SoM:</H3> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="20210621_200106-Large" style="width: 
999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291228i9C8B625EB9168908/image-size/large?v=v2&amp;px=999" role="button" title="20210621_200106-Large" alt="20210621_200106-Large" /></span></P> <P>&nbsp;</P> <UL> <LI>Four-microphone linear array and audio processing via XMOS Codec</LI> <LI>2x buttons, 3x LEDs, Micro USB, and 3.5 mm audio jack</LI> </UL> <P>&nbsp;</P> <H2>Who's it aimed at?</H2> <P>&nbsp;</P> <P><SPAN>The Percept is currently aimed at a few target markets. We can see this when we start looking at a few of the models we can deploy;</SPAN></P> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-15" style="width: 534px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291229i62875F9350585DD1/image-size/large?v=v2&amp;px=999" role="button" title="image-15" alt="image-15" /></span></SPAN></P> <P>&nbsp;</P> <P>These models can be deployed to quickly configure the Percept to recognise objects in a set of environments, such as;</P> <P>&nbsp;</P> <UL> <LI>General Object Detection</LI> <LI>Items on a Shelf</LI> <LI>Vehicle Analytics</LI> <LI>Keyword and Command recognition</LI> <LI>Anomaly Detection etc</LI> </UL> <P>&nbsp;</P> <H2>Azure Percept Studio</H2> <P>&nbsp;</P> <P>Microsoft provide a suite of software to interact with the Percept, centred around Azure Percept Studio, an Azure-based dashboard for the Percept.</P> <P>&nbsp;</P> <P>Once you've gone through the setup experience, you're taken to Percept Studio to start playing with the device.</P> <P>&nbsp;</P> <P>Percept Studio feels just like most other parts of the Portal and is broken down into several main sections;</P> <P>&nbsp;</P> <H3>Overview:</H3> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-3" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291230i19D537B0ACC6EB13/image-size/large?v=v2&amp;px=999" role="button" title="image-3" alt="image-3" /></span></P> <P>&nbsp;</P> <P>This section gives us an overview of Percept Studio, including;</P> <P>&nbsp;</P> <UL> <LI>A Getting Started Guide</LI> <LI>Demos &amp; Tutorials</LI> <LI>Sample Applications</LI> <LI>Access to some Advanced Tools, including Cloud and Local Development Environments as well as setup and samples for AI Security.</LI> </UL> <P>&nbsp;</P> <H3>Devices:</H3> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-4" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291232iB71E4FC3855C1D36/image-size/large?v=v2&amp;px=999" role="button" title="image-4" alt="image-4" /></span></P> <P>&nbsp;</P> <P>The Devices Page gives us access to the Percept Devices we’ve registered to the solution’s IoT Hub.</P> <P>&nbsp;</P> <P>We’re able to click into each registered device for information around its operations;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-6" style="width: 577px;"><img 
src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291233i33C403F224D183C3/image-size/large?v=v2&amp;px=999" role="button" title="image-6" alt="image-6" /></span></P> <P>This area is broken down into;</P> <P>&nbsp;</P> <UL> <LI>A General page with information about the Device Specs and Software Version</LI> <LI>Pages with Software Information for the Vision and Speech Modules deployed to the device as well as links to Capture images, View the Vision Video Stream, Deploy Models and so on</LI> <LI>We’re able to open the Device in the Azure IoT Hub Directly</LI> <LI>View the Live Telemetry from the Percept</LI> <LI>Links with help if we need to Troubleshoot the Percept</LI> </UL> <P>&nbsp;</P> <H3>Vision:</H3> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-8" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291235i025C8250C12F78FE/image-size/large?v=v2&amp;px=999" role="button" title="image-8" alt="image-8" /></span></P> <P>&nbsp;</P> <P><SPAN>The Vision Page allows us to create new&nbsp;</SPAN><A href="#" target="_blank" rel="noreferrer noopener">Azure Custom Vision</A><SPAN>&nbsp;Projects as well as access any existing projects we’ve already created.</SPAN></P> <P>&nbsp;</P> <H3><SPAN>Speech:</SPAN></H3> <P>&nbsp;</P> <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-9" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291238iC1A494968CA92835/image-size/large?v=v2&amp;px=999" role="button" title="image-9" alt="image-9" /></span></SPAN></P> <P>&nbsp;The Speech page gives us the facility to train Custom Keywords which allow the device to be voice activated;</P> <P>&nbsp;</P> <P><SPAN><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-12" style="width: 510px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291239i0B40FB40357D4BF5/image-size/large?v=v2&amp;px=999" role="button" title="image-12" alt="image-12" /></span></SPAN></SPAN></P> <P><SPAN><SPAN><SPAN>We can also create Custom Commands which will initiate an action we configure;</SPAN></SPAN></SPAN></P> <P>&nbsp;</P> <P><SPAN><SPAN><SPAN><SPAN><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image-14" style="width: 500px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/291240iE0C61614C42FC70C/image-size/large?v=v2&amp;px=999" role="button" title="image-14" alt="image-14" /></span></SPAN></SPAN></SPAN></SPAN></SPAN></P> <P><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN>Percept Speech relies on various Azure Services including&nbsp;<A href="#" target="_blank" rel="noreferrer noopener">LUIS</A>&nbsp;(Language Understanding Intelligent Service) and&nbsp;<A href="#" target="_blank" rel="noreferrer noopener">Azure Speech</A>.</SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></P> <P>&nbsp;</P> <H2><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN>Thoughts on the Vision SoM</SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></H2> <P>&nbsp;</P> <P><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN>During the Twitch stream with unboxing and first steps we did, we had a play mainly with the Vision SoM. 
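</SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></P> <P>&nbsp;</P> <P><SPAN>Before getting to the model I actually used, it’s worth remembering that the Vision page above is simply creating and linking standard Azure Custom Vision projects, so you can also query a published iteration of your model yourself. Here’s a minimal sketch using the Python SDK (assuming the azure-cognitiveservices-vision-customvision package; the endpoint, key, project ID and iteration name are placeholders you’d take from your own Custom Vision project):</SPAN></P> <P>&nbsp;</P> <PRE>
# Minimal sketch: query a published Custom Vision iteration from Python.
# The endpoint, key, project ID and iteration name below are placeholders.
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

ENDPOINT = "https://your-resource-name.cognitiveservices.azure.com/"  # placeholder
PREDICTION_KEY = "your-prediction-key"                                # placeholder
PROJECT_ID = "your-project-guid"                                      # placeholder
ITERATION_NAME = "your-published-iteration"                           # placeholder

credentials = ApiKeyCredentials(in_headers={"Prediction-key": PREDICTION_KEY})
predictor = CustomVisionPredictionClient(ENDPOINT, credentials)

# Classify a local image, e.g. the hammer-on-a-bed photo below
with open("hammer_on_bed.jpg", "rb") as image:
    results = predictor.classify_image(PROJECT_ID, ITERATION_NAME, image.read())

for prediction in results.predictions:
    print(f"{prediction.tag_name}: {prediction.probability:.1%}")
</PRE> <P>&nbsp;</P> <P><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN>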
I already had a really old CustomVision.ai model setup and trained from a talk I gave at the AI Roadshow a year or so ago.</SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></P> <P>&nbsp;</P> <P data-unlink="true"><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN>This project was an image classification project, which was demoing the MIT image set called <A href="#" target="_blank" rel="noopener">ObjectNet</A>.&nbsp; This is a set of images with items in unusual orientations and locations, designed to fool AI. I was showing how a Hammer on a bed could be confused for a screwdriver by AI;</SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></P> <P data-unlink="true">&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="hammer_on_bed.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/292513i742D53F263B366D9/image-size/large?v=v2&amp;px=999" role="button" title="hammer_on_bed.jpg" alt="hammer_on_bed.jpg" /></span></P> <P> </P> <P>It took a while for me to realise that I needed to create a new project to see the Percept Vision in full, where this model was of course only returning one object at a time and wouldn't let me identify various objects in a single image;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="PeterGallagher_0-1625041682170.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/292514iBE53F13739554C8E/image-size/large?v=v2&amp;px=999" role="button" title="PeterGallagher_0-1625041682170.png" alt="PeterGallagher_0-1625041682170.png" /></span></P> <P>&nbsp;</P> <P>The process of tagging and categorising images was very easy, if not a bit laborious, given that I need to at least 15 of each category to begin the auto training. But, once you have enough images, the process works a bit faster.</P> <P>&nbsp;</P> <H2><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN>Plans</SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></H2> <P>&nbsp;</P> <P><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN>Now that I've had some time to play with the basics, my next port of call is the Audio SoM... I've played with Alexa Skills in the past, so I'm interested to see how the Speech Projects work in comparison. It'd be great to integrate other Azure Services as well as falling back to my mainstay of IoT Hub to perhaps drive some IoT workloads - Perhaps some home automation!</SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></P> <P>&nbsp;</P> <P><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN>Onwards from that, I've spoken to a few clients, one in particular is in the ecology and wildlife ecosystem, and they are really interested in whether we can use the Percept to help in Human and Animal Conflict situations perhaps... Definitely a great use case!</SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></P> <P>&nbsp;</P> <H2><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN><SPAN>Where can I get some more information?</SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></H2> <P>&nbsp;</P> <P>&nbsp;</P> <UL> <LI><A href="#" target="_blank" rel="noreferrer noopener">Cliff Agius</A><SPAN>&nbsp;</SPAN>has created<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noreferrer noopener">an excellent blog post</A><SPAN>&nbsp;</SPAN>around his first impressions of the Azure Percept Developer Kit…. 
You can find that<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noreferrer noopener">here</A></LI> <LI>Myself,<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noreferrer noopener">Cliff Agius</A>,<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noreferrer noopener">Mert Yeter</A><SPAN>&nbsp;</SPAN>and<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noreferrer noopener">John Lunn</A><SPAN>&nbsp;</SPAN>recorded a<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noreferrer noopener">Percept Special IoTeaLive show</A><SPAN>&nbsp;</SPAN>of the unboxing and first steps on the<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noreferrer noopener">AzureishLive Twitch Channel</A>.</LI> <LI><A href="#" target="_blank" rel="noreferrer noopener">Official Azure Percept Page</A></LI> <LI><A href="#" target="_blank" rel="noreferrer noopener">Azure Percept MS Docs</A></LI> <LI>The<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="nore